1
Bobowicz M, Badocha M, Gwozdziewicz K, Rygusik M, Kalinowska P, Szurowska E, Dziubich T. Segmentation-based BI-RADS ensemble classification of breast tumours in ultrasound images. Int J Med Inform 2024;189:105522. PMID: 38852288. DOI: 10.1016/j.ijmedinf.2024.105522.
Abstract
BACKGROUND The development of computer-aided diagnosis systems in breast cancer imaging is growing exponentially. Since 2016, 81 papers have described the automated segmentation of breast lesions in ultrasound images using artificial intelligence; however, only two have addressed the more complex task of BI-RADS classification. PURPOSE This study addresses the automatic classification of breast lesions into binary classes (benign vs. malignant) and multiple BI-RADS classes from a single ultrasonographic image, with the aim of reducing the subjectivity of an individual operator's assessment. MATERIALS AND METHODS Automatic image segmentation methods (PraNet, CaraNet and FCBFormer) adapted to the specific segmentation task were investigated, with the U-Net model as a reference. A new classification method was developed using an ensemble of selected segmentation approaches. All experiments were performed on the publicly available BUS B, OASBUD and BUSI datasets and a private dataset. RESULTS FCBFormer achieved the best segmentation outcomes, with intersection-over-union values of 0.81, 0.80 and 0.73 and Dice values of 0.89, 0.87 and 0.82 for the BUS B, BUSI and OASBUD datasets, respectively. A series of experiments showed that adding an extra 30-pixel margin to the segmentation mask counteracts potential errors introduced by the segmentation algorithm. An ensemble of the full-image classifier, bounding-box classifier and masked-image classifier was the most accurate for binary classification, with the best accuracy (ACC; 0.908), F1 (0.846) and area under the receiver operating characteristic curve (AUROC; 0.871) on BUS B, and ACC (0.982), F1 (0.984) and AUROC (0.998) on the UCC BUS dataset, outperforming each classifier used separately. It was also the most effective for BI-RADS classification, with ACC of 0.953, F1 of 0.920 and AUROC of 0.986 on UCC BUS. Hard voting was the most effective method for the dichotomous classification; for the multi-class BI-RADS classification, soft voting was employed. CONCLUSIONS The proposed classification approach, an ensemble of segmentation and classification methods, proved more accurate than most published results for binary and multi-class BI-RADS classification.
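The hard-voting (binary) versus soft-voting (multi-class) distinction described in this abstract can be sketched generically. The three classifier outputs below are placeholders standing in for the full-image, bounding-box and masked-image classifiers, not the authors' code:

```python
import numpy as np

def hard_vote(prob_sets):
    """Majority vote over each classifier's argmax label (hard voting)."""
    labels = np.array([np.argmax(p, axis=1) for p in prob_sets])  # (n_clf, n_samples)
    # Most frequent label per sample
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, labels)

def soft_vote(prob_sets):
    """Average the predicted probabilities, then take the argmax (soft voting)."""
    return np.mean(prob_sets, axis=0).argmax(axis=1)

# Three hypothetical classifiers, two samples, binary classes:
p_full = np.array([[0.6, 0.4], [0.2, 0.8]])
p_bbox = np.array([[0.9, 0.1], [0.4, 0.6]])
p_mask = np.array([[0.4, 0.6], [0.3, 0.7]])
print(hard_vote([p_full, p_bbox, p_mask]))  # [0 1]
print(soft_vote([p_full, p_bbox, p_mask]))  # [0 1]
```

Soft voting preserves each classifier's confidence, which tends to matter more as the number of classes grows; hard voting only counts discrete votes.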
Affiliation(s)
- Maciej Bobowicz
- 2nd Department of Radiology, Medical University of Gdansk, 17 Smoluchowskiego Str., Gdansk 80-214, Poland
- Mikołaj Badocha
- 2nd Department of Radiology, Medical University of Gdansk, 17 Smoluchowskiego Str., Gdansk 80-214, Poland
- Katarzyna Gwozdziewicz
- 2nd Department of Radiology, Medical University of Gdansk, 17 Smoluchowskiego Str., Gdansk 80-214, Poland
- Marlena Rygusik
- 2nd Department of Radiology, Medical University of Gdansk, 17 Smoluchowskiego Str., Gdansk 80-214, Poland
- Paulina Kalinowska
- Department of Thoracic Radiology, Karolinska University Hospital, Anna Steckséns g 41, Solna 17176, Sweden
- Edyta Szurowska
- 2nd Department of Radiology, Medical University of Gdansk, 17 Smoluchowskiego Str., Gdansk 80-214, Poland
- Tomasz Dziubich
- Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, 11/12 G. Narutowicza Str., Gdańsk 80-233, Poland
2
Xie Z, Sun Q, Han J, Sun P, Hu X, Ji N, Xu L, Ma J. Spectral analysis enhanced net (SAE-Net) to classify breast lesions with BI-RADS category 4 or higher. Ultrasonics 2024;143:107406. PMID: 39047350. DOI: 10.1016/j.ultras.2024.107406.
Abstract
Early ultrasound screening for breast cancer significantly reduces mortality. The main evaluation criterion for breast ultrasound screening is the Breast Imaging-Reporting and Data System (BI-RADS), which assigns breast lesions to categories 0-6 based on ultrasound grayscale images. Owing to the limitations of grayscale imaging, lesions in categories 4 and 5 require an additional biopsy to confirm benign or malignant status. In this paper, SAE-Net is proposed to combine tissue microstructure information with morphological information, thereby improving the identification of high-grade breast lesions. SAE-Net consists of a grayscale image branch and a spectral pattern branch. The grayscale image branch uses a classical deep learning backbone to learn morphological features from grayscale images, while the spectral pattern branch is designed to learn microstructure features from ultrasound radiofrequency (RF) signals. Experimental results show that the best SAE-Net model achieves an area under the receiver operating characteristic curve (AUROC) 12% higher and a Youden index 19% higher than the single backbone model. These results demonstrate the effectiveness of the method, which could optimize biopsy exemption and diagnostic efficiency.
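The Youden index cited in this abstract is a standard summary statistic for a binary test (sensitivity + specificity - 1). A minimal computation, using made-up confusion-matrix counts for illustration:

```python
def youden_index(tp, fn, tn, fp):
    """Youden's J = sensitivity + specificity - 1.
    Ranges from 0 (no diagnostic value) to 1 (perfect test)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1.0

# Hypothetical counts: 45 TP, 5 FN, 40 TN, 10 FP
j = youden_index(45, 5, 40, 10)  # sens 0.90, spec 0.80
print(round(j, 2))  # prints 0.7
```

Unlike accuracy, Youden's J is insensitive to class imbalance, which is why it is often reported alongside AUROC in screening studies.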
Affiliation(s)
- Zhun Xie
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Qizhen Sun
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Jiaqi Han
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Pengfei Sun
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Xiangdong Hu
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Nan Ji
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Lijun Xu
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Jianguo Ma
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
3
Ru J, Zhu Z, Shi J. Spatial and geometric learning for classification of breast tumors from multi-center ultrasound images: a hybrid learning approach. BMC Med Imaging 2024;24:133. PMID: 38840240. PMCID: PMC11155188. DOI: 10.1186/s12880-024-01307-3.
Abstract
BACKGROUND Breast cancer is the most common cancer among women, and ultrasound is a common tool for early screening. Deep learning techniques are now applied as auxiliary tools, providing predictive results that help doctors decide whether further examination or treatment is needed. This study aimed to develop a hybrid learning approach for breast ultrasound classification that extracts more potential features from local and multi-center ultrasound data. METHODS We proposed a hybrid learning approach to classify breast tumors as benign or malignant. Three multi-center datasets (BUSI, BUS, OASBUD) were used to pretrain a model by federated learning, after which the model was fine-tuned locally on each dataset. The proposed model consists of a convolutional neural network (CNN) and a graph neural network (GNN), extracting features from images at the spatial level and from graphs at the geometric level. The input images are small and require no pixel-level labels, and the input graphs are generated automatically in an unsupervised manner, saving labor and memory. RESULTS The classification AUROC of the proposed method is 0.911, 0.871 and 0.767 for BUSI, BUS and OASBUD, with balanced accuracies of 87.6%, 85.2% and 61.4%, respectively. The results show that our method outperforms conventional methods. CONCLUSIONS The hybrid approach can learn inter-center features across multi-center data and intra-center features of local data. It shows potential for aiding doctors in early-stage breast tumor classification on ultrasound.
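The abstract names federated pretraining followed by local fine-tuning. FedAvg is the canonical aggregation rule for such schemes, so the sketch below illustrates the general approach only; it is an assumption about the family of methods, not the authors' exact procedure:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate per-client model weights on a central
    server, weighted by each client's local dataset size. Each client is a
    list of parameter arrays (one per layer)."""
    total = sum(client_sizes)
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            agg[i] += (n / total) * w
    return agg

# Two hypothetical centers, one-layer toy model:
w_a = [np.array([1.0, 2.0])]   # center A, 100 local images
w_b = [np.array([3.0, 4.0])]   # center B, 300 local images
print(fedavg([w_a, w_b], [100, 300])[0])  # weights 0.25/0.75 -> [2.5 3.5]
```

In a full round, the aggregated weights are broadcast back to the clients, trained locally for a few epochs, and re-aggregated; local fine-tuning as in the paper simply stops broadcasting and lets each center specialize.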
Affiliation(s)
- Jintao Ru
- Department of Medical Engineering, Shaoxing Hospital of Traditional Chinese Medicine, Shaoxing, Zhejiang, People's Republic of China
- Zili Zhu
- Department of Radiology, The First Affiliated Hospital of Ningbo University, Ningbo, Zhejiang, People's Republic of China
- Jialin Shi
- Rehabilitation Medicine Institute, Zhejiang Rehabilitation Medical Center, Hangzhou, Zhejiang, People's Republic of China
4
Gómez-Flores W, Gregorio-Calas MJ, Coelho de Albuquerque Pereira W. BUS-BRA: A breast ultrasound dataset for assessing computer-aided diagnosis systems. Med Phys 2024;51:3110-3123. PMID: 37937827. DOI: 10.1002/mp.16812.
Abstract
PURPOSE Computer-aided diagnosis (CAD) systems for breast ultrasound (BUS) aim to increase the efficiency and effectiveness of breast screening, helping specialists detect and classify breast lesions. CAD system development requires annotated image sets, including lesion segmentations, biopsy results specifying benign and malignant cases, and BI-RADS categories indicating the likelihood of malignancy. Standardized training, validation, and test partitions further promote reproducibility and fair comparison between approaches. We therefore present a publicly available BUS dataset whose novelty is a substantial increase in cases with the above annotations and the inclusion of standardized partitions for objectively assessing and comparing CAD systems. ACQUISITION AND VALIDATION METHODS The BUS dataset comprises 1875 anonymized images from 1064 female patients acquired with four ultrasound scanners during systematic studies at the National Institute of Cancer (Rio de Janeiro, Brazil). It includes biopsy-proven tumors divided into 722 benign and 342 malignant cases. A senior ultrasonographer performed BI-RADS assessments in categories 2 to 5 and manually outlined the breast lesions to obtain ground-truth segmentations. Five- and ten-fold cross-validation partitions are provided to standardize the training and test sets used to evaluate and reproduce CAD systems. Finally, to validate the utility of the dataset, an evaluation framework is implemented to assess the performance of deep neural networks for segmenting and classifying breast lesions. DATA FORMAT AND USAGE NOTES The dataset is publicly available for academic and research purposes through an open-access repository under the name BUS-BRA: A Breast Ultrasound Dataset for Assessing CAD Systems. BUS images and reference segmentations are saved as Portable Network Graphics (PNG) files, and dataset information is stored in separate comma-separated values (CSV) files. POTENTIAL APPLICATIONS The BUS-BRA dataset can be used to develop and assess artificial-intelligence-based lesion detection and segmentation methods, and the classification of BUS images into pathological classes and BI-RADS categories. Other potential applications include image processing methods such as despeckle filtering and contrast enhancement to improve image quality, and feature engineering for image description.
Affiliation(s)
- Wilfrido Gómez-Flores
- Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Tamaulipas, Mexico
5
Li S, Tsui PH, Wu W, Wu S, Zhou Z. Ultrasound k-nearest neighbor entropy imaging: Theory, algorithm, and applications. Ultrasonics 2024;138:107256. PMID: 38325231. DOI: 10.1016/j.ultras.2024.107256.
Abstract
Ultrasound information entropy is a flexible approach for analyzing ultrasound backscattering. Shannon entropy imaging based on probability distribution histograms (PDHs) has emerged as a promising method for tissue characterization and diagnosis; however, the bin number affects the stability of entropy estimation. In this study, we introduced the k-nearest neighbor (KNN) algorithm to estimate entropy values and proposed ultrasound KNN entropy imaging. The proposed KNN estimator leverages the Euclidean distance between data samples rather than the histogram bins used by conventional PDH estimators. We also proposed cumulative relative entropy (CRE) imaging to analyze time-series radiofrequency signals and applied it to monitor thermal lesions induced by microwave ablation (MWA). Computer-simulation phantom experiments were conducted to validate and compare the proposed KNN entropy imaging, conventional PDH entropy imaging, and Nakagami-m parametric imaging in detecting variations in scatterer density and visualizing inclusions. Clinical data of breast lesions were analyzed, and ex vivo porcine liver MWA experiments were conducted, to validate the performance of KNN entropy imaging in classifying benign and malignant breast tumors and monitoring thermal lesions, respectively. Compared with PDH, KNN-based entropy estimation was less affected by the tuning parameters. KNN entropy imaging was more sensitive to changes in scatterer density and provided better visualization than typical Shannon entropy (TSE) and Nakagami-m parametric imaging. Among the imaging methods, KNN-based Shannon entropy (KSE) imaging achieved higher accuracy in classifying benign and malignant breast tumors, and KNN-based CRE imaging yielded larger lesion-to-normal contrast when monitoring ablated areas during MWA at different powers and treatment durations. Ultrasound KNN entropy imaging is a potential quantitative ultrasound approach for tissue characterization.
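The KNN estimator described above replaces histogram bins with nearest-neighbour distances. The classical Kozachenko-Leonenko form of such an estimator, sketched here for 1-D samples, is a generic illustration of the idea rather than the paper's exact algorithm:

```python
import numpy as np
from scipy.special import digamma

def knn_entropy_1d(x, k=3):
    """Kozachenko-Leonenko differential-entropy estimate (in nats):
        H ~ digamma(N) - digamma(k) + log(2) + mean(log eps_k),
    where eps_k is each sample's distance to its k-th nearest neighbour
    and log(2) is the log-volume of the 1-D unit ball.
    Unlike a histogram (PDH) estimator, there is no bin count to tune."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    eps = np.empty(n)
    for i in range(n):
        d = np.sort(np.abs(x - x[i]))
        eps[i] = d[k]  # d[0] is the point itself, so d[k] is the k-th neighbour
    return digamma(n) - digamma(k) + np.log(2.0) + np.mean(np.log(eps))

rng = np.random.default_rng(0)
samples = rng.normal(size=2000)
# True entropy of N(0, 1) is 0.5 * log(2*pi*e) ~ 1.419 nats
print(knn_entropy_1d(samples, k=3))
```

The only tuning parameter is k, and the estimate varies smoothly with it, which is the stability advantage the abstract claims over bin-count selection.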
Affiliation(s)
- Sinan Li
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, China
- Po-Hsiang Tsui
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan; Institute for Radiological Research, Chang Gung University, Taoyuan, Taiwan; Division of Pediatric Gastroenterology, Department of Pediatrics, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
- Weiwei Wu
- College of Biomedical Engineering, Capital Medical University, Beijing, China
- Shuicai Wu
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, China
- Zhuhuang Zhou
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, China
6
Han J, Sun P, Sun Q, Xie Z, Xu L, Hu X, Ma J. Quantitative ultrasound parameters from scattering and propagation may reduce the biopsy rate for breast tumor. Ultrasonics 2024;138:107233. PMID: 38171228. DOI: 10.1016/j.ultras.2023.107233.
Abstract
Breast cancer has become the most common cancer worldwide, and early screening significantly improves patient survival. Although pathology with needle biopsy is the gold standard for breast cancer diagnosis, it is invasive, painful, and expensive, and needle misplacement can lead to misdiagnosis and further assessment. Ultrasound imaging is non-invasive and real-time; however, benign and malignant tumors are hard to differentiate in grayscale B-mode images. We hypothesize that breast tumors exhibit characteristic properties that generate distinctive spectral patterns, not only in scattering but also during propagation. In this paper, we propose a breast tumor classification method that evaluates the spectral patterns of the tissue both inside the tumor and beneath it. First, quantitative ultrasonic parameters of these spectral patterns are calculated as representations of the corresponding tissues. Second, the parameters are classified by a K-Nearest Neighbor machine learning model. The method was verified on an open-access dataset as a reference and applied to our own dataset to evaluate its potential for tumor assessment. On both datasets, the proposed method classified the tumors accurately, potentially sparing certain patients the biopsy and reducing the rate of this painful and expensive procedure.
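A common way to turn an RF spectral pattern into quantitative parameters is to fit a line to the log power spectrum over an analysis band and use the slope and intercept as features. The sketch below illustrates that generic scheme; the band, sampling rate, and toy signal are assumptions, not the paper's values:

```python
import numpy as np

def spectral_features(rf_segment, fs, band=(1e6, 10e6)):
    """Fit a line to the log power spectrum over an analysis band; the slope
    and intercept act as quantitative-ultrasound features of the tissue.
    (Generic illustration; the paper's exact parameters are not reproduced.)"""
    spectrum = np.abs(np.fft.rfft(rf_segment)) ** 2
    freqs = np.fft.rfftfreq(len(rf_segment), d=1.0 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    log_s = 10.0 * np.log10(spectrum[sel] + 1e-12)             # dB scale
    slope, intercept = np.polyfit(freqs[sel] / 1e6, log_s, 1)  # per-MHz fit
    return slope, intercept

fs = 40e6                 # assumed sampling rate (40 MHz)
t = np.arange(2048) / fs
rng = np.random.default_rng(1)
rf = np.sin(2 * np.pi * 5e6 * t) + 0.1 * rng.normal(size=t.size)  # toy RF line
slope, intercept = spectral_features(rf, fs)
print(np.isfinite(slope), np.isfinite(intercept))  # True True
```

The resulting low-dimensional feature vectors, computed once for the region inside the tumor and once for the region beneath it, can then be fed to a K-Nearest Neighbor classifier such as scikit-learn's KNeighborsClassifier.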
Affiliation(s)
- Jiaqi Han
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Pengfei Sun
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Qizhen Sun
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Zhun Xie
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Lijun Xu
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
- Xiangdong Hu
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Jianguo Ma
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100191, China
7
Pawłowska A, Ćwierz-Pieńkowska A, Domalik A, Jaguś D, Kasprzak P, Matkowski R, Fura Ł, Nowicki A, Żołek N. Curated benchmark dataset for ultrasound based breast lesion analysis. Sci Data 2024;11:148. PMID: 38297002. PMCID: PMC10830496. DOI: 10.1038/s41597-024-02984-z.
Abstract
A new detailed dataset of breast ultrasound scans (BrEaST), containing images of benign and malignant lesions as well as normal tissue examples, is presented. The dataset consists of 256 breast scans collected from 256 patients. Each scan was manually annotated and labeled by a radiologist experienced in breast ultrasound examination. In particular, each tumor was identified in the image with a freehand annotation and labeled according to BI-RADS features and lexicon. Histopathological classification of the tumor was also provided for patients who underwent biopsy. The BrEaST dataset is the first breast ultrasound dataset containing patient-level labels, image-level annotations, and tumor-level labels, with all cases confirmed by follow-up care or core-needle biopsy. To enable research into breast disease detection, tumor segmentation, and classification, the BrEaST dataset is made publicly available under the CC-BY 4.0 license.
Affiliation(s)
- Anna Pawłowska
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106, Warsaw, Poland
- Anna Ćwierz-Pieńkowska
- Maria Sklodowska-Curie National Institute of Oncology - National Research Institute, Branch in Krakow, ul. Garncarska 11, 31-115, Kraków, Poland
- Agnieszka Domalik
- Maria Sklodowska-Curie National Institute of Oncology - National Research Institute, Branch in Krakow, ul. Garncarska 11, 31-115, Kraków, Poland
- Dominika Jaguś
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106, Warsaw, Poland
- Piotr Kasprzak
- Breast Unit, Lower Silesian Oncology, Pulmonology and Hematology Center, pl. Ludwika Hirszfelda 12, 53-413, Wrocław, Poland
- Rafał Matkowski
- Breast Unit, Lower Silesian Oncology, Pulmonology and Hematology Center, pl. Ludwika Hirszfelda 12, 53-413, Wrocław, Poland
- Department of Oncology, Wrocław Medical University, Wrocław, Poland
- Łukasz Fura
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106, Warsaw, Poland
- Andrzej Nowicki
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106, Warsaw, Poland
- Norbert Żołek
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106, Warsaw, Poland
8
Gómez-Flores W, Pereira WCDA. Gray-to-color image conversion in the classification of breast lesions on ultrasound using pre-trained deep neural networks. Med Biol Eng Comput 2023;61:3193-3207. PMID: 37713158. DOI: 10.1007/s11517-023-02928-6.
Abstract
Breast ultrasound (BUS) image classification into benign and malignant classes is often based on pre-trained convolutional neural networks (CNNs) to cope with small training sets. However, BUS images are single-channel gray-level images, whereas pre-trained CNNs were learned from color images with red, green, and blue (RGB) components; a gray-to-color conversion method is therefore applied to fit the BUS image to the CNN's input layer. This paper evaluates 13 gray-to-color conversion methods proposed in the literature, following three strategies: replicating the gray-level image to all RGB channels, decomposing the image to enhance inherent information such as the lesion's texture and morphology, and learning a matching layer. We also introduce an image decomposition method based on the lesion's structural information that describes its inner and outer complexity. These conversion methods are evaluated under the same experimental framework, using a pre-trained ResNet-18 architecture and a BUS dataset with more than 3000 images, with the Matthews correlation coefficient (MCC), sensitivity (SEN), and specificity (SPE) measuring classification performance. The experimental results show that decomposition methods outperform replication and learning-based methods when using information from the lesion's binary mask (obtained from a segmentation method), reaching an MCC greater than 0.70 and specificity up to 0.92, although sensitivity is about 0.80. The proposed method balances the sensitivity-specificity trade-off better, obtaining about 0.88 for both indices and an MCC of 0.73. This study contributes an objective assessment of gray-to-color conversion approaches for classifying breast lesions, revealing that mask-based decomposition methods improve classification performance. Moreover, the proposed structural-information method improves sensitivity, yielding more reliable classification of malignant cases and potentially benefiting clinical practice.
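Two ingredients of this abstract, the replication strategy (the simplest of the three gray-to-color strategies) and the MCC metric, can be sketched directly. The confusion-matrix counts below are illustrative, chosen only to match the reported sensitivity of 0.80 and specificity of 0.92:

```python
import numpy as np

def gray_to_rgb_replicate(gray):
    """Replication strategy: copy the single gray channel into R, G and B so a
    gray-level BUS image fits a CNN pre-trained on 3-channel color inputs."""
    gray = np.asarray(gray)
    return np.stack([gray, gray, gray], axis=-1)

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return num / den if den else 0.0

img = np.zeros((224, 224), dtype=np.uint8)     # typical CNN input size
print(gray_to_rgb_replicate(img).shape)        # (224, 224, 3)
# Hypothetical counts giving sensitivity 80/100 = 0.80, specificity 92/100 = 0.92:
print(round(mcc(tp=80, tn=92, fp=8, fn=20), 2))
```

MCC uses all four confusion-matrix cells, so unlike accuracy it stays informative when benign cases dominate the dataset, which is the usual situation in BUS collections.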
Affiliation(s)
- Wilfrido Gómez-Flores
- Centro de Investigación y de Estudios Avanzados del IPN, Unidad Tamaulipas, Ciudad Victoria, 87138, Tamaulipas, Mexico
9
Fan L, Gong X, Guo Y. General Multiscenario Ultrasound Image Tumor Diagnosis Method Based on Unsupervised Domain Adaptation. Ultrasound Med Biol 2023;49:2291-2301. PMID: 37532633. DOI: 10.1016/j.ultrasmedbio.2023.06.015.
Abstract
OBJECTIVE The use of computer-aided diagnosis (CAD) for breast ultrasound image classification has been limited by small sample sizes and domain shift. Current ultrasound classification methods perform inadequately in cross-domain scenarios, struggling with datasets from unobserved domains. In medical imaging there are situations in which all images capture the same symptom of the same participant and thus share identical structural content, implying they should share the same networks. Nevertheless, most domain adaptation methods are not suitable for medical images because they overlook these common features. METHODS To overcome these challenges, we propose a novel diverse-domain 2-D feature selection network (FSN), which exploits the similarities among medical images and extracts features with a weight-sharing reconstruction network. It additionally penalizes the feature-domain distance through two adversarial learning modules that align the feature space and select common features. Our experiments illustrate that the proposed method is robust and applicable to ultrasound images of various diseases. RESULTS Compared with the latest domain-adaptive methods, 2-D FSN markedly improves classification accuracy on breast, thyroid, and endoscopic ultrasound images, achieving accuracies of 82.4%, 96.4%, and 89.7%, respectively. The model was also evaluated on an unsupervised domain adaptation task using ultrasound images from multiple sources and achieved an average accuracy of 77.3% across widely varying domains. CONCLUSION Overall, 2-D FSN improves classification on multidomain ultrasound datasets by learning common features and combining multimodule intelligence. The algorithm has good clinical guidance value.
Affiliation(s)
- Lin Fan
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, P.R. China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, Chengdu 611756, P.R. China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Southwest Jiaotong University, Chengdu 611756, P.R. China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Southwest Jiaotong University, Chengdu 611756, P.R. China
- Xun Gong
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, P.R. China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, Chengdu 611756, P.R. China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Southwest Jiaotong University, Chengdu 611756, P.R. China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Southwest Jiaotong University, Chengdu 611756, P.R. China
- Ying Guo
- North China University of Science and Technology Affiliated Hospital, Tangshan, Hebei, China
10
Ru J, Lu B, Chen B, Shi J, Chen G, Wang M, Pan Z, Lin Y, Gao Z, Zhou J, Liu X, Zhang C. Attention guided neural ODE network for breast tumor segmentation in medical images. Comput Biol Med 2023;159:106884. PMID: 37071938. DOI: 10.1016/j.compbiomed.2023.106884.
Abstract
Breast cancer is the most common cancer in women. Ultrasound is a widely used screening tool owing to its portability and ease of operation, while DCE-MRI highlights lesions more clearly and reveals tumor characteristics; both are noninvasive and nonradiative assessments of breast cancer. Doctors make diagnoses and further decisions from the sizes, shapes, and textures of breast masses shown in medical images, so automatic tumor segmentation via deep neural networks can assist them. To address challenges faced by popular deep neural networks, such as large parameter counts, lack of interpretability, and overfitting, we propose a segmentation network named Att-U-Node, which uses attention modules to guide a neural ODE-based framework. Specifically, the network uses ODE blocks in an encoder-decoder structure, with feature modeling by neural ODEs completed at each level. In addition, an attention module calculates coefficients and generates refined attention features for the skip connections. Three publicly available breast ultrasound image datasets (BUSI, BUS, and OASBUD) and a private breast DCE-MRI dataset are used to assess the model; we also extend the model to 3D for tumor segmentation on data selected from the public QIN Breast DCE-MRI collection. Experiments show that the proposed model achieves competitive results compared with related methods while mitigating common problems of deep neural networks.
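The difference between a residual block and an ODE block can be illustrated with a fixed-step solver on a toy vector field. Real neural-ODE layers use adaptive solvers and a learned network for f, so this is a conceptual sketch only, not the Att-U-Node implementation:

```python
import numpy as np

def ode_block(x, f, t0=0.0, t1=1.0, steps=8):
    """Fixed-step Euler integration of dx/dt = f(x, t) over [t0, t1].
    A residual block computes x + f(x) once; an ODE block integrates the same
    vector field continuously, reusing one set of parameters across 'depth',
    which is how neural-ODE networks keep their parameter count small."""
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        x = x + h * f(x, t)
        t += h
    return x

# Toy linear field dx/dt = -x: the exact solution decays to exp(-1) ~ 0.368
x0 = np.array([1.0])
out = ode_block(x0, lambda x, t: -x)
print(round(float(out[0]), 3))  # ~0.344 with 8 Euler steps
```

With more (or adaptive) steps the output approaches the exact flow; in a segmentation network, f would be a small convolutional block and x a feature map at one encoder-decoder level.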
Affiliation(s)
- Jintao Ru
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Beichen Lu
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Buran Chen
- Department of Thyroid and Breast Surgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Jialin Shi
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Gaoxiang Chen
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Meihao Wang
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China; Key Laboratory of Intelligent Medical Imaging of Wenzhou, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Zhifang Pan
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China; Zhejiang Engineering Research Center of Intelligent Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Yezhi Lin
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China; Key Laboratory of Intelligent Treatment and Life Support for Critical Diseases of Zhejiang Province, Wenzhou, 325000, People's Republic of China
- Zhihong Gao
- Zhejiang Engineering Research Center of Intelligent Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Jiejie Zhou
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Xiaoming Liu
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, 430065, People's Republic of China
- Chen Zhang
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
|
11
|
Liu Y, He B, Zhang Y, Lang X, Yao R, Pan L. A Study on a Parameter Estimator for the Homodyned K Distribution Based on Table Search for Ultrasound Tissue Characterization. ULTRASOUND IN MEDICINE & BIOLOGY 2023; 49:970-981. [PMID: 36631331 DOI: 10.1016/j.ultrasmedbio.2022.11.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Revised: 11/27/2022] [Accepted: 11/30/2022] [Indexed: 06/17/2023]
Abstract
OBJECTIVE The homodyned K (HK) distribution is considered to be the most suitable distribution in the context of tissue characterization; therefore, the search for a rapid and reliable parameter estimator for the HK distribution is important. METHODS We propose a novel parameter estimator based on a table search (TS) for HK parameter estimation. The TS estimator inherits the strengths of conventional estimators by integrating various features, while the table search itself keeps the operation rapid and simple. Performance of the proposed TS estimator was evaluated and compared with that of the XU (the estimation method based on X and U statistics) and artificial neural network (ANN) estimators. DISCUSSION The simulation results revealed that the TS estimator is superior to the XU and ANN estimators in terms of normalized standard deviations and relative root mean squared errors of parameter estimation, and is faster. Clinical experiments found that the area under the receiver operating characteristic curve for breast lesion classification using the parameters estimated by the TS estimator could reach 0.871. CONCLUSION The proposed TS estimator is more accurate, reliable and faster than the state-of-the-art XU and ANN estimators and has great potential for ultrasound tissue characterization based on the HK distribution.
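The table-search idea can be sketched as follows: tabulate statistics over a parameter grid once (offline), then estimate by nearest-neighbour lookup against the observed statistics. This is an illustrative reconstruction under assumptions; the paper's actual feature set, table layout and HK forward model are not reproduced here, so `forward_model` is a hypothetical placeholder.

```python
def build_table(param_grid, forward_model):
    """Precompute (features, parameters) pairs for each candidate parameter
    tuple -- done once, offline."""
    return [(forward_model(p), p) for p in param_grid]

def ts_estimate(observed_feats, table):
    """Return the parameter tuple whose tabulated features are closest
    (squared Euclidean distance) to the observed statistics."""
    def dist(feats):
        return sum((a - b) ** 2 for a, b in zip(feats, observed_feats))
    return min(table, key=lambda entry: dist(entry[0]))[1]
```

Because estimation reduces to a lookup, runtime is independent of how expensive the forward statistics are to compute, which is the source of the speed advantage over iterative estimators.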
Affiliation(s)
- Yang Liu
- Department of Electronic Engineering, Information School, Yunnan University, Kunming, Yunnan, China
- Bingbing He
- Department of Electronic Engineering, Information School, Yunnan University, Kunming, Yunnan, China
- Yufeng Zhang
- Department of Electronic Engineering, Information School, Yunnan University, Kunming, Yunnan, China
- Xun Lang
- Department of Electronic Engineering, Information School, Yunnan University, Kunming, Yunnan, China
- Ruihan Yao
- Department of Electronic Engineering, Information School, Yunnan University, Kunming, Yunnan, China
- Lingrui Pan
- Department of Electronic Engineering, Information School, Yunnan University, Kunming, Yunnan, China
12
AMS-PAN: Breast ultrasound image segmentation model combining attention mechanism and multi-scale features. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104425] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
13
Wang B, Saniie J. Massive Ultrasonic Data Compression Using Wavelet Packet Transformation Optimized by Convolutional Autoencoders. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:1395-1405. [PMID: 34499606 DOI: 10.1109/tnnls.2021.3105367] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Ultrasonic signal acquisition platforms generate considerable amounts of data to be stored and processed, especially when multichannel scanning or beamforming is employed. Reducing the mass storage and allowing high-speed data transmissions necessitate the compression of ultrasonic data into a representation with fewer bits. High compression accuracy is crucial in many applications, such as ultrasonic medical imaging and nondestructive testing (NDT). In this study, we present learning models for massive ultrasonic data compression on the order of megabytes. A common and highly efficient compression method for ultrasonic data is signal decomposition and subband elimination using wavelet packet transformation (WPT). We designed an algorithm for finding the wavelet kernel that provides maximum energy compaction and the optimal subband decomposition tree structure for a given ultrasonic signal. Furthermore, the WPT convolutional autoencoder (WPTCAE) compression algorithm is proposed based on the WPT compression tree structure and the use of machine learning for estimating the optimal kernel. To further improve the compression accuracy, an autoencoder (AE) is incorporated into the WPTCAE model to build a hybrid model. The performance of the WPTCAE compression model is examined and benchmarked against other compression algorithms using ultrasonic radio frequency (RF) datasets acquired in NDT and medical imaging applications. The experimental results clearly show that the WPTCAE compression model provides improved compression ratios while maintaining high signal fidelity. The proposed learning models can achieve a compression accuracy of 98% by using only 6% of the original data.
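The core WPT compression step, subband decomposition followed by elimination of low-energy subbands, can be sketched with a fixed Haar kernel. This is a simplified illustration under assumptions: the paper optimizes the wavelet kernel and tree structure (and adds an autoencoder), whereas here the kernel is hard-coded and the tree is full.

```python
def haar_step(x):
    """One Haar analysis step: pairwise averages (low band) and
    differences (high band)."""
    lo = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    hi = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return lo, hi

def wpt(x, depth):
    """Full wavelet-packet decomposition: split every band at every level
    and return the leaf subbands."""
    bands = [x]
    for _ in range(depth):
        bands = [b for band in bands for b in haar_step(band)]
    return bands

def compress(x, depth, keep):
    """Keep only the `keep` highest-energy subbands; zero the rest."""
    bands = wpt(x, depth)
    by_energy = sorted(range(len(bands)),
                       key=lambda i: -sum(v * v for v in bands[i]))
    kept = set(by_energy[:keep])
    return [band if i in kept else [0.0] * len(band)
            for i, band in enumerate(bands)]
```

Zeroed subbands need not be stored at all, which is where the compression ratio comes from; fidelity depends on how much signal energy the retained subbands capture.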
14
A hybrid attentional guidance network for tumors segmentation of breast ultrasound images. Int J Comput Assist Radiol Surg 2023:10.1007/s11548-023-02849-7. [PMID: 36853584 DOI: 10.1007/s11548-023-02849-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Accepted: 01/31/2023] [Indexed: 03/01/2023]
Abstract
PURPOSE In recent years, breast cancer has become the greatest threat to women. Many studies are dedicated to the precise segmentation of breast tumors, which is indispensable in computer-aided diagnosis. Deep neural networks have achieved accurate segmentation of images. However, convolutional layers are biased towards extracting local features and tend to lose global and location information as the network deepens, which decreases breast tumor segmentation accuracy. For this reason, we propose a hybrid attention-guided network (HAG-Net). We believe that this method will improve the detection rate and segmentation of tumors in breast ultrasound images. METHODS The method is equipped with a multi-scale guidance block (MSG) for guiding the extraction of low-resolution location information. Short multi-head self-attention (S-MHSA) and a convolutional block attention module are used to capture global features and long-range dependencies. Finally, the segmentation results are obtained by fusing multi-scale contextual information. RESULTS We compared with 7 state-of-the-art methods on two publicly available datasets through five random fivefold cross-validations. The highest Dice coefficient, Jaccard index and detection rate ([Formula: see text]%, [Formula: see text]%, [Formula: see text]% and [Formula: see text]%, [Formula: see text]%, [Formula: see text]%, respectively) obtained on the two publicly available datasets (BUSI and OASBUD) demonstrate the superiority of our method. CONCLUSION HAG-Net can better utilize multi-resolution features to localize breast tumors, demonstrating excellent generalizability and applicability for breast tumor segmentation compared to other state-of-the-art methods.
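The self-attention mechanism that S-MHSA builds on can be shown for a single query: scale the query-key dot products, softmax them, and take the weighted sum of values. This is a generic sketch of scaled dot-product attention, not the paper's S-MHSA module (which adds multiple heads and a shortened token set).

```python
import math

def attention(q, k, v):
    """Single-query scaled dot-product attention over a list of tokens.
    q: query vector; k: list of key vectors; v: list of value vectors."""
    d = len(q)
    # similarity of the query to every key, scaled by sqrt(d)
    scores = [sum(qi * ki for qi, ki in zip(q, key)) / math.sqrt(d) for key in k]
    # numerically stable softmax over the scores
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]
    z = sum(exps)
    w = [e / z for e in exps]
    # output is the attention-weighted mixture of value vectors
    return [sum(wi * vec[j] for wi, vec in zip(w, v)) for j in range(len(v[0]))]
```

Because every token attends to every other token, this operation captures the long-range dependencies that stacked local convolutions lose.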
15
Qiu Y, Lin F, Chen W, Xu M. Pre-training in Medical Data: A Survey. MACHINE INTELLIGENCE RESEARCH 2023. [PMCID: PMC9942039 DOI: 10.1007/s11633-022-1382-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/23/2023]
Abstract
Medical data refers to health-related information associated with regular patient care or as part of a clinical trial program. There are many categories of such data, such as clinical imaging data, bio-signal data, electronic health records (EHR), and multi-modality medical data. With the development of deep neural networks in the last decade, the emerging pre-training paradigm has become dominant in that it has significantly improved machine learning methods’ performance in a data-limited scenario. In recent years, studies of pre-training in the medical domain have achieved significant progress. To summarize these technology advancements, this work provides a comprehensive survey of recent advances for pre-training on several major types of medical data. In this survey, we summarize a large number of related publications and the existing benchmarking in the medical domain. Especially, the survey briefly describes how some pre-training methods are applied to or developed for medical data. From a data-driven perspective, we examine the extensive use of pre-training in many medical scenarios. Moreover, based on the summary of recent pre-training studies, we identify several challenges in this field to provide insights for future studies.
Affiliation(s)
- Yixuan Qiu
- The University of Queensland, Brisbane, 4072 Australia
- Feng Lin
- The University of Queensland, Brisbane, 4072 Australia
- Weitong Chen
- The University of Adelaide, Adelaide, 5005 Australia
- Miao Xu
- The University of Queensland, Brisbane, 4072 Australia
16
Thomas C, Byra M, Marti R, Yap MH, Zwiggelaar R. BUS-Set: A benchmark for quantitative evaluation of breast ultrasound segmentation networks with public datasets. Med Phys 2023; 50:3223-3243. [PMID: 36794706 DOI: 10.1002/mp.16287] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 12/30/2022] [Accepted: 12/30/2022] [Indexed: 02/17/2023] Open
Abstract
PURPOSE BUS-Set is a reproducible benchmark for breast ultrasound (BUS) lesion segmentation, comprising publicly available images with the aim of improving future comparisons between machine learning models within the field of BUS. METHOD Four publicly available datasets were compiled, creating an overall set of 1154 BUS images from five different scanner types. Full dataset details have been provided, including clinical labels and detailed annotations. Furthermore, nine state-of-the-art deep learning architectures were selected to form the initial benchmark segmentation results, tested using five-fold cross-validation and MANOVA/ANOVA with the Tukey statistical significance test at a threshold of 0.01. Additional evaluation of these architectures explored possible training bias and the effects of lesion size and type. RESULTS Of the nine benchmarked state-of-the-art architectures, Mask R-CNN obtained the highest overall results, with the following mean metric scores: Dice score of 0.851, intersection over union of 0.786 and pixel accuracy of 0.975. MANOVA/ANOVA and Tukey test results showed Mask R-CNN to be statistically significantly better than all other benchmarked models (p < 0.01). Moreover, Mask R-CNN achieved the highest mean Dice score of 0.839 on an additional 16-image dataset that contained multiple lesions per image. Further analysis of regions of interest assessed Hamming distance, depth-to-width ratio (DWR), circularity, and elongation, showing that Mask R-CNN's segmentations maintained the most morphological features, with correlation coefficients of 0.888, 0.532, and 0.876 for DWR, circularity, and elongation, respectively. Based on the correlation coefficients, the statistical test indicated that Mask R-CNN was only significantly different from Sk-U-Net. CONCLUSIONS BUS-Set is a fully reproducible benchmark for BUS lesion segmentation obtained through the use of public datasets and GitHub.
Of the state-of-the-art convolutional neural network (CNN)-based architectures, Mask R-CNN achieved the highest performance overall; further analysis indicated that a training bias may have occurred due to the lesion size variation in the dataset. All dataset and architecture details are available at GitHub: https://github.com/corcor27/BUS-Set, which allows for a fully reproducible benchmark.
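The three segmentation metrics reported above (Dice, intersection over union, pixel accuracy) have standard definitions on binary masks, sketched here for flat 0/1 mask arrays (an illustration of the metrics themselves, not the benchmark's evaluation code):

```python
def dice_iou_pixacc(pred, truth):
    """Dice, IoU and pixel accuracy for flat binary masks of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)       # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)   # false positives
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)   # false negatives
    tn = len(pred) - tp - fp - fn                             # true negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    acc = (tp + tn) / len(pred)
    return dice, iou, acc
```

Dice always exceeds IoU for the same imperfect mask (Dice = 2·IoU / (1 + IoU)), which is why the 0.851 Dice and 0.786 IoU figures are mutually consistent.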
Affiliation(s)
- Cory Thomas
- Department of Computer Science, Aberystwyth University, Aberystwyth, UK
- Michal Byra
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland; Department of Radiology, University of California, San Diego, California, USA
- Robert Marti
- Computer Vision and Robotics Institute, University of Girona, Girona, Spain
- Moi Hoon Yap
- Department of Computing and Mathematics, Manchester Metropolitan University, Manchester, UK
- Reyer Zwiggelaar
- Department of Computer Science, Aberystwyth University, Aberystwyth, UK
17
Zhai D, Hu B, Gong X, Zou H, Luo J. ASS-GAN: Asymmetric semi-supervised GAN for breast ultrasound image segmentation. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.04.021] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
18
End-to-End Convolutional Neural Network Framework for Breast Ultrasound Analysis Using Multiple Parametric Images Generated from Radiofrequency Signals. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12104942] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Breast ultrasound (BUS) is an effective clinical modality for diagnosing breast abnormalities in women. Deep-learning techniques based on convolutional neural networks (CNN) have been widely used to analyze BUS images. However, the low quality of B-mode images owing to speckle noise and a lack of training datasets makes BUS analysis challenging in clinical applications. In this study, we proposed an end-to-end CNN framework for BUS analysis using multiple parametric images generated from radiofrequency (RF) signals. The entropy and phase images, which represent the microstructural and anatomical information, respectively, and the traditional B-mode images were used as parametric images in the time domain. In addition, the attenuation image, estimated from the frequency domain using RF signals, was used for the spectral features. Because one set of RF signals from one patient produced multiple images as CNN inputs, the proposed framework overcame the limitation of datasets in a broad sense of data augmentation while providing complementary information to compensate for the low quality of the B-mode images. The experimental results showed that the proposed architecture improved the classification accuracy and recall by 5.5% and 11.6%, respectively, compared with the traditional approach using only B-mode images. The proposed framework can be extended to various other parametric images in both the time and frequency domains using deep neural networks to improve its performance.
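The entropy parametric image mentioned above is built from a local-window statistic computed at each pixel. The sketch below shows one plausible per-window Shannon entropy over binned amplitudes; the paper's exact windowing and estimator are not specified here, so the bin count and normalization are assumptions.

```python
import math

def shannon_entropy(window, bins=8):
    """Shannon entropy (bits) of sample amplitudes in a local window -- the
    kind of per-pixel statistic used to build an entropy parametric image."""
    lo, hi = min(window), max(window)
    if hi == lo:
        return 0.0  # constant window carries no information
    counts = [0] * bins
    for v in window:
        # map amplitude into a histogram bin over [lo, hi]
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    total = len(window)
    return -sum(c / total * math.log2(c / total) for c in counts if c)
```

Sliding this statistic over the RF frame yields one parametric image; repeating with phase and attenuation statistics yields the complementary channels fed to the CNN.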
19
Yao R, Zhang Y, Wu K, Li Z, He M, Fengyue B. Quantitative assessment for characterization of breast lesion tissues using adaptively decomposed ultrasound RF images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103559] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
20
Byra M, Jarosik P, Dobruch-Sobczak K, Klimonda Z, Piotrzkowska-Wroblewska H, Litniewski J, Nowicki A. Joint segmentation and classification of breast masses based on ultrasound radio-frequency data and convolutional neural networks. ULTRASONICS 2022; 121:106682. [PMID: 35065458 DOI: 10.1016/j.ultras.2021.106682] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Revised: 12/08/2021] [Accepted: 12/30/2021] [Indexed: 06/14/2023]
Abstract
In this paper, we propose a novel deep learning method for joint classification and segmentation of breast masses based on radio-frequency (RF) ultrasound (US) data. In comparison to commonly used classification and segmentation techniques, utilizing B-mode US images, we train the network with RF data (data before envelope detection and dynamic compression), which are considered to include more information on tissue's physical properties than standard B-mode US images. Our multi-task network, based on the Y-Net architecture, can effectively process large matrices of RF data by mixing 1D and 2D convolutional filters. We use data collected from 273 breast masses to compare the performance of networks trained with RF data and US images. The multi-task model developed based on the RF data achieved good classification performance, with area under the receiver operating characteristic curve (AUC) of 0.90. The network based on the US images achieved AUC of 0.87. In the case of the segmentation, we obtained mean Dice scores of 0.64 and 0.60 for the approaches utilizing US images and RF data, respectively. Moreover, the interpretability of the networks was studied using class activation mapping technique and by filter weights visualizations.
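Dynamic (log) compression, one of the B-mode processing steps that RF-domain networks deliberately skip, can be sketched as mapping envelope amplitudes to a fixed display dynamic range. This is a generic illustration of the standard step, not the authors' pipeline; the 60 dB range is an assumed typical value.

```python
import math

def log_compress(env, dyn_range_db=60.0):
    """Map envelope amplitudes to [0, 1] display values with log compression,
    discarding amplitude detail below the chosen dynamic range."""
    peak = max(env)
    out = []
    for v in env:
        # amplitude relative to peak, in decibels
        db = 20 * math.log10(v / peak) if v > 0 else -dyn_range_db
        # linear remap of [-dyn_range_db, 0] dB onto [0, 1], clipped below
        out.append(max(0.0, 1.0 + db / dyn_range_db))
    return out
```

Everything more than `dyn_range_db` below the peak collapses to zero, which is precisely the information loss that motivates training directly on RF data.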
Affiliation(s)
- Michal Byra
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Piotr Jarosik
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Katarzyna Dobruch-Sobczak
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland; Maria Sklodowska-Curie Memorial Cancer Centre and Institute of Oncology, Warsaw, Poland
- Ziemowit Klimonda
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Jerzy Litniewski
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Andrzej Nowicki
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
21
Breast Tumor Classification Using Intratumoral Quantitative Ultrasound Descriptors. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:1633858. [PMID: 35295204 PMCID: PMC8920646 DOI: 10.1155/2022/1633858] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 02/15/2022] [Accepted: 02/23/2022] [Indexed: 12/11/2022]
Abstract
Breast cancer is a global epidemic, responsible for one of the highest mortality rates among women. Ultrasound imaging is becoming a popular tool for breast cancer screening, and quantitative ultrasound (QUS) techniques are being increasingly applied by researchers in an attempt to characterize breast tissue. Several different quantitative descriptors for breast cancer have been explored by researchers. This study proposes a breast tumor classification system using the three major types of intratumoral QUS descriptors which can be extracted from ultrasound radiofrequency (RF) data: spectral features, envelope statistics features, and texture features. A total of 16 features were extracted from ultrasound RF data across two different datasets, of which one is balanced and the other severely imbalanced. The balanced dataset contains RF data of 100 patients with breast tumors, of which 48 are benign and 52 are malignant. The imbalanced dataset contains RF data of 130 patients with breast tumors, of which 104 are benign and 26 are malignant. Holdout validation was used to split the balanced dataset into 60% training and 40% testing sets. Feature selection was applied on the training set to identify the most relevant subset for the classification of benign and malignant breast tumors, and the performance of the features was evaluated on the test set. A maximum classification accuracy of 95% and an area under the receiver operating characteristic curve (AUC) of 0.968 were obtained on the test set. The performance of the identified relevant features was further validated on the imbalanced dataset, where a hybrid resampling strategy was first utilized to create an optimal balance between benign and malignant samples. A maximum classification accuracy of 93.01%, sensitivity of 94.62%, specificity of 91.4%, and AUC of 0.966 were obtained.
The results indicate that the identified features are able to distinguish between benign and malignant breast lesions very effectively, and the combination of the features identified in this research has the potential to be a significant tool in the noninvasive rapid and accurate diagnosis of breast cancer.
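Envelope-statistics descriptors of the kind used above are simple moments of the echo envelope within the tumor region. The sketch below computes three generic examples (mean, signal-to-noise ratio, skewness); the paper's exact 16-feature set is not reproduced, so treat these as representative stand-ins.

```python
import math

def envelope_stats(env):
    """Generic envelope-statistics descriptors for a list of envelope
    amplitudes: mean, SNR (mean/std) and skewness."""
    n = len(env)
    mean = sum(env) / n
    var = sum((v - mean) ** 2 for v in env) / n
    std = math.sqrt(var)
    snr = mean / std if std else float("inf")
    skew = sum((v - mean) ** 3 for v in env) / (n * std ** 3) if std else 0.0
    return mean, snr, skew
```

Departures of these moments from the fully developed speckle baseline are what make such descriptors discriminative between benign and malignant tissue.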
22
Di X, Zhong S, Zhang Y. Saliency map-guided hierarchical dense feature aggregation framework for breast lesion classification using ultrasound image. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 215:106612. [PMID: 35033757 DOI: 10.1016/j.cmpb.2021.106612] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/18/2021] [Revised: 11/28/2021] [Accepted: 12/29/2021] [Indexed: 06/14/2023]
Abstract
Deep learning methods, especially convolutional neural networks, have advanced the breast lesion classification task using breast ultrasound (BUS) images. However, constructing a highly-accurate classification model still remains challenging due to complex pattern, relatively-low contrast and fuzzy boundary existing between lesion regions (i.e., foreground) and the surrounding tissues (i.e., background). Few studies have separated foreground and background for learning domain-specific representations, and then fused them for improving performance of models. In this paper, we propose a saliency map-guided hierarchical dense feature aggregation framework for breast lesion classification using BUS images. Specifically, we first generate saliency maps for foreground and background via super-pixel clustering and multi-scale region grouping. Then, a triple-branch network, including two feature extraction branches and a feature aggregation branch, is constructed to learn and fuse discriminative representations under the guidance of priors provided by saliency maps. In particular, two feature extraction branches take the original image and corresponding saliency map as input for extracting foreground- and background-specific representations. Subsequently, a hierarchical feature aggregation branch receives and fuses the features from different stages of two feature extraction branches, for lesion classification in a task-oriented manner. The proposed model was evaluated on three datasets using 5-fold cross validation, and experimental results have demonstrated that it outperforms several state-of-the-art deep learning methods on breast lesion diagnosis using BUS images.
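A background prior is the simplest way to see how a saliency map can separate foreground from surrounding tissue: score each pixel by its intensity distance from the image border, which is assumed to be background. This is a deliberately crude illustration; the paper's actual maps come from super-pixel clustering and multi-scale region grouping, not this border heuristic.

```python
def border_contrast_saliency(img, w, h):
    """Crude background-prior saliency for a flat grayscale image of size
    w x h: distance of each pixel's intensity from the mean border intensity,
    normalized to [0, 1]."""
    border = [img[y * w + x] for y in range(h) for x in range(w)
              if x in (0, w - 1) or y in (0, h - 1)]
    bg = sum(border) / len(border)
    sal = [abs(v - bg) for v in img]
    peak = max(sal) or 1.0  # avoid division by zero on constant images
    return [s / peak for s in sal]
```

Such a map (and its complement) would then be concatenated with the original image as input to the foreground- and background-specific branches.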
Affiliation(s)
- Xiaohui Di
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Shengzhou Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Yu Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
23
Ning Z, Zhong S, Feng Q, Chen W, Zhang Y. SMU-Net: Saliency-Guided Morphology-Aware U-Net for Breast Lesion Segmentation in Ultrasound Image. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:476-490. [PMID: 34582349 DOI: 10.1109/tmi.2021.3116087] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Deep learning methods, especially convolutional neural networks, have been successfully applied to lesion segmentation in breast ultrasound (BUS) images. However, pattern complexity and intensity similarity between the surrounding tissues (i.e., background) and lesion regions (i.e., foreground) bring challenges for lesion segmentation. Although rich texture information is contained in the background, very few methods have tried to explore and exploit background-salient representations for assisting foreground segmentation. Additionally, other characteristics of BUS images, i.e., 1) low-contrast appearance and blurry boundaries, and 2) significant shape and position variation of lesions, also increase the difficulty of accurate lesion segmentation. In this paper, we present a saliency-guided morphology-aware U-Net (SMU-Net) for lesion segmentation in BUS images. The SMU-Net is composed of a main network with an additional middle stream and an auxiliary network. Specifically, we first propose the generation of saliency maps, incorporating both low-level and high-level image structures, for foreground and background. These saliency maps are then employed to guide the main network and auxiliary network in learning foreground-salient and background-salient representations, respectively. Furthermore, we devise an additional middle stream consisting of background-assisted fusion, shape-aware, edge-aware and position-aware units. This stream receives coarse-to-fine representations from the main and auxiliary networks, efficiently fusing the foreground-salient and background-salient features and enhancing the network's ability to learn morphological information. Extensive experiments on five datasets demonstrate higher performance and superior robustness to dataset scale compared with several state-of-the-art deep learning approaches for breast lesion segmentation in ultrasound images.
24
Meraj T, Alosaimi W, Alouffi B, Rauf HT, Kumar SA, Damaševičius R, Alyami H. A quantization assisted U-Net study with ICA and deep features fusion for breast cancer identification using ultrasonic data. PeerJ Comput Sci 2021; 7:e805. [PMID: 35036531 PMCID: PMC8725669 DOI: 10.7717/peerj-cs.805] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Accepted: 11/12/2021] [Indexed: 06/14/2023]
Abstract
Breast cancer is one of the leading causes of death in women worldwide; the rapid increase in breast cancer has brought about more accessible diagnosis resources. The ultrasonic breast cancer modality for diagnosis is relatively cost-effective and valuable. Lesion isolation in ultrasonic images is a challenging task due to noise and intensity similarity between lesions and surrounding tissue. Accurate detection of breast lesions using ultrasonic breast cancer images can reduce death rates. In this research, a quantization-assisted U-Net approach for segmentation of breast lesions is proposed. It contains two steps for segmentation: (1) U-Net and (2) quantization. The quantization assists the U-Net-based segmentation in order to isolate exact lesion areas from sonography images. The Independent Component Analysis (ICA) method then uses the isolated lesions to extract features, which are fused with deep automatic features. Public ultrasonic-modality-based datasets such as the Breast Ultrasound Images Dataset (BUSI) and the Open Access Database of Raw Ultrasonic Signals (OASBUD) are used for evaluation and comparison. The same features were extracted from the OASBUD data; however, classification was done after feature regularization using the lasso method. The obtained results allow us to propose a computer-aided diagnosis (CAD) system for breast cancer identification using ultrasonic modalities.
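One plausible reading of the quantization step, not detailed in the abstract, is to discretize the network's probability map into a few levels and keep only the top level as the lesion mask. The sketch below is a hypothetical reconstruction of that idea; the paper's actual quantization scheme may differ.

```python
def quantize_mask(probs, levels=4):
    """Uniformly quantize a [0, 1] probability map into `levels` bins, then
    keep only the top bin as the lesion mask -- a rough sketch of
    quantization-assisted lesion isolation."""
    q = [min(int(p * levels), levels - 1) for p in probs]
    return [1 if v == levels - 1 else 0 for v in q]
```

Relative to a single 0.5 threshold, keeping only the top quantization level retains just the most confident pixels, tightening the isolated lesion area.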
Affiliation(s)
- Talha Meraj
- Department of Computer Science, COMSATS University Islamabad-Wah Campus, Wah Cantt, Pakistan
- Wael Alosaimi
- Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Bader Alouffi
- Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Hafiz Tayyab Rauf
- Department of Computer Science, Faculty of Engineering & Informatics, University of Bradford, Bradford, United Kingdom
- Swarn Avinash Kumar
- Department of Information Technology, Indian Institute of Information Technology, Uttar Pradesh, Jhalwa, Prayagraj, India
- Hashem Alyami
- Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
25
Gare GR, Li J, Joshi R, Magar R, Vaze MP, Yousefpour M, Rodriguez RL, Galeotti JM. W-Net: Dense and diagnostic semantic segmentation of subcutaneous and breast tissue in ultrasound images by incorporating ultrasound RF waveform data. Med Image Anal 2021; 76:102326. [PMID: 34936967 DOI: 10.1016/j.media.2021.102326] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2020] [Revised: 11/25/2021] [Accepted: 11/29/2021] [Indexed: 12/13/2022]
Abstract
We study the use of raw ultrasound waveforms, often referred to as the "Radio Frequency" (RF) data, for the semantic segmentation of ultrasound scans to carry out dense and diagnostic labeling. We present W-Net, a novel Convolution Neural Network (CNN) framework that employs the raw ultrasound waveforms in addition to the grey ultrasound image to semantically segment and label tissues for anatomical, pathological, or other diagnostic purposes. To the best of our knowledge, this is also the first deep-learning or CNN approach for segmentation that analyzes ultrasound raw RF data along with the grey image. We chose subcutaneous tissue (SubQ) segmentation as our initial clinical goal for dense segmentation since it has diverse intermixed tissues, is challenging to segment, and is an underrepresented research area. SubQ potential applications include plastic surgery, adipose stem-cell harvesting, lymphatic monitoring, and possibly detection/treatment of certain types of tumors. Unlike prior work, we seek to label every pixel in the image, without the use of a background class. A custom dataset consisting of hand-labeled images by an expert clinician and trainees are used for the experimentation, currently labeled into the following categories: skin, fat, fat fascia/stroma, muscle, and muscle fascia. We compared W-Net and attention variant of W-Net (AW-Net) with U-Net and Attention U-Net (AU-Net). Our novel W-Net's RF-Waveform encoding architecture outperformed regular U-Net and AU-Net, achieving the best mIoU accuracy (averaged across all tissue classes). We study the impact of RF data on dense labeling of the SubQ region, which is followed by the analyses of the generalization capability of the networks to patients and analysis on the SubQ tissue classes, determining that fascia tissues, especially muscle fascia in particular, are the most difficult anatomic class to recognize for both humans and AI algorithms. 
We present diagnostic semantic segmentation, i.e., semantic segmentation carried out for the purpose of direct diagnostic pixel labeling, and apply it to a breast tumor detection task on a publicly available dataset to segment pixels into malignant-tumor, benign-tumor, and background-tissue classes. Using the segmented image, we diagnose the patient by classifying the breast lesion as either benign or malignant. We demonstrate the diagnostic capability of RF data with the use of W-Net, which achieves the best segmentation scores across all classes.
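For reference, the mIoU metric reported above (intersection over union averaged across the tissue classes) can be sketched in a few lines of NumPy; the function name and the convention of skipping classes absent from both masks are illustrative choices, not taken from the paper's code:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes for dense label maps.

    pred, target: integer arrays of per-pixel class labels.
    """
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:          # class absent in both maps: skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

Averaging per class (rather than pooling all pixels) keeps small classes such as fascia from being drowned out by large ones such as fat.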
Affiliation(s)
- Jiayuan Li
- Carnegie Mellon University, Pittsburgh PA 15213, USA
- Rohan Joshi
- Carnegie Mellon University, Pittsburgh PA 15213, USA
- Mrunal Prashant Vaze
- Carnegie Mellon University, Pittsburgh PA 15213, USA; Simple Origin Inc, Pittsburgh, PA 15206, USA
- Michael Yousefpour
- Carnegie Mellon University, Pittsburgh PA 15213, USA; University of Pittsburgh Medical Center, Pittsburgh PA 15260, USA
26
Ilesanmi AE, Chaumrattanakul U, Makhanov SS. Methods for the segmentation and classification of breast ultrasound images: a review. J Ultrasound 2021; 24:367-382. [PMID: 33428123 PMCID: PMC8572242 DOI: 10.1007/s40477-020-00557-5]
Abstract
PURPOSE Breast ultrasound (BUS) is one of the imaging modalities used in the diagnosis and treatment of breast cancer. However, the segmentation and classification of BUS images is a challenging task. In recent years, several methods for segmenting and classifying BUS images have been studied. These methods use BUS datasets for evaluation. In addition, semantic segmentation algorithms have gained prominence for segmenting medical images. METHODS In this paper, we examined different methods for segmenting and classifying BUS images. Popular datasets used to evaluate BUS images and semantic segmentation algorithms were examined. Several segmentation and classification papers were selected for analysis and review. Both conventional and semantic methods for BUS segmentation were reviewed. RESULTS Commonly used BUS segmentation methods were summarized in a graphical representation, and the remaining conventional segmentation methods were also described. CONCLUSIONS We presented a review of segmentation and classification methods for tumours detected in BUS images, covering both early and recent studies.
Affiliation(s)
- Ademola E. Ilesanmi
- School of ICT, Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, 12000 Thailand
- Stanislav S. Makhanov
- School of ICT, Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, 12000 Thailand
27
Jang YH, Kim W, Kim J, Woo KS, Lee HJ, Jeon JW, Shim SK, Han J, Hwang CS. Time-varying data processing with nonvolatile memristor-based temporal kernel. Nat Commun 2021; 12:5727. [PMID: 34593800 PMCID: PMC8484437 DOI: 10.1038/s41467-021-25925-5]
Abstract
Recent advances in physical reservoir computing, a type of temporal kernel, have made it possible to perform complicated timing-related tasks using a linear classifier. However, the fixed reservoir dynamics in previous studies have limited the range of applications. In this study, temporal kernel computing was implemented with a physical kernel consisting of a W/HfO2/TiN memristor, a capacitor, and a resistor, in which the kernel dynamics could be arbitrarily controlled by changing the circuit parameters. After the temporal kernel's ability to identify static MNIST data was proven, the system was adopted to recognize sequential data, ultrasound (malignancy of lesions) and electrocardiogram (arrhythmia), with significantly different time constants (10⁻⁷ s vs. 1 s). The suggested system feasibly performed both tasks by simply varying the capacitance and resistance. These functionalities demonstrate the high adaptability of the present temporal kernel compared to previous ones.
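As a very rough software analogy (deliberately ignoring the memristor's nonvolatile nonlinearity), the tunable kernel dynamics can be pictured as a first-order RC leaky integrator whose time constant tau = RC is set by the circuit parameters, which is what makes the same hardware usable for inputs on very different time scales:

```python
def rc_kernel_states(inputs, r, c, dt):
    """Forward-Euler simulation of a first-order RC node: the state
    leaks toward each input sample with time constant tau = r * c.
    Changing r or c retunes the temporal kernel, loosely mirroring
    how the physical circuit adapts to different signal time scales."""
    tau = r * c
    x, states = 0.0, []
    for u in inputs:
        x += dt / tau * (u - x)   # first-order leaky integration
        states.append(x)
    return states
```

With a constant input the state converges to that input; a smaller tau tracks fast signals (ultrasound-like), a larger tau integrates slow ones (ECG-like).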
Affiliation(s)
- Yoon Ho Jang, Woohyun Kim, Jihun Kim, Kyung Seok Woo, Hyun Jae Lee, Jeong Woo Jeon, Sung Keun Shim, Janguk Han, Cheol Seong Hwang
- Department of Materials Science and Engineering, College of Engineering, Seoul National University, Seoul, 08826, Republic of Korea
- Inter-university Semiconductor Research Center, Seoul National University, Seoul, 08826, Republic of Korea
28
Byra M, Dobruch-Sobczak K, Klimonda Z, Piotrzkowska-Wroblewska H, Litniewski J. Early Prediction of Response to Neoadjuvant Chemotherapy in Breast Cancer Sonography Using Siamese Convolutional Neural Networks. IEEE J Biomed Health Inform 2021; 25:797-805. [PMID: 32749986 DOI: 10.1109/jbhi.2020.3008040]
Abstract
Early prediction of response to neoadjuvant chemotherapy (NAC) in breast cancer is crucial for guiding therapy decisions. In this work, we propose a deep-learning-based approach for early NAC response prediction in ultrasound (US) imaging. We used transfer learning with deep convolutional neural networks (CNNs) to develop the response prediction models, and examined the usefulness of two transfer learning techniques. First, a CNN pre-trained on the ImageNet dataset was utilized. Second, we applied double transfer learning: the CNN pre-trained on the ImageNet dataset was additionally fine-tuned with breast mass US images to differentiate malignant and benign lesions. Two prediction tasks were investigated. First, an L1-regularized logistic regression prediction model was developed based on generic neural features extracted from US images collected before the chemotherapy (a priori prediction). Second, Siamese CNNs were used to quantify differences between US images collected before the treatment and after the first and second courses of NAC. The proposed methods were evaluated using US data collected from 39 tumors. The better-performing deep learning models achieved areas under the receiver operating characteristic curve of 0.797 and 0.847 for the a priori prediction and the Siamese model, respectively. The proposed approach was compared with a method based on handcrafted morphological features. Our study demonstrates the feasibility of using transfer learning with CNNs for NAC response prediction in US imaging.
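A minimal sketch of the Siamese comparison step, assuming the twin CNN branches have already produced feature vectors for the pre- and post-course images; the absolute-difference fusion and logistic readout used here are common choices for Siamese models, not necessarily the paper's exact design:

```python
import numpy as np

def siamese_change_score(feat_before, feat_after, w, b=0.0):
    """Score treatment response from the change between two embeddings.

    feat_before / feat_after: feature vectors produced by the shared
    (weight-tied) CNN branch for the pre- and post-course US images.
    w, b: weights of a linear (logistic) readout on the change vector.
    Returns a probability-like score in (0, 1).
    """
    diff = np.abs(np.asarray(feat_after) - np.asarray(feat_before))
    return 1.0 / (1.0 + np.exp(-(diff @ w + b)))   # sigmoid readout
```

Because the two branches share weights, identical images yield a zero change vector and the score collapses to sigmoid(b), i.e. maximal uncertainty for b = 0.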
29
Cross-Tissue/Organ Transfer Learning for the Segmentation of Ultrasound Images Using Deep Residual U-Net. J Med Biol Eng 2021. [DOI: 10.1007/s40846-020-00585-w]
30
Deep-Learning-Based Computer-Aided Systems for Breast Cancer Imaging: A Critical Review. Appl Sci (Basel) 2020. [DOI: 10.3390/app10228298]
Abstract
This paper provides a critical review of the literature on deep learning applications in breast tumor diagnosis using ultrasound and mammography images. It also summarizes recent advances in computer-aided diagnosis/detection (CAD) systems, which use new deep learning methods to automatically recognize breast images and improve the accuracy of diagnoses made by radiologists. The review is based on literature published in the past decade (January 2010 to January 2020): around 250 research articles were retrieved, and after an eligibility screening, 59 articles were examined in detail. The main findings in the classification process revealed that new DL-CAD methods are useful and effective screening tools for breast cancer, reducing the need for manual feature extraction. The breast tumor research community can use this survey as a basis for their current and future studies.
31
Gómez-Flores W, Coelho de Albuquerque Pereira W. A comparative study of pre-trained convolutional neural networks for semantic segmentation of breast tumors in ultrasound. Comput Biol Med 2020; 126:104036. [PMID: 33059238 DOI: 10.1016/j.compbiomed.2020.104036]
Abstract
The automatic segmentation of breast tumors in ultrasound (BUS) has recently been addressed using convolutional neural networks (CNN). These CNN-based approaches generally modify a previously proposed CNN architecture or design a new architecture using CNN ensembles. Although these methods have reported satisfactory results, the trained CNN architectures are often unavailable for reproducibility purposes. Moreover, these methods commonly learn from small BUS datasets with particular properties, which limits generalization to new cases. This paper evaluates four public CNN-based semantic segmentation models developed by the computer vision community: (1) Fully Convolutional Network (FCN) with the AlexNet network, (2) the U-Net network, (3) SegNet using the VGG16 and VGG19 networks, and (4) DeepLabV3+ using the ResNet18, ResNet50, MobileNet-V2, and Xception networks. Using transfer learning, these CNNs are fine-tuned to segment BUS images into normal and tumoral pixels. The goal is to select a potential CNN-based segmentation model to be further used in computer-aided diagnosis (CAD) systems. The main significance of this study is the comparison of eight well-established CNN architectures on a more extensive BUS dataset than those used by approaches currently found in the literature. More than 3000 BUS images acquired from seven US machine models are used for training and validation. The F1-score (F1s) and the Intersection over Union (IoU) quantify the segmentation performance. The segmentation models based on SegNet and DeepLabV3+ obtain the best results, with F1s>0.90 and IoU>0.81. For U-Net, the segmentation performance is F1s=0.89 and IoU=0.80, whereas FCN-AlexNet attains the lowest results, with F1s=0.84 and IoU=0.73. In particular, ResNet18 obtains F1s=0.905 and IoU=0.827 and requires the least training time among the SegNet and DeepLabV3+ networks.
Hence, ResNet18 is a potential candidate for implementing fully automated end-to-end CAD systems. The CNN models generated in this study are available to researchers at https://github.com/wgomezf/CNN-BUS-segment, to enable fair comparison with other CNN-based segmentation approaches for BUS images.
Affiliation(s)
- Wilfrido Gómez-Flores
- Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Unidad Tamaulipas, Ciudad Victoria, Tamaulipas, Mexico
32
Korczak I, Romowicz A, Gambin B, Pałko T, Kruglenko E, Dobruch-Sobczak K. Numerical prediction of breast skin temperature based on thermographic and ultrasonographic data in healthy and cancerous breasts. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.10.007]
33
Byra M, Jarosik P, Szubert A, Galperin M, Ojeda-Fournier H, Olson L, O'Boyle M, Comstock C, Andre M. Breast mass segmentation in ultrasound with selective kernel U-Net convolutional neural network. Biomed Signal Process Control 2020; 61:102027. [PMID: 34703489 PMCID: PMC8545275 DOI: 10.1016/j.bspc.2020.102027]
Abstract
In this work, we propose a deep learning method for breast mass segmentation in ultrasound (US). Variations in breast mass size and image characteristics make automatic segmentation difficult. To address this issue, we developed a selective kernel (SK) U-Net convolutional neural network. The aim of the SKs was to adjust the network's receptive fields via an attention mechanism and to fuse feature maps extracted with dilated and conventional convolutions. The proposed method was developed and evaluated using US images collected from 882 breast masses. Moreover, we used three datasets of US images collected at different medical centers for testing (893 US images). On our test set of 150 US images, the SK-U-Net achieved a mean Dice score of 0.826 and outperformed the regular U-Net, which achieved a Dice score of 0.778. When evaluated on the three separate datasets, the proposed method yielded mean Dice scores ranging from 0.646 to 0.780. Additional fine-tuning of our better-performing model with data collected at different centers improved mean Dice scores by ~6%. SK-U-Net utilized both dilated and regular convolutions to process US images. We found a strong correlation (Spearman's rank coefficient of 0.7) between the utilization of dilated convolutions and breast mass size in the network's expansion path. Our study shows the usefulness of deep learning methods for breast mass segmentation. The SK-U-Net implementation and pre-trained weights can be found at github.com/mbyr/bus_seg.
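The selective-kernel fusion described above, a softmax-weighted blend of feature maps from a conventional and a dilated branch, can be illustrated with a toy NumPy sketch; the array shapes and the externally supplied per-channel gating logits are simplifying assumptions (the published SK-U-Net computes the logits with a small gating sub-network):

```python
import numpy as np

def sk_fuse(feat_conv, feat_dilated, logit_conv, logit_dilated):
    """Blend two branch feature maps with per-channel attention weights.

    feat_*: feature maps of shape (channels, height, width).
    logit_*: one attention logit per channel; softmax over the two
    branches turns them into blending weights that sum to 1.
    """
    logits = np.stack([np.asarray(logit_conv), np.asarray(logit_dilated)])
    weights = np.exp(logits) / np.exp(logits).sum(axis=0)  # softmax over branches
    return (weights[0][:, None, None] * feat_conv
            + weights[1][:, None, None] * feat_dilated)
```

Pushing weight toward the dilated branch enlarges the effective receptive field per channel, which is the mechanism the abstract links to breast mass size.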
Affiliation(s)
- Michal Byra
- Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Department of Radiology, University of California, San Diego, USA
- Piotr Jarosik
- Department of Information and Computational Science, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Aleksandra Szubert
- Maria Sklodowska-Curie Memorial Cancer Centre and Institute of Oncology, Warsaw, Poland
- Linda Olson
- Department of Radiology, University of California, San Diego, USA
- Mary O'Boyle
- Department of Radiology, University of California, San Diego, USA
- Michael Andre
- Department of Radiology, University of California, San Diego, USA
34
Jarosik P, Klimonda Z, Lewandowski M, Byra M. Breast lesion classification based on ultrasonic radio-frequency signals using convolutional neural networks. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.04.002]
35
Abadi E, Segars WP, Tsui BMW, Kinahan PE, Bottenus N, Frangi AF, Maidment A, Lo J, Samei E. Virtual clinical trials in medical imaging: a review. J Med Imaging (Bellingham) 2020; 7:042805. [PMID: 32313817 PMCID: PMC7148435 DOI: 10.1117/1.jmi.7.4.042805]
Abstract
The accelerating complexity and variety of medical imaging devices and methods have outpaced the ability to evaluate and optimize their design and clinical use. This is a significant and increasing challenge for both scientific investigations and clinical applications. Evaluations would ideally be done using clinical imaging trials. These experiments, however, are often not practical due to ethical limitations, expense, time requirements, or lack of ground truth. Virtual clinical trials (VCTs), also known as in silico or virtual imaging trials, offer an alternative means to evaluate medical imaging technologies efficiently by simulating the patients, imaging systems, and interpreters. The field of VCTs has advanced steadily over the past decades in multiple areas. We summarize the major developments and current status of the field of VCTs in medical imaging. We review the core components of a VCT: computational phantoms, simulators of different imaging modalities, and interpretation models. We also highlight some of the applications of VCTs across various imaging modalities.
Affiliation(s)
- Ehsan Abadi
- Duke University, Department of Radiology, Durham, North Carolina, United States
- William P. Segars
- Duke University, Department of Radiology, Durham, North Carolina, United States
- Benjamin M. W. Tsui
- Johns Hopkins University, Department of Radiology, Baltimore, Maryland, United States
- Paul E. Kinahan
- University of Washington, Department of Radiology, Seattle, Washington, United States
- Nick Bottenus
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- University of Colorado Boulder, Department of Mechanical Engineering, Boulder, Colorado, United States
- Alejandro F. Frangi
- University of Leeds, School of Computing, Leeds, United Kingdom
- University of Leeds, School of Medicine, Leeds, United Kingdom
- Andrew Maidment
- University of Pennsylvania, Department of Radiology, Philadelphia, Pennsylvania, United States
- Joseph Lo
- Duke University, Department of Radiology, Durham, North Carolina, United States
- Ehsan Samei
- Duke University, Department of Radiology, Durham, North Carolina, United States
36
Identification of Breast Malignancy by Marker-Controlled Watershed Transformation and Hybrid Feature Set for Healthcare. Appl Sci (Basel) 2020. [DOI: 10.3390/app10061900]
Abstract
Breast cancer is a highly prevalent disease in females that may lead to mortality in severe cases. Mortality can be reduced if breast cancer is diagnosed at an early stage. The focus of this study is to detect breast malignancy through computer-aided diagnosis (CADx). In the first phase of this work, the Hilbert transform is employed to reconstruct B-mode images from the raw data, followed by marker-controlled watershed transformation to segment the lesion. Methods based only on texture analysis are quite sensitive to speckle noise and other artifacts. Therefore, a hybrid feature set is developed from shape-based and texture features extracted from the breast lesion. Decision tree, k-nearest neighbor (KNN), and an ensemble decision tree model via random under-sampling with Boost (RUSBoost) are utilized to segregate cancerous lesions from benign ones. The proposed technique is tested on OASBUD (Open Access Series of Breast Ultrasonic Data) and breast ultrasound (BUS) images collected at Baheya Hospital, Egypt (BHE). The OASBUD dataset contains raw ultrasound data obtained from 100 patients, with 52 malignant and 48 benign lesions. The dataset collected at BHE contains 210 malignant and 437 benign images. The proposed system achieved a promising accuracy of 97% with a confidence interval (CI) of 91.48% to 99.38% for OASBUD, and 96.6% accuracy with a CI of 94.90% to 97.86% for the BHE dataset using the ensemble method.
37
Classification of Benign and Malignant Breast Tumors Using H-Scan Ultrasound Imaging. Diagnostics (Basel) 2019; 9:182. [PMID: 31717382 PMCID: PMC6963514 DOI: 10.3390/diagnostics9040182]
Abstract
Breast cancer is one of the most common cancers among women worldwide. Ultrasound imaging has been widely used in the detection and diagnosis of breast tumors. However, due to factors such as limited spatial resolution and speckle noise, classification of benign and malignant breast tumors using conventional B-mode ultrasound remains a challenging task. H-scan is a new ultrasound technique that images the relative size of acoustic scatterers. However, the feasibility of H-scan ultrasound imaging for classifying benign and malignant breast tumors had not been investigated. In this paper, we propose a new method based on H-scan ultrasound imaging to classify benign and malignant breast tumors. Backscattered ultrasound radiofrequency signals of 100 breast tumors were used (48 benign and 52 malignant cases). H-scan ultrasound images were constructed from the radiofrequency signals by matched filtering with Gaussian-weighted Hermite polynomials. Experimental results showed that benign breast tumors had more red components, while malignant breast tumors had more blue components, in H-scan ultrasound images. There were significant differences between the RGB channels of H-scan ultrasound images of benign and malignant breast tumors. We conclude that H-scan ultrasound imaging can be used as a new method for classifying benign and malignant breast tumors.
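The matched-filtering step of H-scan can be sketched with NumPy's Hermite polynomial utilities; the Gaussian width, time axis, unit-energy normalization, and the choice of orders below are illustrative assumptions rather than the study's exact settings:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def gh_kernel(order, t):
    """Gaussian-weighted (physicists') Hermite polynomial GH_n(t),
    normalized to unit energy, used as an H-scan matched filter."""
    coeffs = np.zeros(order + 1)
    coeffs[order] = 1.0                      # select H_n
    k = hermval(t, coeffs) * np.exp(-t ** 2)
    return k / np.sqrt(np.sum(k ** 2))

def hscan_channel(rf_line, order, t=np.linspace(-3, 3, 61)):
    """Matched-filter one RF line with GH_n; in H-scan the envelope of
    a low-order output drives the red channel (larger scatterers) and
    a high-order output the blue channel (smaller scatterers)."""
    return np.convolve(rf_line, gh_kernel(order, t), mode="same")
```

Comparing the relative energy of the low- and high-order channel outputs per pixel is what produces the red/blue coloring the abstract refers to.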
38
Steifer T, Lewandowski M. Ultrasound tissue characterization based on the Lempel–Ziv complexity with application to breast lesion classification. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.02.020]
39
Santos CAN, Mascarenhas NDA. Patch similarity in ultrasound images with hypothesis testing and stochastic distances. Comput Med Imaging Graph 2019; 74:37-48. [PMID: 30978595 DOI: 10.1016/j.compmedimag.2019.03.001]
Abstract
Patch-based techniques have been widely applied to process ultrasound (US) images, with applications in fields such as denoising, segmentation, and registration. An important aspect of the performance of these techniques is how the similarity between patches is measured. While it is usual to base the similarity on the Euclidean distance when processing images corrupted by additive Gaussian noise, finding measures suited to the multiplicative nature of speckle in US images remains an open research problem. In this work, we propose new stochastic distances based on the statistical characteristics of speckle in US. Additionally, we derive statistical measures to compose hypothesis tests that allow a quantitative decision on the patch similarity of US images. Good results in denoising, segmentation, and similar-patch selection experiments confirm the potential of the proposed measures.
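As a generic illustration of a stochastic patch distance (not the specific speckle-model distances derived in the paper), a symmetrized Kullback–Leibler divergence between empirical intensity histograms can be computed as follows:

```python
import numpy as np

def symmetric_kl(patch_a, patch_b, bins=32, eps=1e-12):
    """Symmetrized Kullback-Leibler divergence between the empirical
    intensity histograms of two patches: 0 for identical histograms,
    growing as the distributions diverge."""
    lo = min(patch_a.min(), patch_b.min())
    hi = max(patch_a.max(), patch_b.max())
    p, _ = np.histogram(patch_a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(patch_b, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps        # eps guards the log against zero bins
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

Unlike the Euclidean distance, such distribution-level distances compare the statistics of the two patches, which is the property the paper exploits for multiplicative speckle.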
Affiliation(s)
- Cid A N Santos
- Federal University of São Carlos, Washington Luís Highway, km 235, PO Box 676, São Carlos, Brazil
- Nelson D A Mascarenhas
- Federal University of São Carlos, Washington Luís Highway, km 235, PO Box 676, São Carlos, Brazil; Centro Universitário Campo Limpo Paulista, Guatemala Street, 167, Campo Limpo Paulista, Brazil
40
Byra M, Galperin M, Ojeda-Fournier H, Olson L, O'Boyle M, Comstock C, Andre M. Breast mass classification in sonography with transfer learning using a deep convolutional neural network and color conversion. Med Phys 2019; 46:746-755. [PMID: 30589947 DOI: 10.1002/mp.13361]
Affiliation(s)
- Michal Byra
- Department of Radiology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106 Warsaw, Poland
- Haydee Ojeda-Fournier
- Department of Radiology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Linda Olson
- Department of Radiology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Mary O'Boyle
- Department of Radiology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Michael Andre
- Department of Radiology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
41
Discriminant analysis of neural style representations for breast lesion classification in ultrasound. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2018.05.003]