1
Gómez-Flores W, Gregorio-Calas MJ, Coelho de Albuquerque Pereira W. BUS-BRA: A breast ultrasound dataset for assessing computer-aided diagnosis systems. Med Phys 2024; 51:3110-3123. [PMID: 37937827] [DOI: 10.1002/mp.16812]
Abstract
PURPOSE Computer-aided diagnosis (CAD) systems for breast ultrasound (BUS) aim to increase the efficiency and effectiveness of breast screening by helping specialists detect and classify breast lesions. CAD system development requires a set of annotated images, including lesion segmentations, biopsy results specifying benign and malignant cases, and BI-RADS categories indicating the likelihood of malignancy. Moreover, standardized partitions of training, validation, and test sets promote reproducibility and fair comparisons between different approaches. We therefore present a publicly available BUS dataset whose novelty is the substantial increase in cases with the above-mentioned annotations and the inclusion of standardized partitions for objectively assessing and comparing CAD systems. ACQUISITION AND VALIDATION METHODS The BUS dataset comprises 1875 anonymized images from 1064 female patients acquired with four ultrasound scanners during systematic studies at the National Institute of Cancer (Rio de Janeiro, Brazil). The dataset includes biopsy-proven tumors divided into 722 benign and 342 malignant cases. A senior ultrasonographer performed BI-RADS assessments in categories 2 to 5 and manually outlined the breast lesions to obtain ground-truth segmentations. In addition, 5- and 10-fold cross-validation partitions are provided to standardize the training and test sets used to evaluate and reproduce CAD systems. Finally, to validate the utility of the BUS dataset, an evaluation framework is implemented to assess the performance of deep neural networks in segmenting and classifying breast lesions. DATA FORMAT AND USAGE NOTES The BUS dataset is publicly available for academic and research purposes through an open-access repository under the name BUS-BRA: A Breast Ultrasound Dataset for Assessing CAD Systems. BUS images and reference segmentations are stored as Portable Network Graphics (PNG) files, and the dataset information is stored in separate comma-separated values (CSV) files. POTENTIAL APPLICATIONS The BUS-BRA dataset can be used to develop and assess artificial intelligence-based lesion detection and segmentation methods, as well as the classification of BUS images into pathological classes and BI-RADS categories. Other potential applications include developing image processing methods, such as despeckle filtering and contrast enhancement to improve image quality, and feature engineering for image description.
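Standardized patient-level partitions like the cross-validation folds shipped with BUS-BRA can be reproduced in a few lines. The sketch below is illustrative only (the function name and toy data are not from the dataset); it groups cases by patient so that no patient's images appear in both the training and test sides of a split:

```python
import random
from collections import defaultdict

def patient_grouped_folds(case_to_patient, k=5, seed=0):
    """Assign cases to k cross-validation folds such that all cases
    from the same patient land in the same fold, avoiding
    patient-level leakage between training and test sets."""
    patients = sorted(set(case_to_patient.values()))
    rng = random.Random(seed)
    rng.shuffle(patients)
    patient_fold = {p: i % k for i, p in enumerate(patients)}
    folds = defaultdict(list)
    for case, patient in case_to_patient.items():
        folds[patient_fold[patient]].append(case)
    return dict(folds)

# Toy example: 6 images from 4 patients (hypothetical identifiers)
cases = {"img1": "P1", "img2": "P1", "img3": "P2",
         "img4": "P3", "img5": "P4", "img6": "P4"}
folds = patient_grouped_folds(cases, k=2)
```

Because multiple BUS images can come from one patient, splitting at the image level would let a network see near-duplicates of its test cases during training; grouping by patient, as above, is the standard precaution.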
Affiliation(s)
- Wilfrido Gómez-Flores
- Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Tamaulipas, Mexico
2
Gómez-Flores W, Pereira WCDA. Gray-to-color image conversion in the classification of breast lesions on ultrasound using pre-trained deep neural networks. Med Biol Eng Comput 2023; 61:3193-3207. [PMID: 37713158] [DOI: 10.1007/s11517-023-02928-6]
Abstract
Breast ultrasound (BUS) image classification into benign and malignant classes is often based on pre-trained convolutional neural networks (CNNs) to cope with small training sets. However, BUS images are single-channel gray-level images, whereas pre-trained CNNs were learned from color images with red, green, and blue (RGB) components. Thus, a gray-to-color conversion method is applied to fit the BUS image to the CNN's input layer. This paper evaluates 13 gray-to-color conversion methods proposed in the literature, which follow three strategies: replicating the gray-level image to all RGB channels, decomposing the image to enhance inherent information such as the lesion's texture and morphology, and learning a matching layer. In addition, we introduce an image decomposition method based on the lesion's structural information to describe its inner and outer complexity. These gray-to-color conversion methods are evaluated under the same experimental framework using a pre-trained ResNet-18 architecture and a BUS dataset with more than 3000 images. The Matthews correlation coefficient (MCC), sensitivity (SEN), and specificity (SPE) measure the classification performance. The experimental results show that decomposition methods outperform replication and learning-based methods when using information from the lesion's binary mask (obtained from a segmentation method), reaching an MCC value greater than 0.70 and a specificity up to 0.92, although the sensitivity is about 0.80. In contrast, the proposed method balances the trade-off between sensitivity and specificity better, obtaining about 0.88 for both indices and an MCC of 0.73. This study contributes to the objective assessment of different gray-to-color conversion approaches for classifying breast lesions, revealing that mask-based decomposition methods improve classification performance. Moreover, the proposed method based on structural information improves the sensitivity, yielding more reliable classification results on malignant cases and potentially benefiting clinical practice.
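The simplest of the replication strategies described above — copying the single gray channel into all three RGB channels so a color-trained CNN accepts the image — can be sketched in a few lines. This is a minimal, dependency-free illustration of the idea, not the paper's implementation:

```python
def gray_to_rgb(gray):
    """Replicate a 2-D gray-level image (nested lists of pixel
    intensities) into three identical RGB channels, producing the
    H x W x 3 layout a color-trained CNN's input layer expects."""
    return [[[v, v, v] for v in row] for row in gray]

img = [[0, 128], [255, 64]]   # a tiny 2 x 2 gray-level "image"
rgb = gray_to_rgb(img)        # rgb[0][1] == [128, 128, 128]
```

With NumPy the same operation is `np.stack([gray] * 3, axis=-1)`; either way, replication adds no new information, which is why the decomposition methods evaluated in the paper place different derived images in each channel instead.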
Affiliation(s)
- Wilfrido Gómez-Flores
- Centro de Investigación y de Estudios Avanzados del IPN, Unidad Tamaulipas, Ciudad Victoria, 87138, Tamaulipas, Mexico.
3
Hossain S, Azam S, Montaha S, Karim A, Chowa SS, Mondol C, Zahid Hasan M, Jonkman M. Automated breast tumor ultrasound image segmentation with hybrid UNet and classification using fine-tuned CNN model. Heliyon 2023; 9:e21369. [PMID: 37885728] [PMCID: PMC10598544] [DOI: 10.1016/j.heliyon.2023.e21369]
Abstract
Introduction Breast cancer is the second most deadly form of cancer among women worldwide. Early diagnosis and treatment can significantly reduce mortality rates. Purpose The study aims to classify breast ultrasound images into benign and malignant tumors by segmenting the breast region of interest (ROI) with an optimized UNet architecture and classifying the ROIs with an optimized shallow CNN model derived from an ablation study. Method Several image processing techniques are applied to improve image quality by removing text, artifacts, and speckle noise, and statistical analysis is performed to verify that the enhanced image quality is satisfactory. With the processed dataset, segmentation of the breast tumor ROI is carried out, optimizing the UNet model through an ablation study in which the architectural configuration and hyperparameters are altered. After obtaining the tumor ROIs from the fine-tuned UNet model (RKO-UNet), an optimized CNN model is employed to classify the tumors into benign and malignant classes. To enhance the CNN model's performance, an ablation study is conducted, coupled with the integration of an attention unit. The model's performance is further assessed by classifying breast cancer in mammogram images. Result The proposed classification model (RKONet-13) achieves an accuracy of 98.41%. Its performance is further compared with five transfer-learning models on both pre-segmented and post-segmented datasets, and k-fold cross-validation is performed to assess the stability of the proposed RKONet-13 model. The proposed model also outperforms methods reported in previous literature, demonstrating its effectiveness in breast cancer diagnosis. Finally, the model demonstrates its robustness for breast cancer classification, delivering a performance of 96.21% on a mammogram dataset. Conclusion The efficacy of this approach relies on image pre-processing, segmentation with a hybrid attention UNet, and classification with a fine-tuned robust CNN model. This comprehensive approach aims to provide an effective technique for detecting breast cancer in ultrasound images.
Affiliation(s)
- Shahed Hossain
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, 0909, NT, Australia
- Sidratul Montaha
- Department of Computer Science, University of Calgary, Calgary, AB, T2N 1N4, Canada
- Asif Karim
- Faculty of Science and Technology, Charles Darwin University, Casuarina, 0909, NT, Australia
- Sadia Sultana Chowa
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Chaity Mondol
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Md Zahid Hasan
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Mirjam Jonkman
- Faculty of Science and Technology, Charles Darwin University, Casuarina, 0909, NT, Australia
4
Alhussan AA, Eid MM, Towfek SK, Khafaga DS. Breast Cancer Classification Depends on the Dynamic Dipper Throated Optimization Algorithm. Biomimetics (Basel) 2023; 8:163. [PMID: 37092415] [PMCID: PMC10123690] [DOI: 10.3390/biomimetics8020163]
Abstract
According to the American Cancer Society, breast cancer is the second largest cause of mortality among women after lung cancer. Women's death rates can be decreased if breast cancer is diagnosed and treated early. Because manual breast cancer diagnosis is lengthy, an automated approach is necessary for early cancer identification. This research proposes a novel framework integrating metaheuristic optimization with deep learning and feature selection for robustly classifying breast cancer from ultrasound images. The proposed methodology consists of data augmentation to improve the learning of convolutional neural network (CNN) models, transfer learning using the GoogleNet deep network for feature extraction, selection of the best set of features using a novel optimization algorithm based on a hybrid of the dipper throated and particle swarm optimization algorithms, and classification of the selected features using a CNN optimized with the proposed algorithm. To prove the effectiveness of the proposed approach, a set of experiments was conducted on a breast cancer dataset, freely available on Kaggle, to evaluate the performance of the proposed feature selection method and of the optimized CNN. In addition, statistical tests were performed to study the stability and difference of the proposed approach compared with state-of-the-art approaches. The achieved results confirmed the superiority of the proposed approach, with a classification accuracy of 98.1%, better than the other approaches considered in the experiments.
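Wrapper-style feature selection of the kind described above — searching for the subset of extracted features that maximizes a validation score — can be illustrated with a simple greedy stand-in. The paper's hybrid dipper-throated/particle-swarm algorithm is far more elaborate; the sketch below (toy scoring function included, all names hypothetical) only conveys the wrapper-selection idea:

```python
def greedy_feature_selection(features, score_fn):
    """Wrapper-style selection: repeatedly add the single feature
    that most improves score_fn(selected) until no addition helps."""
    selected, best = set(), score_fn(set())
    improved = True
    while improved:
        improved = False
        for f in features - selected:
            s = score_fn(selected | {f})
            if s > best:
                best, selected = s, selected | {f}
                improved = True
    return selected, best

# Toy score: features 0 and 2 are informative, others are penalized noise
def toy_score(subset):
    return len(subset & {0, 2}) - 0.1 * len(subset - {0, 2})

sel, score = greedy_feature_selection({0, 1, 2, 3}, toy_score)  # sel == {0, 2}
```

Metaheuristics such as PSO explore the same search space stochastically rather than greedily, which helps escape the local optima a greedy pass can get stuck in.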
Affiliation(s)
- Amel Ali Alhussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Marwa M. Eid
- Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura 35712, Egypt
- S. K. Towfek
- Delta Higher Institute for Engineering and Technology, Mansoura 35111, Egypt
- Computer Science and Intelligent Systems Research Center, Blacksburg, VA 24060, USA
- Doaa Sami Khafaga
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
5
Artificial Intelligence (AI) in Breast Imaging: A Scientometric Umbrella Review. Diagnostics (Basel) 2022; 12:3111. [PMID: 36553119] [PMCID: PMC9777253] [DOI: 10.3390/diagnostics12123111]
Abstract
Artificial intelligence (AI), a rousing advancement disrupting a wide spectrum of applications with remarkable betterment, has continued to gain momentum over the past decades. Within breast imaging, AI, especially machine learning and deep learning, honed with unlimited cross-data/case referencing, has found great utility encompassing four facets: screening and detection, diagnosis, disease monitoring, and data management as a whole. Over the years, breast cancer has been the apex of the cancer cumulative risk ranking for women across the six continents, existing in variegated forms and offering a complicated context in medical decisions. Realizing the ever-increasing demand for quality healthcare, contemporary AI has been envisioned to make great strides in clinical data management and perception, with the capability to detect indeterminate significance, predict prognostication, and correlate available data into a meaningful clinical endpoint. Here, the authors captured the review works over the past decades, focusing on AI in breast imaging, and systematized the included works into one usable document, which is termed an umbrella review. The present study aims to provide a panoramic view of how AI is poised to enhance breast imaging procedures. Evidence-based scientometric analysis was performed in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guideline, resulting in 71 included review works. This study aims to synthesize, collate, and correlate the included review works, thereby identifying the patterns, trends, quality, and types of the included works, captured by the structured search strategy. The present study is intended to serve as a "one-stop center" synthesis and provide a holistic bird's eye view to readers, ranging from newcomers to existing researchers and relevant stakeholders, on the topic of interest.
6
Jabeen K, Khan MA, Alhaisoni M, Tariq U, Zhang YD, Hamza A, Mickus A, Damaševičius R. Breast Cancer Classification from Ultrasound Images Using Probability-Based Optimal Deep Learning Feature Fusion. Sensors (Basel) 2022; 22:807. [PMID: 35161552] [PMCID: PMC8840464] [DOI: 10.3390/s22030807]
Abstract
After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models; (ii) a pre-trained DarkNet-53 model is considered and the output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evaluation (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them.
Affiliation(s)
- Kiran Jabeen
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharaj 11942, Saudi Arabia
- Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester LE1 7RH, UK
- Ameer Hamza
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Artūras Mickus
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania
7
Shen Y, Shamout FE, Oliver JR, Witowski J, Kannan K, Park J, Wu N, Huddleston C, Wolfson S, Millet A, Ehrenpreis R, Awal D, Tyma C, Samreen N, Gao Y, Chhor C, Gandhi S, Lee C, Kumari-Subaiya S, Leonard C, Mohammed R, Moczulski C, Altabet J, Babb J, Lewin A, Reig B, Moy L, Heacock L, Geras KJ. Artificial intelligence system reduces false-positive findings in the interpretation of breast ultrasound exams. Nat Commun 2021; 12:5645. [PMID: 34561440] [PMCID: PMC8463596] [DOI: 10.1038/s41467-021-26023-2]
Abstract
Though consistently shown to detect mammographically occult cancers, breast ultrasound has been noted to have high false-positive rates. In this work, we present an AI system that achieves radiologist-level accuracy in identifying breast cancer in ultrasound images. Developed on 288,767 exams, consisting of 5,442,907 B-mode and Color Doppler images, the AI achieves an area under the receiver operating characteristic curve (AUROC) of 0.976 on a test set consisting of 44,755 exams. In a retrospective reader study, the AI achieves a higher AUROC than the average of ten board-certified breast radiologists (AUROC: 0.962 AI, 0.924 ± 0.02 radiologists). With the help of the AI, radiologists decrease their false positive rates by 37.3% and reduce requested biopsies by 27.8%, while maintaining the same level of sensitivity. This highlights the potential of AI in improving the accuracy, consistency, and efficiency of breast ultrasound diagnosis.
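An AUROC like the 0.976 reported above is computed from the model's predicted scores and the binary ground-truth labels. A dependency-free sketch (illustrative only, not the study's code) uses the rank-sum (Mann–Whitney) formulation, which equals the area under the ROC curve:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 0, 0, 1]                  # toy biopsy-proven labels
s = [0.9, 0.8, 0.3, 0.6, 0.4]        # toy model scores
area = auroc(y, s)                   # 5/6 ≈ 0.833
```

This pairwise definition is O(P·N) and fine for small sets; production toolkits (e.g. scikit-learn's `roc_auc_score`) use a sort-based equivalent for large test sets like the 44,755 exams in the study.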
Affiliation(s)
- Yiqiu Shen
- Center for Data Science, New York University, New York, NY, USA
- Farah E. Shamout
- Engineering Division, NYU Abu Dhabi, Abu Dhabi, UAE
- Jamie R. Oliver
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Jan Witowski
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Kawshik Kannan
- Department of Computer Science, Courant Institute, New York University, New York, NY, USA
- Jungkyu Park
- Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, USA
- Nan Wu
- Center for Data Science, New York University, New York, NY, USA
- Connor Huddleston
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Stacey Wolfson
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Alexandra Millet
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Robin Ehrenpreis
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Divya Awal
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Cathy Tyma
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Naziya Samreen
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Yiming Gao
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Chloe Chhor
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Stacey Gandhi
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Cindy Lee
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Sheila Kumari-Subaiya
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Cindy Leonard
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Reyhan Mohammed
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Christopher Moczulski
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Jaime Altabet
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- James Babb
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Alana Lewin
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Beatriu Reig
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Linda Moy
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA; Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, USA
- Laura Heacock
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
- Krzysztof J. Geras
- Center for Data Science, New York University, New York, NY, USA; Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA; Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, USA
8
El-Azizy ARM, Salaheldien M, Rushdi MA, Gewefel H, Mahmoud AM. Morphological characterization of breast tumors using conventional B-mode ultrasound images. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:6620-6623. [PMID: 31947359] [DOI: 10.1109/embc.2019.8857438]
Abstract
This work aims to develop and test a vendor-independent computer-aided diagnosis (CAD) system that uses conventional B-mode ultrasound images to distinguish between benign and malignant breast tumors. Three morphological features were extracted from 323 breast tumor lesions: the perimeter, regularity variance, and circularity range ratio. Lesions were segmented using the active contour method via semi- and fully-automated algorithms. A support vector machine classifier was then used to classify the breast lesions. The CAD system achieved accuracies of 95.98% and 95.67% using the semi- and fully-automated segmentation, respectively. Based on these preliminary results, this CAD system, with its unique combination of geometrical features, should improve diagnostic decisions and may reduce the need for unnecessary needle biopsies.
9
A Fusion-Based Approach for Breast Ultrasound Image Classification Using Multiple-ROI Texture and Morphological Analyses. Comput Math Methods Med 2016; 2016:6740956. [PMID: 28127383] [PMCID: PMC5227307] [DOI: 10.1155/2016/6740956]
Abstract
Ultrasound imaging is commonly used for breast cancer diagnosis, but accurate interpretation of breast ultrasound (BUS) images is often challenging and operator-dependent. Computer-aided diagnosis (CAD) systems can be employed to provide the radiologists with a second opinion to improve the diagnosis accuracy. In this study, a new CAD system is developed to enable accurate BUS image classification. In particular, an improved texture analysis is introduced, in which the tumor is divided into a set of nonoverlapping regions of interest (ROIs). Each ROI is analyzed using gray-level co-occurrence matrix features and a support vector machine classifier to estimate its tumor class indicator. The tumor class indicators of all ROIs are combined using a voting mechanism to estimate the tumor class. In addition, morphological analysis is employed to classify the tumor. A probabilistic approach is used to fuse the classification results of the multiple-ROI texture analysis and morphological analysis. The proposed approach is applied to classify 110 BUS images that include 64 benign and 46 malignant tumors. The accuracy, specificity, and sensitivity obtained using the proposed approach are 98.2%, 98.4%, and 97.8%, respectively. These results demonstrate that the proposed approach can effectively be used to differentiate benign and malignant tumors.
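A gray-level co-occurrence matrix of the kind used for the per-ROI texture features above can be built directly from pixel pairs. The sketch below is a minimal, dependency-free illustration for a single horizontal offset (not the paper's implementation); it counts co-occurring gray levels and derives two classic Haralick features, contrast and energy:

```python
def glcm(image, levels):
    """Gray-level co-occurrence matrix for horizontally adjacent
    pixels (offset (0, 1)), normalized to joint probabilities."""
    counts = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
    total = sum(sum(r) for r in counts)
    return [[c / total for c in row] for row in counts]

def contrast(p):
    """Weights each co-occurrence by the squared gray-level difference."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

def energy(p):
    """Sum of squared probabilities; high for uniform textures."""
    return sum(v * v for row in p for v in row)

roi = [[0, 0, 1],
       [1, 1, 0]]          # a toy 2 x 3 ROI with 2 gray levels
P = glcm(roi, levels=2)    # [[0.25, 0.25], [0.25, 0.25]]
```

Real CAD pipelines typically quantize the image to 8-64 gray levels first and average GLCM features over several offsets and angles (libraries such as scikit-image's `graycomatrix` handle this).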
10
Wu WJ, Lin SW, Moon WK. An Artificial Immune System-Based Support Vector Machine Approach for Classifying Ultrasound Breast Tumor Images. J Digit Imaging 2016; 28:576-585. [PMID: 25561066] [DOI: 10.1007/s10278-014-9757-1]
Abstract
A rapid and highly accurate diagnostic tool for distinguishing benign tumors from malignant ones is required owing to the high incidence of breast cancer. Although various computer-aided diagnosis (CAD) systems have been developed to interpret ultrasound images of breast tumors, feature selection and parameter setting are still essential to classification accuracy and the minimization of computational complexity. This work develops a highly accurate CAD system that is based on a support vector machine (SVM) and the artificial immune system (AIS) algorithm for evaluating breast tumors. Experiments demonstrate that the accuracy of the proposed CAD system for classifying breast tumors is 96.67%. The sensitivity, specificity, PPV, and NPV of the proposed CAD system are 96.67%, 96.67%, 95.60%, and 97.48%, respectively. The area under the receiver operating characteristic (ROC) curve, Az, is 0.9827. Hence, the proposed CAD system can reduce the number of biopsies and yield useful results that assist physicians in diagnosing breast tumors.
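The sensitivity, specificity, PPV, and NPV figures reported above all derive from the same four confusion-matrix counts. A quick reference sketch (not the authors' code; the counts below are a toy example chosen to give round figures):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic indices from confusion-matrix counts:
    tp/fn = malignant cases called positive/negative,
    tn/fp = benign cases called negative/positive."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate (recall)
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }

m = diagnostic_metrics(tp=29, fn=1, tn=29, fp=1)
```

Note that sensitivity and specificity are properties of the classifier alone, while PPV and NPV also depend on the benign/malignant mix of the test set, which is why they shift when a CAD system is applied to a screening population with low disease prevalence.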
Affiliation(s)
- Wen-Jie Wu
- Department of Information Management, Chang Gung University, Tao-Yuan, Taiwan, 333, Republic of China
- Shih-Wei Lin
- Department of Information Management, Chang Gung University, Tao-Yuan, Taiwan, 333, Republic of China
- Woo Kyung Moon
- Department of Diagnostic Radiology, Seoul National University Hospital, Seoul, South Korea
11
Song G, Xue F, Zhang C. A Model Using Texture Features to Differentiate the Nature of Thyroid Nodules on Sonography. J Ultrasound Med 2015; 34:1753-1760. [PMID: 26307120] [DOI: 10.7863/ultra.15.14.10045]
Abstract
OBJECTIVES To evaluate the use of texture-based gray-level co-occurrence matrix (GLCM) features extracted from thyroid sonograms in building prediction models to determine the nature of thyroid nodules. METHODS A GLCM was used to extract the texture features of 155 sonograms of thyroid nodules (76 benign and 79 malignant). The GLCM features included energy, contrast, correlation, sum of squares, inverse difference moment, sum average, sum variance, sum entropy, entropy, difference variance, difference entropy, information measures of correlation, and maximal correlation coefficient. The texture features extracted by the GLCM were used to build 6 different statistical models, including support vector machine, random tree, random forest, boost, logistic, and artificial neural network models. The models' performances were evaluated by 10-fold cross-validation combining a receiver operating characteristic curve, indices of accuracy, true-positive rate, false-positive rate, sensitivity, specificity, precision, recall, F-measure, and area under the receiver operating characteristic curve. External validation was used to examine the stability of the model that showed the best performance. RESULTS The logistic model showed the best performance, according to 10-fold cross-validation, among the 6 models, with the highest area under the curve (0.84), accuracy (78.5%), true-positive rate (0.785), sensitivity (0.789), specificity (0.785), precision (0.789), recall (0.785), and F-measure (0.784), as well as the lowest false-positive rate (0.215). The external validation results showed that the logistic model was stable. CONCLUSIONS Gray-level co-occurrence matrix texture features extracted from sonograms of thyroid nodules coupled with a logistic model are useful for differentiating between benign and malignant thyroid nodules.
Affiliation(s)
- Gesheng Song
- School of Medicine (G.S.), and Department of Epidemiology and Biostatistics, School of Public Health (F.X.), Shandong University, Jinan, China; and Health Management Center, Shandong Provincial Qianfoshan Hospital, Jinan, China (C.Z.)
- Fuzhong Xue
- School of Medicine (G.S.), and Department of Epidemiology and Biostatistics, School of Public Health (F.X.), Shandong University, Jinan, China; and Health Management Center, Shandong Provincial Qianfoshan Hospital, Jinan, China (C.Z.)
- Chengqi Zhang
- School of Medicine (G.S.), and Department of Epidemiology and Biostatistics, School of Public Health (F.X.), Shandong University, Jinan, China; and Health Management Center, Shandong Provincial Qianfoshan Hospital, Jinan, China (C.Z.)
Collapse
|
12
|
Investigation of attenuation correction in SPECT using textural features, Monte Carlo simulations, and computational anthropomorphic models. Nucl Med Commun 2015; 36:952-61. [DOI: 10.1097/mnm.0000000000000345] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
13
|
Chang CY, Kuo SJ, Wu HK, Huang YL, Chen DR. Stellate masses and histologic grades in breast cancer. Ultrasound Med Biol 2014; 40:904-916. [PMID: 24462153 DOI: 10.1016/j.ultrasmedbio.2013.11.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/29/2013] [Revised: 10/29/2013] [Accepted: 11/04/2013] [Indexed: 06/03/2023]
Abstract
Breast masses with a radiologic stellate pattern frequently prove malignant, but their tendency to be of low histologic grade yields a better survival rate than tumors with other patterns on mammography screening. This study investigated the correlation of histologic grade with stellate features extracted from the coronal plane of 3-D ultrasound images. A pre-processing method was proposed to facilitate the extraction of stellate features. The extracted features were statistically measured to derive a set of indices that quantitatively represent the stellate pattern, and these indices then went through a selection procedure to build decision trees. The splitting rules of the decision trees indicated that stellate tumors are associated with low grade, and a set of indices from the low-grade-associated rules has the potential to represent the stellate feature. Further investigation of the hypoechoic region of peripheral tissue is essential to establishing a complete discriminating model for tumor grades.
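The tree-building step can be illustrated with the core computation a CART-style tree repeats at every node: choosing the threshold splitting rule that minimizes weighted Gini impurity. The "stellate index" values and low-grade labels below are invented for the sketch and are not the study's indices or data.

```python
def gini(labels):
    """Gini impurity of a set of binary labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split(values, labels):
    """Best threshold t for the rule `value <= t`, by weighted Gini impurity."""
    best_t, best_score = None, float("inf")
    n = len(values)
    for t in sorted(set(values)):
        left = [y for x, y in zip(values, labels) if x <= t]
        right = [y for x, y in zip(values, labels) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Hypothetical stellate index per tumor (e.g. a spiculation count)
index = [8, 7, 9, 6, 2, 1, 3, 2]
low_grade = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = low histologic grade
threshold, impurity = best_split(index, low_grade)
# A rule of the form "index > threshold -> low grade" is the kind of
# splitting rule the study inspects to associate stellate tumors with low grade.
```

A full tree applies `best_split` recursively over all candidate indices, which is how the feature-selection and rule-extraction steps described above interact.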
Affiliation(s)
- Chin-Yuan Chang, Cancer Research Center, Changhua Christian Hospital, Changhua, Taiwan
- Shou-Jen Kuo, Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua, Taiwan
- Hwa-Koon Wu, Department of Medical Imaging, Changhua Christian Hospital, Changhua, Taiwan
- Yu-Len Huang, Department of Computer Science, Tunghai University, Taichung, Taiwan
- Dar-Ren Chen, Cancer Research Center and Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua, Taiwan
|
14
|
Su Y, Wang Y, Jiao J, Guo Y. Automatic detection and classification of breast tumors in ultrasonic images using texture and morphological features. Open Med Inform J 2011; 5:26-37. [PMID: 21892371 PMCID: PMC3158436 DOI: 10.2174/1874431101105010026] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2011] [Revised: 05/15/2011] [Accepted: 05/15/2011] [Indexed: 11/22/2022] Open
Abstract
Due to severe speckle noise, poor image contrast, and irregular lesion shapes, building a fully automatic detection and classification system for breast ultrasonic images is challenging. In this paper, a novel and effective computer-aided method covering region-of-interest (ROI) generation, segmentation, and classification of breast tumors is proposed without any manual intervention. Incorporating local texture and position features, an ROI is first detected using a self-organizing map neural network. A modified Normalized Cut approach that considers weighted neighborhood gray values then partitions the ROI into clusters to obtain the initial boundary, and a regional-fitting active contour model adjusts the few inaccurate initial boundaries for the final segmentation. Finally, three texture and five morphologic features are extracted from each breast tumor, and highly efficient affinity propagation clustering performs the benign/malignant classification on an existing database without any training process. The proposed system is validated on 132 cases (67 benign and 65 malignant), and its performance is compared with traditional methods such as level-set segmentation and artificial neural network classifiers. Experimental results show that the proposed system, which needs no training procedure or manual interference, performs best in the detection and classification of ultrasonic breast tumors while having the lowest computational complexity.
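The training-free clustering step can be sketched with a compact implementation of affinity propagation (Frey and Dueck's responsibility/availability message passing). The 2-D toy points, the median preference, and the damping and iteration settings below are assumptions for illustration; the paper clusters eight-dimensional texture-and-morphology feature vectors.

```python
def affinity_propagation(points, damping=0.5, iters=200):
    """Cluster points by affinity propagation; returns each point's exemplar index."""
    n = len(points)
    # Similarity: negative squared Euclidean distance.
    s = [[-sum((a - b) ** 2 for a, b in zip(points[i], points[k]))
          for k in range(n)] for i in range(n)]
    # Preference (self-similarity) = median off-diagonal similarity.
    off = sorted(s[i][k] for i in range(n) for k in range(n) if i != k)
    pref = off[len(off) // 2]
    for i in range(n):
        s[i][i] = pref
    r = [[0.0] * n for _ in range(n)]  # responsibilities
    a = [[0.0] * n for _ in range(n)]  # availabilities
    for _ in range(iters):
        for i in range(n):
            for k in range(n):
                m = max(a[i][kk] + s[i][kk] for kk in range(n) if kk != k)
                r[i][k] = damping * r[i][k] + (1 - damping) * (s[i][k] - m)
        for k in range(n):
            pos = [max(0.0, r[ii][k]) for ii in range(n)]
            for i in range(n):
                if i == k:
                    new = sum(pos) - pos[k]
                else:
                    new = min(0.0, r[k][k] + sum(pos) - pos[i] - pos[k])
                a[i][k] = damping * a[i][k] + (1 - damping) * new
    # Each point's exemplar is the k maximizing a[i][k] + r[i][k].
    return [max(range(n), key=lambda k: a[i][k] + r[i][k]) for i in range(n)]

# Two well-separated toy "feature" groups stand in for benign/malignant cases.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels = affinity_propagation(pts)  # points sharing an exemplar share a cluster
```

Because exemplars emerge from the similarity matrix alone, no labeled training set is needed, which is the property the abstract emphasizes.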
Affiliation(s)
- Yanni Su, Department of Electronic Engineering, Fudan University, Shanghai 200433, China
|
15
|
Chen DR, Lai HW. Three-dimensional ultrasonography for breast malignancy detection. Expert Opin Med Diagn 2011; 5:253-61. [PMID: 23484500 DOI: 10.1517/17530059.2011.561314] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
INTRODUCTION Breast ultrasound is used not only to differentiate a solid breast mass from a cyst and to assist in guided biopsy, but also to classify benign and malignant lesions; good-resolution gray-scale imaging equipped with color Doppler is adequate for daily clinical practice in most circumstances. AREAS COVERED This article critically reviews three-dimensional (3D) ultrasound for the detection of breast malignancies in comparison with the widely used two-dimensional ultrasound, highlighting its advantages over other imaging modalities as well as its drawbacks. In particular, the article examines how 3D ultrasound planes help to define tumor margins, such as microlobulation and papillomas, more clearly. It also highlights how the resolution and multiple planes of 3D ultrasound can clearly demonstrate skin infiltration by tumor and how 3D ultrasound can be used for the planning, monitoring, and treatment of breast cancer. EXPERT OPINION As with any new technology, 3D ultrasound has a learning curve, and clinicians will need to master the technology to use this tool to its full potential. Although 3D ultrasound has its limitations, a better understanding of its settings, along with optimized image acquisition and better data manipulation during analysis, will make 3D ultrasound a useful tool for breast malignancy detection.
Affiliation(s)
- Dar-Ren Chen, Comprehensive Breast Cancer Center, Changhua Christian Hospital, 135 Nanhsiao Street, Changhua 500, Taiwan (+886 4 723 8595 ext. 4871; +886 4 723 3715)
|
16
|
Ayer T, Ayvaci MUS, Liu ZX, Alagoz O, Burnside ES. Computer-aided diagnostic models in breast cancer screening. Imaging Med 2010; 2:313-323. [PMID: 20835372 PMCID: PMC2936490 DOI: 10.2217/iim.10.24] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
Mammography is the most common modality for breast cancer detection and diagnosis and is often complemented by ultrasound and MRI. However, similarities between early signs of breast cancer and normal structures in these images make detection and diagnosis of breast cancer a difficult task. To aid physicians in detection and diagnosis, computer-aided detection and computer-aided diagnostic (CADx) models have been proposed. A large number of studies have been published for both computer-aided detection and CADx models in the last 20 years. The purpose of this article is to provide a comprehensive survey of the CADx models that have been proposed to aid in mammography, ultrasound and MRI interpretation. We summarize the noteworthy studies according to the screening modality they consider and describe the type of computer model, input data size, feature selection method, input feature type, reference standard and performance measures for each study. We also list the limitations of the existing CADx models and provide several possible future research directions.
Affiliation(s)
- Turgay Ayer, Industrial & Systems Engineering Department, University of Wisconsin, Madison, WI, USA
- Mehmet US Ayvaci, Industrial & Systems Engineering Department, University of Wisconsin, Madison, WI, USA
- Ze Xiu Liu, Industrial & Systems Engineering Department, University of Wisconsin, Madison, WI, USA
- Oguzhan Alagoz, Industrial & Systems Engineering Department and Department of Population Health Sciences, University of Wisconsin, Madison, WI, USA
- Elizabeth S Burnside, Industrial & Systems Engineering Department and Department of Biostatistics & Medical Informatics, University of Wisconsin, Madison, WI, USA
|