1
Chowdary J, Yogarajah P, Chaurasia P, Guruviah V. A Multi-Task Learning Framework for Automated Segmentation and Classification of Breast Tumors From Ultrasound Images. Ultrasonic Imaging 2022;44:3-12. [PMID: 35128997] [PMCID: PMC8902030] [DOI: 10.1177/01617346221075769]
Abstract
Breast cancer is one of the most fatal diseases affecting women across the world, but early diagnosis can reduce the mortality rate. This work proposes an efficient multi-task learning approach for the automatic segmentation and classification of breast tumors from ultrasound images. The proposed network consists of encoder, decoder, and bridge blocks for segmentation, and a dense branch for the classification of tumors. For efficient classification, multi-scale features from different levels of the network are used. Experimental results show that, compared with methods available in the literature, the proposed approach improves segmentation accuracy and recall by 1.08% and 4.13%, and classification accuracy and recall by 1.16% and 2.34%, respectively.
Affiliation(s)
- Pratheepan Yogarajah
- University of Ulster, Londonderry, UK
- Pratheepan Yogarajah, University of Ulster, Northland Road, Magee Campus, Londonderry, Northern Ireland BT48 7JL, UK
2
Zhang D, Jiang F, Yin R, Wu GG, Wei Q, Cui XW, Zeng SE, Ni XJ, Dietrich CF. A Review of the Role of the S-Detect Computer-Aided Diagnostic Ultrasound System in the Evaluation of Benign and Malignant Breast and Thyroid Masses. Med Sci Monit 2021;27:e931957. [PMID: 34552043] [PMCID: PMC8477643] [DOI: 10.12659/msm.931957]
Abstract
Computer-aided diagnosis (CAD) systems have attracted extensive attention owing to their performance in image diagnosis and are rapidly becoming a promising auxiliary tool in medical imaging tasks. These systems can quantitatively evaluate complex medical imaging features and achieve efficient, highly accurate diagnosis. Deep learning is a representation learning method. As a major branch of artificial intelligence, it can process raw image data directly by simulating the structure of the human brain's neural networks, thus completing image recognition tasks independently. S-Detect is a novel interactive CAD system based on a deep learning algorithm that has been integrated into ultrasound equipment; it can help radiologists identify benign and malignant nodules, reduce physician workload, and optimize the clinical ultrasound workflow. S-Detect is becoming one of the most commonly used CAD systems for ultrasound evaluation of breast and thyroid nodules. In this review, we describe the S-Detect workflow and outline its application in breast and thyroid nodule detection. Finally, we discuss the difficulties and challenges that S-Detect faces as a precision medical tool in clinical practice, and its prospects.
Affiliation(s)
- Di Zhang
- Department of Medical Ultrasound, Affiliated Hospital of Nantong University, Nantong, Jiangsu, PR China
- Fan Jiang
- Department of Medical Ultrasound, The Second Affiliated Hospital of Anhui Medical University, Hefei, Anhui, PR China
- Rui Yin
- Department of Ultrasound, Affiliated Renhe Hospital of China Three Gorges University, Yichang, Hubei, PR China
- Ge-Ge Wu
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, PR China
- Qi Wei
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, PR China
- Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, PR China
- Shu-E Zeng
- Department of Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, PR China
- Xue-Jun Ni
- Department of Medical Ultrasound, Affiliated Hospital of Nantong University, Nantong, Jiangsu, PR China
3
Zhou Y, Chen H, Li Y, Liu Q, Xu X, Wang S, Yap PT, Shen D. Multi-task learning for segmentation and classification of tumors in 3D automated breast ultrasound images. Med Image Anal 2020;70:101918. [PMID: 33676100] [DOI: 10.1016/j.media.2020.101918]
Abstract
Tumor classification and segmentation are two important tasks for computer-aided diagnosis (CAD) using 3D automated breast ultrasound (ABUS) images. However, they are challenging due to the significant shape variation of breast tumors and the fuzzy nature of ultrasound images (e.g., low contrast and low signal-to-noise ratio). Considering the correlation between tumor classification and segmentation, we argue that learning the two tasks jointly can improve the outcomes of both. In this paper, we propose a novel multi-task learning framework for joint segmentation and classification of tumors in ABUS images. The proposed framework consists of two sub-networks: an encoder-decoder network for segmentation and a light-weight multi-scale network for classification. To account for the fuzzy boundaries of tumors in ABUS images, our framework uses an iterative training strategy to refine feature maps with the help of probability maps obtained from previous iterations. Experimental results on a clinical dataset of 170 3D ABUS volumes collected from 107 patients indicate that the proposed multi-task framework improves tumor segmentation and classification over its single-task counterparts.
Affiliation(s)
- Yue Zhou
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China; Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA
- Houjin Chen
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Yanfeng Li
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Qin Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA
- Xuanang Xu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA
- Shu Wang
- Peking University People's Hospital, Beijing 100044, China
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
4
Han S, Hwang SI, Lee HJ. The Classification of Renal Cancer in 3-Phase CT Images Using a Deep Learning Method. J Digit Imaging 2020;32:638-643. [PMID: 31098732] [PMCID: PMC6646616] [DOI: 10.1007/s10278-019-00230-2]
Abstract
In this research, we exploit an image-based deep learning framework to distinguish three major subtypes of renal cell carcinoma (clear cell, papillary, and chromophobe) using images acquired with computed tomography (CT). A biopsy-proven benchmarking dataset was built from 169 renal cancer cases. In each case, images were acquired at three phases (phase 1, before injection of the contrast agent; phase 2, 1 min after injection; phase 3, 5 min after injection). After image acquisition, a rectangular region of interest (ROI) was marked by radiologists in each phase image. After cropping, the three-phase ROI images were multiplied by combination weights, and the linearly combined images were concatenated and fed into a deep learning neural network. The network was trained to classify the subtypes of renal cell carcinoma, using the drawn ROIs as inputs and the biopsy results as labels, and showed about 0.85 accuracy, 0.64–0.98 sensitivity, 0.83–0.93 specificity, and 0.9 AUC. The proposed framework, based on deep learning and ROIs provided by radiologists, showed promising results in renal cell carcinoma subtype classification. We hope it will support future research on this subject and that it can assist radiologists in classifying lesion subtypes in real clinical situations.
Affiliation(s)
- Seokmin Han
- Korea National University of Transportation, Uiwang-si, Gyeonggi-do, South Korea
- Sung Il Hwang
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam-si, Gyeonggi-do, South Korea
- Hak Jong Lee
- Department of Radiology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam-si, Gyeonggi-do, South Korea; Department of Nanoconvergence, Seoul National University Graduate School of Convergence Science and Technology, Suwon-si, Gyeonggi-do, South Korea
5
Lei B, Huang S, Li H, Li R, Bian C, Chou YH, Qin J, Zhou P, Gong X, Cheng JZ. Self-co-attention neural network for anatomy segmentation in whole breast ultrasound. Med Image Anal 2020;64:101753. [DOI: 10.1016/j.media.2020.101753]
6
Wang Y, Choi EJ, Choi Y, Zhang H, Jin GY, Ko SB. Breast Cancer Classification in Automated Breast Ultrasound Using Multiview Convolutional Neural Network with Transfer Learning. Ultrasound Med Biol 2020;46:1119-1132. [PMID: 32059918] [DOI: 10.1016/j.ultrasmedbio.2020.01.001]
Abstract
To assist radiologists in breast cancer classification in automated breast ultrasound (ABUS) imaging, we propose a computer-aided diagnosis system based on a convolutional neural network (CNN) that classifies breast lesions as benign or malignant. The proposed CNN adopts a modified Inception-v3 architecture to provide efficient feature extraction in ABUS imaging. Because ABUS images can be visualized in transverse and coronal views, the proposed CNN provides an efficient way to extract multiview features from both views. The CNN was trained and evaluated on 316 breast lesions (135 malignant and 181 benign), and an observer performance test was conducted to compare five human reviewers' diagnostic performance before and after referring to the CNN's predictions. Our method achieved an area under the curve (AUC) of 0.9468 with five-fold cross-validation, with a sensitivity of 0.886 and a specificity of 0.876. Compared with conventional machine-learning-based feature extraction schemes, particularly principal component analysis (PCA) and histogram of oriented gradients (HOG), our method achieved a significant improvement in classification performance, with an AUC more than 10% higher. During the observer performance test, all human reviewers' diagnostic results had increased AUC values and sensitivities after referring to the CNN's classification results, and four of the five reviewers' AUCs were significantly improved. The proposed CNN employing a multiview strategy shows promise for the diagnosis of breast cancer and could be used as a second reviewer to increase diagnostic reliability.
Affiliation(s)
- Yi Wang
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada
- Eun Jung Choi
- Department of Radiology, Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonbuk National University Medical School, Jeonju City, Jeollabuk-Do, South Korea
- Younhee Choi
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada
- Hao Zhang
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada
- Gong Yong Jin
- Department of Radiology, Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonbuk National University Medical School, Jeonju City, Jeollabuk-Do, South Korea
- Seok-Bum Ko
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Canada
7
Identification of Breast Malignancy by Marker-Controlled Watershed Transformation and Hybrid Feature Set for Healthcare. Applied Sciences (Basel) 2020. [DOI: 10.3390/app10061900]
Abstract
Breast cancer is a highly prevalent disease in females that may lead to mortality in severe cases. Mortality can be reduced if breast cancer is diagnosed at an early stage. The focus of this study is to detect breast malignancy through computer-aided diagnosis (CADx). In the first phase of this work, the Hilbert transform is employed to reconstruct B-mode images from raw data, followed by marker-controlled watershed transformation to segment the lesion. Methods based only on texture analysis are quite sensitive to speckle noise and other artifacts; therefore, a hybrid feature set is developed from shape-based and texture features extracted from the breast lesion. Decision tree, k-nearest neighbor (KNN), and an ensemble decision tree model via random under-sampling with boosting (RUSBoost) are utilized to segregate cancerous lesions from benign ones. The proposed technique is tested on OASBUD (Open Access Series of Breast Ultrasonic Data) and breast ultrasound (BUS) images collected at Baheya Hospital, Egypt (BHE). The OASBUD dataset contains raw ultrasound data from 100 patients, comprising 52 malignant and 48 benign lesions; the BHE dataset contains 210 malignant and 437 benign images. Using the ensemble method, the proposed system achieved a promising accuracy of 97% (confidence interval [CI] 91.48% to 99.38%) on OASBUD and 96.6% (CI 94.90% to 97.86%) on BHE.
8
Saini SK, Bansal V, Kaur R, Juneja M. ColpoNet for automated cervical cancer screening using colposcopy images. Machine Vision and Applications 2020;31:15. [DOI: 10.1007/s00138-020-01063-8]
9
Byra M, Galperin M, Ojeda-Fournier H, Olson L, O'Boyle M, Comstock C, Andre M. Breast mass classification in sonography with transfer learning using a deep convolutional neural network and color conversion. Med Phys 2019;46:746-755. [PMID: 30589947] [DOI: 10.1002/mp.13361]
Affiliation(s)
- Michal Byra
- Department of Radiology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106 Warsaw, Poland
- Haydee Ojeda-Fournier
- Department of Radiology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Linda Olson
- Department of Radiology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Mary O'Boyle
- Department of Radiology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Michael Andre
- Department of Radiology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
10
Discriminant analysis of neural style representations for breast lesion classification in ultrasound. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2018.05.003]
11
Saeedi P, Yee D, Au J, Havelock J. Automatic Identification of Human Blastocyst Components via Texture. IEEE Trans Biomed Eng 2017;64:2968-2978. [PMID: 28991729] [DOI: 10.1109/tbme.2017.2759665]
Abstract
Choosing the most viable embryo during human in vitro fertilization (IVF) is a prime factor in maximizing pregnancy rate. Embryologists visually inspect the morphological structures of blastocysts under microscopes to gauge their health. Such grading introduces subjectivity among embryologists and adds to the difficulty of quality control during IVF. In this paper, we introduce an algorithm for automatic segmentation of two main components of human blastocysts: the trophectoderm (TE) and the inner cell mass (ICM). We utilize texture information along with biological and physical characteristics of day-5 human embryos (blastocysts) to identify TE and ICM regions according to their intrinsic properties. Both regions are highly textured and very similar in texture quality, and they often appear connected when imaged; these attributes make their automatic identification and separation a difficult task even for an expert embryologist. By automatically identifying TE and ICM regions, we offer the opportunity to perform a more detailed assessment of blastocysts. This could help in quantitatively analyzing various visual and geometrical characteristics of these regions that, when combined with pregnancy outcomes, can determine the predictive value of such attributes. Our work aids future research in understanding why certain embryos have higher pregnancy success rates. The method is tested on a set of 211 blastocyst images, reporting an accuracy of 86.6% for identification of TE and 91.3% for ICM.
12
Han S, Kang HK, Jeong JY, Park MH, Kim W, Bang WC, Seong YK. A deep learning framework for supporting the classification of breast lesions in ultrasound images. Phys Med Biol 2017;62:7714-7728. [PMID: 28753132] [DOI: 10.1088/1361-6560/aa82ec]
Abstract
In this research, we exploited a deep learning framework to differentiate the distinctive types of lesions and nodules in breast ultrasound images. A biopsy-proven benchmarking dataset was built from 5151 patient cases containing a total of 7408 ultrasound breast images of semi-automatically segmented lesions associated with masses; it comprised 4254 benign and 3154 malignant lesions. The developed method includes histogram equalization, image cropping, and margin augmentation. The GoogLeNet convolutional neural network was trained on the database to differentiate benign and malignant tumors, both with and without data augmentation. Both networks showed an area under the curve of over 0.9, with an accuracy of about 0.9 (90%), a sensitivity of 0.86, and a specificity of 0.96. Although target regions of interest (ROIs) were selected by radiologists, meaning that radiologists still have to point out the location of the ROI, the classification of malignant lesions showed promising results. If used by radiologists in clinical situations, this method can classify malignant lesions in a short time and support radiologists' diagnoses in discriminating malignant lesions. The proposed method can therefore work in tandem with human radiologists to improve performance, which is a fundamental purpose of computer-aided diagnosis.
Affiliation(s)
- Seokmin Han
- Korea National University of Transportation, Uiwang-si, Kyunggi-do, Republic of Korea
13
Tu X, Xie M, Gao J, Ma Z, Chen D, Wang Q, Finlayson SG, Ou Y, Cheng JZ. Automatic Categorization and Scoring of Solid, Part-Solid and Non-Solid Pulmonary Nodules in CT Images with Convolutional Neural Network. Sci Rep 2017;7:8533. [PMID: 28864824] [PMCID: PMC5581338] [DOI: 10.1038/s41598-017-08040-8]
Abstract
We present a computer-aided diagnosis system (CADx) for the automatic categorization of solid, part-solid and non-solid nodules in pulmonary computerized tomography images using a Convolutional Neural Network (CNN). Provided with only a two-dimensional region of interest (ROI) surrounding each nodule, our CNN automatically reasons from image context to discover informative computational features. As a result, no image segmentation processing is needed for further analysis of nodule attenuation, allowing our system to avoid potential errors caused by inaccurate image processing. We implemented two computerized texture analysis schemes, classification and regression, to automatically categorize solid, part-solid and non-solid nodules in CT scans, with hierarchical features in each case learned directly by the CNN model. To show the effectiveness of our CNN-based CADx, an established method based on histogram analysis (HIST) was implemented for comparison. The experimental results show significant performance improvement by the CNN model over HIST in both classification and regression tasks, yielding nodule classification and rating performance concordant with those of practicing radiologists. Adoption of CNN-based CADx systems may reduce the inter-observer variation among screening radiologists and provide a quantitative reference for further nodule analysis.
Affiliation(s)
- Xiaoguang Tu
- School of Communication and Information Engineering, University of Electronic Science and Technology of China, Xiyuan Ave. 2006, West Hi-Tech Zone, Chengdu, Sichuan, 611731, China
- Mei Xie
- School of Electronic Engineering, University of Electronic Science and Technology of China, Xiyuan Ave. 2006, West Hi-Tech Zone, Chengdu, Sichuan, 611731, China
- Jingjing Gao
- School of Electronic Engineering, University of Electronic Science and Technology of China, Xiyuan Ave. 2006, West Hi-Tech Zone, Chengdu, Sichuan, 611731, China
- Zheng Ma
- School of Communication and Information Engineering, University of Electronic Science and Technology of China, Xiyuan Ave. 2006, West Hi-Tech Zone, Chengdu, Sichuan, 611731, China
- Daiqiang Chen
- Third Military Medical University, Chongqing, 400038, China
- Qingfeng Wang
- School of Software Engineering, University of Science and Technology of China, 230026, Hefei, China
- Samuel G Finlayson
- Department of Systems Biology, Harvard Medical School, 10 Shattuck St., Boston, MA, 02115, USA
- Harvard-MIT Division of Health Sciences and Technology (HST), 77 Massachusetts Avenue, E25-518, Cambridge, MA, 02139, USA
- Yangming Ou
- Department of Radiology, Harvard Medical School, 1 Autumn St., Boston, MA, 02215, USA
- Jie-Zhi Cheng
- Department and Graduate Institute of Electrical Engineering, Chang Gung University, 259 Wen-Hwa 1st Road, Kwei-Shan Tao-Yuan, 333, Taiwan
14
Moon WK, Chen IL, Chang JM, Shin SU, Lo CM, Chang RF. The adaptive computer-aided diagnosis system based on tumor sizes for the classification of breast tumors detected at screening ultrasound. Ultrasonics 2017;76:70-77. [PMID: 28086107] [DOI: 10.1016/j.ultras.2016.12.017]
Abstract
Screening ultrasound (US) is increasingly used as a supplement to mammography in women with dense breasts, and more than 80% of cancers detected by US alone are 1 cm or smaller. An adaptive computer-aided diagnosis (CAD) system based on tumor size was proposed to classify breast tumors detected in screening US images using quantitative morphological and textural features. In the present study, a database containing 156 tumors (78 benign and 78 malignant) was separated into two subsets of different tumor sizes (<1 cm and ≥1 cm) to explore the improvement in the performance of the CAD system. After adaptation, the accuracy, sensitivity, specificity, and Az values of the CAD for the entire database increased from 73.1% (114/156), 73.1% (57/78), 73.1% (57/78), and 0.790 to 81.4% (127/156), 83.3% (65/78), 79.5% (62/78), and 0.852, respectively. In the subset of tumors larger than 1 cm, the performance improved from 66.2% (51/77), 68.3% (28/41), 63.9% (23/36), and 0.703 to 81.8% (63/77), 85.4% (35/41), 77.8% (28/36), and 0.855, respectively. The proposed CAD system can be helpful for classifying breast tumors detected at screening US.
Affiliation(s)
- Woo Kyung Moon
- Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
- I-Ling Chen
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Jung Min Chang
- Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
- Sung Ui Shin
- Department of Radiology, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, Republic of Korea
- Chung-Ming Lo
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
15
Chen S, Qin J, Ji X, Lei B, Wang T, Ni D, Cheng JZ. Automatic Scoring of Multiple Semantic Attributes With Multi-Task Feature Leverage: A Study on Pulmonary Nodules in CT Images. IEEE Trans Med Imaging 2017;36:802-814. [PMID: 28113928] [DOI: 10.1109/tmi.2016.2629462]
Abstract
The gap between computational and semantic features is one of the major factors that keeps computer-aided diagnosis (CAD) performance from clinical usage. To bridge this gap, we exploit three multi-task learning (MTL) schemes to leverage heterogeneous computational features, derived from deep learning models (a stacked denoising autoencoder (SDAE) and a convolutional neural network (CNN)) as well as hand-crafted Haar-like and HoG features, for the description of 9 semantic features of lung nodules in CT images. We posit that relations may exist among semantic features such as "spiculation," "texture," and "margin" that can be explored with MTL. The Lung Image Database Consortium (LIDC) data is adopted in this study for its rich annotation resources: LIDC nodules were quantitatively scored with respect to 9 semantic features by 12 radiologists from several institutes in the U.S. Treating each semantic feature as an individual task, the MTL schemes select and map the heterogeneous computational features toward the radiologists' ratings, evaluated with cross-validation on 2400 randomly selected nodules from the LIDC dataset. The experimental results suggest that the predicted semantic scores from the three MTL schemes are closer to the radiologists' ratings than the scores from single-task LASSO and elastic net regression methods. The proposed semantic attribute scoring scheme may provide richer quantitative assessment of nodules to better support diagnostic decisions and management. Meanwhile, the capability of automatically associating medical image content with clinical semantic terms may also assist the development of medical search engines.
16
Song Y, Tan EL, Jiang X, Cheng JZ, Ni D, Chen S, Lei B, Wang T. Accurate Cervical Cell Segmentation from Overlapping Clumps in Pap Smear Images. IEEE Trans Med Imaging 2017;36:288-300. [PMID: 27623573] [DOI: 10.1109/tmi.2016.2606380]
Abstract
Accurate segmentation of cervical cells in Pap smear images is an important step in automatic pre-cancer identification in the uterine cervix. One of the major segmentation challenges is overlapping cytoplasm, which has not been well addressed in previous studies. To tackle the overlapping issue, this paper proposes a learning-based method with robust shape priors that segments individual cells in Pap smear images, supporting the automatic monitoring of changes in cells, which is a vital prerequisite of early detection of cervical cancer. We define this splitting problem as a discrete labeling task for multiple cells with a suitable cost function. The labeling results are then fed into our dynamic multi-template deformation model for further boundary refinement. Multi-scale deep convolutional networks are adopted to learn the diverse cell appearance features, and high-level shape information is incorporated to guide segmentation where the cell boundary might be weak or lost due to overlapping. An evaluation on two different datasets demonstrates the superiority of the proposed method over state-of-the-art methods in terms of segmentation accuracy.
17
|
Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans. Sci Rep 2016; 6:24454. [PMID: 27079888 PMCID: PMC4832199 DOI: 10.1038/srep24454] [Citation(s) in RCA: 301] [Impact Index Per Article: 33.4] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2015] [Accepted: 03/30/2016] [Indexed: 01/02/2023] Open
Abstract
This paper presents a comprehensive study of deep-learning-based computer-aided diagnosis (CADx) for the differential diagnosis of benign and malignant nodules/lesions, which avoids the potential errors caused by inaccurate image processing results (e.g., boundary segmentation), as well as the classification bias resulting from a less robust feature set, as involved in most conventional CADx algorithms. Specifically, the stacked denoising auto-encoder (SDAE) is exploited in two CADx applications: the differentiation of breast ultrasound lesions and of lung CT nodules. The SDAE architecture is well equipped with an automatic feature exploration mechanism and noise tolerance, and hence may be suited to the intrinsically noisy nature of medical image data from various imaging modalities. To show that the SDAE-based CADx outperforms conventional schemes, two recent conventional CADx algorithms are implemented for comparison. Ten runs of 10-fold cross-validation are conducted to illustrate the efficacy of the SDAE-based CADx algorithm. The experimental results show a significant performance boost by the SDAE-based CADx algorithm over the two conventional methods, suggesting that deep learning techniques can potentially change the design paradigm of CADx systems without the need for explicit design and selection of problem-oriented features.
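The evaluation protocol described above, ten repetitions of 10-fold cross-validation, can be sketched with scikit-learn's `RepeatedStratifiedKFold`; the data and classifier below are placeholders (a logistic regression on synthetic features), not the SDAE-based CADx itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Toy benign/malignant data standing in for learned lesion features.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# "10 times of 10-fold cross-validation": 100 train/test splits in total,
# each fold stratified to preserve the benign/malignant ratio.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv, scoring="roc_auc")
print(f"AUC over {len(scores)} folds: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Reporting the mean and spread over all 100 folds is what makes the comparison between CADx schemes statistically meaningful rather than a single lucky split.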
Collapse
|
18
|
Gómez W, Pereira W, Infantosi A. Evolutionary pulse-coupled neural network for segmenting breast lesions on ultrasonography. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.04.121] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
19
|
Automatic Segmentation of the Corpus Callosum Using a Cell-Competition Algorithm: Diffusion Tensor Imaging-Based Evaluation of Callosal Atrophy and Tissue Alterations in Patients With Systemic Lupus Erythematosus. J Comput Assist Tomogr 2015; 39:781-6. [PMID: 26295188 DOI: 10.1097/rct.0000000000000282] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE Patients with neuropsychiatric systemic lupus erythematosus (NPSLE) may exhibit corpus callosal atrophy and tissue alterations. Measuring the callosal volume and tissue integrity using diffusion tensor imaging (DTI) could help to differentiate patients with NPSLE from patients without NPSLE. Hence, this study aimed to use an automatic cell-competition algorithm to segment the corpus callosum and to investigate the effects of central nervous system (CNS) involvement on the callosal volume and tissue integrity in patients with SLE. METHODS Twenty-two SLE patients with (N = 10, NPSLE) and without (N = 12, non-NPSLE) CNS involvement and 22 control subjects were enrolled in this study. For volumetric measurement, a cell-competition algorithm was used to automatically delineate corpus callosal boundaries based on a midsagittal fractional anisotropy (FA) map. After obtaining corpus callosal boundaries for all subjects, the volume, FA, and mean diffusivity (MD) of the corpus callosum were calculated. A post hoc Tamhane's T2 analysis was performed to statistically compare differences among NPSLE, non-NPSLE, and control subjects. A receiver operating characteristic curve analysis was also performed to compare the performance of the volume, FA, and MD of the corpus callosum in differentiating NPSLE from other subjects. RESULTS Patients with NPSLE had significant decreases in volume and FA but an increase in MD in the corpus callosum compared with control subjects, whereas no significant difference was noted between patients without NPSLE and control subjects. The FA was found to have better performance in differentiating NPSLE from other subjects. CONCLUSIONS A cell-competition algorithm could be used to automatically evaluate callosal atrophy and tissue alterations. Assessments of the corpus callosal volume and tissue integrity helped to demonstrate the effects of CNS involvement in patients with SLE.
Collapse
|
20
|
Hua KL, Hsu CH, Hidayati SC, Cheng WH, Chen YJ. Computer-aided classification of lung nodules on computed tomography images via deep learning technique. Onco Targets Ther 2015; 8:2015-22. [PMID: 26346558 PMCID: PMC4531007 DOI: 10.2147/ott.s80733] [Citation(s) in RCA: 119] [Impact Index Per Article: 11.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
Abstract
Lung cancer has a poor prognosis when it is not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scans is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning classification performance in a conventional CAD scheme is complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and seamless performance tuning. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods achieve better discriminative results and hold promise in the CAD application domain.
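The contrast drawn above, a multi-stage hand-designed pipeline versus a model tuned as one piece, can be illustrated with a hedged scikit-learn sketch. The stages and models here (PCA features feeding an SVM, and a small MLP as the end-to-end stand-in) are assumptions for demonstration, not the study's DBN/CNN on CT patches.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy data standing in for nodule patches reduced to feature vectors.
X, y = make_classification(n_samples=300, n_features=30, n_informative=10, random_state=0)

models = {
    # Conventional CAD: hand-designed stages, each dependent on the last;
    # a bad feature-extraction step caps what the classifier can do.
    "feature pipeline": make_pipeline(StandardScaler(), PCA(n_components=10), SVC()),
    # Stand-in for an end-to-end learned model, trained as a single unit.
    "learned end-to-end": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    ),
}
results = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, acc in results.items():
    print(f"{name}: {acc:.3f}")
```

The point of the sketch is structural: in the first model each stage is fixed before the next is fit, whereas the second learns its representation and decision boundary jointly.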
Collapse
Affiliation(s)
- Kai-Lung Hua
- Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
| | - Che-Hao Hsu
- Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
| | - Shintami Chusnul Hidayati
- Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
| | - Wen-Huang Cheng
- Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
| | - Yu-Jen Chen
- Department of Radiation Oncology, MacKay Memorial Hospital, Taipei, Taiwan
| |
Collapse
|
21
|
Tsou CH, Lor KL, Chang YC, Chen CM. Anatomy packing with hierarchical segments: an algorithm for segmentation of pulmonary nodules in CT images. Biomed Eng Online 2015; 14:42. [PMID: 25971587 PMCID: PMC4430912 DOI: 10.1186/s12938-015-0043-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2015] [Accepted: 04/21/2015] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND This paper proposes a semantic segmentation algorithm that provides the spatial distribution patterns of pulmonary ground-glass nodules with solid portions in computed tomography (CT) images. METHODS The proposed algorithm, anatomy packing with hierarchical segments (APHS), performs pulmonary nodule segmentation and quantification in CT images. In particular, the APHS algorithm consists of two essential processes: hierarchical segmentation tree construction and anatomy packing. It constructs the hierarchical segmentation tree based on region attributes and local contour cues along the region boundaries. Each node of the tree corresponds to the soft boundary associated with a family of nested segmentations through different scales, applied by a hierarchical segmentation operator used to decompose the image in a structurally coherent manner. The anatomy packing process detects and localizes individual object instances by optimizing a hierarchical conditional random field model. Ninety-two histopathologically confirmed pulmonary nodules were used to evaluate the performance of the proposed APHS algorithm. Further, a comparative study was conducted with two conventional multi-label image segmentation algorithms based on four assessment metrics: the modified Williams index, percentage statistic, overlapping ratio, and difference ratio. RESULTS Under the same framework, the proposed APHS algorithm was applied to two clinical applications: multi-label segmentation of nodules with a solid portion and their surrounding tissues, and pulmonary nodule segmentation. The results indicate that the APHS-generated boundaries are comparable to manual delineations, with a modified Williams index of 1.013. Further, the segmentation achieved by the APHS algorithm is better than that of the two conventional multi-label image segmentation algorithms. CONCLUSIONS The proposed two-level hierarchical segmentation algorithm effectively labelled the pulmonary nodule and its surrounding anatomic structures in lung CT images. This suggests that the generated multi-label structures can potentially serve as the basis for developing related clinical applications.
Collapse
Affiliation(s)
- Chi-Hsuan Tsou
- Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Number 1, Section 1, Jen-Ai Road, Taipei 100, Taiwan.
| | - Kuo-Lung Lor
- Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Number 1, Section 1, Jen-Ai Road, Taipei 100, Taiwan.
| | - Yeun-Chung Chang
- Department of Radiology, National Taiwan University College of Medicine, Number 7, Chung-Shan South Road, Taipei 100, Taiwan. .,Department of Medical Imaging, National Taiwan University Hospital, Number 7, Chung-Shan South Road, Taipei 100, Taiwan.
| | - Chung-Ming Chen
- Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Number 1, Section 1, Jen-Ai Road, Taipei 100, Taiwan.
| |
Collapse
|
22
|
Platel B, Mus R, Welte T, Karssemeijer N, Mann R. Automated characterization of breast lesions imaged with an ultrafast DCE-MR protocol. IEEE TRANSACTIONS ON MEDICAL IMAGING 2014; 33:225-232. [PMID: 24058020 DOI: 10.1109/tmi.2013.2281984] [Citation(s) in RCA: 52] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast has become an invaluable tool in the clinical work-up of patients suspected of having breast carcinoma. The purpose of this study is to introduce novel features extracted from the kinetics of contrast agent uptake imaged by a short (100 s) view-sharing MRI protocol, and to investigate how these features measure up to commonly used features for regular DCE-MRI of the breast. Performance is measured with a computer-aided diagnosis (CADx) system aimed at distinguishing benign from malignant lesions. A bi-temporal breast MRI protocol was used. This protocol produces five regular, high spatial-resolution T1-weighted acquisitions interleaved with a series of 20 ultrafast view-sharing acquisitions during contrast agent uptake. We measure and compare the performance of morphological and kinetic features derived from both the regular DCE-MRI sequence and the ultrafast view-sharing sequence with four different classifiers. The classification performance of kinetics derived from the short (100 s) ultrafast acquisition, which starts with contrast agent administration, is significantly higher than the performance of kinetics derived from a much lengthier (510 s), commonly used 3-D gradient echo acquisition. When combined with morphology information, all classifiers show higher performance for the ultrafast acquisition (two out of four results are significantly better).
Collapse
|
23
|
Lee T. Comparison of Breast Cancer Screening Results in Korean Middle-Aged Women: A Hospital-based Prospective Cohort Study. Osong Public Health Res Perspect 2013; 4:197-202. [PMID: 24159556 PMCID: PMC3767103 DOI: 10.1016/j.phrp.2013.06.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2013] [Revised: 06/17/2013] [Accepted: 06/18/2013] [Indexed: 12/14/2022] Open
Abstract
Objectives The aim of this hospital-based prospective study was to evaluate the diagnostic ability of breast cancer screening in Korean middle-aged women using age, ultrasonography, mammography, and magnification mammography, which are commonly used in most hospitals. Methods A total of 21 patients were examined using ultrasonography, mammography, and magnification mammography, and their data were prospectively analyzed from August 2011 to March 2013. All patients were divided into benign and malignant groups, and the screening results were classified using the American College of Radiology Breast Imaging Reporting and Data System (BI-RADS). The final pathology report was used as the reference standard, and the sensitivity and specificity of ultrasonography, mammography, and magnification mammography were evaluated using receiver-operating characteristic (ROC) analysis. Results The analysis included 21 patients who underwent biopsy. Among them, three (14.3%) were positive and 18 (85.7%) negative for breast cancer. The average age was 50.5 years (range = 38–61 years). The sensitivity was the same for ultrasonography and magnification mammography, and the specificity of magnification mammography was higher than that of ultrasonography. The highest area under the ROC curve (AUC) was observed for the combination of age and magnification mammography (1.000); the AUCs of the other methods, in decreasing order, were magnification mammography (0.833), ultrasonography (0.787), mammography (0.667), and age (0.648). Conclusions In Korean women, the diagnostic accuracy of magnification mammography was better than that of ultrasonography and mammography. The combination of age and magnification mammography increased the sensitivity and diagnostic accuracy.
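Sensitivity, specificity, and AUC as used in this study can be computed from BI-RADS-style ordinal scores in a few lines of scikit-learn; the scores and outcomes below are made-up illustrative values, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical BI-RADS-like scores (higher = more suspicious) and biopsy
# outcomes (1 = malignant); illustrative values only.
y_true  = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1])
y_score = np.array([1, 2, 2, 3, 3, 4, 4, 3, 5, 2, 3, 5])

auc = roc_auc_score(y_true, y_score)          # area under the full ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("operating points on the ROC curve:", len(thresholds))

# Sensitivity/specificity at one cut-off: call >= 4 (e.g., BI-RADS 4/5) positive.
pred = y_score >= 4
sensitivity = np.mean(pred[y_true == 1])      # true-positive rate among cancers
specificity = np.mean(~pred[y_true == 0])     # true-negative rate among benigns
print(f"AUC={auc:.3f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

A single cut-off yields one sensitivity/specificity pair, while the AUC summarizes every possible cut-off, which is why the study reports both.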
Collapse
Affiliation(s)
- Taebum Lee
- Advanced Medical Device Research Center, Korea Electrotechnology Research Institute, Ansan, Korea
| |
Collapse
|
24
|
Tan T, Platel B, Huisman H, Sánchez CI, Mus R, Karssemeijer N. Computer-aided lesion diagnosis in automated 3-D breast ultrasound using coronal spiculation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2012; 31:1034-1042. [PMID: 22271831 DOI: 10.1109/tmi.2012.2184549] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
A computer-aided diagnosis (CAD) system for the classification of lesions as malignant or benign in automated 3-D breast ultrasound (ABUS) images is presented. Lesions are automatically segmented when a seed point is provided, using dynamic programming in combination with a spiral scanning technique. A novel aspect of ABUS imaging is the presence of spiculation patterns in coronal planes perpendicular to the transducer. Spiculation patterns are characteristic of malignant lesions. Therefore, we compute spiculation features and combine them with features related to echotexture, echogenicity, shape, posterior acoustic behavior, and margins. Classification experiments were performed using a support vector machine classifier, and evaluation was done with leave-one-patient-out cross-validation. Receiver operating characteristic (ROC) analysis was used to determine the performance of the system on a dataset of 201 lesions. We found that spiculation was among the most discriminative features. Using all features, the area under the ROC curve (A(z)) was 0.93, which was significantly higher than the performance without spiculation features (A(z)=0.90, p=0.02). On a subset of 88 cases, the classification performance of CAD (A(z)=0.90) was comparable to the average performance of 10 readers (A(z)=0.87).
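Leave-one-patient-out cross-validation, as used above, differs from plain leave-one-out in that all lesions from a patient leave the training set together, preventing within-patient leakage. A hedged sketch with scikit-learn's `LeaveOneGroupOut` and an SVM on synthetic lesion features (the feature interpretation in the comments is an assumption):

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy lesion features (imagine spiculation, echogenicity, margin scores...);
# several lesions can come from the same patient, so splits must be by patient.
n = 60
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # label tied to feature 0
patients = np.repeat(np.arange(20), 3)                    # 3 lesions per patient

# Each fold holds out every lesion of exactly one patient.
logo = LeaveOneGroupOut()
scores = cross_val_predict(
    SVC(probability=True), X, y, cv=logo, groups=patients, method="predict_proba"
)[:, 1]
print(f"leave-one-patient-out AUC: {roc_auc_score(y, scores):.3f}")
```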
Collapse
Affiliation(s)
- Tao Tan
- Department of Radiology, Radboud University Nijmegen Medical Centre, 6525 GA Nijmegen, The Netherlands.
| | | | | | | | | | | |
Collapse
|
25
|
Boersma LJ, Hanbeukers B, Boetes C, Borger J, Ende PVD, Haaren EV, Houben R, Jager J, Murrer L, Sastrowijoto S, Baardwijk AV. Is contrast enhancement required to visualize a known breast tumor in a pre-operative CT scan? Radiother Oncol 2011; 100:271-5. [DOI: 10.1016/j.radonc.2011.06.027] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2010] [Revised: 06/11/2011] [Accepted: 06/11/2011] [Indexed: 10/18/2022]
|
26
|
Cheng JZ, Chou YH, Huang CS, Chang YC, Tiu CM, Yeh FC, Chen KW, Tsou CH, Chen CM. ACCOMP: Augmented cell competition algorithm for breast lesion demarcation in sonography. Med Phys 2011; 37:6240-52. [PMID: 21302781 DOI: 10.1118/1.3512799] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Fully automatic, high-quality demarcation of sonographic breast lesions remains a far-reaching goal. This article aims to develop an image segmentation algorithm that provides quality delineation of breast lesions in sonography with a simple and friendly semiautomatic scheme. METHODS A data-driven image segmentation algorithm, named the augmented cell competition (ACCOMP) algorithm, is developed to delineate breast lesion boundaries in ultrasound images. Inspired by visual perceptual experience and Gestalt principles, the ACCOMP algorithm comprises two major processes, i.e., cell competition and cell-based contour grouping. The cell competition process drives cells, i.e., the catchment basins generated by a two-pass watershed transformation, to merge and split into prominent components. A prominent component is defined as a relatively large and homogeneous region circumscribed by a perceivable boundary. Based on the prominent component tessellation, the cell-based contour grouping process seeks the best closed subsets of edges in the prominent component structure as the desirable boundary candidates. Finally, five boundary candidates with respect to five devised boundary cost functions are suggested by the ACCOMP algorithm for user selection. To evaluate the efficacy of the ACCOMP algorithm on breast lesions with complicated echogenicity and shapes, 324 breast sonograms, including 199 benign and 125 malignant lesions, were adopted as testing data. The boundaries generated by the ACCOMP algorithm were compared to manual delineations confirmed by four experienced medical doctors. Four assessment metrics, including the modified Williams index, percentage statistic, overlapping ratio, and difference ratio, were employed to determine whether the ACCOMP-generated boundaries are comparable to manual delineations. A comparative study was also conducted by implementing two pixel-based segmentation algorithms; the same four assessment metrics were employed to evaluate the boundaries they generated against the same set of manual delineations. RESULTS The ACCOMP-generated boundaries are shown to be comparable to the manual delineations. In particular, the modified Williams indices of the boundaries generated by the ACCOMP algorithm and the first and second pixel-based algorithms were 1.069 +/- 0.024, 0.935 +/- 0.024, and 0.579 +/- 0.013, respectively. If the modified Williams index is greater than or equal to 1, the average distance between the computer-generated boundaries and the manual delineations is deemed comparable to that between the manual delineations themselves. CONCLUSIONS The boundaries derived by the ACCOMP algorithm reasonably demarcate sonographic breast lesions, especially in cases with complicated echogenicity and shapes. This suggests that the ACCOMP-generated boundaries can potentially serve as a basis for further morphological or quantitative analysis.
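The Williams index used above compares computer-to-rater agreement against inter-rater agreement, so a value near or above 1 means the algorithm sits "inside" the raters' own variability. A minimal numpy sketch of one common form of the index, taking agreement as the reciprocal of a symmetric mean boundary distance; this is an assumption for illustration, and the exact modification used in the paper may differ:

```python
import numpy as np

def mean_boundary_distance(a, b):
    """Symmetric mean closest-point distance between two boundaries (N x 2 point sets)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def williams_index(computer, raters):
    """Computer-to-rater agreement over inter-rater agreement, with agreement
    taken as the reciprocal of the mean boundary distance. A value >= 1 means
    the computer boundary is as close to the raters as they are to each other."""
    m = len(raters)
    agree_cr = np.mean([1.0 / mean_boundary_distance(computer, r) for r in raters])
    pairs = [(j, k) for j in range(m) for k in range(j + 1, m)]
    agree_rr = np.mean([1.0 / mean_boundary_distance(raters[j], raters[k]) for j, k in pairs])
    return agree_cr / agree_rr

# Toy circular "lesion" boundaries: four raters with radial jitter, one computer result.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
raters = [circle * (1 + 0.02 * rng.normal(size=(100, 1))) for _ in range(4)]
computer = circle * 1.01  # slightly oversized but consistent delineation

print(f"Williams index: {williams_index(computer, raters):.3f}")
```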
Collapse
Affiliation(s)
- Jie-Zhi Cheng
- Institute of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Number 1, Section 1, Jen-Ai Road, Taipei 100, Taiwan.
| | | | | | | | | | | | | | | | | |
Collapse
|