1. Hasan Z, Key S, Habib AR, Wong E, Aweidah L, Kumar A, Sacks R, Singh N. Convolutional Neural Networks in ENT Radiology: Systematic Review of the Literature. Ann Otol Rhinol Laryngol 2023; 132:417-430. [PMID: 35651308] [DOI: 10.1177/00034894221095899]
Abstract
INTRODUCTION Convolutional neural networks (CNNs) represent a state-of-the-art methodological technique in AI and deep learning, and were specifically created for image classification and computer vision tasks. CNNs have been applied in radiology in a number of different disciplines, mostly outside otolaryngology, potentially due to a lack of familiarity with this technology within the otolaryngology community. CNNs have the potential to revolutionize clinical practice by reducing the time required to perform manual tasks. This literature search aims to present a comprehensive systematic review of the published literature with regard to CNNs and their utility to date in ENT radiology. METHODS Data were extracted from a variety of databases including PubMed, ProQuest, MEDLINE, Open Knowledge Maps, and Gale OneFile Computer Science. Medical subject headings (MeSH) terms and keywords were used to extract related literature from each database's inception to October 2020. Inclusion criteria were studies in which CNNs were the main intervention and the imaging studied was radiology relevant to ENT. Titles and abstracts were reviewed first, followed by the full contents. Once the final list of articles was obtained, their reference lists were also searched to identify further articles. RESULTS Thirty articles were identified for inclusion in this study. Studies utilizing CNNs in most ENT subspecialties were identified. Studies utilized CNNs for a number of tasks, including identification of structures, presence of pathology, and segmentation of tumors for radiotherapy planning. All studies reported a high degree of accuracy of CNNs in performing the chosen task. CONCLUSION This study provides a better understanding of the CNN methodology used in ENT radiology, demonstrating a myriad of potential uses for this exciting technology, including nodule and tumor identification, identification of anatomical variation, and segmentation of tumors. It is anticipated that this field will continue to evolve and that these technologies and methodologies will become more entrenched in our everyday practice.
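
The abstract above treats CNNs as image classifiers. As a purely illustrative sketch (not the architecture of any study in the review), the PyTorch model below shows the basic convolution-pooling-classification pattern for a single-channel radiology image; the layer sizes, input resolution, and two-class output are assumptions.

```python
# Minimal illustrative CNN for 2-class image classification (e.g., pathology vs. normal).
# Layer sizes, input resolution (1 x 64 x 64), and class count are illustrative assumptions.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyCNN()
scores = model(torch.randn(4, 1, 64, 64))  # batch of 4 dummy 64x64 grayscale images
print(scores.shape)                         # torch.Size([4, 2])
```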
Affiliation(s)
- Zubair Hasan: Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
- Seraphina Key: Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, VIC, Australia
- Al-Rahim Habib: Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Princess Alexandra Hospital, Woolloongabba, QLD, Australia
- Eugene Wong: Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
- Layal Aweidah: Faculty of Medicine, University of Notre Dame, Darlinghurst, NSW, Australia
- Ashnil Kumar: School of Biomedical Engineering, Faculty of Engineering, University of Sydney, Darlington, NSW, Australia
- Raymond Sacks: Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Concord Hospital, Concord, NSW, Australia
- Narinder Singh: Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
2. Homayoun H, Ebrahimpour-komleh H. Automated Segmentation of Abnormal Tissues in Medical Images. J Biomed Phys Eng 2021; 11:415-424. [PMID: 34458189] [PMCID: PMC8385212] [DOI: 10.31661/jbpe.v0i0.958]
Abstract
Nowadays, medical imaging modalities are available almost everywhere, and they form the basis for diagnosing various diseases, with each modality sensitive to specific tissue types. In diagnostic procedures, physicians typically look for abnormalities in these images. The count and volume of abnormalities are very important for optimal treatment of patients, and segmentation is a preliminary step for these measurements and for further analysis. Manual segmentation of abnormalities is cumbersome, error prone, and subjective; as a result, automated segmentation of abnormal tissue is needed. In this study, representative techniques for segmentation of abnormal tissues are reviewed, with a main focus on the segmentation of multiple sclerosis lesions, breast cancer masses, lung nodules, and skin lesions. As experimental results demonstrate, methods based on deep learning perform better than methods that are usually based on hand-crafted feature engineering. Finally, the most common measures used to evaluate automated abnormal tissue segmentation methods are reported.
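
As a pointer to the kind of evaluation measures the review refers to, the snippet below computes two of the most common overlap metrics for a predicted segmentation mask against a ground-truth mask; it is a generic illustration, not code from the paper.

```python
# Common overlap measures for binary segmentation masks (illustrative, not from the paper).
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard (IoU) = |A∩B| / |A∪B|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

truth = np.zeros((64, 64), dtype=bool); truth[20:40, 20:40] = True   # toy lesion mask
pred  = np.zeros((64, 64), dtype=bool); pred[22:42, 22:42] = True    # toy prediction
print(f"Dice={dice_coefficient(pred, truth):.3f}, Jaccard={jaccard_index(pred, truth):.3f}")
```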
Affiliation(s)
- Hassan Homayoun: PhD, Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Kashan, Kashan, Iran
- Hossein Ebrahimpour-komleh: PhD, Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Kashan, Kashan, Iran
3. On the performance of lung nodule detection, segmentation and classification. Comput Med Imaging Graph 2021; 89:101886. [PMID: 33706112] [DOI: 10.1016/j.compmedimag.2021.101886]
Abstract
Computed tomography (CT) screening is an effective way for early detection of lung cancer in order to improve the survival rate of such a deadly disease. For more than two decades, image processing techniques such as nodule detection, segmentation, and classification have been extensively studied to assist physicians in identifying nodules from hundreds of CT slices to measure shapes and HU distributions of nodules automatically and to distinguish their malignancy. Thanks to new parallel computation, multi-layer convolution, nonlinear pooling operation, and the big data learning strategy, recent development of deep-learning algorithms has shown great progress in lung nodule screening and computer-assisted diagnosis (CADx) applications due to their high sensitivity and low false positive rates. This paper presents a survey of state-of-the-art deep-learning-based lung nodule screening and analysis techniques focusing on their performance and clinical applications, aiming to help better understand the current performance, the limitation, and the future trends of lung nodule analysis.
4. A Comparative Study of Modern Machine Learning Approaches for Focal Lesion Detection and Classification in Medical Images: BoVW, CNN and MTANN. Intelligent Systems Reference Library 2018. [DOI: 10.1007/978-3-319-68843-5_2]
5. Suzuki K. Overview of deep learning in medical imaging. Radiol Phys Technol 2017; 10:257-273. [PMID: 28689314] [DOI: 10.1007/s12194-017-0406-5]
Abstract
The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a lesser number of training cases than did CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.
Affiliation(s)
- Kenji Suzuki: Medical Imaging Research Center and Department of Electrical and Computer Engineering, Illinois Institute of Technology, 3440 South Dearborn Street, Chicago, IL, 60616, USA; World Research Hub Initiative (WRHI), Tokyo Institute of Technology, Tokyo, Japan
6. Khastavaneh H, Ebrahimpour-Komleh H. Neural Network-Based Learning Kernel for Automatic Segmentation of Multiple Sclerosis Lesions on Magnetic Resonance Images. J Biomed Phys Eng 2017; 7:155-162. [PMID: 28580337] [PMCID: PMC5447252]
Abstract
BACKGROUND Multiple sclerosis (MS) is a degenerative disease of the central nervous system. Patients with MS have regions of dead tissue in their brains called MS lesions. MRI is an imaging technique sensitive to soft tissue such as the brain and shows MS lesions as hyper-intense or hypo-intense signals. Since manual segmentation of these lesions is a laborious and time-consuming task, automatic segmentation is needed. MATERIALS AND METHODS To segment MS lesions, a method based on learning kernels is proposed. The proposed method has three main steps, namely pre-processing, sub-region extraction, and segmentation. The segmentation is performed by a kernel trained using a modified version of a special type of artificial neural network (ANN) called the massive-training ANN (MTANN). The kernel incorporates surrounding pixel information as features for classification of the middle pixel of the kernel. The materials of this study include part of the MICCAI 2008 MS Lesion Segmentation Grand Challenge dataset. RESULTS Both qualitative and quantitative results are promising; a similarity index of 70 percent in some cases is considered convincing. These results are obtained from only one MRI channel rather than multi-channel MRI. CONCLUSION This study shows the potential of incorporating surrounding pixel information into segmentation by learning kernels. The performance of the proposed method can be further improved with a dedicated pre-processing pipeline and a post-processing step for reducing false positives/negatives. An important advantage of the proposed model is that it uses only FLAIR MRI, which reduces computational time and brings comfort to patients.
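
To make the "learning kernel" idea concrete, the sketch below trains a small neural network to classify the center pixel of each patch from its surrounding pixel intensities, which is the general patch-based scheme the abstract describes; the patch size, network size, synthetic FLAIR-like data, and use of scikit-learn are illustrative assumptions, not the authors' implementation.

```python
# Patch-based center-pixel classification: surrounding intensities -> lesion / non-lesion.
# Patch size, network architecture, and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def extract_patches(image: np.ndarray, mask: np.ndarray, half: int = 2):
    """Return (features, labels): each feature vector is a (2*half+1)^2 neighborhood,
    the label is the ground-truth class of the patch's center pixel."""
    feats, labels = [], []
    for r in range(half, image.shape[0] - half):
        for c in range(half, image.shape[1] - half):
            feats.append(image[r - half:r + half + 1, c - half:c + half + 1].ravel())
            labels.append(int(mask[r, c]))
    return np.array(feats), np.array(labels)

rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.1, (48, 48))
mask = np.zeros((48, 48), dtype=int)
mask[15:25, 15:25] = 1
image[mask == 1] += 1.0                      # synthetic "hyper-intense" lesion

X, y = extract_patches(image, mask)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
pred_map = clf.predict(X).reshape(44, 44)    # 48 minus a 2-pixel border on each side
print("pixel accuracy:", (pred_map == mask[2:-2, 2:-2]).mean())
```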
Affiliation(s)
- H Khastavaneh: Department of Computer Engineering, Faculty of Computer and Electrical Engineering, University of Kashan, Kashan, Iran
- H Ebrahimpour-Komleh: Department of Computer Engineering, Faculty of Computer and Electrical Engineering, University of Kashan, Kashan, Iran
7. Le TN, Bao PT, Huynh HT. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network. Biomed Res Int 2016; 2016:3219068. [PMID: 27597960] [PMCID: PMC5002342] [DOI: 10.1155/2016/3219068]
Abstract
Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the "ground truth." Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively.
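
The abstract's "single hidden layer feedforward neural network trained by a noniterative algorithm" is commonly realized as a random-hidden-layer network whose output weights are solved in closed form (extreme-learning-machine style). The sketch below shows that noniterative training step on synthetic voxel features; the feature definitions, sizes, and data are assumptions, not the paper's exact formulation.

```python
# Noniterative training of a single hidden layer feedforward network (SLFN):
# random input-to-hidden weights, hidden-to-output weights solved by least squares.
# Feature dimensionality and synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_features, n_hidden = 500, 6, 40

X = rng.normal(size=(n_samples, n_features))          # e.g., per-voxel features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)       # synthetic tumor / non-tumor labels

W = rng.normal(size=(n_features, n_hidden))           # random, fixed input weights
b = rng.normal(size=n_hidden)                         # random, fixed hidden biases
H = np.tanh(X @ W + b)                                # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, y, rcond=None)          # closed-form output weights

y_hat = (H @ beta > 0.5).astype(float)                # classify unlabeled voxels
print("training accuracy:", (y_hat == y).mean())
```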
Affiliation(s)
- Trong-Ngoc Le: Faculty of Information Technology, Industrial University of Ho Chi Minh City, 12 Nguyen Van Bao, Go Vap District, Ho Chi Minh City, Vietnam; Faculty of Information Technology, University of Science, 227 Nguyen Van Cu, District 5, Ho Chi Minh City, Vietnam
- Pham The Bao: Faculty of Mathematics and Computer Science, University of Science, 227 Nguyen Van Cu, District 5, Ho Chi Minh City, Vietnam
- Hieu Trung Huynh: Faculty of Information Technology, Industrial University of Ho Chi Minh City, 12 Nguyen Van Bao, Go Vap District, Ho Chi Minh City, Vietnam
8. Pixel-based Machine Learning in Computer-Aided Diagnosis of Lung and Colon Cancer. Intelligent Systems Reference Library 2014. [DOI: 10.1007/978-3-642-40017-9_5]
9. Suzuki K. Machine Learning in Computer-aided Diagnosis of the Thorax and Colon in CT: A Survey. IEICE Trans Inf Syst 2013; E96-D:772-783. [PMID: 24174708] [PMCID: PMC3810349] [DOI: 10.1587/transinf.e96.d.772]
Abstract
Computer-aided detection (CADe) and diagnosis (CAD) have been a rapidly growing, active area of research in medical imaging. Machine learning (ML) plays an essential role in CAD, because objects such as lesions and organs may not be represented accurately by a simple equation; thus, medical pattern recognition essentially requires "learning from examples." One of the most popular uses of ML is the classification of objects such as lesion candidates into certain classes (e.g., abnormal or normal, and lesions or non-lesions) based on input features (e.g., contrast and area) obtained from segmented lesion candidates. The task of ML is to determine "optimal" boundaries for separating classes in the multidimensional feature space formed by the input features. ML algorithms for classification include linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), multilayer perceptrons, and support vector machines (SVM). Recently, pixel/voxel-based ML (PML) emerged in medical image processing/analysis; it uses pixel/voxel values in images directly, instead of features calculated from segmented lesions, as input information, so feature calculation or segmentation is not required. In this paper, ML techniques used in CAD schemes for detection and diagnosis of lung nodules in thoracic CT and for detection of polyps in CT colonography (CTC) are surveyed and reviewed.
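
As an illustration of the feature-based classification the survey describes, the sketch below fits linear discriminant analysis to two made-up candidate features (contrast and area) to separate lesions from non-lesions, with an SVM fitted for comparison; the synthetic data and scikit-learn estimators are assumptions used only to show the idea of a decision boundary in feature space.

```python
# Feature-based classification of lesion candidates (illustrative synthetic example).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Two input features per candidate: [contrast, area]; values are synthetic.
lesions     = rng.normal(loc=[1.8, 30.0], scale=[0.4, 8.0], size=(100, 2))
non_lesions = rng.normal(loc=[0.9, 12.0], scale=[0.4, 8.0], size=(100, 2))
X = np.vstack([lesions, non_lesions])
y = np.array([1] * 100 + [0] * 100)           # 1 = lesion, 0 = non-lesion

lda = LinearDiscriminantAnalysis().fit(X, y)  # linear boundary in feature space
svm = SVC(kernel="rbf").fit(X, y)             # nonlinear boundary for comparison

candidate = np.array([[1.5, 25.0]])           # a new segmented candidate's features
print("LDA:", lda.predict(candidate), " SVM:", svm.predict(candidate))
```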
Affiliation(s)
- Kenji Suzuki: Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
10. Computer-aided diagnosis systems for lung cancer: challenges and methodologies. Int J Biomed Imaging 2013; 2013:942353. [PMID: 23431282] [PMCID: PMC3570946] [DOI: 10.1155/2013/942353]
Abstract
This paper overviews one of the most important, interesting, and challenging problems in oncology, the problem of lung cancer diagnosis. Developing an effective computer-aided diagnosis (CAD) system for lung cancer is of great clinical importance and can increase the patient's chance of survival. For this reason, CAD systems for lung cancer have been investigated in a huge number of research studies. A typical CAD system for lung cancer diagnosis is composed of four main processing steps: segmentation of the lung fields, detection of nodules inside the lung fields, segmentation of the detected nodules, and diagnosis of the nodules as benign or malignant. This paper overviews the current state-of-the-art techniques that have been developed to implement each of these CAD processing steps. For each technique, various aspects of technical issues, implemented methodologies, training and testing databases, and validation methods, as well as achieved performances, are described. In addition, the paper addresses several challenges that researchers face in each implementation step and outlines the strengths and drawbacks of the existing approaches for lung cancer CAD systems.
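
The abstract lays out the four canonical processing steps of a lung-cancer CAD system. The skeleton below only mirrors that structure so the data flow is explicit; every function body is a placeholder and the names and types are illustrative assumptions, not an implementation from the paper.

```python
# Skeleton of the four CAD processing steps described in the abstract.
# All function bodies are placeholders; names and types are illustrative assumptions.
from typing import List, Tuple
import numpy as np

def segment_lung_fields(ct_volume: np.ndarray) -> np.ndarray:
    """Step 1: return a binary mask of the lung fields (placeholder: threshold on HU)."""
    return ct_volume < -400  # air-filled lung is strongly negative in Hounsfield units

def detect_nodules(ct_volume: np.ndarray, lung_mask: np.ndarray) -> List[Tuple[int, int, int]]:
    """Step 2: return candidate nodule locations inside the lung mask (placeholder)."""
    return []

def segment_nodule(ct_volume: np.ndarray, center: Tuple[int, int, int]) -> np.ndarray:
    """Step 3: return a binary mask for one detected nodule (placeholder)."""
    return np.zeros_like(ct_volume, dtype=bool)

def classify_nodule(ct_volume: np.ndarray, nodule_mask: np.ndarray) -> str:
    """Step 4: return 'benign' or 'malignant' for a segmented nodule (placeholder)."""
    return "benign"

def cad_pipeline(ct_volume: np.ndarray) -> List[str]:
    lung_mask = segment_lung_fields(ct_volume)
    diagnoses = []
    for center in detect_nodules(ct_volume, lung_mask):
        nodule_mask = segment_nodule(ct_volume, center)
        diagnoses.append(classify_nodule(ct_volume, nodule_mask))
    return diagnoses

print(cad_pipeline(np.full((64, 64, 64), -800.0)))   # dummy CT volume -> []
```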
11. Suzuki K. A review of computer-aided diagnosis in thoracic and colonic imaging. Quant Imaging Med Surg 2012; 2:163-76. [PMID: 23256078] [DOI: 10.3978/j.issn.2223-4292.2012.09.02]
Abstract
Medical imaging has been indispensable in medicine since the discovery of x-rays. Medical imaging offers useful information on patients' medical conditions and on the causes of their symptoms and diseases. As imaging technologies advance, a large number of medical images are produced, which physicians/radiologists must interpret. Thus, computer aids are in demand and have become indispensable in physicians' decision making based on medical images. Consequently, computer-aided detection and diagnosis (CAD) has been investigated and has been an active research area in medical imaging. CAD is defined as detection and/or diagnosis made by a radiologist/physician who takes into account the computer output as a "second opinion". In CAD research, detection and diagnosis of lung and colorectal cancer in thoracic and colonic imaging constitute major areas, because lung and colorectal cancers are the leading and second leading causes, respectively, of cancer deaths in the U.S. and in other countries. This review covers CAD of the thorax and colon, including CAD for detection and diagnosis of lung nodules in thoracic CT and for detection of polyps in CT colonography.
Affiliation(s)
- Kenji Suzuki: Department of Radiology, The University of Chicago, 5841 South Maryland Avenue, Chicago, IL 60637, USA
12. Tan M, Deklerck R, Jansen B, Bister M, Cornelis J. A novel computer-aided lung nodule detection system for CT images. Med Phys 2011; 38:5630-45. [PMID: 21992380] [DOI: 10.1118/1.3633941]
Abstract
PURPOSE The paper presents a complete computer-aided detection (CAD) system for the detection of lung nodules in computed tomography images. A new mixed feature selection and classification methodology is applied for the first time to a difficult medical image analysis problem. METHODS The CAD system was trained and tested on images from the publicly available Lung Image Database Consortium (LIDC) on the National Cancer Institute website. The detection stage of the system consists of a nodule segmentation method based on nodule and vessel enhancement filters and a computed divergence feature to locate the centers of the nodule clusters. In the subsequent classification stage, invariant features, defined on a gauge coordinates system, are used to differentiate between real nodules and some forms of blood vessels that easily generate false-positive detections. The performance of the novel feature-selective classifier based on genetic algorithms and artificial neural networks (ANNs) is compared with that of two other established classifiers, namely, support vector machines (SVMs) and fixed-topology neural networks. A set of 235 randomly selected cases from the LIDC database was used to train the CAD system. The system has been tested on 125 independent cases from the LIDC database. RESULTS The overall performance of the fixed-topology ANN classifier slightly exceeds that of the other classifiers, provided the number of internal ANN nodes is chosen well. Making educated guesses about the number of internal ANN nodes is not needed in the new feature-selective classifier, and therefore this classifier remains interesting due to its flexibility and adaptability to the complexity of the classification problem to be solved. Our fixed-topology ANN classifier with 11 hidden nodes reaches a detection sensitivity of 87.5% with an average of four false positives per scan, for nodules with diameter greater than or equal to 3 mm. Analysis of the false-positive items reveals that a considerable proportion (18%) of them are smaller nodules, less than 3 mm in diameter. CONCLUSIONS A complete CAD system incorporating novel features is presented, and its performance with three separate classifiers is compared and analyzed. The overall performance of our CAD system equipped with any of the three classifiers compares well with that of other methods described in the literature.
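
To show how an operating point such as the reported 87.5% sensitivity at four false positives per scan is computed from raw detections, the helper below counts true positives against reference nodules using a simple center-distance match; the matching rule, data format, and toy data are illustrative assumptions rather than the paper's evaluation protocol.

```python
# Per-scan evaluation: sensitivity and false positives per scan from detection lists.
# The center-distance matching rule and toy data are illustrative assumptions.
import numpy as np

def evaluate(detections_per_scan, truths_per_scan, match_mm=5.0):
    """detections_per_scan / truths_per_scan: lists (one entry per scan) of Nx3 arrays
    of (x, y, z) centers in mm. Returns (sensitivity, false positives per scan)."""
    tp = fp = total_truth = 0
    for dets, truths in zip(detections_per_scan, truths_per_scan):
        total_truth += len(truths)
        hit = np.zeros(len(truths), dtype=bool)
        for d in dets:
            dists = [np.linalg.norm(d - t) for t in truths]
            if dists and min(dists) <= match_mm:
                hit[int(np.argmin(dists))] = True   # detection matches a reference nodule
            else:
                fp += 1                             # unmatched detection is a false positive
        tp += hit.sum()
    return tp / total_truth, fp / len(detections_per_scan)

# Toy example: 2 scans, 2 reference nodules, 3 detections.
truths = [np.array([[10.0, 10.0, 10.0]]), np.array([[40.0, 40.0, 20.0]])]
dets   = [np.array([[11.0, 9.0, 10.0], [80.0, 80.0, 30.0]]), np.array([[41.0, 39.0, 21.0]])]
sens, fps = evaluate(dets, truths)
print(f"sensitivity={sens:.2f}, FPs/scan={fps:.1f}")   # 1.00 and 0.5
```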
Affiliation(s)
- Maxine Tan: Department of Electronics and Informatics, Vrije Universiteit Brussel, Brussel, Belgium
13. Okumura E, Kawashita I, Ishida T. Computerized analysis of pneumoconiosis in digital chest radiography: effect of artificial neural network trained with power spectra. J Digit Imaging 2011; 24:1126-32. [PMID: 21153856] [PMCID: PMC3222544] [DOI: 10.1007/s10278-010-9357-7]
Abstract
It is difficult for radiologists to classify pneumoconiosis with small nodules on chest radiographs. Therefore, we have developed a computer-aided diagnosis (CAD) system based on a rule-based plus artificial neural network (ANN) method for distinction between normal and abnormal regions of interest (ROIs) selected from chest radiographs with and without pneumoconiosis. The image database consists of 11 normal and 12 abnormal chest radiographs. The abnormal cases included five silicoses, four asbestoses, and three other pneumoconioses. ROIs (matrix size, 32 × 32) were selected from normal and abnormal lungs. We obtained power spectra (PS) by Fourier transform for the frequency analysis. A rule-based method using PS values at 0.179 and 0.357 cycles per millimeter, corresponding to the spatial frequencies of nodular patterns, was employed for identification of obviously normal or obviously abnormal ROIs. Then, the ANN was applied for classification of the remaining normal and abnormal ROIs, which were not classified as obviously abnormal or normal by the rule-based method. The classification performance was evaluated by the area under the receiver operating characteristic curve (Az value). The Az value was 0.972 ± 0.012 for the rule-based plus ANN method, which was larger than that of 0.961 ± 0.016 for the ANN method alone (P ≤ 0.15) and that of 0.873 for the rule-based method alone. We have developed a rule-based plus pattern recognition technique based on the ANN for classification of pneumoconiosis on chest radiography. Our CAD system based on PS would be useful for assisting radiologists in the classification of pneumoconiosis.
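
The sketch below mirrors the described two-stage scheme: compute a radially averaged power spectrum of a 32 × 32 ROI by FFT, sample it near two target spatial frequencies, apply rule-based thresholds for the obvious cases, and fall back to a small neural network otherwise. The thresholds, pixel spacing, frequency-bin tolerance, classifier, and synthetic data are assumptions for illustration only.

```python
# Two-stage classification of 32x32 chest-radiograph ROIs: rule-based thresholds on
# power-spectrum (PS) samples, then an ANN for the remaining ROIs.
# Thresholds, pixel spacing, and the classifier are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def ps_features(roi: np.ndarray, pixel_mm: float, target_freqs=(0.179, 0.357)):
    """Sample the radially averaged power spectrum at given spatial frequencies (cycles/mm)."""
    power = np.abs(np.fft.fft2(roi - roi.mean())) ** 2
    fy = np.fft.fftfreq(roi.shape[0], d=pixel_mm)          # cycles per mm
    fx = np.fft.fftfreq(roi.shape[1], d=pixel_mm)
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    feats = []
    for f in target_freqs:
        ring = np.abs(radius - f) < 0.03                    # thin frequency ring around f
        feats.append(power[ring].mean() if ring.any() else 0.0)
    return np.array(feats)

def classify(roi, clf, pixel_mm=0.175, lo=1e2, hi=1e4):
    """Rule-based decision for obvious ROIs; otherwise defer to the trained ANN."""
    x = ps_features(roi, pixel_mm)
    if x[0] > hi:                  # strong nodular-frequency power -> obviously abnormal
        return 1
    if x[0] < lo:                  # very weak power -> obviously normal
        return 0
    return int(clf.predict(x.reshape(1, -1))[0])

# Train the fallback ANN on synthetic ROI features (illustrative data only).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)                            # 0 = normal, 1 = abnormal
rois = [rng.normal(scale=1.0 + lab, size=(32, 32)) for lab in labels]
X = np.array([ps_features(r, 0.175) for r in rois])
ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0).fit(X, labels)
print(classify(rois[0], ann))
```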
Affiliation(s)
- Eiichiro Okumura: Department of Medical Radiological Technology, Kagoshima Medical Technology College, 5417-1 Hirakawa, Kagoshima, 891-0133, Japan
14. Suzuki K, Zhang J, Xu J. Massive-training artificial neural network coupled with Laplacian-eigenfunction-based dimensionality reduction for computer-aided detection of polyps in CT colonography. IEEE Trans Med Imaging 2010; 29:1907-17. [PMID: 20570766] [PMCID: PMC4283824] [DOI: 10.1109/tmi.2010.2053213]
Abstract
A major challenge in the current computer-aided detection (CAD) of polyps in CT colonography (CTC) is to reduce the number of false-positive (FP) detections while maintaining a high sensitivity level. A pattern-recognition technique based on the use of an artificial neural network (ANN) as a filter, which is called a massive-training ANN (MTANN), has been developed recently for this purpose. The MTANN is trained with a massive number of subvolumes extracted from input volumes together with the teaching volumes containing the distribution for the "likelihood of being a polyp;" hence the term "massive training." Because of the large number of subvolumes and the high dimensionality of voxels in each input subvolume, the training of an MTANN is time-consuming. In order to solve this time issue and make an MTANN work more efficiently, we propose here a dimension reduction method for an MTANN by using Laplacian eigenfunctions (LAPs), denoted as LAP-MTANN. Instead of input voxels, the LAP-MTANN uses the dependence structures of input voxels to compute the selected LAPs of the input voxels from each input subvolume and thus reduces the dimensions of the input vector to the MTANN. Our database consisted of 246 CTC datasets obtained from 123 patients, each of whom was scanned in both supine and prone positions. Seventeen patients had 29 polyps, 15 of which were 5-9 mm and 14 were 10-25 mm in size. We divided our database into a training set and a test set. The training set included 10 polyps in 10 patients and 20 negative patients. The test set had 93 patients including 19 polyps in seven patients and 86 negative patients. To investigate the basic properties of a LAP-MTANN, we trained the LAP-MTANN with actual polyps and a single source of FPs, which were rectal tubes. We applied the trained LAP-MTANN to simulated polyps and rectal tubes. The results showed that the performance of LAP-MTANNs with 20 LAPs was advantageous over that of the original MTANN with 171 inputs. To test the feasibility of the LAP-MTANN, we compared the LAP-MTANN with the original MTANN in the distinction between actual polyps and various types of FPs. The original MTANN yielded a 95% (18/19) by-polyp sensitivity at an FP rate of 3.6 (338/93) per patient, whereas the LAP-MTANN achieved a comparable performance, i.e., an FP rate of 3.9 (367/93) per patient at the same sensitivity level. With the use of the dimension reduction architecture, the time required for training was reduced from 38 h to 4 h. The classification performance in terms of the area under the receiver-operating-characteristic curve of the LAP-MTANN (0.84) was slightly higher than that of the original MTANN (0.82) with no statistically significant difference (p-value =0.48).
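
The core idea of the LAP-MTANN, projecting each high-dimensional input subvolume onto a few Laplacian eigenfunctions before it reaches the network, can be sketched as follows; the similarity graph, number of eigenvectors, and synthetic data are assumptions, not the paper's exact construction.

```python
# Laplacian-eigenfunction dimensionality reduction for subvolume input vectors.
# Graph construction, number of eigenfunctions, and synthetic data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_subvolumes, n_voxels, n_components = 300, 171, 20

# Each row is a flattened input subvolume (e.g., 171 voxels per polyp candidate).
X = rng.normal(size=(n_subvolumes, n_voxels))

# Dependence structure of input voxels: pairwise correlation across training subvolumes.
W = np.abs(np.corrcoef(X.T))                 # n_voxels x n_voxels affinity matrix
np.fill_diagonal(W, 0.0)

D = np.diag(W.sum(axis=1))                   # degree matrix
L = D - W                                    # unnormalized graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)         # eigenvalues in ascending order
basis = eigvecs[:, 1:n_components + 1]       # skip the (near-)constant eigenvector

X_reduced = X @ basis                        # 171-dim input -> 20-dim input to the MTANN
print(X.shape, "->", X_reduced.shape)        # (300, 171) -> (300, 20)
```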
Affiliation(s)
- Kenji Suzuki: Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
15. Suzuki K, Rockey DC, Dachman AH. CT colonography: advanced computer-aided detection scheme utilizing MTANNs for detection of "missed" polyps in a multicenter clinical trial. Med Phys 2010; 37:12-21. [PMID: 20175461] [DOI: 10.1118/1.3263615]
Abstract
PURPOSE The purpose of this study was to develop an advanced computer-aided detection (CAD) scheme utilizing massive-training artificial neural networks (MTANNs) to allow detection of "difficult" polyps in CT colonography (CTC) and to evaluate its performance on false-negative (FN) CTC cases that radiologists "missed" in a multicenter clinical trial. METHODS The authors developed an advanced CAD scheme consisting of an initial polyp-detection scheme for identification of polyp candidates and a mixture of expert MTANNs for substantial reduction in false positives (FPs) while maintaining sensitivity. The initial polyp-detection scheme consisted of (1) colon segmentation based on anatomy-based extraction and colon-based analysis and (2) detection of polyp candidates based on a morphologic analysis on the segmented colon. The mixture of expert MTANNs consisted of (1) supervised enhancement of polyps and suppression of various types of nonpolyps, (2) a scoring scheme for converting output voxels into a score for each polyp candidate, and (3) combining scores from multiple MTANNs by the use of a mixing artificial neural network. For testing the advanced CAD scheme, they created a database containing 24 FN cases with 23 polyps (range of 6-15 mm; average of 8 mm) and a mass (35 mm), which were "missed" by radiologists in CTC in the original trial in which 15 institutions participated. RESULTS The initial polyp-detection scheme detected 63% (15/24) of the missed polyps with 21.0 (505/24) FPs per patient. The MTANNs removed 76% of the FPs with loss of one true positive; thus, the performance of the advanced CAD scheme was improved to a sensitivity of 58% (14/24) with 8.6 (207/24) FPs per patient, whereas a conventional CAD scheme yielded a sensitivity of 25% at the same FP rate (the difference was statistically significant). CONCLUSIONS With the advanced MTANN CAD scheme, 58% of the polyps missed by radiologists in the original trial were detected and with a reasonable number of FPs. The results suggest that the use of an advanced MTANN CAD scheme may potentially enhance the detection of "difficult" polyps.
Affiliation(s)
- Kenji Suzuki: Department of Radiology, The University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637, USA
16. Suzuki K. A supervised 'lesion-enhancement' filter by use of a massive-training artificial neural network (MTANN) in computer-aided diagnosis (CAD). Phys Med Biol 2009; 54:S31-45. [PMID: 19687563] [DOI: 10.1088/0031-9155/54/18/s03]
Abstract
Computer-aided diagnosis (CAD) has been an active area of study in medical image analysis. A filter for the enhancement of lesions plays an important role in improving the sensitivity and specificity of CAD schemes. The filter enhances objects similar to a model employed in the filter; e.g., a blob-enhancement filter based on the Hessian matrix enhances sphere-like objects. Actual lesions, however, often differ from a simple model; e.g., a lung nodule is generally modeled as a solid sphere, but there are nodules of various shapes and with internal inhomogeneities, such as a nodule with spiculations and ground-glass opacity. Thus, conventional filters often fail to enhance actual lesions. Our purpose in this study was to develop a supervised filter for the enhancement of actual lesions (as opposed to a lesion model) by use of a massive-training artificial neural network (MTANN) in a CAD scheme for detection of lung nodules in CT. The MTANN filter was trained with actual nodules in CT images to enhance actual patterns of nodules. By use of the MTANN filter, the sensitivity and specificity of our CAD scheme were improved substantially. With a database of 69 lung cancers, nodule candidate detection by the MTANN filter achieved a 97% sensitivity with 6.7 false positives (FPs) per section, whereas nodule candidate detection by a difference-image technique achieved a 96% sensitivity with 19.3 FPs per section. Classification-MTANNs were applied for further reduction of the FPs. The classification-MTANNs removed 60% of the FPs with a loss of one true positive; thus, the scheme achieved a 96% sensitivity with 2.7 FPs per section. Overall, with our CAD scheme based on the MTANN filter and classification-MTANNs, an 84% sensitivity with 0.5 FPs per section was achieved.
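
The MTANN acts as a trainable filter: it regresses the "likelihood of being a lesion" for the center pixel of each small sub-image and is slid over the image to produce an enhancement map. Below is a minimal 2D sketch of that idea using a generic regressor; the patch size, regressor, and synthetic teaching image are assumptions, not the published MTANN architecture.

```python
# MTANN-style trainable enhancement filter (2D sketch): sub-image -> likelihood of the
# center pixel being part of a lesion; sliding the filter yields an enhancement map.
# Patch size, regressor choice, and the synthetic teaching image are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def subimages(img: np.ndarray, half: int = 3):
    rows, cols = img.shape[0] - 2 * half, img.shape[1] - 2 * half
    out = np.empty((rows * cols, (2 * half + 1) ** 2))
    k = 0
    for r in range(half, img.shape[0] - half):
        for c in range(half, img.shape[1] - half):
            out[k] = img[r - half:r + half + 1, c - half:c + half + 1].ravel()
            k += 1
    return out, (rows, cols)

rng = np.random.default_rng(3)
image = rng.normal(0, 0.2, (40, 40))
yy, xx = np.mgrid[0:40, 0:40]
teaching = np.exp(-((yy - 20) ** 2 + (xx - 20) ** 2) / (2 * 3.0 ** 2))  # likelihood map
image += teaching                              # the "lesion" is visible in the input image

X, shape = subimages(image)
t, _ = subimages(teaching)
target = t[:, t.shape[1] // 2]                 # teaching value at each patch center

mtann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=800, random_state=0).fit(X, target)
enhanced = mtann.predict(X).reshape(shape)     # lesion-enhanced map (34 x 34 interior)
print(enhanced.max(), enhanced[:5, :5].mean()) # high near the lesion, low in background
```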
Affiliation(s)
- Kenji Suzuki: Department of Radiology, Committee on Medical Physics, The University of Chicago, 5841 South Maryland Avenue, Chicago, IL 60637, USA
17. Suzuki K, Yoshida H, Näppi J, Armato SG, Dachman AH. Mixture of expert 3D massive-training ANNs for reduction of multiple types of false positives in CAD for detection of polyps in CT colonography. Med Phys 2008; 35:694-703. [PMID: 18383691] [DOI: 10.1118/1.2829870]
Abstract
One of the major challenges in computer-aided detection (CAD) of polyps in CT colonography (CTC) is the reduction of false-positive detections (FPs) without a concomitant reduction in sensitivity. A large number of FPs is likely to confound the radiologist's task of image interpretation, lower the radiologist's efficiency, and cause radiologists to lose their confidence in CAD as a useful tool. Major sources of FPs generated by CAD schemes include haustral folds, residual stool, rectal tubes, the ileocecal valve, and extra-colonic structures such as the small bowel and stomach. Our purpose in this study was to develop a method for the removal of various types of FPs in CAD of polyps while maintaining a high sensitivity. To achieve this, we developed a "mixture of expert" three-dimensional (3D) massive-training artificial neural networks (MTANNs) consisting of four 3D MTANNs that were designed to differentiate between polyps and four categories of FPs: (1) rectal tubes, (2) stool with bubbles, (3) colonic walls with haustral folds, and (4) solid stool. Each expert 3D MTANN was trained with examples from a specific non-polyp category along with typical polyps. The four expert 3D MTANNs were combined with a mixing artificial neural network (ANN) such that different types of FPs could be removed. Our database consisted of 146 CTC datasets obtained from 73 patients whose colons were prepared by standard pre-colonoscopy cleansing. Each patient was scanned in both supine and prone positions. Radiologists established the locations of polyps through the use of optical-colonoscopy reports. Fifteen patients had 28 polyps, 15 of which were 5-9 mm and 13 were 10-25 mm in size. The CTC cases were subjected to our previously reported CAD method consisting of centerline-based extraction of the colon, shape-based detection of polyp candidates, and a Bayesian-ANN-based classification of polyps. The original CAD method yielded 96.4% (27/28) by-polyp sensitivity with an average of 3.1 (224/73) FPs per patient. The mixture of expert 3D MTANNs removed 63% (142/224) of the FPs without the loss of any true positive; thus, the FP rate of our CAD scheme was improved to 1.1 (82/73) FPs per patient while the original sensitivity was maintained. By use of the mixture of expert 3D MTANNs, the specificity of a CAD scheme for detection of polyps in CTC was substantially improved while a high sensitivity was maintained.
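
The mixture-of-experts step can be pictured as follows: each expert MTANN produces one score per candidate, and a small mixing network maps the vector of expert scores to a final polyp/non-polyp decision. The sketch uses synthetic scores and a scikit-learn classifier as the mixing ANN; both are assumptions for illustration, not the paper's trained networks.

```python
# Combining scores from four expert MTANNs with a small "mixing ANN" (illustrative).
# The expert scores are synthetic; each expert is imagined to suppress one FP category
# (rectal tubes, stool with bubbles, haustral folds, solid stool).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
n_polyps, n_fps = 60, 240

# Rows = candidates, columns = scores from the four expert MTANNs.
polyp_scores = rng.normal(loc=0.8, scale=0.1, size=(n_polyps, 4)).clip(0, 1)
fp_scores = rng.normal(loc=0.5, scale=0.1, size=(n_fps, 4)).clip(0, 1)
for i in range(n_fps):
    fp_scores[i, i % 4] = rng.normal(loc=0.15, scale=0.05)   # suppressed by its dedicated expert

X = np.vstack([polyp_scores, fp_scores])
y = np.array([1] * n_polyps + [0] * n_fps)                   # 1 = polyp, 0 = false positive

mixing_ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0).fit(X, y)

polyp_like = np.array([[0.85, 0.80, 0.90, 0.82]])            # high scores from all experts
fp_like    = np.array([[0.50, 0.55, 0.45, 0.12]])            # one expert strongly suppresses it
print(mixing_ann.predict(polyp_like), mixing_ann.predict(fp_like))  # expected: [1] [0]
```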
Affiliation(s)
- Kenji Suzuki: Department of Radiology, The University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637, USA
18. Suzuki K, Yoshida H, Näppi J, Dachman AH. Massive-training artificial neural network (MTANN) for reduction of false positives in computer-aided detection of polyps: Suppression of rectal tubes. Med Phys 2006; 33:3814-24. [PMID: 17089846] [DOI: 10.1118/1.2349839]
Abstract
One of the limitations of the current computer-aided detection (CAD) of polyps in CT colonography (CTC) is a relatively large number of false-positive (FP) detections. Rectal tubes (RTs) are one of the typical sources of FPs because a portion of an RT, especially a portion of a bulbous tip, often exhibits a cap-like shape that closely mimics the appearance of a small polyp. Radiologists can easily recognize and dismiss RT-induced FPs; thus, they may lose their confidence in CAD as an effective tool if the CAD scheme consistently generates such "obvious" FPs due to RTs. In addition, RT-induced FPs may distract radiologists from less common true positives in the rectum. Therefore, removal of RT-induced FPs as well as other types of FPs is desirable while maintaining a high sensitivity in the detection of polyps. We developed a three-dimensional (3D) massive-training artificial neural network (MTANN) for distinction between polyps and RTs in 3D CTC volumetric data. The 3D MTANN is a supervised volume-processing technique which is trained with input CTC volumes and the corresponding "teaching" volumes. The teaching volume for a polyp contains a 3D Gaussian distribution, and that for an RT contains zeros, for enhancement of polyps and suppression of RTs, respectively. For distinction between polyps and nonpolyps including RTs, a 3D scoring method based on a 3D Gaussian weighting function is applied to the output of the trained 3D MTANN. Our database consisted of CTC examinations of 73 patients, scanned in both supine and prone positions (146 CTC data sets in total), with optical colonoscopy as a reference standard for the presence of polyps. Fifteen patients had 28 polyps, 15 of which were 5-9 mm and 13 were 10-25 mm in size. These CTC cases were subjected to our previously reported CAD scheme that included centerline-based segmentation of the colon, shape-based detection of polyps, and reduction of FPs by use of a Bayesian neural network based on geometric and texture features. Application of this CAD scheme yielded 96.4% (27/28) by-polyp sensitivity with 3.1 (224/73) FPs per patient, among which 20 FPs were caused by RTs. To eliminate the FPs due to RTs and possibly other normal structures, we trained a 3D MTANN with ten representative polyps and ten RTs, and applied the trained 3D MTANN to the above CAD true- and false-positive detections. In the output volumes of the 3D MTANN, polyps were represented by distributions of bright voxels, whereas RTs and other normal structures partly similar to RTs appeared as darker voxels, indicating the ability of the 3D MTANN to suppress RTs as well as other normal structures effectively. Application of the 3D MTANN to the CAD detections showed that the 3D MTANN eliminated all 20 RT-induced FPs, as well as 53 FPs due to other causes, without removal of any true positives. Overall, the 3D MTANN was able to reduce the FP rate of the CAD scheme from 3.1 to 2.1 FPs per patient (33% reduction), while the original by-polyp sensitivity of 96.4% was maintained.
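
Two pieces of the described training/scoring pipeline are easy to show concretely: the "teaching" volume, a 3D Gaussian centered on a polyp (and all zeros for a rectal tube), and the 3D Gaussian-weighted score computed from an MTANN output volume. The volume size and Gaussian width below are illustrative assumptions.

```python
# Teaching volumes and Gaussian-weighted scoring as described for the 3D MTANN.
# Volume size and Gaussian standard deviation are illustrative assumptions.
import numpy as np

def gaussian_volume(shape=(15, 15, 15), sigma=3.0):
    """3D Gaussian centered in the volume (teaching volume for a polyp)."""
    zz, yy, xx = np.indices(shape)
    center = [(s - 1) / 2.0 for s in shape]
    r2 = (zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2
    return np.exp(-r2 / (2.0 * sigma ** 2))

teaching_polyp = gaussian_volume()            # bright Gaussian blob -> "this is a polyp"
teaching_rectal_tube = np.zeros((15, 15, 15)) # all zeros -> "suppress this structure"

def gaussian_weighted_score(mtann_output: np.ndarray, sigma=3.0) -> float:
    """Score a candidate by weighting the MTANN output volume with a centered 3D Gaussian.
    Polyps (bright output near the center) get high scores; suppressed structures get low ones."""
    w = gaussian_volume(mtann_output.shape, sigma)
    return float((w * mtann_output).sum() / w.sum())

# A polyp-like output (bright center) scores higher than a suppressed, dark output.
print(gaussian_weighted_score(teaching_polyp), gaussian_weighted_score(teaching_rectal_tube))
```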
Affiliation(s)
- Kenji Suzuki: Department of Radiology, The University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637, USA
19. Suzuki K, Abe H, MacMahon H, Doi K. Image-processing technique for suppressing ribs in chest radiographs by means of massive training artificial neural network (MTANN). IEEE Trans Med Imaging 2006; 25:406-16. [PMID: 16608057] [DOI: 10.1109/tmi.2006.871549]
Abstract
When lung nodules overlap with ribs or clavicles in chest radiographs, it can be difficult for radiologists as well as computer-aided diagnostic (CAD) schemes to detect these nodules. In this paper, we developed an image-processing technique for suppressing the contrast of ribs and clavicles in chest radiographs by means of a multiresolution massive training artificial neural network (MTANN). An MTANN is a highly nonlinear filter that can be trained by use of input chest radiographs and the corresponding "teaching" images. We employed "bone" images obtained by use of a dual-energy subtraction technique as the teaching images. For effective suppression of ribs having various spatial frequencies, we developed a multiresolution MTANN consisting of multiresolution decomposition/composition techniques and three MTANNs for three different-resolution images. After training with input chest radiographs and the corresponding dual-energy bone images, the multiresolution MTANN was able to provide "bone-image-like" images which were similar to the teaching bone images. By subtracting the bone-image-like images from the corresponding chest radiographs, we were able to produce "soft-tissue-image-like" images where ribs and clavicles were substantially suppressed. We used a validation test database consisting of 118 chest radiographs with pulmonary nodules and an independent test database consisting of 136 digitized screen-film chest radiographs with 136 solitary pulmonary nodules collected from 14 medical institutions in this study. When our technique was applied to nontraining chest radiographs, ribs and clavicles in the chest radiographs were suppressed substantially, while the visibility of nodules and lung vessels was maintained. Thus, our image-processing technique for rib suppression by means of a multiresolution MTANN would be potentially useful for radiologists as well as for CAD schemes in detection of lung nodules on chest radiographs.
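
The overall flow of the rib-suppression technique, decompose the radiograph into resolution levels, predict a bone-like image, and subtract it from the original to obtain a soft-tissue-like image, can be sketched with a simple band-pass decomposition standing in for the trained multiresolution MTANN; the pyramid depth, filter, and placeholder "bone predictor" are assumptions, not the published method.

```python
# Bone suppression by subtraction of a predicted "bone-image-like" image, with a simple
# band-pass decomposition as a stand-in for the multiresolution MTANN (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(image: np.ndarray, levels: int = 3):
    """Split an image into band-pass layers plus a low-pass residual."""
    layers, current = [], image.astype(float)
    for _ in range(levels):
        low = gaussian_filter(current, sigma=2.0)
        layers.append(current - low)          # detail at this resolution
        current = low
    layers.append(current)                    # coarsest residual
    return layers

def compose(layers):
    return np.sum(layers, axis=0)             # exact inverse of the decomposition above

def predict_bone_layers(layers):
    """Stand-in for the per-resolution MTANNs trained on dual-energy bone images:
    here we simply keep a fraction of each layer as the 'bone-like' component."""
    return [0.5 * layer for layer in layers]

chest = np.random.default_rng(0).normal(0.5, 0.1, (128, 128))     # dummy radiograph
layers = decompose(chest)
bone_like = compose(predict_bone_layers(layers))                   # "bone-image-like" image
soft_tissue_like = chest - bone_like                               # ribs/clavicles suppressed
print(soft_tissue_like.shape, float(np.abs(compose(layers) - chest).max()) < 1e-9)
```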
Affiliation(s)
- Kenji Suzuki: Kurt Rossmann Laboratories for Radiologic Image Research, Department of Radiology, The University of Chicago, 5841 S. Maryland Ave., Chicago, IL 60637, USA