1
Guan H, Yap PT, Bozoki A, Liu M. Federated learning for medical image analysis: A survey. Pattern Recognition 2024; 151:110424. [PMID: 38559674] [PMCID: PMC10976951] [DOI: 10.1016/j.patcog.2024.110424]
Abstract
Machine learning in medical imaging often faces a fundamental dilemma, namely, the small sample size problem. Many recent studies suggest using multi-domain data pooled from different acquisition sites/centers to improve statistical power. However, medical images from different sites cannot be easily shared to build large datasets for model training due to privacy protection reasons. As a promising solution, federated learning, which enables collaborative training of machine learning models based on data from different sites without cross-site data sharing, has attracted considerable attention recently. In this paper, we conduct a comprehensive survey of the recent development of federated learning methods in medical image analysis. We have systematically gathered research papers on federated learning and its applications in medical image analysis published between 2017 and 2023. Our search and compilation were conducted using databases from IEEE Xplore, ACM Digital Library, Science Direct, Springer Link, Web of Science, Google Scholar, and PubMed. In this survey, we first introduce the background of federated learning for dealing with privacy protection and collaborative learning issues. We then present a comprehensive review of recent advances in federated learning methods for medical image analysis. Specifically, existing methods are categorized based on three critical aspects of a federated learning system, including client end, server end, and communication techniques. In each category, we summarize the existing federated learning methods according to specific research problems in medical image analysis and also provide insights into the motivations of different approaches. In addition, we provide a review of existing benchmark medical imaging datasets and software platforms for current federated learning research. We also conduct an experimental study to empirically evaluate typical federated learning methods for medical image analysis. 
This survey can help to better understand the current research status, challenges, and potential research opportunities in this promising research field.
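The client/server loop at the heart of the federated methods the survey categorizes can be illustrated with a minimal federated averaging (FedAvg-style) sketch. This is an illustration only, not a method from the survey: the logistic-regression clients, learning rate, and round count are all assumptions chosen for brevity.

```python
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One full-batch gradient step of logistic regression on a client's
    private data; only the updated weights leave the client."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def federated_averaging(clients, rounds=50, dim=2):
    """FedAvg sketch: the server broadcasts the global model, each client
    trains locally, and the server aggregates the returned weights by a
    data-size-weighted average. No raw images are ever shared."""
    global_w = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        updates = [local_step(global_w.copy(), X, y) for X, y in clients]
        global_w = sum(len(y) / total * w
                       for w, (_, y) in zip(updates, clients))
    return global_w
```

In a real medical-imaging deployment the client update would be epochs of deep-network training and the aggregation step is where most of the surveyed server-end variants differ.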
Affiliation(s)
- Hao Guan
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Andrea Bozoki
- Department of Neurology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
2
Al-Otaibi S, Rehman A, Raza A, Alyami J, Saba T. CVG-Net: novel transfer learning based deep features for diagnosis of brain tumors using MRI scans. PeerJ Comput Sci 2024; 10:e2008. [PMID: 38855235] [PMCID: PMC11157570] [DOI: 10.7717/peerj-cs.2008]
Abstract
Brain tumors present a significant medical challenge, demanding accurate and timely diagnosis for effective treatment planning. These tumors disrupt normal brain function in various ways, giving rise to a broad spectrum of physical, cognitive, and emotional challenges. The steadily rising mortality attributed to brain tumors underscores the urgency of this issue. In recent years, advanced medical imaging techniques, particularly magnetic resonance imaging (MRI), have emerged as indispensable tools for diagnosing brain tumors. Brain MRI scans provide high-resolution, non-invasive visualization of brain structures, facilitating the precise detection of abnormalities such as tumors. This study proposes an effective neural network approach for the timely diagnosis of brain tumors. Our experiments utilized a multi-class MRI image dataset comprising 21,672 images of glioma, meningioma, and pituitary tumors. We introduced a novel neural network-based feature engineering approach combining a 2D convolutional neural network (2DCNN) and VGG16. The resulting 2DCNN-VGG16 network (CVG-Net) extracts spatial features from MRI images using 2DCNN and VGG16 without human intervention. The resulting hybrid feature set is then input into machine learning models to diagnose brain tumors. We balanced the multi-class MRI feature data using the Synthetic Minority Over-sampling Technique (SMOTE). Extensive experiments demonstrate that, with the proposed CVG-Net features, the k-nearest neighbors classifier outperformed state-of-the-art studies with a k-fold accuracy of 0.96. We also applied hyperparameter tuning to further enhance multi-class brain tumor diagnosis. The proposed approach has the potential to improve early brain tumor diagnosis, providing medical professionals with a cost-effective and timely diagnostic mechanism.
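The class-balancing step the abstract mentions can be sketched as below. This is not the authors' pipeline, only the standard SMOTE interpolation idea: pick a minority sample, pick one of its k nearest minority neighbours, and synthesize a point a random fraction of the way between them. The neighbour count and RNG seed are illustrative assumptions.

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: synthesize n_new minority-class samples by
    interpolating between each chosen point and one of its k nearest
    minority neighbours."""
    rng = rng or np.random.default_rng(0)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per point
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(k)]
        gap = rng.random()                     # random fraction along the line
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled set never leaves the minority class's bounding region.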
Affiliation(s)
- Shaha Al-Otaibi
- Department of Information Systems, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Ali Raza
- Institute of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Rahim Yar Khan, Pakistan
- Jaber Alyami
- Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
3
Hegde N, Shishir M, Shashank S, Dayananda P, Latte MV. A Survey on Machine Learning and Deep Learning-based Computer-Aided Methods for Detection of Polyps in CT Colonography. Curr Med Imaging 2021; 17:3-15. [PMID: 32294045] [DOI: 10.2174/2213335607999200415141427]
Abstract
BACKGROUND Colon cancer generally begins as a neoplastic growth of tissue, called a polyp, originating from the inner lining of the colon wall. Most colon polyps are considered harmless, but over time they can evolve into colon cancer, which, when diagnosed in later stages, is often fatal. Hence, time is of the essence in the early detection of polyps and the prevention of colon cancer. METHODS To aid this endeavor, many computer-aided methods have been developed that use a wide array of techniques to detect, localize, and segment polyps from CT colonography images. In this paper, a comprehensive review of state-of-the-art methods is presented, and the work is broadly categorized by the classification techniques employed, namely machine learning and deep learning. CONCLUSION The performance of each approach is analyzed against existing methods, along with how each can be used to tackle the timely and accurate detection of colon polyps.
Affiliation(s)
- Niharika Hegde
- JSS Academy of Technical Education, Bangalore-560060, Karnataka, India
- M Shishir
- JSS Academy of Technical Education, Bangalore-560060, Karnataka, India
- S Shashank
- JSS Academy of Technical Education, Bangalore-560060, Karnataka, India
- P Dayananda
- JSS Academy of Technical Education, Bangalore-560060, Karnataka, India
4
Biswas B, Bhattacharyya S, Chakrabarti A, Dey KN, Platos J, Snasel V. Colonoscopy contrast-enhanced by intuitionistic fuzzy soft sets for polyp cancer localization. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106492]
5
Wang S, Wang Q, Shao Y, Qu L, Lian C, Lian J, Shen D. Iterative Label Denoising Network: Segmenting Male Pelvic Organs in CT From 3D Bounding Box Annotations. IEEE Trans Biomed Eng 2020; 67:2710-2720. [PMID: 31995472] [DOI: 10.1109/tbme.2020.2969608]
Abstract
Obtaining accurate segmentation of the prostate and nearby organs at risk (e.g., bladder and rectum) in CT images is critical for radiotherapy of prostate cancer. Currently, the leading automatic segmentation algorithms are based on Fully Convolutional Networks (FCNs), which achieve remarkable performance but usually need large-scale datasets with high-quality voxel-wise annotations for full supervision of the training. Unfortunately, such annotations are difficult to acquire, which becomes a bottleneck in building accurate segmentation models for real clinical applications. In this paper, we propose a novel weakly supervised segmentation approach that only needs 3D bounding box annotations covering the organs of interest to start the training. Obviously, the bounding box includes many non-organ voxels that carry noisy labels and mislead the segmentation model. To this end, we propose a label denoising module and embed it into the iterative training scheme of the label denoising network (LDnet) for segmentation. The labels of the training voxels are predicted by the tentative LDnet, while the label denoising module identifies the voxels with unreliable labels. As only the good training voxels are preserved, the iteratively re-trained LDnet can gradually refine its segmentation capability. Our results are remarkable, i.e., reaching ∼94% (prostate), ∼91% (bladder), and ∼86% (rectum) of the Dice Similarity Coefficients (DSCs) achieved by fully supervised learning on high-quality voxel-wise annotations, and also superior to several state-of-the-art approaches. To the best of our knowledge, this is the first work to achieve voxel-wise segmentation in CT images from simple 3D bounding box annotations, which can greatly reduce labeling effort and meet the demands of practical clinical applications.
6
Manjunath KN, Siddalingaswamy PC, Prabhu GK. Measurement of smaller colon polyp in CT colonography images using morphological image processing. Int J Comput Assist Radiol Surg 2017; 12:1845-1855. [PMID: 28573348] [DOI: 10.1007/s11548-017-1615-4]
Abstract
PURPOSE Automated measurement of the size and shape of colon polyps is one of the challenges in computed tomography colonography (CTC). The objective of this retrospective study was to improve the sensitivity and specificity of smaller polyp measurement in CTC using image processing techniques. METHODS A domain knowledge-based method was implemented with a hybrid method of colon segmentation, morphological image processing operators for detecting the colonic structures, and a decision-making system for delineating smaller polyps based on a priori knowledge. RESULTS The method was applied to 45 CTC datasets. The key finding was that the smaller polyps were accurately measured. In addition to the 6-9 mm range, polyps even <5 mm were also detected. The results were validated qualitatively and quantitatively using both 2D MPR and 3D views. Implementation was done on a high-performance computer with parallel processing. It takes [Formula: see text] min to measure the smaller polyps in a dataset of 500 CTC images. With this method, [Formula: see text] and [Formula: see text] were achieved. CONCLUSIONS The domain-based approach with morphological image processing gave good results. The smaller polyps were measured accurately, which helps in making the right clinical decisions. Qualitatively and quantitatively, the results were acceptable when compared to the ground truth at [Formula: see text].
Affiliation(s)
- K N Manjunath
- Faculty, Computer Science and Engineering, Manipal Institute of Technology, Manipal University, Manipal, 576104, India.
- P C Siddalingaswamy
- Faculty, Computer Science and Engineering, Manipal Institute of Technology, Manipal University, Manipal, 576104, India
- G K Prabhu
- Faculty, Biomedical Engineering, Manipal Institute of Technology, Manipal University, Manipal, 576104, India
7
Attraction Propagation: A User-Friendly Interactive Approach for Polyp Segmentation in Colonoscopy Images. PLoS One 2016; 11:e0155371. [PMID: 27191849] [PMCID: PMC4871526] [DOI: 10.1371/journal.pone.0155371]
Abstract
This article presents a user-friendly interactive approach, Attraction Propagation (AP), for segmentation of colorectal polyps. Compared with other interactive approaches, AP relies on only one foreground seed to capture polyps of different shapes, and it is compatible with the pre-processing stage of Computer-Aided Diagnosis (CAD) within the standard Optical Colonoscopy (OC) procedure. The experiments were based on challenging distinct datasets totaling 1691 OC images, and the results demonstrated that, in both accuracy and speed, AP outperformed the state-of-the-art.
8
Shinagawa Y, Metaxas DN. Multi-Instance Deep Learning: Discover Discriminative Local Anatomies for Bodypart Recognition. IEEE Transactions on Medical Imaging 2016; 35:1332-1343. [PMID: 26863652] [DOI: 10.1109/tmi.2016.2524985]
Abstract
In general image recognition problems, discriminative information often lies in local image patches. For example, most human identity information exists in the image patches containing human faces. The same holds for medical images. The "bodypart identity" of a transversal slice, i.e., which body part the slice comes from, is often indicated by local image information; e.g., a cardiac slice and an aortic arch slice are differentiated only by the mediastinum region. In this work, we design a multi-stage deep learning framework for image classification and apply it to bodypart recognition. Specifically, the proposed framework aims to: 1) discover the local regions that are discriminative or non-informative for the image classification problem, and 2) learn an image-level classifier based on these local regions. We achieve these two tasks in the two stages of the learning scheme, respectively. In the pre-train stage, a convolutional neural network (CNN) is learned in a multi-instance learning fashion to extract the most discriminative and non-informative local patches from the training slices. In the boosting stage, the pre-learned CNN is further boosted by these local patches for image classification. A CNN learned by exploiting the discriminative local appearances becomes more accurate than one learned from global image context. The key hallmark of our method is that it automatically discovers the discriminative and non-informative local patches through multi-instance deep learning; thus, no manual annotation is required. Our method is validated on a synthetic dataset and a large-scale CT dataset. It achieves better performance than state-of-the-art approaches, including the standard deep CNN.
9
Roth HR, Lu L, Liu J, Yao J, Seff A, Cherry K, Kim L, Summers RM. Improving Computer-Aided Detection Using Convolutional Neural Networks and Random View Aggregation. IEEE Transactions on Medical Imaging 2016; 35:1170-81. [PMID: 26441412] [PMCID: PMC7340334] [DOI: 10.1109/tmi.2015.2482920]
Abstract
Automated computer-aided detection (CADe) has been an important tool in clinical practice and research. State-of-the-art methods often show high sensitivities at the cost of high false-positive (FP) rates per patient. We design a two-tiered coarse-to-fine cascade framework that first operates a candidate generation system at sensitivities of ∼100% but at high FP levels. By leveraging existing CADe systems, coordinates of regions or volumes of interest (ROI or VOI) are generated and serve as input for a second tier, which is our focus in this study. In this second stage, we generate 2D (two-dimensional) or 2.5D views via sampling through scale transformations, random translations, and rotations. These random views are used to train deep convolutional neural network (ConvNet) classifiers. In testing, the ConvNets assign class (e.g., lesion, pathology) probabilities for a new set of random views that are then averaged to compute a final per-candidate classification probability. This second tier behaves as a highly selective process that rejects difficult false positives while preserving high sensitivities. The methods are evaluated on three data sets: 59 patients for sclerotic metastasis detection, 176 patients for lymph node detection, and 1,186 patients for colonic polyp detection. Experimental results show the ability of ConvNets to generalize well to different medical imaging CADe applications and scale elegantly to various data sets. Our proposed methods improve performance markedly in all cases. Sensitivities improved from 57% to 70%, 43% to 77%, and 58% to 75% at 3 FPs per patient for sclerotic metastases, lymph nodes, and colonic polyps, respectively.
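The second-tier scoring idea, averaging ConvNet probabilities over randomly transformed views of each candidate, can be sketched as follows. The crop-based view sampler and the `classify` callback below are simplified stand-ins (assumptions for illustration) for the paper's 2.5D resampling and trained ConvNet.

```python
import numpy as np

def random_views(volume, n_views, rng):
    """Stand-in for the 2.5D view sampler: the paper applies random scale
    transformations, translations, and rotations; here we simply crop 8^3
    patches at jittered centres of the candidate volume."""
    c = np.array(volume.shape) // 2
    views = []
    for _ in range(n_views):
        z, y, x = c + rng.integers(-2, 3, size=3)   # jittered centre
        views.append(volume[z-4:z+4, y-4:y+4, x-4:x+4])
    return views

def candidate_probability(volume, classify, n_views=20, rng=None):
    """Second-tier score: average the per-view class probabilities into one
    probability per CADe candidate."""
    rng = rng or np.random.default_rng(0)
    probs = [classify(v) for v in random_views(volume, n_views, rng)]
    return float(np.mean(probs))
```

Averaging over views makes the final score robust to any single unlucky sampling of the candidate, which is what lets this stage reject difficult false positives.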
10
Tajbakhsh N, Gurudu SR, Liang J. Automated Polyp Detection in Colonoscopy Videos Using Shape and Context Information. IEEE Transactions on Medical Imaging 2016; 35:630-44. [PMID: 26462083] [DOI: 10.1109/tmi.2015.2487997]
Abstract
This paper presents the culmination of our research in designing a system for computer-aided detection (CAD) of polyps in colonoscopy videos. Our system is based on a hybrid context-shape approach, which utilizes context information to remove non-polyp structures and shape information to reliably localize polyps. Specifically, given a colonoscopy image, we first obtain a crude edge map. Second, we remove non-polyp edges from the edge map using our unique feature extraction and edge classification scheme. Third, we localize polyp candidates with probabilistic confidence scores in the refined edge maps using our novel voting scheme. The suggested CAD system has been tested using two public polyp databases, CVC-ColonDB, containing 300 colonoscopy images with a total of 300 polyp instances from 15 unique polyps, and ASU-Mayo database, which is our collection of colonoscopy videos containing 19,400 frames and a total of 5,200 polyp instances from 10 unique polyps. We have evaluated our system using free-response receiver operating characteristic (FROC) analysis. At 0.1 false positives per frame, our system achieves a sensitivity of 88.0% for CVC-ColonDB and a sensitivity of 48% for the ASU-Mayo database. In addition, we have evaluated our system using a new detection latency analysis where latency is defined as the time from the first appearance of a polyp in the colonoscopy video to the time of its first detection by our system. At 0.05 false positives per frame, our system yields a polyp detection latency of 0.3 seconds.
11
Motai Y, Ma D, Docef A, Yoshida H. Smart Colonography for Distributed Medical Databases with Group Kernel Feature Analysis. ACM Trans Intell Syst Technol 2015. [DOI: 10.1145/2668136]
Abstract
Computer-aided detection (CAD) of polyps in computed tomographic (CT) colonography is currently very limited, since a single database at each hospital/institution does not provide sufficient data for training the CAD system's classification algorithm. To address this limitation, we propose to use multiple databases (e.g., big data studies) to create institution-wide databases using distributed computing technologies, which we call smart colonography. Smart colonography may be built as a larger colonography database networked through the participation of multiple institutions via distributed computing. The motivation is to create a distributed database that increases the detection accuracy of CAD diagnosis by covering many true-positive cases. Colonography data analysis is made mutually accessible to increase the availability of resources and thereby enhance the knowledge of radiologists. In this article, we propose a scalable and efficient algorithm called Group Kernel Feature Analysis (GKFA), which can be applied to multiple cancer databases so that the overall performance of CAD is improved. The key idea behind the proposed GKFA method is to allow the feature space to be updated as training proceeds, with more data being fed from other institutions into the algorithm. Experimental results show that GKFA achieves very good classification accuracy.
Affiliation(s)
- Alen Docef
- Virginia Commonwealth University, VA, USA
- Hiroyuki Yoshida
- Massachusetts General Hospital and Harvard Medical School, MA, USA
12
Detecting tympanostomy tubes from otoscopic images via offline and online training. Comput Biol Med 2015; 61:107-18. [PMID: 25889718] [DOI: 10.1016/j.compbiomed.2015.03.025]
Abstract
Tympanostomy tube placement is a common surgical treatment for otitis media. Following placement, regularly scheduled follow-ups to check the status of the tympanostomy tubes are an important part of treatment. The complexity of the follow-up care lies mainly in identifying the presence and patency of the tympanostomy tube. An automated tube detection program would greatly reduce care costs and enhance the clinical efficiency of ear, nose, and throat specialists and general practitioners. In this paper, we develop a computer vision system that automatically detects a tympanostomy tube in an otoscopic image of the ear drum. The system comprises an offline classifier training process followed by a real-time refinement stage performed at the point of care. The offline training process constructs a three-layer cascaded classifier, with each layer reflecting specific characteristics of the tube. The real-time refinement process enables end users to interact with and adjust the system over time based on their own otoscopic images and patient care. The support vector machine (SVM) algorithm is applied to train all of the classifiers. Empirical evaluation of the proposed system on both high-quality hospital images and low-quality internet images demonstrates its effectiveness. The offline classifier, trained using 215 images, achieved 90% accuracy in classifying otoscopic images with and without a tympanostomy tube, and the real-time refinement process then improved the classification accuracy by 3-5% based on an additional 20 images.
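The offline-then-online pattern described above can be sketched minimally. Note the assumptions: a perceptron stands in for the paper's cascaded SVMs, the feature vectors and learning rates are invented for illustration, and "online refinement" is reduced to a single corrective update per newly labelled image.

```python
import numpy as np

def train_offline(X, y, epochs=20, lr=0.1):
    """Offline stage: fit a linear classifier (perceptron stand-in for the
    paper's SVM cascade) on the curated training set."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append a bias term
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1.0 if xi @ w > 0 else 0.0
            w += lr * (yi - pred) * xi           # perceptron update rule
    return w

def refine_online(w, x_new, y_new, lr=0.05):
    """Point-of-care refinement: one corrective update from a single newly
    labelled otoscopic image, mirroring the interactive stage."""
    xb = np.append(x_new, 1.0)
    pred = 1.0 if xb @ w > 0 else 0.0
    return w + lr * (y_new - pred) * xb
```

The design point this illustrates is the split of responsibilities: the heavy fit happens once offline, while the deployed model absorbs clinician feedback incrementally without retraining from scratch.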
13
Xu JW, Suzuki K. Max-AUC feature selection in computer-aided detection of polyps in CT colonography. IEEE J Biomed Health Inform 2014; 18:585-93. [PMID: 24608058] [PMCID: PMC4283828] [DOI: 10.1109/jbhi.2013.2278023]
Abstract
We propose a feature selection method based on a sequential forward floating selection (SFFS) procedure to improve the performance of a classifier in computerized detection of polyps in CT colonography (CTC). The feature selection method is coupled with a nonlinear support vector machine (SVM) classifier. Unlike the conventional linear method based on Wilks' lambda, the proposed method selected the most relevant features that would maximize the area under the receiver operating characteristic curve (AUC), which directly maximizes classification performance, evaluated based on AUC value, in the computer-aided detection (CADe) scheme. We presented two variants of the proposed method with different stopping criteria used in the SFFS procedure. The first variant searched all feature combinations allowed in the SFFS procedure and selected the subsets that maximize the AUC values. The second variant performed a statistical test at each step during the SFFS procedure, and it was terminated if the increase in the AUC value was not statistically significant. The advantage of the second variant is its lower computational cost. To test the performance of the proposed method, we compared it against the popular stepwise feature selection method based on Wilks' lambda for a colonic-polyp database (25 polyps and 2624 nonpolyps). We extracted 75 morphologic, gray-level-based, and texture features from the segmented lesion candidate regions. The two variants of the proposed feature selection method chose 29 and 7 features, respectively. Two SVM classifiers trained with these selected features yielded a 96% by-polyp sensitivity at false-positive (FP) rates of 4.1 and 6.5 per patient, respectively. Experiments showed a significant improvement in the performance of the classifier with the proposed feature selection method over that with the popular stepwise feature selection based on Wilks' lambda that yielded 18.0 FPs per patient at the same sensitivity level.
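The max-AUC criterion can be illustrated with a greedy forward pass. This is a simplification of the paper's SFFS, which also allows backward (floating) removal steps, and the summed-feature score below is a deliberately simple stand-in for the nonlinear SVM decision value; both are assumptions for brevity.

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive scores higher than a randomly chosen negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()   # ties count half
    return wins / (len(pos) * len(neg))

def forward_select(X, y, n_feats):
    """Greedy forward step of the max-AUC criterion: at each step, add the
    feature whose inclusion maximizes the AUC of the candidate scores."""
    selected = []
    for _ in range(n_feats):
        best, best_auc = None, -1.0
        for j in range(X.shape[1]):
            if j in selected:
                continue
            s = X[:, selected + [j]].sum(axis=1)  # toy score, not an SVM
            a = auc(s, y)
            if a > best_auc:
                best, best_auc = j, a
        selected.append(best)
    return selected, best_auc
```

Optimizing AUC directly, rather than a proxy like Wilks' lambda, is the point of the paper's criterion: the selection objective matches the metric on which the CADe scheme is ultimately evaluated.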
Affiliation(s)
- Jian-Wu Xu
- Department of Radiology, University of Chicago, Chicago, IL 60637 USA
- Kenji Suzuki
- Department of Radiology, University of Chicago, Chicago, IL 60637 USA
14
Bria A, Karssemeijer N, Tortorella F. Learning from unbalanced data: A cascade-based approach for detecting clustered microcalcifications. Med Image Anal 2014; 18:241-52. [DOI: 10.1016/j.media.2013.10.014]
15
Calle-Alonso F, Pérez CJ, Arias-Nicolás JP, Martín J. Computer-aided diagnosis system: a Bayesian hybrid classification method. Computer Methods and Programs in Biomedicine 2013; 112:104-113. [PMID: 23932384] [DOI: 10.1016/j.cmpb.2013.05.029]
Abstract
A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach that combines pairwise comparison, Bayesian regression, and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is used iteratively to improve the results until a certain accuracy level is achieved; then the learning process is finished and new classifications can be performed automatically. The method has been applied in two biomedical contexts, following the same cross-validation schemes as the original studies. The first concerns cancer diagnosis, yielding an accuracy of 77.35% versus the 66.37% obtained originally. The second considers the diagnosis of pathologies of the vertebral column, where the original method achieves accuracies ranging from 76.5% to 96.7% and from 82.3% to 97.1% under two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases; with a supervised framework, the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified.
Affiliation(s)
- F Calle-Alonso
- Department of Mathematics, Faculty of Veterinary Medicine, University of Extremadura, Avda. de la Universidad s/n, 10003 Cáceres, Spain.
16
van Ravesteijn VF, Boellaard TN, van der Paardt MP, Serlie IWO, de Haan MC, Stoker J, van Vliet LJ, Vos FM. Electronic cleansing for 24-h limited bowel preparation CT colonography using principal curvature flow. IEEE Trans Biomed Eng 2013; 60:3036-45. [PMID: 23674411] [DOI: 10.1109/tbme.2013.2262046]
Abstract
CT colonography (CTC) is one of the recommended methods for colorectal cancer screening. The subject's preparation is one of the most burdensome aspects of CTC with a cathartic bowel preparation. Tagging of the bowel content with an oral contrast medium facilitates CTC with limited bowel preparation. Unfortunately, such preparations adversely affect the 3-D image quality. Thus far, data acquired after very limited bowel preparation were evaluated with a 2-D reading strategy only. Existing cleansing algorithms do not work sufficiently well to allow a primary 3-D reading strategy. We developed an electronic cleansing algorithm, aimed to realize optimal 3-D image quality for low-dose CTC with 24-h limited bowel preparation. The method employs a principal curvature flow algorithm to remove heterogeneities within poorly tagged fecal residue. In addition, a pattern recognition-based approach is used to prevent polyp-like protrusions on the colon surface from being removed by the method. Two experts independently evaluated 40 CTC cases by means of a primary 2-D approach without involvement of electronic cleansing as well as by a primary 3-D method after electronic cleansing. The data contained four variations of 24-h limited bowel preparation and was based on a low radiation dose scanning protocol. The sensitivity for lesions ≥ 6 mm was significantly higher for the primary 3-D reading strategy (84%) than for the primary 2-D reading strategy (68%) (p = 0.031). The reading time was increased from 5:39 min (2-D) to 7:09 min (3-D) (p = 0.005); the readers' confidence was reduced from 2.3 (2-D) to 2.1 (3-D) ( p = 0.013) on a three-point Likert scale. Polyp conspicuity for cleansed submerged lesions was similar to not submerged lesions (p = 0.06). To our knowledge, this study is the first to describe and clinically validate an electronic cleansing algorithm that facilitates low-dose CTC with 24-h limited bowel preparation.
17

18
Wang S, Summers RM. Machine learning and radiology. Med Image Anal 2012; 16:933-51. [PMID: 22465077] [PMCID: PMC3372692] [DOI: 10.1016/j.media.2012.02.005]
Abstract
In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications: medical image segmentation; registration; computer-aided detection and diagnosis; brain function or activity analysis and neurological disease diagnosis from fMR images; content-based image retrieval systems for CT or MRI images; and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers.
Affiliation(s)
- Shijun Wang
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10 Room 1C224D MSC 1182, Bethesda, MD 20892-1182
- Ronald M. Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10 Room 1C224D MSC 1182, Bethesda, MD 20892-1182

19
Linguraru MG, Panjwani N, Fletcher JG, Summers RM. Automated image-based colon cleansing for laxative-free CT colonography computer-aided polyp detection. Med Phys 2012; 38:6633-42. [PMID: 22149845] [DOI: 10.1118/1.3662918]
Abstract
PURPOSE To evaluate the performance of a computer-aided detection (CAD) system for detecting colonic polyps at noncathartic computed tomography colonography (CTC) in conjunction with an automated image-based colon cleansing algorithm. METHODS An automated colon cleansing algorithm was designed to detect and subtract tagged stool, accounting for heterogeneity and poor tagging, for use in conjunction with a colon CAD system. The method is locally adaptive and combines intensity, shape, and texture analysis with probabilistic optimization. CTC data from cathartic-free bowel preparations were acquired for training and testing. Patients underwent various colonic preparations with barium or Gastroview in divided doses over 48 h before scanning. No laxatives were administered and no dietary modifications were required. Cases were selected from a polyp-enriched cohort and included scans in which at least 90% of the solid stool was visually estimated to be tagged and each colonic segment was distended in either the prone or supine view. The CAD system was run with and without the stool subtraction algorithm. RESULTS The dataset comprised 38 CTC scans from prone and/or supine scans of 19 patients containing 44 polyps larger than 10 mm (22 unique polyps, if matched between prone and supine scans). The results are robust on fine details around folds, on thin stool linings on the colonic wall, near polyps, and in large fluid/stool pools. The sensitivity of the CAD system is 70.5% per polyp at a rate of 5.75 false positives/scan without the stool subtraction module. Detection improved significantly (p = 0.009) after automated colon cleansing on cathartic-free data, to an 86.4% true positive rate at 5.75 false positives/scan. CONCLUSIONS An automated image-based colon cleansing algorithm designed to overcome the challenges of the noncathartic colon significantly improves the sensitivity of colon CAD, by approximately 15%.
Affiliation(s)
- Marius George Linguraru
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 10 Center Drive, Bethesda, Maryland 20892, USA.

20
Wang S, Yao J, Petrick N, Summers RM. Combining Statistical and Geometric Features for Colonic Polyp Detection in CTC Based on Multiple Kernel Learning. Int J Comput Intell Appl 2011; 9:1-15. [PMID: 20953299] [DOI: 10.1142/s1469026810002744]
Abstract
Colon cancer is the second leading cause of cancer-related deaths in the United States. Computed tomographic colonography (CTC) combined with a computer-aided detection (CAD) system provides a feasible approach for improving the detection of colonic polyps and increasing the use of CTC for colon cancer screening. To distinguish true polyps from false positives, various features extracted from polyp candidates have been proposed. Most of these traditional features try to capture the shape of polyp candidates or neighborhood knowledge about the surrounding structures (fold, colon wall, etc.). In this paper, we propose a new set of shape descriptors for polyp candidates based on statistical curvature information. These features, called histograms of curvature features, are rotation, translation, and scale invariant and can be treated as complementing the existing feature set. To make full use of the traditional geometric features (group A) and the new statistical features (group B), which are highly heterogeneous, we employed a multiple kernel learning method based on semi-definite programming to learn an optimized classification kernel from the two groups of features. We conducted a leave-one-patient-out test on a CTC dataset containing scans from 66 patients. Experimental results show that a support vector machine (SVM) based on the combined feature set and the semi-definite optimization kernel achieved higher FROC performance than SVMs using the two groups of features separately. At a rate of 5 false positives per scan, the sensitivity of the SVM using the combined features improved from 0.77 (group A) and 0.73 (group B) to 0.83 (p ≤ 0.01).
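The kernel-combination idea above can be illustrated with a small sketch. The paper learns the mixing via semidefinite programming inside an SVM; as a hedged stand-in, the code below combines one RBF kernel per feature group with a single mixing weight beta, and kernel ridge regression plays the role of the classifier. All function names and parameter values are illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) Gram matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(feats_a, feats_b, beta):
    """Convex combination of per-group kernels: K = beta*K_A + (1-beta)*K_B."""
    return beta * rbf_kernel(feats_a, feats_a) + (1 - beta) * rbf_kernel(feats_b, feats_b)

def kernel_ridge_scores(feats_a, feats_b, y, beta, lam=1e-2):
    """Fit kernel ridge regression on the combined kernel and return
    training-set decision values (sign = predicted class)."""
    K = combined_kernel(feats_a, feats_b, beta)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return K @ alpha
```

In practice beta would be chosen by cross-validation (or, as in the paper, learned jointly with the classifier), letting the heterogeneous geometric and statistical feature groups each contribute through their own kernel.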
Affiliation(s)
- Shijun Wang
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10 Room 1C368X MSC 1182, Bethesda, MD 20892-1182

21
Liu J, Kabadi S, Van Uitert R, Petrick N, Deriche R, Summers RM. Improved computer-aided detection of small polyps in CT colonography using interpolation for curvature estimation. Med Phys 2011; 38:4276-84. [PMID: 21859029] [DOI: 10.1118/1.3596529]
Abstract
PURPOSE Surface curvatures are important geometric features for the computer-aided analysis and detection of polyps in CT colonography (CTC). However, the general kernel approach for curvature computation can yield erroneous results for small polyps and for polyps that lie on haustral folds. Those erroneous curvatures reduce the performance of polyp detection. This paper presents an analysis of the effect of interpolation on curvature estimation for thin structures and its application to the computer-aided detection of small polyps in CTC. METHODS The authors demonstrated that a simple technique, image interpolation, can improve the accuracy of curvature estimation for thin structures and thus significantly improve the sensitivity of small polyp detection in CTC. RESULTS The experiments showed that interpolation yielded more accurate curvature values for simulated data and better isolation of polyps near folds in clinical data. On a large clinical data set, linear, quadratic B-spline, and cubic B-spline interpolation each significantly improved the sensitivity of small polyp detection. CONCLUSIONS Image interpolation can improve the accuracy of curvature estimation for thin structures and thus improve the computer-aided detection of small polyps in CTC.
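The effect described above can be sketched in 2-D: linearly upsample the image, then estimate level-set curvature with central differences on the finer grid (using the halved sample spacing). The 2x factor, the 2-D simplification, and the function names are illustrative; the paper works with 3-D CTC volumes and also evaluates B-spline interpolation.

```python
import numpy as np

def upsample2_linear(img):
    """2x linear interpolation along both axes (n samples -> 2n-1)."""
    rows = np.empty((2 * img.shape[0] - 1, img.shape[1]))
    rows[0::2] = img
    rows[1::2] = (img[:-1] + img[1:]) / 2.0          # midpoints between rows
    out = np.empty((rows.shape[0], 2 * rows.shape[1] - 1))
    out[:, 0::2] = rows
    out[:, 1::2] = (rows[:, :-1] + rows[:, 1:]) / 2.0  # midpoints between cols
    return out

def levelset_curvature(img, h=1.0, eps=1e-8):
    """Level-set curvature kappa = div(grad I / |grad I|) via central differences."""
    Ix = np.gradient(img, h, axis=1)
    Iy = np.gradient(img, h, axis=0)
    Ixx = np.gradient(Ix, h, axis=1)
    Iyy = np.gradient(Iy, h, axis=0)
    Ixy = np.gradient(Ix, h, axis=0)
    g2 = Ix**2 + Iy**2
    return (Ixx * Iy**2 - 2 * Ix * Iy * Ixy + Iyy * Ix**2) / (g2 + eps) ** 1.5
```

For a small structure, computing `levelset_curvature(upsample2_linear(vol_slice), h=0.5)` samples the derivative stencils on a finer grid, which is the mechanism by which interpolation reduces curvature error for thin structures.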
Affiliation(s)
- Jiamin Liu
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland 20892-1182, USA

22
Lee JG, Kim JH, Kim SH, Park HS, Choi BI. A straightforward approach to computer-aided polyp detection using a polyp-specific volumetric feature in CT colonography. Comput Biol Med 2011; 41:790-801. [PMID: 21762887] [DOI: 10.1016/j.compbiomed.2011.06.015]
Abstract
This study presents a straightforward approach to computer-aided polyp detection and explores its advantages and future potential. A straightforward computer-aided polyp detection (CAD) scheme was developed that consisted of colon wall segmentation, a polyp-specific volumetric filter, and the counting and thresholding of cluster volume sizes. Sixty-five patients underwent a bowel cleansing scheme without fecal tagging, followed by both optical colonoscopy (OC) and CT colonography (CTC). The polyp sizes determined by OC were used as reference measurements. The CTC dataset, with 103 polyps, was divided into training and test datasets. After tuning for the optimal parameter settings, the per-polyp sensitivities of the developed CAD scheme for clinically relevant polyps (≥ 6 mm) were 100% at 8.5 false positives (FPs)/patient on the training dataset and 93.3% at 7.7 FPs/patient on the test dataset. The developed CAD scheme was found to have a relatively high detection performance, easily optimized parameter settings, and an easily understood internal operation.
Affiliation(s)
- June-Goo Lee
- Interdisciplinary Program in Radiation Applied Life Science, Seoul National University College of Medicine, Seoul 110-799, South Korea

23
Xu JW, Suzuki K. Massive-training support vector regression and Gaussian process for false-positive reduction in computer-aided detection of polyps in CT colonography. Med Phys 2011; 38:1888-902. [PMID: 21626922] [DOI: 10.1118/1.3562898]
Abstract
PURPOSE A massive-training artificial neural network (MTANN) has been developed for the reduction of false positives (FPs) in computer-aided detection (CADe) of polyps in CT colonography (CTC). A major limitation of the MTANN is the long training time. To address this issue, the authors investigated the feasibility of two state-of-the-art regression models, namely, support vector regression (SVR) and Gaussian process regression (GPR) models, in the massive-training framework and developed massive-training SVR (MTSVR) and massive-training GPR (MTGPR) for the reduction of FPs in CADe of polyps. METHODS The authors applied SVR and GPR as volume-processing techniques in the distinction of polyps from FP detections in a CTC CADe scheme. Unlike artificial neural networks (ANNs), both SVR and GPR are memory-based methods that store a part of or the entire training data for testing. Therefore, their training is generally fast and they are able to improve the efficiency of the massive-training methodology. Rooted in a maximum margin property, SVR offers excellent generalization ability and robustness to outliers. On the other hand, GPR approaches nonlinear regression from a Bayesian perspective, which produces both the optimal estimated function and the covariance associated with the estimation. Therefore, both SVR and GPR, as the state-of-the-art nonlinear regression models, are able to offer a performance comparable or potentially superior to that of ANN, with highly efficient training. Both MTSVR and MTGPR were trained directly with voxel values from CTC images. A 3D scoring method based on a 3D Gaussian weighting function was applied to the outputs of MTSVR and MTGPR for distinction between polyps and nonpolyps. To test the performance of the proposed models, the authors compared them to the original MTANN in the distinction between actual polyps and various types of FPs in terms of training time reduction and FP reduction performance. 
The authors' CTC database consisted of 240 CTC data sets obtained from 120 patients in the supine and prone positions. The training set consisted of 27 patients, 10 of whom had 10 polyps. The authors selected 10 nonpolyps (i.e., FP sources) from the training set. These ten polyps and ten nonpolyps were used for training the proposed models. The testing set consisted of 93 patients, including 7 patients with 19 polyps and 86 negative patients with 474 FPs produced by an original CADe scheme. RESULTS With the MTSVR, the training time was reduced by a factor of 190, while a FP reduction performance [by-polyp sensitivity of 94.7% (18/19) with 2.5 (230/93) FPs/patient] comparable to that of the original MTANN [the same sensitivity with 2.6 (244/93) FPs/patient] was achieved. The classification performance, in terms of the area under the receiver-operating-characteristic curve, of the MTGPR (0.82) was statistically significantly higher than that of the original MTANN (0.77), with a two-sided p-value of 0.03. The MTGPR yielded a 94.7% (18/19) by-polyp sensitivity at a FP rate of 2.5 (235/93) per patient and reduced the training time by a factor of 1.3. CONCLUSIONS Both MTSVR and MTGPR improve the efficiency of training in the massive-training framework while maintaining comparable performance.
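The massive-training idea, regressing a per-voxel likelihood from small voxel patches and then scoring candidates with a Gaussian weighting, can be sketched as follows. The GP posterior mean with observation noise is algebraically the kernel ridge solution, so the toy class below stands in for MTGPR; the class name, patch layout, and parameter values are all illustrative, not the authors' implementation.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF Gram matrix between rows of A and B (patch vectors)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class MassiveTrainingGPR:
    """Trains on individual patch vectors; predicts a per-voxel likelihood.

    The prediction is the GP posterior mean with noise variance sigma2,
    i.e., k(x*, X) (K + sigma2 I)^{-1} y.
    """
    def __init__(self, gamma=0.5, sigma2=1e-2):
        self.gamma, self.sigma2 = gamma, sigma2

    def fit(self, patches, targets):
        self.X = patches
        K = rbf(patches, patches, self.gamma)
        self.alpha = np.linalg.solve(K + self.sigma2 * np.eye(len(patches)), targets)
        return self

    def predict(self, patches):
        return rbf(patches, self.X, self.gamma) @ self.alpha

def gaussian_score(output_map, center, sigma=2.0):
    """Gaussian-weighted candidate score over the regression output map,
    analogous to the paper's 3-D Gaussian scoring step."""
    idx = np.indices(output_map.shape)
    d2 = sum((idx[i] - center[i]) ** 2 for i in range(output_map.ndim))
    w = np.exp(-d2 / (2 * sigma**2))
    return float((w * output_map).sum() / w.sum())
```

Because the model is memory-based, "training" is a single linear solve, which mirrors the paper's point that SVR/GPR make the massive-training framework far faster to train than an MTANN.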
Affiliation(s)
- Jian-Wu Xu
- Department of Radiology, The University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637, USA.

24
Liu M, Lu L, Bi J, Raykar V, Wolf M, Salganicoff M. Robust Large Scale Prone-Supine Polyp Matching Using Local Features: A Metric Learning Approach. Lect Notes Comput Sci 2011; 14:75-82. [DOI: 10.1007/978-3-642-23626-6_10]
25
Liu M, Lu L, Ye X, Yu S, Salganicoff M. Sparse Classification for Computer Aided Diagnosis Using Learned Dictionaries. Lect Notes Comput Sci 2011; 14:41-8. [DOI: 10.1007/978-3-642-23626-6_6]
26
Abstract
Computer-aided polyp detection aims to improve the accuracy of colonography interpretation. The computer searches the colonic wall for polyp-like protrusions and presents a list of suspicious areas to a physician for further analysis. Computer-aided polyp detection has developed rapidly in the laboratory setting over the past decade and has sensitivities comparable with those of experts. It tends to help inexperienced readers more than experienced ones and may also lead to small reductions in specificity. In its currently proposed use as an adjunct to standard image interpretation, computer-aided polyp detection serves as a spellchecker rather than an efficiency enhancer.
Affiliation(s)
- Ronald M Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C368X MSC 1182, Bethesda, MD 20892-1182, USA.

27
van Wijk C, van Ravesteijn VF, Vos FM, van Vliet LJ. Detection and segmentation of colonic polyps on implicit isosurfaces by second principal curvature flow. IEEE Trans Med Imaging 2010; 29:688-98. [PMID: 20199908] [DOI: 10.1109/tmi.2009.2031323]
Abstract
Today's computer-aided detection systems for computed tomography colonography (CTC) enable automated detection and segmentation of colorectal polyps. We present a paradigm shift by proposing a method that measures the amount of protrudedness of a candidate object in a scale-adaptive fashion. One of the main results is that the performance of candidate detection depends on only one parameter, the amount of protrusion. Additionally, the method yields correct polyp segmentation without the need for an additional segmentation step. The supervised pattern recognition involves a clear distinction between size-related features and features related to shape or intensity. A Mahalanobis transformation of the latter facilitates ranking of the objects using a logistic classifier. We evaluate two implementations of the method on 84 patients with a total of 57 polyps larger than or equal to 6 mm. We obtained a performance of 95% sensitivity at four false positives per scan for polyps larger than or equal to 6 mm.
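The "Mahalanobis transformation, then logistic classifier" ranking step can be sketched as below. Plain gradient-descent logistic regression stands in for whatever solver the authors used, and all function names and hyperparameters are illustrative.

```python
import numpy as np

def mahalanobis_whiten(F, eps=1e-8):
    """Whiten the feature matrix F (rows = candidates) so that Euclidean
    distance in the transformed space equals Mahalanobis distance."""
    mu = F.mean(0)
    cov = np.cov(F - mu, rowvar=False) + eps * np.eye(F.shape[1])
    # inverse square root of the covariance via eigendecomposition
    w, V = np.linalg.eigh(cov)
    return (F - mu) @ (V @ np.diag(w ** -0.5) @ V.T)

def logistic_rank(F, y, lr=0.5, iters=500):
    """Logistic regression on whitened features; returns candidate scores
    in (0, 1), usable for ranking candidate objects."""
    Z = np.c_[np.ones(len(F)), mahalanobis_whiten(F)]
    beta = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Z @ beta))
        beta += lr * Z.T @ (y - p) / len(y)  # gradient ascent on log-likelihood
    return 1.0 / (1.0 + np.exp(-Z @ beta))
```

Whitening first makes the logistic weights comparable across heterogeneous shape and intensity features, which is the practical point of the Mahalanobis transformation here.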
Affiliation(s)
- Cees van Wijk
- Quantitative Imaging Group, Delft University of Technology, NL-2628 CJ Delft, The Netherlands

28
A Robust and Fast System for CTC Computer-Aided Detection of Colorectal Lesions. Algorithms 2010. [DOI: 10.3390/a3010021]
29
Grigorescu SE, Nevo ST, Liedenbaum MH, Truyen R, Stoker J, van Vliet LJ, Vos FM. Automated detection and segmentation of large lesions in CT colonography. IEEE Trans Biomed Eng 2009; 57:675-84. [PMID: 19884071] [DOI: 10.1109/tbme.2009.2035632]
Abstract
Computerized tomographic colonography is a minimally invasive technique for the detection of colorectal polyps and carcinoma. Computer-aided diagnosis (CAD) schemes are designed to help radiologists locate colorectal lesions in an efficient and accurate manner. Large lesions are often initially detected as multiple small objects; as a result, such lesions may be missed or misclassified by CAD systems. We propose a novel method for automated detection and segmentation of all large lesions, i.e., large polyps as well as carcinoma. Our detection algorithm is incorporated in a classical CAD system. Candidate detection comprises preselection based on a local measure of protrusion and clustering based on geodesic distance. The generated clusters are further segmented and analyzed. The segmentation algorithm is a thresholding operation in which the threshold is adaptively selected. The segmentation provides a size measurement that is used to compute the likelihood that a cluster is a large lesion. The large lesion detection algorithm was evaluated on data from 35 patients with 41 large lesions (19 of which were malignant) confirmed by optical colonoscopy. At five false positives (FPs) per scan, the classical system achieved a sensitivity of 78%, while the system augmented with the large lesion detector achieved 83%. For malignant lesions, the performance at five FPs/scan increased from 79% to 95%. The good results on malignant lesions demonstrate that the proposed algorithm may provide relevant additional information for the clinical decision process.
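The clustering step, grouping candidate detections that belong to one large lesion, can be sketched as below. The paper clusters by geodesic distance along the colon surface; this simplified stand-in clusters by within-mask connectivity (flood fill over a binary protrusion mask), so detections that are close in Euclidean distance but not connected through the mask stay separate. Names and the face-connectivity choice are illustrative.

```python
import numpy as np
from collections import deque

def geodesic_clusters(mask):
    """Label True voxels so that two voxels share a label iff they are
    connected by a path that stays inside the mask (face connectivity).
    Returns (label array, number of clusters); background is -1."""
    labels = np.full(mask.shape, -1, dtype=int)
    offsets = []
    for ax in range(mask.ndim):
        for step in (-1, 1):
            off = [0] * mask.ndim
            off[ax] = step
            offsets.append(tuple(off))
    n_clusters = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start] != -1:
            continue
        labels[start] = n_clusters
        queue = deque([start])
        while queue:                      # breadth-first flood fill
            p = queue.popleft()
            for off in offsets:
                nb = tuple(p[i] + off[i] for i in range(mask.ndim))
                inside = all(0 <= nb[i] < mask.shape[i] for i in range(mask.ndim))
                if inside and mask[nb] and labels[nb] == -1:
                    labels[nb] = n_clusters
                    queue.append(nb)
        n_clusters += 1
    return labels, n_clusters
```

Each resulting cluster would then be segmented with an adaptively selected threshold and sized, as the abstract describes.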
Affiliation(s)
- Simona E Grigorescu
- Department of Imaging Science and Technology, Delft University of Technology, Delft, The Netherlands.