1
Rapid Endoscopic Diagnosis of Benign Ulcerative Colorectal Diseases With an Artificial Intelligence Contextual Framework. Gastroenterology 2024:S0016-5085(24)00365-2. [PMID: 38583724] [DOI: 10.1053/j.gastro.2024.03.039]
Abstract
BACKGROUND & AIMS Benign ulcerative colorectal diseases (UCDs) such as ulcerative colitis, Crohn's disease, ischemic colitis, and intestinal tuberculosis share similar phenotypes but have different etiologies and treatment strategies. We hypothesized that contextual learning is critical for enhancing the ability of artificial intelligence models to differentiate subtle lesion differences amid vastly divergent spatial contexts, and thus to accurately diagnose closely related diseases such as UCDs. METHODS White-light colonoscopy datasets of patients with confirmed UCDs and healthy controls were retrospectively collected. We developed a Multiclass Contextual Classification (MCC) model that can differentiate among these UCDs and healthy controls by incorporating the tissue object contexts surrounding the individual lesion region in a scene and spatial information from other endoscopic frames (video-level) into a unified framework. Internal and external datasets were used to validate the model's performance. RESULTS Training datasets included 762 patients, and the internal and external testing cohorts included 257 patients and 293 patients, respectively. Our MCC model provided a rapid reference diagnosis on internal test sets with a high averaged area under the receiver operating characteristic curve (image-level: 0.950 and video-level: 0.973) and balanced accuracy (image-level: 76.1% and video-level: 80.8%), which was superior to junior endoscopists (accuracy: 71.8%, P < .0001) and similar to experts (accuracy: 79.7%, P = .732). The MCC model achieved an area under the receiver operating characteristic curve of 0.988 and balanced accuracy of 85.8% on the external testing datasets. CONCLUSIONS These results suggest that the model can fit into the routine endoscopic workflow and that the contextual framework can be adopted for diagnosing other closely related diseases.
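The model itself is not given here, but the two summary measures quoted above (averaged AUROC and balanced accuracy) are standard multiclass metrics. A minimal Python sketch of how they are commonly computed follows; the class names, scores, and labels are invented placeholders, not the study's data.

```python
# Sketch of the multiclass metrics reported above (macro-averaged AUROC and
# balanced accuracy). All labels and scores are illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

classes = ["UC", "CD", "ischemic_colitis", "intestinal_TB", "healthy"]  # assumed labels
y_true = np.array([0, 1, 2, 3, 4, 0, 1, 2, 3, 4])                # ground-truth class indices
rng = np.random.default_rng(0)
y_score = rng.dirichlet(np.ones(len(classes)), size=len(y_true))  # per-class probabilities

# Macro-averaged one-vs-rest AUROC, a common choice for multiclass problems.
auroc = roc_auc_score(y_true, y_score, multi_class="ovr", average="macro")

# Balanced accuracy = mean of the per-class recalls, robust to class imbalance.
y_pred = y_score.argmax(axis=1)
bal_acc = balanced_accuracy_score(y_true, y_pred)
print(f"macro AUROC = {auroc:.3f}, balanced accuracy = {bal_acc:.1%}")
```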
2
AGA Clinical Practice Update on the Role of Artificial Intelligence in Colon Polyp Diagnosis and Management: Commentary. Gastroenterology 2023; 165:1568-1573. [PMID: 37855759] [DOI: 10.1053/j.gastro.2023.07.010]
Abstract
DESCRIPTION The purpose of this American Gastroenterological Association (AGA) Institute Clinical Practice Update (CPU) is to review the available evidence and provide expert commentary on the current landscape of artificial intelligence in the evaluation and management of colorectal polyps. METHODS This CPU was commissioned and approved by the AGA Institute Clinical Practice Updates Committee (CPUC) and the AGA Governing Board to provide timely guidance on a topic of high clinical importance to the AGA membership and underwent internal peer review by the CPUC and external peer review through standard procedures of Gastroenterology. This Expert Commentary incorporates important as well as recently published studies in this field, and it reflects the experiences of the authors who are experienced endoscopists with expertise in the field of artificial intelligence and colorectal polyps.
3
Impact of Artificial Intelligence on Colonoscopy Surveillance After Polyp Removal: A Pooled Analysis of Randomized Trials. Clin Gastroenterol Hepatol 2023; 21:949-959.e2. [PMID: 36038128] [DOI: 10.1016/j.cgh.2022.08.022]
Abstract
BACKGROUND AND AIMS Artificial intelligence (AI) tools aimed at improving polyp detection have been shown to increase the adenoma detection rate during colonoscopy. However, it is unknown how increased polyp detection rates by AI affect the burden of patient surveillance after polyp removal. METHODS We conducted a pooled analysis of 9 randomized controlled trials (5 in China, 2 in Italy, 1 in Japan, and 1 in the United States) comparing colonoscopy with or without AI detection aids. The primary outcome was the proportion of patients recommended to undergo intensive surveillance (ie, 3-year interval). We analyzed intervals for AI and non-AI colonoscopies for the U.S. and European recommendations separately. We estimated proportions by calculating relative risks using the Mantel-Haenszel method. RESULTS A total of 5796 patients (51% male, mean 53 years of age) were included; 2894 underwent AI-assisted colonoscopy and 2902 non-AI colonoscopy. When following U.S. guidelines, the proportion of patients recommended intensive surveillance increased from 8.4% (95% CI, 7.4%-9.5%) in the non-AI group to 11.3% (95% CI, 10.2%-12.6%) in the AI group (absolute difference, 2.9% [95% CI, 1.4%-4.4%]; risk ratio, 1.35 [95% CI, 1.16-1.57]). When following European guidelines, it increased from 6.1% (95% CI, 5.3%-7.0%) to 7.4% (95% CI, 6.5%-8.4%) (absolute difference, 1.3% [95% CI, 0.01%-2.6%]; risk ratio, 1.22 [95% CI, 1.01-1.47]). CONCLUSIONS The use of AI during colonoscopy increased the proportion of patients requiring intensive colonoscopy surveillance by approximately 35% in the United States and 20% in Europe (absolute increases of 2.9% and 1.3%, respectively). While this may contribute to improved cancer prevention, it also adds significantly to patient burden and healthcare costs.
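The pooled estimates above rely on the Mantel-Haenszel method named in the abstract. A small sketch of a Mantel-Haenszel pooled risk ratio over several strata follows; the per-trial counts are hypothetical, not the actual trial data.

```python
# Sketch of a Mantel-Haenszel pooled risk ratio across several trials (strata).
# Counts below are invented placeholders.
def mantel_haenszel_rr(strata):
    """strata: list of (events_ai, n_ai, events_ctrl, n_ctrl) tuples, one per trial."""
    num = den = 0.0
    for a, n1, c, n0 in strata:
        N = n1 + n0
        num += a * n0 / N   # events in the AI arm, weighted by the control arm size
        den += c * n1 / N   # events in the control arm, weighted by the AI arm size
    return num / den

trials = [(40, 320, 30, 330), (25, 310, 18, 300), (33, 280, 22, 290)]  # hypothetical counts
print(f"Pooled RR (Mantel-Haenszel) = {mantel_haenszel_rr(trials):.2f}")
```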
4
Computer-Aided Diagnosis of Various Diseases Using Ultrasonography Images. Curr Med Imaging 2023; 20:CMIR-EPUB-130003. [PMID: 36876845] [DOI: 10.2174/1573405619666230306101012]
Abstract
This paper is an exhaustive survey of computer-aided diagnosis (CAD) systems for the automatic detection of several diseases from ultrasound images. CAD plays a vital role in the automatic and early detection of disease. CAD has made health monitoring, medical database management, and picture archiving systems far more practical, assisting radiologists in decision-making across imaging modalities. These systems mainly rely on machine learning and deep learning algorithms for early and accurate disease detection. CAD approaches are described in this paper in terms of their significant tools: digital image processing (DIP), machine learning (ML), and deep learning (DL). Ultrasonography (USG) already has many advantages over other imaging modalities, and CAD analysis helps radiologists interpret USG more clearly, supporting its application to various body parts. This paper therefore reviews the major diseases for which ML-based diagnosis from USG images has been reported. The ML pipeline comprises feature extraction, feature selection, and classification into the required class. The literature survey of these diseases is grouped into the carotid region, transabdominal and pelvic region, musculoskeletal region, and thyroid region; these regions also differ in the types of transducers employed for scanning. Based on the literature survey, we conclude that texture features passed to a support vector machine (SVM) classifier yield good classification accuracy. However, the emerging trend toward deep learning-based disease classification offers greater precision and automation in feature extraction and classification, although classification accuracy still depends on the number of images used for training the model. This motivated us to highlight some of the significant shortcomings of automated disease diagnosis techniques. Research challenges in designing CAD-based automatic diagnosis systems and limitations of USG imaging are discussed as separate topics, indicating future scope for improvement in this field. The success of machine learning approaches in USG-based automatic disease detection motivated this review to describe the parameters behind machine learning and deep learning algorithms for improving USG diagnostic performance.
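As an illustration of the texture-feature-plus-SVM pipeline the survey singles out, the sketch below computes grey-level co-occurrence (GLCM) statistics and trains an SVM; the images and labels are synthetic, and the chosen GLCM properties are only an example of what such a pipeline might use.

```python
# Sketch of a GLCM texture-feature + SVM pipeline on synthetic "ultrasound" patches.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # named 'greycomatrix' in older scikit-image
from sklearn.svm import SVC

def glcm_features(img):
    # Co-occurrence matrix at distance 1, horizontal and vertical offsets.
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
X = np.array([glcm_features(rng.integers(0, 256, (64, 64), dtype=np.uint8)) for _ in range(40)])
y = np.array([0, 1] * 20)          # placeholder benign / malignant labels

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```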
5
Computer-aided hepatocellular carcinoma detection on the hepatobiliary phase of gadoxetic acid-enhanced magnetic resonance imaging using a convolutional neural network: Feasibility evaluation with multi-sequence data. Comput Methods Programs Biomed 2022; 225:107032. [PMID: 35930863] [DOI: 10.1016/j.cmpb.2022.107032]
Abstract
BACKGROUND AND OBJECTIVES Diagnosis of hepatocellular carcinoma (HCC) on liver MRI requires analysis of multi-sequence images. However, developing computer-aided detection (CAD) for every single sequence requires considerable time and labor for image segmentation. Therefore, we developed CAD for HCC on the hepatobiliary phase (HBP) of gadoxetic acid-enhanced magnetic resonance imaging (MRI) using a convolutional neural network (CNN) and evaluated its feasibility on multi-sequence, multi-unit, and multi-center data. METHODS Patients who underwent gadoxetic acid-enhanced MRI and surgery for HCC at Korea University Anam Hospital (KUAH) and Korea University Guro Hospital (KUGH) were reviewed. Finally, 170 nodules from 155 consecutive patients from KUAH and 28 nodules from 28 patients randomly selected from KUGH were included. Regions of interest were drawn on the whole HCC volume on HBP, T1-weighted (T1WI), T2-weighted (T2WI), and portal venous phase (PVP) images. The CAD system was developed from the HBP images of KUAH using a customized nnU-Net and post-processed for false-positive reduction. Internal and external validation of the CAD was performed with the HBP, T1WI, T2WI, and PVP images of KUAH and KUGH. RESULTS The figure of merit and recall of the jackknife alternative free-response receiver operating characteristic analysis of the CAD for HBP, T1WI, T2WI, and PVP at a false-positive rate of 0.5 were (0.87 and 87.0), (0.73 and 73.3), (0.13 and 13.3), and (0.67 and 66.7) in KUAH and (0.86 and 86.0), (0.61 and 53.6), (0.07 and 0.07), and (0.57 and 53.6) in KUGH, respectively. CONCLUSIONS The CAD for HCC on gadoxetic acid-enhanced MRI, developed by a CNN from HBP images, feasibly detected HCCs on the HBP, T1WI, and PVP images of gadoxetic acid-enhanced MRI obtained from multiple units and centers. These results imply that a CAD system developed using a single MRI sequence may be applied to other similar sequences, which would reduce the labor and time required for CAD development in multi-sequence MRI.
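The abstract mentions post-processing for false-positive reduction without detailing it. One common form of such post-processing is to discard small predicted components; the sketch below illustrates that idea with an arbitrary voxel threshold and a toy mask, and is not the authors' implementation.

```python
# Sketch of simple false-positive reduction: drop predicted components below a
# volume threshold. Threshold and mask are illustrative placeholders.
import numpy as np
from scipy import ndimage

def remove_small_detections(pred_mask, min_voxels=50):
    labeled, n = ndimage.label(pred_mask)                       # connected components
    sizes = ndimage.sum(pred_mask, labeled, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    return np.isin(labeled, keep) if keep else np.zeros_like(pred_mask, dtype=bool)

mask = np.zeros((64, 64, 32), dtype=bool)
mask[10:20, 10:20, 5:15] = True   # plausible lesion detection
mask[40, 40, 20] = True           # isolated voxel, likely a false positive
print(remove_small_detections(mask).sum(), "voxels kept")
```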
6
Computer-Aided Breast Cancer Diagnosis: A Study of Breast Imaging Modalities and Mammogram Repositories. Curr Med Imaging 2022; 19:456-468. [PMID: 35726812] [DOI: 10.2174/1573405618666220621123156]
Abstract
The accurate assessment or diagnosis of breast cancer depends on image acquisition and on image analysis and interpretation. Image interpretation is performed by expert radiologists, and this process has benefited greatly from computer technology. For image acquisition, various imaging modalities have been developed and used over the years. This research examines several imaging modalities and their associated benefits and drawbacks. Commonly used parameters such as sensitivity and specificity are also presented to evaluate the usefulness of different imaging modalities. The main focus of the research is on mammograms. Despite the availability of breast cancer datasets for imaging modalities such as MRI, ultrasound, and thermography, mammogram datasets are the ones used mainly by domain researchers, and they are considered an international gold standard for the early detection of breast cancer. We discuss and analyze widely used, publicly available mammogram repositories, along with some common key constraints of mammogram datasets for developing deep learning-based computer-aided diagnosis (CADx) systems for breast cancer, and we present ideas for their improvement.
7
Thyroid Ultrasound-Image Dataset. Stud Health Technol Inform 2022; 294:397-402. [PMID: 35612104] [DOI: 10.3233/shti220482]
Abstract
Thyroid Computer-Aided Diagnosis (CAD) systems have been developed to assist radiologists in improving efficiency, reliability, and diagnostic performance. Often, the performance of these CAD systems is evaluated on different datasets, which makes the results incomparable. A valuable thyroid ultrasound (US) dataset is presented in this work. This dataset consists of 2450 thyroid US images collected from 2018 to 2020 in the Prospective Epidemiological Research Studies in Mashhad, Iran (PERSIAN), a large national cohort study. Each US image includes the region of interest (ROI) of the thyroid nodule and the associated American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TIRADS) features, annotated by expert physicians and provided in XML format. The dataset's images are categorized into five groups based on ACR-TIRADS level (TIRADS 1-TIRADS 5). The presented dataset is expected to be a valuable resource for developing and assessing thyroid CAD systems that help radiologists make better diagnoses.
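A hedged sketch of reading one nodule ROI and its TIRADS attributes from an XML annotation is given below. The tag and attribute names are hypothetical; the actual schema of the published dataset may differ.

```python
# Sketch of parsing a nodule ROI and its ACR-TIRADS attributes from XML.
# The element and attribute names are hypothetical, not the dataset's real schema.
import xml.etree.ElementTree as ET

xml_text = """
<image file="case_0001.png" tirads="TR3">
  <nodule>
    <roi xmin="120" ymin="84" xmax="210" ymax="160"/>
    <feature name="composition" value="solid"/>
    <feature name="echogenicity" value="hypoechoic"/>
  </nodule>
</image>
""".strip()

root = ET.fromstring(xml_text)
roi = root.find("./nodule/roi").attrib
features = {f.get("name"): f.get("value") for f in root.findall("./nodule/feature")}
print(root.get("tirads"), roi, features)
```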
8
A two-stage multiresolution neural network for automatic diagnosis of hepatic echinococcosis from ultrasound images: A multicenter study. Med Phys 2022; 49:3199-3212. [PMID: 35192193] [DOI: 10.1002/mp.15548]
Abstract
PURPOSE Hepatic echinococcosis is a parasitic disease. Ultrasound imaging is a crucially important tool for the diagnosis of this disease. Based on ultrasonic manifestations, hepatic echinococcosis can be classified into many subtypes. However, the subtyping is non-trivial owing to complex sonographic textures and large intra-class and small inter-class differences. The purpose of this study is to develop a computer-aided diagnosis system for hepatic echinococcosis based on ultrasound images. METHODS We collected a multicenter ultrasound dataset containing 9112 images from 5028 patients diagnosed with hepatic echinococcosis (the largest cohort to date) and developed a two-stage multiresolution neural network for the automatic diagnosis of hepatic echinococcosis into the nine subtypes suggested by the WHO. Our method was based on YOLOv3 with two additional strategies to improve its performance: coarse grouping and multiresolution sampling. Considering that some subtypes are inherently very similar and difficult to differentiate, in the first stage we detected and classified lesions into four coarse groups instead of making a direct classification into nine classes. In the second stage, we performed fine-grained classification within each coarse group. Multiple views with different resolutions were sampled from the detected lesions and input into Darknet53. The softmax outputs for the multiresolution views were averaged to generate the final output. RESULTS Both the proposed coarse grouping and multiresolution sampling strategies proved to be effective and improved the classification performance by a large margin compared with the setting without the two strategies. Using five-fold cross-validation, our method achieved 87.1%, 86.2%, and 86.5% in average recall, precision, and F1-score, respectively, and markedly outperformed other state-of-the-art methods. CONCLUSIONS The experimental results demonstrate the great promise of our method for classifying hepatic echinococcosis. Our method can be used as an effective tool to facilitate large-scale screening for hepatic echinococcosis in high-risk, resource-poor areas, thus contributing to early diagnosis and more successful treatment of this disease.
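The second-stage multiresolution sampling can be summarized as classifying several rescaled views of a detected lesion and averaging their softmax outputs. The sketch below illustrates this with a stand-in backbone (a small ResNet rather than the Darknet53 used in the paper) and a dummy lesion crop.

```python
# Sketch of multiresolution sampling: classify rescaled views of a lesion crop
# and average the softmax outputs. Backbone and crop are placeholders.
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet18(weights=None, num_classes=9)  # stand-in for Darknet53
model.eval()

def multiresolution_predict(lesion_crop, sizes=(128, 192, 256)):
    probs = []
    with torch.no_grad():
        for s in sizes:
            view = F.interpolate(lesion_crop, size=(s, s), mode="bilinear", align_corners=False)
            probs.append(F.softmax(model(view), dim=1))
    return torch.stack(probs).mean(dim=0)        # average over the resolutions

crop = torch.rand(1, 3, 200, 200)                 # dummy detected-lesion crop
print(multiresolution_predict(crop).argmax(dim=1))
```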
9
DIAROP: Automated Deep Learning-Based Diagnostic Tool for Retinopathy of Prematurity. Diagnostics (Basel) 2021; 11:2034. [PMID: 34829380] [PMCID: PMC8620568] [DOI: 10.3390/diagnostics11112034]
Abstract
Retinopathy of Prematurity (ROP) affects preterm neonates and can cause blindness. Deep Learning (DL) can assist ophthalmologists in the diagnosis of ROP. This paper proposes an automated and reliable DL-based diagnostic tool called DIAROP to support the ophthalmologic diagnosis of ROP. It extracts significant features by first obtaining spatial features from four Convolutional Neural Networks (CNNs) using transfer learning and then applying the Fast Walsh-Hadamard Transform (FWHT) to integrate these features. Moreover, DIAROP explores which of the integrated features extracted from the CNNs most influence its diagnostic capability. DIAROP achieved an accuracy of 93.2% and an area under the receiver operating characteristic curve (AUC) of 0.98. Furthermore, the performance of DIAROP is compared with that of recent ROP diagnostic tools. Its promising performance shows that DIAROP may assist the ophthalmologic diagnosis of ROP.
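The feature-integration step named above is a Fast Walsh-Hadamard Transform applied to CNN feature vectors. A minimal sketch follows; the feature values are random placeholders, and the vector is zero-padded to a power-of-two length, as the transform requires.

```python
# Sketch of fusing a concatenated CNN feature vector with a Fast Walsh-Hadamard
# Transform. Feature values are random placeholders.
import numpy as np

def fwht(x):
    """Iterative Fast Walsh-Hadamard Transform; len(x) must be a power of 2."""
    x = np.array(x, dtype=float)
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]  # butterfly step
        h *= 2
    return x

cnn_features = np.random.rand(1000)                   # e.g. concatenated CNN descriptors
padded = np.pad(cnn_features, (0, 1024 - cnn_features.size))
fused = fwht(padded) / np.sqrt(padded.size)           # orthonormal scaling
print(fused.shape)
```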
10
Automatic Detection of Coronavirus Disease (COVID-19) in X-ray and CT Images: A Machine Learning Based Approach. Biocybern Biomed Eng 2021; 41:867-879. [PMID: 34108787] [PMCID: PMC8179118] [DOI: 10.1016/j.bbe.2021.05.013]
Abstract
The newly identified coronavirus pneumonia, subsequently termed COVID-19, is highly transmissible and pathogenic, with no clinically approved antiviral drug or vaccine available for treatment. The most common symptoms of COVID-19 are dry cough, sore throat, and fever. Symptoms can progress to a severe form of pneumonia with critical complications, including septic shock, pulmonary edema, acute respiratory distress syndrome, and multi-organ failure. While medical imaging is not currently recommended in Canada for primary diagnosis of COVID-19, computer-aided diagnosis systems could assist in the early detection of COVID-19 abnormalities and help to monitor the progression of the disease, potentially reducing mortality rates. In this study, we compare popular deep learning-based feature extraction frameworks for automatic COVID-19 classification. To obtain the most accurate features, an essential component of learning, MobileNet, DenseNet, Xception, ResNet, InceptionV3, InceptionResNetV2, VGGNet, and NASNet were chosen from a pool of deep convolutional neural networks. The extracted features were then fed into several machine learning classifiers to classify subjects as either a case of COVID-19 or a control. This approach avoided task-specific data pre-processing methods to support better generalization to unseen data. The performance of the proposed method was validated on a publicly available COVID-19 dataset of chest X-ray and CT images. The DenseNet121 feature extractor with a Bagging tree classifier achieved the best performance, with 99% classification accuracy. The second-best learner was a hybrid of a ResNet50 feature extractor and a LightGBM classifier, with an accuracy of 98%.
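The best-performing combination reported above, DenseNet121 features with a bagged decision-tree classifier, can be sketched as follows; the images and labels are random placeholders rather than the COVID-19 dataset, and pretrained ImageNet weights are assumed to be downloadable through Keras.

```python
# Sketch of deep feature extraction (DenseNet121) + a bagged-tree classifier.
# Data are random placeholders; no claim is made about the authors' exact setup.
import numpy as np
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.applications.densenet import preprocess_input
from sklearn.ensemble import BaggingClassifier

extractor = DenseNet121(weights="imagenet", include_top=False, pooling="avg")

def deep_features(images):                       # images: (n, 224, 224, 3), values in [0, 255]
    return extractor.predict(preprocess_input(images.astype("float32")), verbose=0)

X_img = np.random.randint(0, 256, (16, 224, 224, 3))
y = np.random.randint(0, 2, 16)                  # 0 = control, 1 = COVID-19 (placeholder)

clf = BaggingClassifier(n_estimators=50)         # default base estimator is a decision tree
clf.fit(deep_features(X_img), y)
print("training accuracy:", clf.score(deep_features(X_img), y))
```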
11
Development and Validation of a Convolutional Neural Network for Automated Detection of Scaphoid Fractures on Conventional Radiographs. Radiol Artif Intell 2021; 3:e200260. [PMID: 34350413] [DOI: 10.1148/ryai.2021200260]
Abstract
Purpose To compare the performance of a convolutional neural network (CNN) to that of 11 radiologists in detecting scaphoid bone fractures on conventional radiographs of the hand, wrist, and scaphoid. Materials and Methods At two hospitals (hospitals A and B), three datasets consisting of conventional hand, wrist, and scaphoid radiographs were retrospectively retrieved: a dataset of 1039 radiographs (775 patients [mean age, 48 years ± 23 {standard deviation}; 505 female patients], period: 2017-2019, hospitals A and B) for developing a scaphoid segmentation CNN, a dataset of 3000 radiographs (1846 patients [mean age, 42 years ± 22; 937 female patients], period: 2003-2019, hospital B) for developing a scaphoid fracture detection CNN, and a dataset of 190 radiographs (190 patients [mean age, 43 years ± 20; 77 female patients], period: 2011-2020, hospital A) for testing the complete fracture detection system. Both CNNs were applied consecutively: The segmentation CNN localized the scaphoid and then passed the relevant region to the detection CNN for fracture detection. In an observer study, the performance of the system was compared with that of 11 radiologists. Evaluation metrics included the Dice similarity coefficient (DSC), Hausdorff distance (HD), sensitivity, specificity, positive predictive value (PPV), and area under the receiver operating characteristic curve (AUC). Results The segmentation CNN achieved a DSC of 97.4% ± 1.4 with an HD of 1.31 mm ± 1.03. The detection CNN had sensitivity of 78% (95% CI: 70, 86), specificity of 84% (95% CI: 77, 92), PPV of 83% (95% CI: 77, 90), and AUC of 0.87 (95% CI: 0.81, 0.91). There was no difference between the AUC of the CNN and that of the radiologists (0.87 [95% CI: 0.81, 0.91] vs 0.83 [radiologist range: 0.79-0.85]; P = .09). Conclusion The developed CNN achieved radiologist-level performance in detecting scaphoid bone fractures on conventional radiographs of the hand, wrist, and scaphoid. Keywords: Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms, Feature Detection-Vision-Application Domain, Computer-Aided Diagnosis. See also the commentary by Li and Torriani in this issue.
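The two segmentation metrics reported above, Dice similarity coefficient and Hausdorff distance, can be computed for binary masks as in the sketch below; the masks are toy arrays and distances are in pixels rather than millimetres.

```python
# Sketch of DSC and Hausdorff distance for binary segmentation masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    pa, pb = np.argwhere(a), np.argwhere(b)        # foreground coordinates
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

gt = np.zeros((100, 100), dtype=bool);   gt[30:60, 30:60] = True     # toy ground truth
pred = np.zeros((100, 100), dtype=bool); pred[32:62, 31:61] = True   # toy prediction
print(f"DSC = {dice(gt, pred):.3f}, HD = {hausdorff(gt, pred):.2f} px")
```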
12
Distinguishing Adenocarcinomas from Granulomas in the CT scan of the chest: performance degradation evaluation in the automatic segmentation framework. BMC Res Notes 2021; 14:87. [PMID: 33750438] [PMCID: PMC7942003] [DOI: 10.1186/s13104-021-05502-1]
Abstract
OBJECTIVE The most common histopathologic malignant and benign nodules are Adenocarcinoma and Granuloma, respectively, which have different standards of care. In this paper, we propose an automatic framework for the diagnosis of Adenocarcinomas and Granulomas in chest CT scans from a private dataset. We use the radiomic features of the nodules and the tortuosity of the attached vessels for the diagnosis. The private dataset includes 22 CTs for each nodule type, i.e., adenocarcinoma and granuloma, from non-smoking patients between 30 and 60 years old. To automatically segment the delineated nodule area and the attached vessel area, we apply a morphology-based approach. To assess the malignancy of the segmented nodule, four features are extracted: two texture features of the nodule, the mean curvature, and the number of attached vessels. RESULTS We compare our framework with state-of-the-art feature selection methods for differentiating Adenocarcinomas from Granulomas. These methods employ only the shape features of the nodule, the texture features of the nodule, or the torsion features of the attached vessels along with the radiomic features of the nodule. The accuracy of our framework is improved by considering all four selected features.
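One of the selected features, the number of attached vessels, can be illustrated by counting vessel components that touch a slightly dilated nodule mask; the sketch below uses synthetic masks and is only a plausible reading of the (unspecified) morphological approach.

```python
# Sketch of counting vessels attached to a nodule via connected components.
# Masks and the dilation radius are synthetic placeholders.
import numpy as np
from scipy import ndimage

def attached_vessel_count(nodule_mask, vessel_mask):
    contact_zone = ndimage.binary_dilation(nodule_mask, iterations=2)  # rim around the nodule
    labeled, n = ndimage.label(vessel_mask)                            # separate vessel branches
    return sum(1 for i in range(1, n + 1) if np.any(contact_zone & (labeled == i)))

nodule = np.zeros((64, 64), bool);  nodule[28:36, 28:36] = True
vessels = np.zeros((64, 64), bool); vessels[32, 36:60] = True          # touches the nodule
vessels[5:15, 5] = True                                                 # distant vessel
print("attached vessels:", attached_vessel_count(nodule, vessels))     # -> 1
```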
13
Use of Endoscopic Impression, Artificial Intelligence, and Pathologist Interpretation to Resolve Discrepancies Between Endoscopy and Pathology Analyses of Diminutive Colorectal Polyps. Gastroenterology 2020; 158:783-785.e1. [PMID: 31863741] [DOI: 10.1053/j.gastro.2019.10.024]
14
Web-Based Spine Segmentation Using Deep Learning in Computed Tomography Images. Healthc Inform Res 2020; 26:61-67. [PMID: 32082701] [PMCID: PMC7010941] [DOI: 10.4258/hir.2020.26.1.61]
Abstract
Objectives Back pain, especially lower back pain, is experienced by 60% to 80% of adults at some point during their lives. Various studies have found that lower back pain is a very common problem among adolescents, and the highest incidence rates are for adults in their 30s. There has been a remarkable increase in the use of computer-aided diagnosis to assist doctors in the interpretation of medical images. Spine segmentation in computed tomography (CT) scans using algorithmic methods allows improved diagnosis of back pain. Methods In this study, we developed a web-based automatic spine segmentation method using deep learning and evaluated it with the Dice coefficient by comparing the predicted segmentations against the labels. Our method is based on convolutional neural networks for segmentation. More specifically, we train a U-Net architecture on data stored in hierarchical data format files and then feed in the test data to perform segmentation. Thus, we obtained more specific and detailed results. A total of 344 CT images were used in the experiment. Of these, 330 were used for learning and the remaining 14 for testing. Results Our method achieved an average Dice coefficient of 90.4%, a precision of 96.81%, and an F1-score of 91.64%. Conclusions The proposed web-based deep learning approach can be very practical and accurate for spine segmentation as a diagnostic method.
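The reported evaluation metrics can be reproduced from a predicted and a ground-truth mask as in the sketch below; the HDF5 layout and file name are hypothetical, and tiny synthetic volumes stand in for real CT data.

```python
# Sketch: write toy masks to a (hypothetical) HDF5 layout, read them back, and
# compute Dice, precision, and F1. Real data would come from the CT pipeline.
import h5py
import numpy as np

gt = np.zeros((32, 64, 64), np.uint8);  gt[:, 20:44, 20:44] = 1     # toy label volume
pred = np.zeros_like(gt);               pred[:, 21:45, 20:44] = 1   # toy prediction

with h5py.File("spine_demo.h5", "w") as f:       # hypothetical dataset keys
    f["label"] = gt
    f["prediction"] = pred

def dice_precision_f1(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / pred.sum()
    recall = tp / gt.sum()
    dice = 2 * tp / (pred.sum() + gt.sum())
    f1 = 2 * precision * recall / (precision + recall)
    # Voxel-wise Dice equals voxel-wise F1; reported values can differ when
    # metrics are averaged per slice or per case.
    return dice, precision, f1

with h5py.File("spine_demo.h5", "r") as f:
    print("Dice / precision / F1:", dice_precision_f1(f["prediction"][:], f["label"][:]))
```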
15
An experimental study on breast lesion detection and classification from ultrasound images using deep learning architectures. BMC Med Imaging 2019; 19:51. [PMID: 31262255] [PMCID: PMC6604293] [DOI: 10.1186/s12880-019-0349-x]
Abstract
Background Computer-aided diagnosis (CAD) in the medical field has received more and more attention in recent years. One important CAD application is to detect and classify breast lesions in ultrasound images. Traditionally, the process of CAD for breast lesion classification is composed of two separate steps: (i) locate the lesion regions of interest (ROIs); (ii) classify the located ROIs as benign or malignant. However, due to the complex structure of the breast and the existence of noise in ultrasound images, traditional handcrafted-feature-based methods usually cannot achieve satisfactory results. Methods With the recent advance of deep learning, the performance of object detection and classification has been boosted to a great extent. In this paper, we aim to systematically evaluate the performance of several existing state-of-the-art object detection and classification methods for breast lesion CAD. To achieve that, we have collected a new dataset consisting of 579 benign and 464 malignant lesion cases with the corresponding ultrasound images manually annotated by experienced clinicians. We evaluate different deep learning architectures and conduct comprehensive experiments on our newly collected dataset. Results For the lesion detection task, the Single Shot MultiBox Detector with an input size of 300×300 (SSD300) achieves the best performance in terms of average precision rate (APR), average recall rate (ARR), and F1 score. For the classification task, DenseNet is more suitable for our problem. Conclusions Our experiments reveal that better and more efficient detection and convolutional neural network (CNN) frameworks are one important factor for better performance on the breast lesion detection and classification tasks. Another significant factor for improving performance on these tasks is transfer learning from the large-scale annotated ImageNet dataset to the breast lesion classification task.
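The ImageNet transfer-learning point made in the conclusions is illustrated below with a DenseNet backbone fine-tuned for benign-versus-malignant classification; the input size, optimizer, and data are illustrative, not the paper's configuration.

```python
# Sketch of ImageNet transfer learning: frozen DenseNet backbone plus a small
# binary head. Data, input size, and training settings are placeholders.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

base = DenseNet121(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(224, 224, 3))
base.trainable = False                      # freeze the pre-trained backbone first

model = models.Sequential([
    base,
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # benign (0) vs malignant (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.rand(8, 224, 224, 3).astype("float32")   # placeholder ultrasound crops
y = np.random.randint(0, 2, 8)                          # placeholder labels
model.fit(X, y, epochs=1, verbose=0)                    # (preprocessing omitted for brevity)
```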
16
SCREEN-DR - Software Architecture for the Diabetic Retinopathy Screening. Stud Health Technol Inform 2018; 247:396-400. [PMID: 29677990]
Abstract
Diabetic Retinopathy (DR) is a common complication of diabetes that may lead to blindness if not treated. However, since DR evolves without any symptoms in the initial stages, early detection and treatment can only be achieved through routine checks. This article presents the collaborative platform of the SCREEN-DR project that promotes partnership between physicians and researchers in the scope of a regional DR screening program. The role of researchers is to create classification algorithms to evaluate image quality, discard non-pathological cases, locate possible lesions and grade DR severity. Physicians are responsible for annotating datasets, including the visual delineation of lesions. The collaborative platform collects the studies, indexes the images metadata, and manages the creation of datasets and the respective annotation process. An advanced searching mechanism supports multimodal queries over annotated datasets and exporting of results for feeding artificial intelligence algorithms.
17
The inter-observer reading variability in anti-nuclear antibodies indirect (ANA) immunofluorescence test: A multicenter evaluation and a review of the literature. Autoimmun Rev 2017; 16:1224-1229. [PMID: 29037905] [DOI: 10.1016/j.autrev.2017.10.006]
Abstract
Recently there has been an increased demand for Computer-Aided Diagnosis (CAD) tools to support clinicians in the field of Indirect ImmunoFluorescence (IIF), as the novel digital image reading approach can help to overcome reader subjectivity. Nevertheless, a large multicenter evaluation of the inter-observer reading variability in this field is still missing. This work fills this gap: we evaluated 556 consecutive samples, for a total of 1679 images, collected in three laboratories with IIF expertise using the HEp-2 cell substrate (MBL) at 1:80 screening dilution according to conventional procedures. In each laboratory, the images were blindly classified by two experts into three intensity classes: positive, negative, and weak positive. Positive and weak positive ANA-IIF results were categorized by the predominant fluorescence pattern among six main classes. Data were analyzed pairwise, and the inter-observer reading variability was measured by Cohen's kappa test, revealing pairwise agreement at the lower end of the substantial range both for fluorescence intensity and for staining pattern recognition (κ = 0.602 and κ = 0.627, respectively). We also noticed that the inter-observer reading variability decreases when it is measured with respect to a gold standard classification computed on the basis of the labels assigned by the three laboratories. These data show that laboratory agreement improves when digital images are used and each single human evaluation is compared to potential reference data, suggesting that a solid gold standard is essential to make proper use of CAD systems in routine laboratory work.
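The agreement statistic used throughout, Cohen's kappa, can be computed as in the short sketch below; the two readers' labels are invented examples over the three intensity classes.

```python
# Sketch of pairwise inter-observer agreement with Cohen's kappa.
# The readings are invented examples, not the study data.
from sklearn.metrics import cohen_kappa_score

reader_a = ["positive", "negative", "weak", "positive", "negative", "weak", "positive", "negative"]
reader_b = ["positive", "negative", "weak", "weak",     "negative", "positive", "positive", "negative"]
print(f"Cohen's kappa = {cohen_kappa_score(reader_a, reader_b):.3f}")
```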
18
Towards a Holistic Cortical Thickness Descriptor: Heat Kernel-Based Grey Matter Morphology Signatures. Neuroimage 2017; 147:360-380. [PMID: 28033566] [PMCID: PMC5303630] [DOI: 10.1016/j.neuroimage.2016.12.014]
Abstract
In this paper, we propose a heat kernel based regional shape descriptor that may be capable of better exploiting volumetric morphological information than other available methods, thereby improving statistical power on brain magnetic resonance imaging (MRI) analysis. The mechanism of our analysis is driven by the graph spectrum and the heat kernel theory, to capture the volumetric geometry information in the constructed tetrahedral meshes. In order to capture profound brain grey matter shape changes, we first use the volumetric Laplace-Beltrami operator to determine the point pair correspondence between white-grey matter and CSF-grey matter boundary surfaces by computing the streamlines in a tetrahedral mesh. Secondly, we propose multi-scale grey matter morphology signatures to describe the transition probability by random walk between the point pairs, which reflects the inherent geometric characteristics. Thirdly, a point distribution model is applied to reduce the dimensionality of the grey matter morphology signatures and generate the internal structure features. With the sparse linear discriminant analysis, we select a concise morphology feature set with improved classification accuracies. In our experiments, the proposed work outperformed the cortical thickness features computed by FreeSurfer software in the classification of Alzheimer's disease and its prodromal stage, i.e., mild cognitive impairment, on publicly available data from the Alzheimer's Disease Neuroimaging Initiative. The multi-scale and physics based volumetric structure feature may bring stronger statistical power than some traditional methods for MRI-based grey matter morphology analysis.
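The heat kernel underlying the proposed signatures is built from the eigenpairs of a Laplacian, H_t = sum_i exp(-lambda_i t) phi_i phi_i^T. The sketch below constructs it for a tiny path graph standing in for the tetrahedral mesh; it illustrates the general construction, not the authors' mesh pipeline.

```python
# Sketch of a heat kernel built from the eigendecomposition of a graph Laplacian.
# A 4-node path graph stands in for the tetrahedral mesh used in the paper.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)        # adjacency of a path graph
L = np.diag(A.sum(axis=1)) - A                   # combinatorial graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)

def heat_kernel(t):
    # H_t = Phi diag(exp(-lambda t)) Phi^T
    return eigvecs @ np.diag(np.exp(-eigvals * t)) @ eigvecs.T

# Row i of H_t gives the heat (random-walk transition) distribution from node i
# after time t, the multi-scale quantity the signatures are built from.
print(np.round(heat_kernel(0.5), 3))
```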
19
Computer-Aided Detection and diagnosis for prostate cancer based on mono and multi-parametric MRI: a review. Comput Biol Med 2015; 60:8-31. [PMID: 25747341] [DOI: 10.1016/j.compbiomed.2015.02.009]
Abstract
Prostate cancer is the second most commonly diagnosed cancer in men worldwide. In the last few decades, new imaging techniques based on Magnetic Resonance Imaging (MRI) have been developed to improve diagnosis. In practice, diagnosis can be affected by multiple factors, such as observer variability and the visibility and complexity of the lesions. In this regard, computer-aided detection and computer-aided diagnosis systems have been designed to help radiologists in their clinical practice. Research on computer-aided systems specifically focused on prostate cancer is young and has been a dynamic field of research for the last 10 years. This survey aims to provide a comprehensive review of the state of the art over this period, focusing on the different stages composing the work-flow of a computer-aided system. We also provide a comparison between studies and a discussion about potential avenues for future research. In addition, this paper presents a new public online dataset, which is made available to the research community with the aim of providing a common evaluation framework to overcome some of the current limitations identified in this survey.
20
Observer Variability in BI-RADS Ultrasound Features and Its Influence on Computer-Aided Diagnosis of Breast Masses. Adv Breast Cancer Res 2015; 4:1-8. [PMID: 34306838] [PMCID: PMC8298005] [DOI: 10.4236/abcr.2015.41001]
Abstract
Objective: Computer classification of sonographic BI-RADS features can aid differentiation of malignant and benign masses. However, the variability in the diagnosis due to differences in the observed features between observations is not known. The goal of this study is to measure the variation in sonographic features between multiple observations and determine the effect of feature variation on computer-aided diagnosis of breast masses. Materials and Methods: Ultrasound images of biopsy-proven solid breast masses were analyzed in three independent observations for BI-RADS sonographic features. The BI-RADS features from each observation were used with a Bayes classifier to determine the probability of malignancy. Observer agreement in the sonographic features was measured by the kappa coefficient, and the difference in diagnostic performance between observations was determined by the area under the ROC curve, Az, and the intraclass correlation coefficient. Results: While some features were observed consistently (κ = 0.95), others showed significant variation (κ = 0.16). For all features combined, intra-observer agreement was substantial, κ = 0.77. The agreement, however, decreased steadily to 0.66 and 0.56 as the time between observations increased from 1 to 2 and 3 months, respectively. Despite the variation in features between observations, the probability of malignancy estimates from the Bayes classifier were robust and consistently yielded the same level of diagnostic performance: Az was 0.772-0.817 for sonographic features alone and 0.828-0.849 for sonographic features and age combined. The difference in performance, ΔAz, between the observations for the two groups was small (0.003-0.044) and was not statistically significant at the .05 level. The intraclass correlation coefficient for the observations was 0.822 (CI: 0.787-0.853) for BI-RADS sonographic features alone and 0.833 (CI: 0.800-0.862) for those combined with age. Conclusion: Despite the differences in the BI-RADS sonographic features between different observations, the diagnostic performance of computer-aided analysis for differentiating breast masses did not change. Through continual retraining, the computer-aided analysis provides consistent diagnostic performance independent of variations in the observed sonographic features.
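A Bayes classifier over categorical BI-RADS features, producing a probability of malignancy for a new case, can be sketched as below; the feature codes, cases, and labels are fabricated for illustration only.

```python
# Sketch of a naive Bayes classifier over integer-coded categorical BI-RADS
# features. All data below are fabricated placeholders.
import numpy as np
from sklearn.naive_bayes import CategoricalNB

# Columns: shape, orientation, margin, echo pattern (integer-coded categories)
X = np.array([[0, 0, 0, 1],
              [1, 1, 2, 0],
              [0, 0, 1, 1],
              [1, 1, 2, 2],
              [0, 1, 0, 1],
              [1, 0, 2, 2]])
y = np.array([0, 1, 0, 1, 0, 1])                 # 0 = benign, 1 = malignant

clf = CategoricalNB().fit(X, y)
new_case = np.array([[1, 1, 1, 2]])              # feature codes for a new mass
print("P(malignant) =", clf.predict_proba(new_case)[0, 1])
```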