1
Qiu E, Vejdani-Jahromi M, Kaliaev A, Fazelpour S, Goodman D, Ryoo I, Andreu-Arasa VC, Fujima N, Buch K, Sakai O. Fully automated 3D machine learning model for HPV status characterization in oropharyngeal squamous cell carcinomas based on CT images. Am J Otolaryngol 2024; 45:104357. PMID: 38703612. DOI: 10.1016/j.amjoto.2024.104357.
Abstract
BACKGROUND Human papillomavirus (HPV) status plays a major role in predicting oropharyngeal squamous cell carcinoma (OPSCC) survival. This study assesses the accuracy of a fully automated 3D convolutional neural network (CNN) in predicting HPV status using CT images. METHODS Pretreatment CT images from OPSCC patients were used to train a 3D DenseNet-121 model to predict HPV-p16 status. Performance was evaluated by the area under the ROC curve (AUC), sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and F1 score. RESULTS The network achieved a mean AUC of 0.80 ± 0.06. The best-performing fold had a sensitivity of 0.86 and specificity of 0.92 at the Youden index. The PPV, NPV, and F1 scores were 0.97, 0.71, and 0.82, respectively. CONCLUSIONS A fully automated CNN can characterize the HPV status of OPSCC patients with high sensitivity and specificity. Further refinement of this algorithm has the potential to provide a non-invasive tool to guide clinical management.
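As a rough illustration of the kind of model described above, the sketch below instantiates a 3D DenseNet-121 classifier with MONAI; the input size and two-class head are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of a 3D DenseNet-121 classifier for CT volumes (assumed setup).
import torch
from monai.networks.nets import DenseNet121

model = DenseNet121(spatial_dims=3, in_channels=1, out_channels=2)  # HPV+ / HPV-
volume = torch.randn(1, 1, 96, 96, 96)   # one single-channel CT volume (made-up size)
logits = model(volume)
probs = torch.softmax(logits, dim=1)      # per-class probabilities
```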
Affiliation(s)
- Edwin Qiu: Boston University Chobanian & Avedisian School of Medicine, Boston, MA, United States of America
- Maryam Vejdani-Jahromi: Department of Radiology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, United States of America; Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Artem Kaliaev: Department of Radiology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, United States of America
- Sherwin Fazelpour: Boston University Chobanian & Avedisian School of Medicine, Boston, MA, United States of America
- Deniz Goodman: Boston University Chobanian & Avedisian School of Medicine, Boston, MA, United States of America
- Inseon Ryoo: Department of Radiology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, United States of America; Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America; Department of Radiology, Korea University Guro Hospital, Korea University College of Medicine, Seoul, Republic of Korea
- V Carlota Andreu-Arasa: Department of Radiology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, United States of America; Department of Radiology, VA Boston Healthcare System, MA, United States of America
- Noriyuki Fujima: Department of Radiology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, United States of America; Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
- Karen Buch: Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Osamu Sakai: Department of Radiology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, United States of America; Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America; Department of Otolaryngology-Head and Neck Surgery, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, United States of America; Department of Radiation Oncology, Boston Medical Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, United States of America; Department of Radiology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, United States of America
2
Garcés-Jiménez A, Polo-Luque ML, Gómez-Pulido JA, Rodríguez-Puyol D, Gómez-Pulido JM. Predictive health monitoring: Leveraging artificial intelligence for early detection of infectious diseases in nursing home residents through discontinuous vital signs analysis. Comput Biol Med 2024; 174:108469. PMID: 38636331. DOI: 10.1016/j.compbiomed.2024.108469.
Abstract
This research addresses the problem of detecting acute respiratory, urinary tract, and other infectious diseases in elderly nursing home residents using machine learning algorithms. The study analyzes data extracted from multiple vital signs and other contextual information for diagnostic purposes. The daily data collection process encounters sampling constraints due to weekends, holidays, shift changes, staff turnover, and equipment breakdowns, resulting in numerous nulls, repeated readings, outliers, and meaningless values. The short time series generated also pose a challenge to analysis, preventing the extraction of seasonal information or consistent trends. Blind data collection means that most of the data come from periods when residents are healthy, yielding excessively imbalanced data. This study proposes a data cleaning process and then builds a mechanism that reproduces the basal activity of the residents to improve disease classification. The results show that the proposed basal module-assisted machine learning techniques can anticipate diagnoses 2, 3, or 4 days before doctors decide to start antibiotic treatment, achieving a performance measured by the area-under-the-curve metric of 0.857. The contributions of this work are: (1) a new data cleaning process; (2) the analysis of contextual information to improve data quality; (3) the generation of a baseline measure for relative comparison; and (4) the use of either binary (disease/no disease) or multiclass classification, differentiating among types of infections and showing the advantages of multiclass over binary classification. From a medical point of view, this anticipated detection of infectious diseases in institutionalized individuals is entirely novel.
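A minimal sketch of the kind of basal-activity baseline described above, using pandas; the column names, window length, and deviation feature are hypothetical.

```python
# Per-resident "basal" reference built from rolling medians of vital signs (sketch).
import pandas as pd

df = pd.DataFrame({
    "resident": ["A"] * 6,
    "temp": [36.5, 36.6, None, 36.4, 37.9, 38.2],   # gaps from missed readings
})
df["temp"] = df.groupby("resident")["temp"].ffill()          # fill missing samples
basal = df.groupby("resident")["temp"].transform(
    lambda s: s.rolling(window=3, min_periods=1).median())   # basal activity estimate
df["temp_deviation"] = df["temp"] - basal                    # feature for the classifier
print(df)
```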
Affiliation(s)
- Alberto Garcés-Jiménez: Department of Computer Science, Universidad de Alcalá, Polytechnic School, Alcala de Henares, 28805, Spain
- María-Luz Polo-Luque: Department of Nursing and Physiotherapy, Universidad de Alcalá, Faculty of Medicine and Health Sciences, Alcala de Henares, 28805, Spain
- Juan A Gómez-Pulido: Department of Technologies of Computers and Communications, Universidad de Extremadura, School of Technology, Cáceres, 10003, Spain
- Diego Rodríguez-Puyol: Department of Medicine and Medical Specialties, Research Foundation of the University Hospital Príncipe de Asturias, Campus Científico Tecnológico, Alcala de Henares, 28805, Spain
- José M Gómez-Pulido: Department of Computer Science, Universidad de Alcalá, Polytechnic School, Alcala de Henares, 28805, Spain
3
Russo C, Bria A, Marrocco C. GravityNet for end-to-end small lesion detection. Artif Intell Med 2024; 150:102842. PMID: 38553147. DOI: 10.1016/j.artmed.2024.102842.
Abstract
This paper introduces a novel one-stage end-to-end detector specifically designed to detect small lesions in medical images. Precise localization of small lesions presents challenges due to their appearance and the diverse contextual backgrounds in which they are found. To address this, our approach introduces a new type of pixel-based anchor that dynamically moves towards the targeted lesion for detection. We refer to this new architecture as GravityNet, and the novel anchors as gravity points since they appear to be "attracted" by the lesions. We conducted experiments on two well-established medical problems involving small lesions to evaluate the performance of the proposed approach: microcalcification detection in digital mammograms and microaneurysm detection in digital fundus images. Our method demonstrates promising results in effectively detecting small lesions in these medical imaging tasks.
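A minimal sketch of the gravity-point idea, under the assumption that anchors form a regular pixel grid on a backbone feature map and a small head predicts a per-point score and a 2D shift; this is an illustration, not the published GravityNet architecture.

```python
# Pixel-based anchor points that a regression head can "attract" toward lesions (sketch).
import torch
import torch.nn as nn

def gravity_points(feat_h, feat_w, stride):
    """Regular grid of pixel-based anchor points in image coordinates."""
    ys = torch.arange(feat_h) * stride + stride // 2
    xs = torch.arange(feat_w) * stride + stride // 2
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([grid_x, grid_y], dim=-1).reshape(-1, 2).float()

class GravityHead(nn.Module):
    """Per-point lesion score plus a 2D displacement toward the nearest lesion."""
    def __init__(self, in_ch):
        super().__init__()
        self.cls = nn.Conv2d(in_ch, 1, 1)   # lesion / background score per point
        self.reg = nn.Conv2d(in_ch, 2, 1)   # (dx, dy) shift of each gravity point

    def forward(self, feats):
        return self.cls(feats), self.reg(feats)

feats = torch.randn(1, 64, 32, 32)          # backbone feature map, stride 8 (assumed)
points = gravity_points(32, 32, stride=8)   # (1024, 2) anchor coordinates
scores, shifts = GravityHead(64)(feats)
moved = points + shifts.permute(0, 2, 3, 1).reshape(-1, 2)  # points moved toward lesions
```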
Affiliation(s)
- Ciro Russo: Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
- Alessandro Bria: Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
- Claudio Marrocco: Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
4
Lee J, Ahn S, Kim H, An J, Sim J. A robust model training strategy using hard negative mining in a weakly labeled dataset for lymphatic invasion in gastric cancer. J Pathol Clin Res 2024; 10:e355. PMID: 38116763. PMCID: PMC10766063. DOI: 10.1002/cjp2.355.
Abstract
Gastric cancer is a significant public health concern, emphasizing the need for accurate evaluation of lymphatic invasion (LI) for determining prognosis and treatment options. However, this task is time-consuming, labor-intensive, and prone to intra- and interobserver variability. Furthermore, the scarcity of annotated data presents a challenge, particularly in the field of digital pathology. Therefore, there is a demand for an accurate and objective method to detect LI using a small dataset, benefiting pathologists. In this study, we trained convolutional neural networks to classify LI using a four-step training process: (1) weak model training, (2) identification of false positives, (3) hard negative mining in a weakly labeled dataset, and (4) strong model training. To overcome the lack of annotated datasets, we applied a hard negative mining approach in a weakly labeled dataset, which contained only final diagnostic information, resembling the typical data found in hospital databases, and improved classification performance. Ablation studies were performed to simulate dataset scarcity and severely imbalanced datasets, further confirming the effectiveness of our proposed approach. Notably, our results demonstrated that, despite the small number of annotated datasets, efficient training was achievable, with the potential to extend to other image classification approaches used in medicine.
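A schematic sketch of step (3), mining hard negatives from the weakly labeled set; `weak_model`, the loader, and the 0.9 confidence threshold are illustrative placeholders, not the paper's values.

```python
# Collect patches from negative-diagnosis slides that the weak model
# confidently (and therefore wrongly) calls positive (sketch).
import torch

def mine_hard_negatives(weak_model, weak_loader, threshold=0.9):
    hard_negatives = []
    weak_model.eval()
    with torch.no_grad():
        for patches in weak_loader:               # slides with a negative final diagnosis
            probs = torch.sigmoid(weak_model(patches))[:, 0]
            hard_negatives.append(patches[probs > threshold])   # confident false positives
    return torch.cat(hard_negatives)

# Strong training set = annotated positives + negatives + mined hard negatives.
```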
Affiliation(s)
- Jonghyun Lee: Department of Medical and Digital Engineering, Hanyang University College of Engineering, Seoul, Republic of Korea; Department of Pathology, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea
- Sangjeong Ahn: Department of Pathology, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea
- Hyun-Soo Kim: Department of Pathology and Translational Genomics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Jungsuk An: Department of Pathology, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea
- Jongmin Sim: Department of Pathology, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea
5
Gayatri E, Aarthy SL. Reduction of overfitting on the highly imbalanced ISIC-2019 skin dataset using deep learning frameworks. J Xray Sci Technol 2024; 32:53-68. PMID: 38189730. DOI: 10.3233/xst-230204.
Abstract
BACKGROUND With the rapid growth of Deep Neural Networks (DNN) and Computer-Aided Diagnosis (CAD), much work has been done on analysing cancer-related diseases. Skin cancer is among the most hazardous types of cancer and is difficult to diagnose in its early stages. OBJECTIVE The diagnosis of skin cancer is a challenge for dermatologists because an abnormal lesion looks like an ordinary nevus at the initial stages. Therefore, early identification of lesions (the origin of skin cancer) is essential for treating skin cancer patients effectively, and the development of automated skin cancer diagnosis systems significantly supports dermatologists. METHODS This paper performs skin cancer classification utilising various deep learning frameworks after resolving the class imbalance problem in the ISIC-2019 dataset. A fine-tuned ResNet-50 model is evaluated on the original data, on augmented data, and after adding focal loss. Focal loss mitigates overfitting by assigning larger weights to hard, misclassified images. RESULTS Augmented data combined with focal loss yielded good classification performance, with 98.85% accuracy, 95.52% precision, and 95.93% recall. The Matthews correlation coefficient (MCC), a suitable metric for evaluating multi-class classification quality, also showed outstanding performance when augmented data and focal loss were used.
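For reference, a standard PyTorch implementation of binary focal loss (Lin et al., 2017), the mechanism the study uses to down-weight easy examples; the alpha and gamma defaults below are the common ones, not necessarily the paper's settings.

```python
# Focal loss: cross-entropy scaled by (1 - p_t)^gamma so easy examples contribute less.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)            # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```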
Affiliation(s)
- S L Aarthy: SCOPE, Vellore Institute of Technology, Vellore, Tamil Nadu, India
6
Huang YL, Liu XQ, Huang Y, Jin FY, Zhao Q, Wu QY, Ma KL. Application of cloud server-based machine learning for assisting pathological structure recognition in IgA nephropathy. J Clin Pathol 2023:jcp-2023-209215. PMID: 38123970. DOI: 10.1136/jcp-2023-209215.
Abstract
BACKGROUND Machine learning (ML) models can assist diagnosis by rapidly localising and classifying regions of interest (ROIs) within whole slide images (WSIs). Effective ML models for clinical decision support require a substantial dataset of 'real' data and should be robust, user-friendly, and universally applicable. METHODS WSIs of primary IgAN were collected and annotated. The H-AI-L algorithm, which facilitates direct WSI viewing and potential ROI detection for clinicians, was built on the cloud server of matpool, a shared internet-based service platform. Model performance was evaluated using the F1-score, precision, recall, and Matthews correlation coefficient (MCC). RESULTS The F1-score of glomerular localisation in WSIs was 0.85 and 0.89 for the initial and pretrained models, respectively, with corresponding recall values of 0.79 and 0.83, and precision scores of 0.92 and 0.97. Dichotomous differentiation between global sclerotic (GS) and other glomeruli revealed F1-scores of 0.70 and 0.91, and MCC values of 0.55 and 0.87, for the initial and pretrained models, respectively. The overall F1-score of multiclassification was 0.81 for the pretrained models. The total glomerular recall rate was 0.96, with F1-scores of 0.68, 0.56 and 0.26 for GS, segmental glomerulosclerosis and crescent (C), respectively. Interstitial fibrosis/tubular atrophy lesion similarity between the true label and model predictions was 0.75. CONCLUSIONS Our results underscore the efficacy of the ML integration algorithm in segmenting ROIs in IgAN WSIs, and the internet-based model deployment favours widespread adoption and utilisation across multiple centres and increasing volumes of WSIs.
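The reported metrics can be computed with scikit-learn as below; the label encoding and values are made-up examples, not the study's data.

```python
# Multi-class evaluation with the metrics reported above (dummy labels).
from sklearn.metrics import f1_score, matthews_corrcoef, precision_score, recall_score

y_true = [0, 0, 1, 1, 2, 2]   # e.g. 0=GS, 1=segmental sclerosis, 2=crescent (assumed)
y_pred = [0, 1, 1, 1, 2, 0]
print(f1_score(y_true, y_pred, average="macro"))
print(precision_score(y_true, y_pred, average="macro"))
print(recall_score(y_true, y_pred, average="macro"))
print(matthews_corrcoef(y_true, y_pred))
```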
Affiliation(s)
- Yu-Lin Huang: Institute of Nephrology, Zhongda Hospital, School of Medicine, Southeast University, Nanjing, China
- Xiao Qi Liu: Department of Nephrology, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Yang Huang: Institute of Nephrology, Zhongda Hospital, School of Medicine, Southeast University, Nanjing, China
- Feng Yong Jin: Institute of Nephrology, Zhongda Hospital, School of Medicine, Southeast University, Nanjing, China
- Qing Zhao: Institute of Nephrology, Zhongda Hospital, School of Medicine, Southeast University, Nanjing, China
- Qin Yi Wu: Institute of Nephrology, Zhongda Hospital, School of Medicine, Southeast University, Nanjing, China
- Kun Ling Ma: Department of Nephrology, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
7
Cantone M, Marrocco C, Tortorella F, Bria A. Learnable DoG convolutional filters for microcalcification detection. Artif Intell Med 2023; 143:102629. PMID: 37673567. DOI: 10.1016/j.artmed.2023.102629.
Abstract
Difference of Gaussians (DoG) convolutional filters are one of the earliest image processing methods employed for detecting microcalcifications on mammogram images before machine and deep learning methods became widespread. DoG is a blob enhancement filter that consists of subtracting one Gaussian-smoothed version of an image from another, less Gaussian-smoothed version of the same image. Smoothing with a Gaussian kernel suppresses high-frequency spatial information, thus DoG can be regarded as a band-pass filter. However, due to their small size and superimposed breast tissue, microcalcifications vary greatly in contrast-to-noise ratio and sharpness. This makes it difficult to find a single DoG configuration that enhances all microcalcifications. In this work, we propose a convolutional network, named DoG-MCNet, where the first layer automatically learns a bank of DoG filters parameterized by their associated standard deviations. We experimentally show that when employed for microcalcification detection, our DoG layer acts as a learnable bank of band-pass preprocessing filters and improves detection performance by 4.86% AUFROC over the baseline MCNet and by 1.53% AUFROC over the state-of-the-art multicontext ensemble of CNNs.
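A minimal sketch of a learnable DoG layer in PyTorch, with the standard deviations as trainable parameters; the kernel size, filter count, and initialization are illustrative choices, not the DoG-MCNet configuration.

```python
# First-layer bank of DoG filters whose sigmas are learned by backpropagation (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoGLayer(nn.Module):
    def __init__(self, n_filters=8, ksize=15):
        super().__init__()
        self.ksize = ksize
        self.sigma1 = nn.Parameter(torch.rand(n_filters) * 2 + 0.5)  # narrow Gaussian
        self.sigma2 = nn.Parameter(torch.rand(n_filters) * 2 + 2.0)  # wide Gaussian

    def _gauss(self, sigma):
        r = torch.arange(self.ksize, device=sigma.device) - self.ksize // 2
        g = torch.exp(-(r ** 2) / (2 * sigma[:, None] ** 2))
        g = g / g.sum(dim=1, keepdim=True)
        return g[:, :, None] * g[:, None, :]          # separable 2D Gaussian kernels

    def forward(self, x):
        dog = self._gauss(self.sigma1) - self._gauss(self.sigma2)    # band-pass kernels
        return F.conv2d(x, dog[:, None], padding=self.ksize // 2)

x = torch.randn(1, 1, 64, 64)
y = DoGLayer()(x)   # (1, 8, 64, 64) band-pass responses
```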
Affiliation(s)
- Marco Cantone: Department of Electrical and Information Engineering, University of Cassino and Southern Latium, Cassino, FR 03043, Italy
- Claudio Marrocco: Department of Electrical and Information Engineering, University of Cassino and Southern Latium, Cassino, FR 03043, Italy
- Francesco Tortorella: Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, Fisciano, SA 84084, Italy
- Alessandro Bria: Department of Electrical and Information Engineering, University of Cassino and Southern Latium, Cassino, FR 03043, Italy
8
Wang X, Shi X, Meng X, Zhang Z, Zhang C. A universal lesion detection method based on partially supervised learning. Front Pharmacol 2023; 14:1084155. PMID: 37593177. PMCID: PMC10427860. DOI: 10.3389/fphar.2023.1084155.
Abstract
Partially supervised learning (PSL) urgently needs to be explored for constructing an efficient universal lesion detection (ULD) segmentation model. An annotated dataset is crucial but hard to acquire because of the sheer volume of computed tomography (CT) images and the shortage of professionals in computer-aided detection/diagnosis (CADe/CADx). To address this problem, we propose a novel loss function that reduces the proportion of negative anchors, since an excess of negative anchors makes the model likely to classify lesion areas (positive samples) as negative bounding boxes, degrading performance. Before calculating the loss, we generate a mask to intentionally choose fewer negative anchors, which would otherwise backpropagate erroneous loss to the network. During loss calculation, we set a parameter to reduce the proportion of negative samples, which significantly reduces the adverse effect of misclassification on the model. Our experiments are implemented in a 3D framework by feeding in a partially annotated dataset named DeepLesion, a large-scale public dataset for universal lesion detection from CT. We ran extensive experiments to choose the most suitable parameter, and the results show that the proposed method greatly improves the performance of a ULD detector. Our code can be obtained at https://github.com/PLuld0/PLuldl.
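A hedged sketch of the masking idea: compute the classification loss over all positive anchors but only a random subset of negatives; the 3:1 negative-to-positive ratio is an assumed placeholder for the paper's tuned parameter.

```python
# Classification loss over positives plus a sampled subset of negative anchors (sketch).
import torch
import torch.nn.functional as F

def masked_anchor_loss(logits, labels, neg_ratio=3.0):
    pos = labels == 1
    neg = labels == 0
    n_keep = int(neg_ratio * max(pos.sum().item(), 1))
    keep = torch.zeros_like(neg)
    neg_idx = neg.nonzero(as_tuple=False).squeeze(1)
    keep[neg_idx[torch.randperm(len(neg_idx))[:n_keep]]] = True   # random negative subset
    mask = pos | keep
    return F.binary_cross_entropy_with_logits(logits[mask], labels[mask].float())
```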
Affiliation(s)
- Xun Wang: Department of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China; High Performance Computer Research Center, University of Chinese Academy of Sciences, Beijing, China
- Xin Shi: Department of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Xiangyu Meng: Department of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Zhiyuan Zhang: Department of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Chaogang Zhang: Department of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
9
Glänzer L, Masalkhi HE, Roeth AA, Schmitz-Rode T, Slabu I. Vessel Delineation Using U-Net: A Sparse Labeled Deep Learning Approach for Semantic Segmentation of Histological Images. Cancers (Basel) 2023; 15:3773. PMID: 37568589. PMCID: PMC10417575. DOI: 10.3390/cancers15153773.
Abstract
Semantic segmentation is an important imaging analysis method enabling the identification of tissue structures. Histological image segmentation is particularly challenging, as it involves rich structural information while providing only limited training data. Additionally, labeling these structures to generate training data is time-consuming. Here, we demonstrate the feasibility of semantic segmentation using U-Net with a novel sparse labeling technique. The basic U-Net architecture was extended by attention gates, residual and recurrent links, and dropout regularization. To overcome the high class imbalance, which is intrinsic to histological data, under- and oversampling and data augmentation were used. In an ablation study, various architectures were evaluated, and the best-performing model was identified. This model contains attention gates, residual links, and a dropout regularization of 0.125. The segmented images show accurate delineations of the vascular structures (with a precision of 0.9088 and an AUC-ROC score of 0.9717), and the segmentation algorithm is robust to images containing staining variations and damaged tissue. These results demonstrate the feasibility of sparse labeling in combination with the modified U-Net architecture.
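One common way to implement the under-/oversampling mentioned above is inverse-frequency sampling; the tiny tensors below are placeholders for real image patches.

```python
# Inverse-frequency oversampling of the minority (vessel) class (sketch).
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

patches = torch.randn(6, 3, 64, 64)
labels = torch.tensor([0, 0, 0, 0, 1, 1])          # vessel patches are the minority
weights = 1.0 / torch.bincount(labels)[labels].float()
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
loader = DataLoader(TensorDataset(patches, labels), batch_size=2, sampler=sampler)
```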
Affiliation(s)
- Lukas Glänzer: Institute of Applied Medical Engineering, Helmholtz Institute, Medical Faculty, RWTH Aachen University, Pauwelsstraße 20, 52074 Aachen, Germany
- Husam E. Masalkhi: Institute of Applied Medical Engineering, Helmholtz Institute, Medical Faculty, RWTH Aachen University, Pauwelsstraße 20, 52074 Aachen, Germany
- Anjali A. Roeth: Department of Visceral and Transplantation Surgery, University Hospital RWTH Aachen, Pauwelsstrasse 30, 52074 Aachen, Germany; Department of Surgery, Maastricht University, P. Debyelaan 25, 6229 Maastricht, The Netherlands
- Thomas Schmitz-Rode: Institute of Applied Medical Engineering, Helmholtz Institute, Medical Faculty, RWTH Aachen University, Pauwelsstraße 20, 52074 Aachen, Germany
- Ioana Slabu: Institute of Applied Medical Engineering, Helmholtz Institute, Medical Faculty, RWTH Aachen University, Pauwelsstraße 20, 52074 Aachen, Germany
10
Shu YC, Lo YC, Chiu HC, Chen LR, Lin CY, Wu WT, Özçakar L, Chang KV. Deep learning algorithm for predicting subacromial motion trajectory: Dynamic shoulder ultrasound analysis. Ultrasonics 2023; 134:107057. PMID: 37290256. DOI: 10.1016/j.ultras.2023.107057.
Abstract
Subacromial motion metrics can be extracted from dynamic shoulder ultrasonography, which is useful for identifying abnormal motion patterns in painful shoulders. However, frame-by-frame manual labeling of anatomical landmarks in ultrasound images is time-consuming. The present study aims to investigate the feasibility of a deep learning algorithm for extracting subacromial motion metrics from dynamic ultrasonography. Dynamic ultrasound imaging was retrieved by asking 17 participants to perform cyclic shoulder abduction and adduction along the scapular plane, whereby the trajectory of the humeral greater tubercle (in relation to the lateral acromion) was depicted by the deep learning algorithm. Extraction of the subacromial motion metrics was conducted using a convolutional neural network (CNN) or a self-transfer learning-based (STL-)CNN with or without an autoencoder (AE). The mean absolute error (MAE) compared with the manually labeled data (ground truth) served as the main outcome variable. Using eight-fold cross-validation, the average MAE was significantly higher in the group using the CNN than in those using the STL-CNN or STL-CNN+AE for the relative difference between the greater tubercle and lateral acromion on the horizontal axis. The MAE for the localization of the two aforementioned landmarks on the vertical axis was also larger for the CNN than for the STL-CNN. In the testing dataset, the errors in relation to the ground truth for the minimal vertical acromiohumeral distance were 0.081-0.333 cm using the CNN, compared with 0.002-0.007 cm using the STL-CNN. We successfully demonstrated the feasibility of a deep learning algorithm for automatic detection of the greater tubercle and lateral acromion during dynamic shoulder ultrasonography. Our framework also demonstrated the capability of capturing the minimal vertical acromiohumeral distance, which is the most important indicator of subacromial motion metrics in daily clinical practice.
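As a hedged illustration of transfer-learning landmark localization (not the authors' exact STL-CNN), a pretrained backbone can be re-headed to regress the two landmark coordinates and trained with an L1 (MAE) objective:

```python
# Pretrained backbone re-headed as a 2-landmark coordinate regressor (sketch).
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 4)   # (x, y) for two landmarks
frame = torch.randn(1, 3, 224, 224)                   # one ultrasound frame (assumed size)
landmarks = model(frame).reshape(2, 2)                # greater tubercle, lateral acromion
loss_fn = torch.nn.L1Loss()                           # matches the MAE evaluation above
```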
Affiliation(s)
- Yi-Chung Shu: Institute of Applied Mechanics, College of Engineering, National Taiwan University, Taipei, Taiwan
- Yu-Cheng Lo: Institute of Applied Mechanics, College of Engineering, National Taiwan University, Taipei, Taiwan
- Hsiao-Chi Chiu: Institute of Applied Mechanics, College of Engineering, National Taiwan University, Taipei, Taiwan
- Lan-Rong Chen: Department of Physical Medicine and Rehabilitation and Community and Geriatric Research Center, National Taiwan University Hospital, Bei-Hu Branch, Taipei, Taiwan
- Che-Yu Lin: Institute of Applied Mechanics, College of Engineering, National Taiwan University, Taipei, Taiwan
- Wei-Ting Wu: Department of Physical Medicine and Rehabilitation and Community and Geriatric Research Center, National Taiwan University Hospital, Bei-Hu Branch, Taipei, Taiwan; Department of Physical Medicine and Rehabilitation, National Taiwan University College of Medicine, Taipei, Taiwan
- Levent Özçakar: Department of Physical and Rehabilitation Medicine, Hacettepe University Medical School, Ankara, Turkey
- Ke-Vin Chang: Department of Physical Medicine and Rehabilitation and Community and Geriatric Research Center, National Taiwan University Hospital, Bei-Hu Branch, Taipei, Taiwan; Department of Physical Medicine and Rehabilitation, National Taiwan University College of Medicine, Taipei, Taiwan; Center for Regional Anesthesia and Pain Medicine, Wang-Fang Hospital, Taipei Medical University, Taipei, Taiwan
11
Jiang Z, Polf JC, Barajas CA, Gobbert MK, Ren L. A feasibility study of enhanced prompt gamma imaging for range verification in proton therapy using deep learning. Phys Med Biol 2023; 68. PMID: 36848674. PMCID: PMC10173868. DOI: 10.1088/1361-6560/acbf9a.
Abstract
Background and objective. Range uncertainty is a major concern affecting the delivery precision in proton therapy. Compton camera (CC)-based prompt-gamma (PG) imaging is a promising technique to provide 3D in vivo range verification. However, the conventional back-projected PG images suffer from severe distortions due to the limited view of the CC, significantly limiting its clinical utility. Deep learning has demonstrated effectiveness in enhancing medical images from limited-view measurements. But different from other medical images with abundant anatomical structures, the PGs emitted along the path of a proton pencil beam take up an extremely low portion of the 3D image space, presenting both an attention and an imbalance challenge for deep learning. To solve these issues, we proposed a two-tier deep learning-based method with a novel weighted axis-projection loss to generate precise 3D PG images to achieve accurate proton range verification. Materials and methods. The proposed method consists of two models: first, a localization model is trained to define a region-of-interest (ROI) in the distorted back-projected PG image that contains the proton pencil beam; second, an enhancement model is trained to restore the true PG emissions with additional attention on the ROI. In this study, we simulated 54 proton pencil beams (energy range: 75-125 MeV; dose levels: 1 × 10⁹ protons/beam and 3 × 10⁸ protons/beam) delivered at clinical dose rates (20 kMU min⁻¹ and 180 kMU min⁻¹) in a tissue-equivalent phantom using Monte Carlo (MC). PG detection with a CC was simulated using the MC-Plus-Detector-Effects model. Images were reconstructed using the kernel-weighted-back-projection algorithm and were then enhanced by the proposed method. Results. The method effectively restored the 3D shape of the PG images, with the proton pencil beam range clearly visible in all testing cases. Range errors were within 2 pixels (4 mm) in all directions in most cases at the higher dose level. The proposed method is fully automatic, and the enhancement takes only ∼0.26 s. Significance. Overall, this preliminary study demonstrated the feasibility of the proposed method to generate accurate 3D PG images using a deep learning framework, providing a powerful tool for high-precision in vivo range verification of proton therapy.
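The exact form of the weighted axis-projection loss is not given here; a plausible sketch, under the assumption that the sparse 3D volumes are compared through weighted 1D projections along each axis, is:

```python
# Compare predicted and true PG volumes via weighted per-axis projections (assumed form).
import torch

def weighted_axis_projection_loss(pred, target, w=(1.0, 1.0, 1.0)):
    loss = 0.0
    for axis, weight in zip((2, 3, 4), w):        # D, H, W axes of a (B, C, D, H, W) volume
        dims = [d for d in (2, 3, 4) if d != axis]
        # summing over the other two axes yields a 1D profile along `axis`
        loss = loss + weight * torch.mean((pred.sum(dim=dims) - target.sum(dim=dims)) ** 2)
    return loss

pred = torch.rand(1, 1, 32, 32, 32)
target = torch.rand(1, 1, 32, 32, 32)
print(weighted_axis_projection_loss(pred, target))
```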
Affiliation(s)
- Zhuoran Jiang: Medical Physics Graduate Program, Duke University, Durham, NC, 27705, USA; Department of Radiation Oncology, Duke University Medical Center, Durham, NC, 27710, USA
- Jerimy C. Polf: Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD, 21201, USA
- Carlos A. Barajas: Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore, MD, 21250, USA
- Matthias K. Gobbert: Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore, MD, 21250, USA
- Lei Ren: Medical Physics Graduate Program, Duke University, Durham, NC, 27705, USA; Department of Radiation Oncology, Duke University Medical Center, Durham, NC, 27710, USA; Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD, 21201, USA
12
Jang M, Bae HJ, Kim M, Park SY, Son AY, Choi SJ, Choe J, Choi HY, Hwang HJ, Noh HN, Seo JB, Lee SM, Kim N. Image Turing test and its applications on synthetic chest radiographs by using the progressive growing generative adversarial network. Sci Rep 2023; 13:2356. PMID: 36759636. PMCID: PMC9911730. DOI: 10.1038/s41598-023-28175-1.
Abstract
The generative adversarial network (GAN) is a promising deep learning method for generating images. We evaluated the generation of highly realistic and high-resolution chest radiographs (CXRs) using a progressive growing GAN (PGGAN). We trained two PGGAN models using normal and abnormal CXRs, relying solely on normal CXRs to demonstrate the quality of synthetic CXRs, which were 1000 × 1000 pixels in size. Image Turing tests were evaluated by six radiologists in a binary fashion using two independent validation sets to judge the authenticity of each CXR, with a mean accuracy of 67.42% and 69.92% for the first and second trials, respectively. Inter-reader agreements were poor for the first (κ = 0.10) and second (κ = 0.14) Turing tests. Additionally, a convolutional neural network (CNN) was used to classify normal or abnormal CXRs using datasets of only real images and/or mixed with synthetic images. The accuracy of the CNN model trained using a mixed dataset of synthetic and real data was 93.3%, compared to 91.0% for the model built using only real data. PGGAN was able to generate CXRs nearly indistinguishable from real CXRs, showing promise for overcoming class imbalances in CNN training.
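Mixing real and synthetic images for CNN training, as evaluated above, can be sketched with a concatenated dataset; the random tensors are stand-ins for real and PGGAN-generated CXRs.

```python
# Train-time mixing of real and GAN-generated images (sketch with dummy tensors).
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

real = TensorDataset(torch.randn(100, 1, 256, 256), torch.randint(0, 2, (100,)))
synthetic = TensorDataset(torch.randn(100, 1, 256, 256), torch.randint(0, 2, (100,)))
mixed = ConcatDataset([real, synthetic])   # mixed dataset used to train the CNN
loader = DataLoader(mixed, batch_size=16, shuffle=True)
```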
Affiliation(s)
- Miso Jang: Department of Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea; Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Minjee Kim: Promedius Inc., Seoul, Republic of Korea
- Seo Young Park: Department of Statistics and Data Science, Korea National Open University, Seoul, Republic of Korea
- A-Yeon Son: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine and Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Se Jin Choi: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine and Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Jooae Choe: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine and Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Hye Young Choi: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine and Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Hye Jeon Hwang: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine and Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Han Na Noh: Department of Health Screening and Promotion Center, Asan Medical Center, Seoul, Republic of Korea
- Joon Beom Seo: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine and Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Sang Min Lee: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine and Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Namkug Kim: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine and Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea; Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
13
Cantone M, Marrocco C, Tortorella F, Bria A. Convolutional Networks and Transformers for Mammography Classification: An Experimental Study. Sensors (Basel) 2023; 23:1229. PMID: 36772268. PMCID: PMC9921468. DOI: 10.3390/s23031229.
Abstract
Convolutional Neural Networks (CNNs) have received a large share of research in mammography image analysis due to their capability of extracting hierarchical features directly from raw data. Recently, Vision Transformers are emerging as a viable alternative to CNNs in medical imaging, in some cases performing on par with or better than their convolutional counterparts. In this work, we conduct an extensive experimental study to compare the most recent CNN and Vision Transformer architectures for whole-mammogram classification. We selected, trained, and tested 33 different models, 19 convolutional- and 14 transformer-based, on the largest publicly available mammography image database, OMI-DB. We also analysed performance at eight different image resolutions and considered all the individual lesion categories in isolation (masses, calcifications, focal asymmetries, architectural distortions). Our findings confirm the potential of Vision Transformers, which performed on par with traditional CNNs like ResNet, but at the same time show the superiority of modern convolutional networks like EfficientNet.
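A study of this kind is straightforward to set up with the timm library, which exposes both CNN and Vision Transformer backbones behind one factory call; the model names below are examples, not the paper's full list of 33.

```python
# Head-to-head backbone setup: same data and training loop, different architectures (sketch).
import timm

for name in ["resnet50", "efficientnet_b0",
             "vit_base_patch16_224", "swin_tiny_patch4_window7_224"]:
    model = timm.create_model(name, pretrained=False, num_classes=2, in_chans=1)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
```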
Affiliation(s)
- Marco Cantone: Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
- Claudio Marrocco: Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
- Francesco Tortorella: Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, SA, Italy
- Alessandro Bria: Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
14
Yu X, Wang SH, Zhang YD. Multiple-level thresholding for breast mass detection. J King Saud Univ Comput Inf Sci 2023; 35:115-130. PMID: 37220564. PMCID: PMC7614559. DOI: 10.1016/j.jksuci.2022.11.006.
Abstract
Detection of breast masses plays a very important role in the diagnosis of breast cancer. For faster detection of breast cancer caused by breast masses, we developed a novel and efficient patch-based breast mass detection system for mammography images. The proposed framework comprises three modules: pre-processing, multiple-level breast tissue segmentation, and final breast mass detection. An improved Deeplabv3+ model for pectoral muscle removal is deployed in pre-processing. We then propose a multiple-level thresholding segmentation method to segment breast masses and obtain the connected components (ConCs), where the image patch corresponding to each ConC is extracted for mass detection. In the final detection stage, each image patch is classified as breast mass or breast tissue background by trained deep learning models, and the patches classified as breast mass are taken as candidates. To reduce the false positive rate, we apply the non-maximum suppression algorithm to combine overlapping detection results. Once an image patch is considered a breast mass, the accurate detection result can be retrieved from the corresponding ConC in the segmented images. Moreover, a coarse segmentation result can be retrieved simultaneously after detection. Compared to state-of-the-art methods, the proposed method achieved comparable performance. On CBIS-DDSM, the proposed method achieved a detection sensitivity of 0.87 at 2.86 FPI (false positives per image), while the sensitivity reached 0.96 on INbreast with an FPI of only 1.29.
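The non-maximum suppression step can be sketched with torchvision's built-in implementation; the boxes and scores below are made-up patch detections.

```python
# Suppress overlapping candidate detections with NMS (dummy boxes).
import torch
from torchvision.ops import nms

boxes = torch.tensor([[10., 10., 60., 60.],     # (x1, y1, x2, y2) candidates
                      [12., 14., 64., 62.],
                      [100., 90., 150., 140.]])
scores = torch.tensor([0.92, 0.85, 0.70])
keep = nms(boxes, scores, iou_threshold=0.5)    # indices of surviving detections
print(boxes[keep])
```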
Affiliation(s)
- Xiang Yu: School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, United Kingdom
- Shui-Hua Wang: School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, United Kingdom
- Yu-Dong Zhang: School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, United Kingdom
15
The class imbalance problem in deep learning. Mach Learn 2022. DOI: 10.1007/s10994-022-06268-8.
16
Walsh R, Tardy M. A Comparison of Techniques for Class Imbalance in Deep Learning Classification of Breast Cancer. Diagnostics (Basel) 2022; 13:67. PMID: 36611358. PMCID: PMC9818528. DOI: 10.3390/diagnostics13010067.
Abstract
Tools based on deep learning models have been created in recent years to aid radiologists in the diagnosis of breast cancer from mammograms. However, the datasets used to train these models may suffer from class imbalance, i.e., there are often fewer malignant samples than benign or healthy cases, which can bias the model towards the healthy class. In this study, we systematically evaluate several popular techniques to deal with this class imbalance, namely class weighting, over-sampling, and under-sampling, as well as a synthetic lesion generation approach to increase the number of malignant samples. These techniques are applied when training on three diverse Full-Field Digital Mammography datasets, and tested on in-distribution and out-of-distribution samples. The experiments show that a greater imbalance is associated with a greater bias towards the majority class, which can be counteracted by any of the standard class imbalance techniques. On the other hand, these methods provide no benefit to model performance with respect to the Area Under the Curve of the Receiver Operating Characteristic (AUC-ROC), and indeed under-sampling leads to a reduction of 0.066 in AUC in the case of a 19:1 benign-to-malignant imbalance. Our synthetic lesion methodology leads to better performance in most cases, with increases of up to 0.07 in AUC on out-of-distribution test sets over the next best experiment.
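The class-weighting baseline evaluated above is typically implemented by passing inverse-frequency weights to the loss; a minimal sketch assuming the 19:1 imbalance mentioned in the abstract:

```python
# Inverse-frequency class weights fed to the training loss (sketch).
import torch

labels = torch.tensor([0] * 190 + [1] * 10)         # 19:1 benign:malignant
counts = torch.bincount(labels).float()
class_weights = counts.sum() / (2 * counts)         # minority class weighted up
loss_fn = torch.nn.CrossEntropyLoss(weight=class_weights)
```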
Affiliation(s)
- Ricky Walsh: ISTIC, Campus Beaulieu, Université de Rennes 1, 35700 Rennes, France; Hera-MI SAS, 44800 Saint-Herblain, France
- Mickael Tardy: Hera-MI SAS, 44800 Saint-Herblain, France; Ecole Centrale Nantes, CNRS, LS2N, UMR 6004, 44000 Nantes, France
17
Piao C, Lv M, Wang S, Zhou R, Wang Y, Wei J, Liu J. Multi-objective data enhancement for deep learning-based ultrasound analysis. BMC Bioinformatics 2022; 23:438. PMID: 36266626. PMCID: PMC9583467. DOI: 10.1186/s12859-022-04985-4.
Abstract
Recently, deep learning-based automatic generation of treatment recommendations has been attracting much attention. However, medical datasets are usually small, which may lead to over-fitting and inferior performance of deep learning models. In this paper, we propose a multi-objective data enhancement method to indirectly scale up the medical data, avoid over-fitting, and generate high-quality treatment recommendations. Specifically, we define a main task and several auxiliary tasks on the same dataset and train a specific model for each of these tasks to learn different aspects of knowledge at a limited data scale. Meanwhile, a soft parameter sharing method is exploited to share learned knowledge among the models. By sharing the knowledge learned by auxiliary tasks with the main task, the proposed method can take different semantic distributions into account during the training of the main task. We collected an ultrasound dataset of thyroid nodules that contains findings, impressions, and treatment recommendations labeled by professional doctors. We conducted various experiments on the dataset to validate the proposed method and demonstrated its better performance over existing methods.
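A minimal sketch of soft parameter sharing: an L2 penalty pulls corresponding parameters of the task-specific models toward each other; identically shaped models and the coefficient are illustrative assumptions.

```python
# Soft parameter sharing between a main-task model and an auxiliary-task model (sketch).
import torch

def soft_sharing_penalty(model_a, model_b, lam=1e-3):
    """Assumes model_a and model_b have identical architectures."""
    penalty = 0.0
    for pa, pb in zip(model_a.parameters(), model_b.parameters()):
        penalty = penalty + ((pa - pb) ** 2).sum()
    return lam * penalty

# total_loss = main_loss + aux_loss + soft_sharing_penalty(main_net, aux_net)
```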
Affiliation(s)
- Chengkai Piao: College of Computer Science, Nankai University, Tianjin, China
- Mengyue Lv: Department of Ultrasound, Cangzhou Municipal Haixing Hospital, Cangzhou, China
- Shujie Wang: Department of Ultrasound, Cangzhou Municipal Haixing Hospital, Cangzhou, China
- Rongyan Zhou: Department of Ultrasound, Cangzhou Municipal Haixing Hospital, Cangzhou, China
- Yuchen Wang: College of Computer Science, Nankai University, Tianjin, China
- Jinmao Wei: College of Computer Science, Nankai University, Tianjin, China
- Jian Liu: College of Computer Science, Nankai University, Tianjin, China
18
Dubey S, Dixit M. Recent developments on computer aided systems for diagnosis of diabetic retinopathy: a review. Multimed Tools Appl 2022; 82:14471-14525. PMID: 36185322. PMCID: PMC9510498. DOI: 10.1007/s11042-022-13841-9.
Abstract
Diabetes is a long-term condition in which the pancreas stops producing insulin or the body's insulin is not utilised properly. One of its complications is diabetic retinopathy, the most prevalent diabetes-related eye disease; if it remains unaddressed, diabetic retinopathy can affect all diabetics, become very serious, and raise the chances of blindness. It is a chronic systemic condition that affects up to 80% of patients who have had diabetes for more than ten years. Many researchers believe that if diabetic individuals are diagnosed early enough, they can be rescued from the condition in 90% of cases. Diabetes damages the capillaries, the microscopic blood vessels in the retina, and blood vessel damage is usually noticeable on images. Therefore, in this study, several traditional as well as deep learning-based approaches are reviewed for the classification and detection of this particular diabetic eye disease, and the advantages of one approach over another are also described. Along with the approaches, the datasets and the evaluation metrics useful for DR detection and classification are discussed. The main aim of this study is to make researchers aware of the different challenges that occur when detecting diabetic retinopathy using computer vision and deep learning techniques. The purpose of this review is thus to summarize all the major aspects of DR detection, such as lesion identification, classification and segmentation, security attacks on deep learning models, proper categorization of datasets, and evaluation metrics. As deep learning models are quite expensive and prone to security attacks, it is advisable in the future to develop a refined, reliable, and robust model that addresses all these aspects, which are commonly encountered when designing deep learning models.
Affiliation(s)
- Shradha Dubey: Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P., India
- Manish Dixit: Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P., India
19
Huang HN, Zhang T, Yang CT, Sheen YJ, Chen HM, Chen CJ, Tseng MW. Image segmentation using transfer learning and Fast R-CNN for diabetic foot wound treatments. Front Public Health 2022; 10:969846. PMID: 36203688. PMCID: PMC9530356. DOI: 10.3389/fpubh.2022.969846.
Abstract
Diabetic foot ulcers (DFUs) are considered the most challenging forms of chronic ulceration to handle, owing to their multifactorial nature. It is necessary to establish a comprehensive treatment plan and an accurate, systematic evaluation of a patient with a DFU. This paper proposes image recognition of diabetic foot wounds to support the effective execution of the treatment plan. For grading the severity of a diabetic foot ulcer, we refer to the PEDIS index, the qualitative evaluation method commonly used in clinical practice and developed by the International Working Group on the Diabetic Foot, together with the evaluations made by physicians. Deep neural networks, convolutional neural networks, object recognition, and other technologies are applied to analyze the classification, location, and size of wounds through image analysis. The image features are labeled with the help of physicians. The object detection Fast R-CNN method is applied to these wound images to build and train machine learning modules and evaluate their effectiveness. The assessment accuracy of wound image detection can be as high as 90%.
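A standard torchvision fine-tuning recipe for this kind of wound detector is sketched below, using Faster R-CNN (the successor of Fast R-CNN) with a re-initialized box predictor; the two-class setup is an assumption, not the paper's exact configuration.

```python
# Fine-tune a detection model for a single "wound" foreground class (sketch).
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
num_classes = 2  # background + wound
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
```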
Affiliation(s)
- Huang-Nan Huang: Department of Applied Mathematics, Tunghai University, Taichung, Taiwan
- Tianyi Zhang: Department of Computer Science, Tunghai University, Taichung, Taiwan
- Chao-Tung Yang: Department of Computer Science, Tunghai University, Taichung, Taiwan; Research Center for Smart Sustainable Circular Economy, Tunghai University, Taichung, Taiwan
- Yi-Jing Sheen: Division of Endocrinology and Metabolism, Department of Internal Medicine, Taichung Veterans General Hospital, Taichung, Taiwan
- Hsian-Min Chen: Department of Medical Research, Center for Quantitative Imaging in Medicine (CQUIM), Taichung Veterans General Hospital, Taichung, Taiwan
- Chur-Jen Chen: Department of Applied Mathematics, Tunghai University, Taichung, Taiwan
- Meng-Wen Tseng: Division of Endocrinology and Metabolism, Department of Internal Medicine, Taichung Veterans General Hospital, Taichung, Taiwan
20
Gong EJ, Bang CS, Lee JJ, Yang YJ, Baik GH. Impact of the Volume and Distribution of Training Datasets in the Development of Deep-Learning Models for the Diagnosis of Colorectal Polyps in Endoscopy Images. J Pers Med 2022; 12:1361. PMID: 36143146. PMCID: PMC9505038. DOI: 10.3390/jpm12091361.
Abstract
Background: There is no standardized dataset for establishing an artificial intelligence model in gastrointestinal endoscopy, and the optimal volume or class distribution of training datasets has not been evaluated. An artificial intelligence model was previously created by the authors to classify endoscopic images of colorectal polyps into four categories: advanced colorectal cancer, early cancers/high-grade dysplasia, tubular adenoma, and non-neoplasm. The aim of this study was to evaluate the impact of the volume and class distribution of training datasets on the development of deep-learning models for colorectal polyp histopathology prediction from endoscopic images. Methods: The same 3828 endoscopic images that were used to create earlier models were used, and an additional 6838 images were used to find the optimal volume and class distribution for a deep-learning model. Various data volumes and class distributions were tried to establish deep-learning models. The training of deep-learning models uniformly used the no-code platform Neuro-T, and accuracy was the primary outcome for the four-class prediction. Results: The highest internal-test classification accuracy in the original, doubled, and tripled datasets was commonly obtained by doubling the proportion of data for the fewer categories (2:2:1:1 for advanced colorectal cancer : early cancers/high-grade dysplasia : tubular adenoma : non-neoplasm). Doubling the proportion of data for the fewer categories in the original dataset showed the highest accuracy (86.4%, 95% confidence interval: 85.0-97.8%) compared to that of the doubled or tripled dataset; the total required number of images for this performance was only 2418. Gradient-weighted class activation mapping confirmed that the regions the deep-learning model attends to coincide with those the endoscopist attends to. Conclusion: Because of a data-volume-dependent performance plateau in the colonoscopy classification model, a doubled or tripled dataset is not always beneficial to training, and deep-learning models would be more accurate if the proportion of lesions in the fewer categories were increased.
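The 2:2:1:1 resampling described above can be planned with a few lines of Python; the per-class image counts below are made-up placeholders.

```python
# Plan how many images to sample per class for a 2:2:1:1 distribution (sketch).
classes = {"advanced_cancer": 300, "early_HGD": 300, "adenoma": 1600, "non_neoplasm": 1600}
ratio = {"advanced_cancer": 2, "early_HGD": 2, "adenoma": 1, "non_neoplasm": 1}
unit = min(n / ratio[c] for c, n in classes.items())   # largest feasible ratio unit
plan = {c: int(unit * ratio[c]) for c in classes}
print(plan)   # fewer categories receive a doubled share of the training data
```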
Affiliation(s)
- Eun Jeong Gong: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Chang Seok Bang: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea; Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea
- Jae Jun Lee: Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon 24253, Korea; Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Young Joo Yang: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
- Gwang Ho Baik: Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea
21
A Priori Determining the Performance of the Customized Naïve Associative Classifier for Business Data Classification Based on Data Complexity Measures. Mathematics 2022. DOI: 10.3390/math10152740.
Abstract
In the supervised classification area, the algorithm selection problem (ASP) refers to determining a priori the performance of a given classifier on some specific problem, as well as finding the most suitable classifier for a given task. Recently, this topic has attracted the attention of international research groups because a very promising vein of research has emerged: the application of measures of data complexity to pattern classification algorithms. This paper aims to analyze the response of the Customized Naïve Associative Classifier (CNAC) on data taken from the business area when some measures of data complexity are introduced. To perform this analysis, we used 22 real-world classification datasets related to business; we then computed the values of nine measures of data complexity to compare the performance of the CNAC against other state-of-the-art algorithms. A very important aspect of this task is the creation of an artificial dataset for meta-learning purposes, in which we considered the performance of the CNAC and then trained a decision tree as a meta-learner. As shown, the CNAC classifier obtained the best results for 10 out of the 22 datasets in the experimental study.
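As an example of the kind of data complexity measure involved, a sketch of Fisher's discriminant ratio (maximum over features) is given below; this is a generic formulation, not necessarily one of the paper's nine measures.

```python
# Fisher's discriminant ratio: between-class vs within-class variance per feature (sketch).
import numpy as np

def fisher_ratio(X, y):
    classes = np.unique(y)
    mu = X.mean(axis=0)
    between = sum((X[y == c].mean(axis=0) - mu) ** 2 * (y == c).sum() for c in classes)
    within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0) for c in classes)
    return float(np.max(between / (within + 1e-12)))

X = np.vstack([np.random.randn(50, 3), np.random.randn(50, 3) + 2.0])
y = np.array([0] * 50 + [1] * 50)
print(fisher_ratio(X, y))   # higher values indicate an easier classification problem
```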
Collapse
|
22
|
Dehkordi HA, Nezhad AS, Kashiani H, Shokouhi SB, Ayatollahi A. Multi-expert human action recognition with hierarchical super-class learning. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
23
|
Fu X, Bates PA. Application of deep learning methods: From molecular modelling to patient classification. Exp Cell Res 2022; 418:113278. [PMID: 35810775 DOI: 10.1016/j.yexcr.2022.113278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 06/16/2022] [Accepted: 07/05/2022] [Indexed: 11/28/2022]
Abstract
We are now well into the information-driven age, with complex, heterogeneous datasets in the biological sciences continuing to grow at a rapid pace, and efforts to distill such datasets into new governing principles are underway. Leading the surge are new and exciting algorithmic developments in computer simulation and machine learning, most notably, for the latter, those centred on deep learning. However, practical applications of cell-centric computations within the biological sciences, even when carefully benchmarked against existing experimental datasets, remain challenging. Here we discuss the application of deep learning methodologies to support our understanding of cell functionality and as an aid to patient classification. Whilst comprehensive end-to-end deep learning approaches that utilise knowledge of the cell and its molecular components to aid human disease classification, important for opening the door to more effective molecular and cell-based therapies, are yet to be implemented, we illustrate that many deep learning applications have been developed to tackle components of such an ambitious pipeline. We end our discussion on what the future may hold, especially how an integrated framework of computer simulations and deep learning, in conjunction with wet-bench experimentation, could help reveal the governing principles underlying cell functionality within the tissue environments in which cells operate.
Collapse
Affiliation(s)
- Xiao Fu
- Biomolecular Modelling Laboratory, The Francis Crick Institute, 1 Midland Rd, London, NW1 1AT, UK.
| | - Paul A Bates
- Biomolecular Modelling Laboratory, The Francis Crick Institute, 1 Midland Rd, London, NW1 1AT, UK.
| |
Collapse
|
24
|
Barragán-Montero A, Bibal A, Dastarac MH, Draguet C, Valdés G, Nguyen D, Willems S, Vandewinckele L, Holmström M, Löfman F, Souris K, Sterpin E, Lee JA. Towards a safe and efficient clinical implementation of machine learning in radiation oncology by exploring model interpretability, explainability and data-model dependency. Phys Med Biol 2022; 67:10.1088/1361-6560/ac678a. [PMID: 35421855 PMCID: PMC9870296 DOI: 10.1088/1361-6560/ac678a] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 04/14/2022] [Indexed: 01/26/2023]
Abstract
The interest in machine learning (ML) has grown tremendously in recent years, partly due to the performance leap brought by new deep learning techniques, convolutional neural networks for images, increased computational power, and the wider availability of large datasets. Most fields of medicine follow this trend, and radiation oncology is notably at the forefront, with a long tradition of digital images and fully computerized workflows. ML models are driven by data and, in contrast with many statistical or physical models, they can be very large and complex, with countless generic parameters. This inevitably raises two questions: the tight dependence between the models and the datasets that feed them, and the interpretability of the models, which degrades as their complexity grows. Any problems in the data used to train a model will later be reflected in its performance. This, together with the low interpretability of ML models, makes their implementation into the clinical workflow particularly difficult. Building tools for risk assessment and quality assurance of ML models must therefore address two main points: interpretability and data-model dependency. After a joint introduction to both radiation oncology and ML, this paper reviews the main risks and current solutions when applying the latter to workflows of the former. Risks associated with data and models, as well as their interaction, are detailed. Next, the core concepts of interpretability, explainability, and data-model dependency are formally defined and illustrated with examples. Afterwards, a broad discussion goes through key applications of ML in radiation oncology workflows as well as vendors' perspectives for the clinical implementation of ML.
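One concrete, model-agnostic way to probe the data-model dependency discussed above is permutation importance, which measures how much a fitted model's performance drops when each input is shuffled. This is a generic sketch, not a method from the paper; the data and feature roles are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for, e.g., dosimetric/clinical features predicting an outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 2.0 * X[:, 0] - X[:, 2] + rng.normal(scale=0.1, size=500)  # only features 0 and 2 matter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature on held-out data and measure the score drop.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(-imp.importances_mean):
    print(f"feature {i}: {imp.importances_mean[i]:.3f}")
```

A model that leans heavily on a feature known to be noisy or site-specific is a warning sign for clinical deployment, exactly the kind of risk the review discusses.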
Collapse
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
| | - Adrien Bibal
- PReCISE, NaDI Institute, Faculty of Computer Science, UNamur and CENTAL, ILC, UCLouvain, Belgium
| | - Margerie Huet Dastarac
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
| | - Camille Draguet
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
| | - Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, United States of America
| | - Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, United States of America
| | - Siri Willems
- ESAT/PSI, KU Leuven Belgium & MIRC, UZ Leuven, Belgium
| | | | | | | | - Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
| | - Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
| | - John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
| |
Collapse
|
25
|
Xu L, Yang C, Zhang F, Cheng X, Wei Y, Fan S, Liu M, He X, Deng J, Xie T, Wang X, Liu M, Song B. Deep Learning Using CT Images to Grade Clear Cell Renal Cell Carcinoma: Development and Validation of a Prediction Model. Cancers (Basel) 2022; 14:cancers14112574. [PMID: 35681555 PMCID: PMC9179576 DOI: 10.3390/cancers14112574] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 04/21/2022] [Accepted: 04/29/2022] [Indexed: 02/06/2023] Open
Abstract
Simple Summary: Clear cell renal cell carcinoma (ccRCC) pathologic grade identification is essential both to monitoring patients' conditions and to constructing individualized treatment strategies. However, biopsies are typically used to obtain the pathological grade, entailing physical and mental suffering as well as a heavy economic burden, not to mention an increased risk of complications. Our study explores a new way to assess ccRCC grade based on the individual's appearance on CT images. A deep learning (DL) method that includes self-supervised learning is constructed to identify patients with high-grade ccRCC. We confirmed that our grading network can accurately differentiate between grades on CT scans of ccRCC patients using a cohort of 706 patients from West China Hospital. The promising diagnostic performance indicates that our DL framework is an effective, non-invasive, and labor-saving method for decoding CT images, offering a valuable means for ccRCC grade stratification and individualized patient treatment. Abstract: This retrospective study aimed to develop and validate deep-learning-based models for grading ccRCC patients. A cohort of 706 patients with pathologically verified ccRCC was used. A temporal split was applied to verify our models: the first 83.9% of cases (years 2010–2017) for development and the last 16.1% (years 2018–2019) for validation (development cohort: n = 592; validation cohort: n = 114). We demonstrate a DL framework initialized by a self-supervised pre-training method and developed with a mixed loss strategy and sample reweighting to identify patients with high-grade ccRCC. Four types of DL networks were developed separately and further combined with different weights for better prediction. The best single DL model achieved an area under the curve (AUC) of 0.864 in the validation cohort, while the ensembled model yielded the best predictive performance with an AUC of 0.882. These findings confirm that our DL approach performs favorably or comparably to biopsy in grade assessment of ccRCC whilst being non-invasive and labor-saving.
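The weighted combination of per-model probabilities that lifts the AUC from 0.864 to 0.882 can be expressed in a few lines. This is a generic sketch of the idea; the weights, predictions, and labels below are illustrative placeholders, not the paper's values.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ensemble_auc(prob_list, weights, y_true):
    """prob_list: per-model arrays of predicted high-grade probabilities;
    weights: one weight per model, summing to 1. Returns ensemble AUC."""
    p = np.average(np.stack(prob_list), axis=0, weights=weights)
    return roc_auc_score(y_true, p)

# Toy validation predictions from two hypothetical models:
y  = np.array([0, 1, 1, 0, 1])
p1 = np.array([0.2, 0.8, 0.6, 0.3, 0.7])
p2 = np.array([0.1, 0.7, 0.8, 0.4, 0.9])
print(ensemble_auc([p1, p2], weights=[0.6, 0.4], y_true=y))
```

In practice the weights would be tuned on the development cohort (e.g., by grid search) and then frozen before scoring the temporally held-out validation set.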
Collapse
Affiliation(s)
- Lifeng Xu
- The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People’s Hospital, Quzhou 324000, China; (L.X.); (F.Z.)
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, China; (C.Y.); (X.C.); (S.F.); (M.L.); (J.D.); (T.X.); (X.W.); (M.L.)
| | - Chun Yang
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, China; (C.Y.); (X.C.); (S.F.); (M.L.); (J.D.); (T.X.); (X.W.); (M.L.)
- University of Electronic Science and Technology of China, Chengdu 610000, China
| | - Feng Zhang
- The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People’s Hospital, Quzhou 324000, China; (L.X.); (F.Z.)
| | - Xuan Cheng
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, China; (C.Y.); (X.C.); (S.F.); (M.L.); (J.D.); (T.X.); (X.W.); (M.L.)
- University of Electronic Science and Technology of China, Chengdu 610000, China
| | - Yi Wei
- West China Hospital, Sichuan University, Chengdu 610000, China;
| | - Shixiao Fan
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, China; (C.Y.); (X.C.); (S.F.); (M.L.); (J.D.); (T.X.); (X.W.); (M.L.)
- University of Electronic Science and Technology of China, Chengdu 610000, China
| | - Minghui Liu
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, China; (C.Y.); (X.C.); (S.F.); (M.L.); (J.D.); (T.X.); (X.W.); (M.L.)
- University of Electronic Science and Technology of China, Chengdu 610000, China
| | - Xiaopeng He
- West China Hospital, Sichuan University, Chengdu 610000, China;
- Affiliated Hospital of Southwest Medical University, Luzhou 646000, China
- Correspondence: (X.H.); (B.S.)
| | - Jiali Deng
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, China; (C.Y.); (X.C.); (S.F.); (M.L.); (J.D.); (T.X.); (X.W.); (M.L.)
- University of Electronic Science and Technology of China, Chengdu 610000, China
| | - Tianshu Xie
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, China; (C.Y.); (X.C.); (S.F.); (M.L.); (J.D.); (T.X.); (X.W.); (M.L.)
- University of Electronic Science and Technology of China, Chengdu 610000, China
| | - Xiaomin Wang
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, China; (C.Y.); (X.C.); (S.F.); (M.L.); (J.D.); (T.X.); (X.W.); (M.L.)
- University of Electronic Science and Technology of China, Chengdu 610000, China
| | - Ming Liu
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, China; (C.Y.); (X.C.); (S.F.); (M.L.); (J.D.); (T.X.); (X.W.); (M.L.)
- University of Electronic Science and Technology of China, Chengdu 610000, China
| | - Bin Song
- West China Hospital, Sichuan University, Chengdu 610000, China;
- Correspondence: (X.H.); (B.S.)
| |
Collapse
|
26
|
State-of-the-art retinal vessel segmentation with minimalistic models. Sci Rep 2022; 12:6174. [PMID: 35418576 PMCID: PMC9007957 DOI: 10.1038/s41598-022-09675-y] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Accepted: 03/10/2022] [Indexed: 01/03/2023] Open
Abstract
The segmentation of retinal vasculature from eye fundus images is a fundamental task in retinal image analysis. Over recent years, increasingly complex approaches based on sophisticated Convolutional Neural Network architectures have been pushing performance on well-established benchmark datasets. In this paper, we take a step back and analyze the real need for such complexity. We first compile and review the performance of 20 different techniques on some popular databases, and we demonstrate that a minimalistic version of a standard U-Net with several orders of magnitude fewer parameters, carefully trained and rigorously evaluated, closely approximates the performance of current best techniques. We then show that a cascaded extension (W-Net) reaches outstanding performance on several popular datasets, still using orders of magnitude fewer learnable weights than any previously published work. Furthermore, we provide the most comprehensive cross-dataset performance analysis to date, involving up to 10 different databases. Our analysis demonstrates that retinal vessel segmentation is far from solved when considering test images that differ substantially from the training data, and that this task represents an ideal scenario for the exploration of domain adaptation techniques. In this context, we experiment with a simple self-labeling strategy that enables a moderate enhancement of cross-dataset performance, indicating that there is still much room for improvement in this area. Finally, we test our approach on Artery/Vein and vessel segmentation from OCTA imaging, where we again achieve results well aligned with the state of the art, at a fraction of the model complexity available in recent literature. Code to reproduce the results in this paper is released.
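To make the "minimalistic U-Net" point concrete, here is a PyTorch sketch of a U-Net whose channel widths are deliberately tiny. The widths and depth are illustrative assumptions, not the authors' exact configuration; the point is only that a complete encoder-decoder with skip connections can have very few weights.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    # Two 3x3 conv layers with batch norm, the standard U-Net building block.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, widths=(8, 16, 32)):  # illustrative channel widths
        super().__init__()
        w0, w1, w2 = widths
        self.enc1, self.enc2 = block(3, w0), block(w0, w1)
        self.mid = block(w1, w2)
        self.up2, self.dec2 = nn.ConvTranspose2d(w2, w1, 2, stride=2), block(2 * w1, w1)
        self.up1, self.dec1 = nn.ConvTranspose2d(w1, w0, 2, stride=2), block(2 * w0, w0)
        self.head = nn.Conv2d(w0, 1, 1)  # per-pixel vessel logits
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        m = self.mid(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

net = TinyUNet()
print(sum(p.numel() for p in net.parameters()))   # tens of thousands of weights
print(net(torch.randn(1, 3, 64, 64)).shape)       # torch.Size([1, 1, 64, 64])
```

Compared with published vessel-segmentation networks that run to millions of parameters, a model in this size class is several orders of magnitude smaller, which is the paper's central claim.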
Collapse
|
27
|
Ali H, Haq IU, Cui L, Feng J. MSAL-Net: improve accurate segmentation of nuclei in histopathology images by multiscale attention learning network. BMC Med Inform Decis Mak 2022; 22:90. [PMID: 35379228 PMCID: PMC8978355 DOI: 10.1186/s12911-022-01826-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Accepted: 03/24/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Digital pathology images contain essential information about a patient's disease, and automated nuclei segmentation results can help doctors make better diagnostic decisions. With the rapid advancement of convolutional neural networks in image processing, deep learning has been shown to play a significant role in many medical image analyses, such as nuclei segmentation and mitosis detection and segmentation. Recently, several U-Net-based methods have been developed to solve automated nuclei segmentation problems. However, these methods fail to deal with the weak feature representations from the initial layers and introduce noise into the decoder path. In this paper, we propose a multiscale attention learning network (MSAL-Net), in which a dense dilated convolution block captures more comprehensive nuclei context information, and a newly modified decoder, integrating efficient channel attention and boundary refinement modules, effectively learns spatial information for better prediction and further refines nuclei boundaries. RESULTS Both qualitative and quantitative results were obtained on the publicly available MoNuSeg dataset. Extensive experimental results verify that our proposed method significantly outperforms state-of-the-art methods, as well as the vanilla U-Net, in the segmentation task. Furthermore, we visually demonstrate the effect of our modified decoder part. CONCLUSION MSAL-Net shows superiority, with its novel decoder, in segmenting touching nuclei and nuclei against blurred backgrounds in histopathology images.
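The "efficient channel attention" component mentioned above is typically an ECA-style block: global average pooling followed by a small 1-D convolution that lets neighbouring channels interact, then a sigmoid gate. The sketch below shows that pattern in PyTorch; the kernel size and placement are assumptions, not MSAL-Net's exact design.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention sketch: reweights feature channels using a
    1-D convolution over the channel descriptor (no fully connected layers)."""
    def __init__(self, k=3):  # k = local cross-channel interaction size (assumed)
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.pool(x)                       # (B, C, 1, 1) channel descriptor
        w = w.squeeze(-1).transpose(1, 2)      # (B, 1, C)
        w = torch.sigmoid(self.conv(w))        # local cross-channel interaction
        w = w.transpose(1, 2).unsqueeze(-1)    # back to (B, C, 1, 1)
        return x * w                           # gate each channel

feat = torch.randn(2, 64, 32, 32)
print(ECA()(feat).shape)                       # torch.Size([2, 64, 32, 32])
```

Because the gate is a single tiny 1-D convolution, the block adds almost no parameters, which is why such attention modules are attractive inside decoder paths.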
Collapse
Affiliation(s)
- Haider Ali
- School of Information Science and Technology, Northwest University, Xian, China
| | - Imran ul Haq
- School of Information Science and Technology, Northwest University, Xian, China
| | - Lei Cui
- School of Information Science and Technology, Northwest University, Xian, China
| | - Jun Feng
- School of Information Science and Technology, Northwest University, Xian, China
| |
Collapse
|
28
|
Deep convolutional neural networks for computer-aided breast cancer diagnostic: a survey. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06804-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
29
|
|
30
|
Inamdar MA, Raghavendra U, Gudigar A, Chakole Y, Hegde A, Menon GR, Barua P, Palmer EE, Cheong KH, Chan WY, Ciaccio EJ, Acharya UR. A Review on Computer Aided Diagnosis of Acute Brain Stroke. SENSORS (BASEL, SWITZERLAND) 2021; 21:8507. [PMID: 34960599 PMCID: PMC8707263 DOI: 10.3390/s21248507] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/04/2021] [Revised: 12/05/2021] [Accepted: 12/09/2021] [Indexed: 01/01/2023]
Abstract
Stroke is among the top three most common causes of death globally, affecting over 100 million people worldwide annually. There are two classes of stroke, namely ischemic stroke (due to impairment of blood supply, accounting for ~70% of all strokes) and hemorrhagic stroke (due to bleeding), both of which can result, if untreated, in permanently damaged brain tissue. The discovery that affected brain tissue (the 'ischemic penumbra') can be salvaged from permanent damage, together with the burgeoning growth in computer-aided diagnosis, has led to major advances in stroke management. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we surveyed a total of 177 research papers published between 2010 and 2021 to highlight the current status of, and challenges faced by, computer-aided diagnosis (CAD), machine learning (ML), and deep learning (DL) based techniques for CT and MRI as the prime modalities for stroke detection and lesion region segmentation. This work concludes by showcasing the current requirements of this domain, the preferred modalities, and prospective research areas.
Collapse
Affiliation(s)
- Mahesh Anil Inamdar
- Department of Mechatronics, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India;
| | - Udupi Raghavendra
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (Y.C.)
| | - Anjan Gudigar
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (Y.C.)
| | - Yashas Chakole
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India; (A.G.); (Y.C.)
| | - Ajay Hegde
- Department of Neurosurgery, Kasturba Medical College, Manipal Academy of Higher Education, Manipal 576104, India; (A.H.); (G.R.M.)
| | - Girish R. Menon
- Department of Neurosurgery, Kasturba Medical College, Manipal Academy of Higher Education, Manipal 576104, India; (A.H.); (G.R.M.)
| | - Prabal Barua
- School of Management & Enterprise, University of Southern Queensland, Toowoomba, QLD 4350, Australia;
- Faculty of Engineering and Information Technology, University of Technology, Sydney, NSW 2007, Australia
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
| | - Elizabeth Emma Palmer
- School of Women’s and Children’s Health, University of New South Wales, Sydney, NSW 2052, Australia;
| | - Kang Hao Cheong
- Science, Mathematics and Technology Cluster, Singapore University of Technology and Design, Singapore 487372, Singapore;
| | - Wai Yee Chan
- Department of Biomedical Imaging, Research Imaging Centre, University of Malaya, Kuala Lumpur 59100, Malaysia;
| | - Edward J. Ciaccio
- Department of Medicine, Columbia University, New York, NY 10032, USA;
| | - U. Rajendra Acharya
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia;
- School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore 599491, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
| |
Collapse
|
31
|
A novel method for image segmentation: two-stage decoding network with boundary attention. INT J MACH LEARN CYB 2021. [DOI: 10.1007/s13042-021-01459-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
|
32
|
Viegas L, Domingues I, Mendes M. Study on Data Partition for Delimitation of Masses in Mammography. J Imaging 2021; 7:174. [PMID: 34564100 PMCID: PMC8470756 DOI: 10.3390/jimaging7090174] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2021] [Revised: 08/26/2021] [Accepted: 08/26/2021] [Indexed: 11/20/2022] Open
Abstract
Mammography is the primary medical imaging method used for routine screening and early detection of breast cancer in women. However, manually inspecting, detecting, and delimiting tumoral masses in 2D images is a very time-consuming task, subject to human error due to fatigue. Therefore, integrated computer-aided detection systems have been proposed, based on modern computer vision and machine learning methods. In the present work, mammogram images from the publicly available INbreast dataset are first converted to pseudo-color and then used to train and test a Mask R-CNN deep neural network. The most common approach is to split the images of a dataset into train and test sets randomly. However, since there are often two or more images of the same case in a dataset, the way the dataset is split may have an impact on the results. Our experiments show that random partition of the data can produce unreliable training, so the dataset must be split using case-wise partition for more stable results. In our experiments, the method achieves an average true positive rate of 0.936 with 0.063 standard deviation using random partition and 0.908 with 0.002 standard deviation using case-wise partition, showing that case-wise partition must be used for more reliable results.
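A case-wise partition simply requires that every image from the same case land on the same side of the split. Scikit-learn's group-aware splitters do exactly this; the sketch below uses `GroupShuffleSplit` with made-up image names and case IDs for illustration.

```python
from sklearn.model_selection import GroupShuffleSplit

# Toy data: two views per case, as is common in mammography.
images   = ["img0", "img1", "img2", "img3", "img4", "img5"]
case_ids = ["A",    "A",    "B",    "B",    "C",    "C"]

gss = GroupShuffleSplit(n_splits=1, test_size=0.33, random_state=0)
train_idx, test_idx = next(gss.split(images, groups=case_ids))

# No case appears on both sides of the split.
assert not {case_ids[i] for i in train_idx} & {case_ids[i] for i in test_idx}
print([images[i] for i in train_idx], [images[i] for i in test_idx])
```

A plain random split, by contrast, can place one view of a case in training and its sibling view in testing, leaking case-specific appearance and producing the unstable results the study reports.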
Collapse
Affiliation(s)
- Luís Viegas
- Polytechnic of Coimbra—ISEC, Rua Pedro Nunes, Quinta da Nora, 3030-199 Coimbra, Portugal;
| | - Inês Domingues
- Medical Physics, Radiobiology and Radiation Protection Group, IPO Porto Research Centre (CI-IPOP), 4200-072 Porto, Portugal;
| | - Mateus Mendes
- Polytechnic of Coimbra—ISEC, Rua Pedro Nunes, Quinta da Nora, 3030-199 Coimbra, Portugal;
- ISR (Instituto de Sistemas e Robótica), Departamento de Engenharia Electrotécnica e de Computadores da UC, University of Coimbra, 3004-531 Coimbra, Portugal
| |
Collapse
|
33
|
Chan EOT, Pradere B, Teoh JYC. The use of artificial intelligence for the diagnosis of bladder cancer: a review and perspectives. Curr Opin Urol 2021; 31:397-403. [PMID: 33978604 DOI: 10.1097/mou.0000000000000900] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
PURPOSE OF REVIEW White light cystoscopy is the current standard for the primary diagnosis and surveillance of bladder cancer. However, cancerous changes can be subtle and may be easily missed. With the advancement of deep learning (DL), image recognition by artificial intelligence (AI) has demonstrated high accuracy for image-based diagnosis. AI can therefore be a solution to enhance bladder cancer diagnosis on cystoscopy. RECENT FINDINGS An algorithm that classifies cystoscopic images into normal and tumour images is essential for AI cystoscopy. Developing this AI-based system requires a training dataset, an appropriate type of DL algorithm for the learning process, and a specific outcome classification. A large data volume with minimal class imbalance, data accuracy, and representativeness are prerequisites for a good dataset. Algorithms developed during the past two years to detect bladder tumours achieved high performance, with a pooled sensitivity of 89.7% and specificity of 96.1%. The area under the curve ranged from 0.960 to 0.980, and the accuracy ranged from 85.6% to 96.9%. There were also favourable results in the various attempts to enhance detection of flat lesions or carcinoma-in-situ. SUMMARY AI cystoscopy is a possible solution in clinical practice to enhance bladder cancer diagnosis, improve tumour clearance during transurethral resection of bladder tumour, and detect recurrent tumours upon surveillance.
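The sensitivity and specificity figures pooled in this review come from per-image confusion matrices of tumour-vs-normal classifiers. A minimal sketch of that evaluation, with toy labels standing in for real cystoscopy predictions:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy frame-level labels: 1 = tumour, 0 = normal mucosa.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")
```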
Collapse
Affiliation(s)
- Erica On-Ting Chan
- S.H. Ho Urology Centre, Department of Surgery, Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong, China
| | - Benjamin Pradere
- Department of Urology, Medical University of Vienna, Vienna, Austria
| | - Jeremy Yuen-Chun Teoh
- S.H. Ho Urology Centre, Department of Surgery, Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong, China
| |
Collapse
|
34
|
Deep learning-based automated detection for diabetic retinopathy and diabetic macular oedema in retinal fundus photographs. Eye (Lond) 2021; 36:1433-1441. [PMID: 34211137 DOI: 10.1038/s41433-021-01552-8] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2020] [Revised: 03/24/2021] [Accepted: 04/13/2021] [Indexed: 02/07/2023] Open
Abstract
OBJECTIVES To present and validate a deep ensemble algorithm to detect diabetic retinopathy (DR) and diabetic macular oedema (DMO) using retinal fundus images. METHODS A total of 8739 retinal fundus images were collected from a retrospective cohort of 3285 patients. For detecting DR and DMO, an ensembling approach based on multiple improved Inception-v4 networks was developed. We measured the algorithm's performance and compared it with that of human experts on our primary dataset, while its generalization was assessed on the publicly available Messidor-2 dataset. We also systematically investigated the impact of the size and number of input images used in training on the model's performance. Further, the trade-off between the training/inference time budget and model performance was analyzed. RESULTS On our primary test dataset, the model achieved an AUC of 0.992 (95% CI, 0.989-0.995), corresponding to 0.925 (95% CI, 0.916-0.936) sensitivity and 0.961 (95% CI, 0.950-0.972) specificity for referable DR, while the sensitivity and specificity of ophthalmologists ranged from 0.845 to 0.936 and from 0.912 to 0.971, respectively. For referable DMO, our model generated an AUC of 0.994 (95% CI, 0.992-0.996) with a 0.930 (95% CI, 0.919-0.941) sensitivity and 0.971 (95% CI, 0.965-0.978) specificity, whereas ophthalmologists obtained sensitivities ranging between 0.852 and 0.946 and specificities ranging between 0.926 and 0.985. CONCLUSION This study showed that the deep ensemble model exhibits excellent performance in detecting DR and DMO, with good robustness and generalization, and could potentially help support and expand DR/DMO screening programs.
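Confidence intervals like the ones quoted for AUC are commonly obtained by bootstrapping the test set. The sketch below shows that generic procedure (the predictions are synthetic stand-ins, and the paper does not state that it used this exact method):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=300)                               # toy labels
p = np.clip(y * 0.7 + rng.normal(0.15, 0.2, size=300), 0, 1)   # toy ensemble scores

aucs = []
for _ in range(2000):                       # resample cases with replacement
    idx = rng.integers(0, len(y), size=len(y))
    if len(np.unique(y[idx])) < 2:          # skip single-class resamples
        continue
    aucs.append(roc_auc_score(y[idx], p[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC = {roc_auc_score(y, p):.3f} (95% CI, {lo:.3f}-{hi:.3f})")
```

Note that when images are clustered by patient, as in this cohort, resampling should ideally be done at the patient level to avoid overly narrow intervals.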
Collapse
|
35
|
Liu X, Guo Z, Cao J, Tang J. MDC-net: A new convolutional neural network for nucleus segmentation in histopathology images with distance maps and contour information. Comput Biol Med 2021; 135:104543. [PMID: 34146800 DOI: 10.1016/j.compbiomed.2021.104543] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2020] [Revised: 05/28/2021] [Accepted: 05/29/2021] [Indexed: 11/25/2022]
Abstract
Accurate segmentation of nuclei in digital pathology images can assist doctors in diagnosing diseases and evaluating subsequent treatments. Manual segmentation of nuclei from pathology images is time-consuming because of the large number of nuclei, and it is also error-prone. Therefore, accurate and automatic nucleus segmentation methods are required. Owing to the large variations in the characteristics of nuclei, it is difficult to segment them accurately using traditional methods. In this study, we propose a new method for nucleus segmentation. The proposed method uses a deep fully convolutional neural network to perform end-to-end segmentation on pathological tissue slices. Multiple short residual connections are used to fuse feature maps from different scales to better utilize context information, and dilated convolutions with different dilation ratios are used to increase the receptive fields. In addition, we incorporate distance map and contour information into the segmentation method to segment touching nuclei, which is difficult for traditional segmentation methods. Finally, post-processing is used to improve the segmentation results. The results demonstrate that our segmentation method obtains performance comparable to or better than other state-of-the-art methods on public nuclei histopathology datasets.
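The reason distance maps help separate touching nuclei is easy to demonstrate with classical tools: the distance transform has one peak per nucleus even when their masks merge, and a marker-controlled watershed can then split them. The sketch below illustrates only this principle (MDC-net learns distance and contour cues inside the network; this is not its post-processing code):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# Synthetic binary mask: two overlapping discs standing in for touching nuclei.
mask = np.zeros((64, 64), bool)
yy, xx = np.ogrid[:64, :64]
mask |= (yy - 30) ** 2 + (xx - 25) ** 2 < 144
mask |= (yy - 30) ** 2 + (xx - 42) ** 2 < 144

dist = ndi.distance_transform_edt(mask)            # one ridge per nucleus
peaks = peak_local_max(dist, labels=mask, min_distance=5)
markers = np.zeros_like(dist, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

labels = watershed(-dist, markers, mask=mask)      # flood from the peaks
print(labels.max())                                # typically 2 for this geometry
```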
Collapse
Affiliation(s)
- Xiaoming Liu
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan, China.
| | - Zhengsheng Guo
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
| | - Jun Cao
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
| | - Jinshan Tang
- Department of Applied Computing, College of Computing, Michigan Technological University, Houghton, MI, 49931, USA; Center for Biocomputing and Digital Heath, Institute of Computing and Cybersystems, & Health Research Institute, Michigan Technological University, Houghton, MI, 49931, USA.
| |
Collapse
|
36
|
Van Molle P, Verbelen T, Vankeirsbilck B, De Vylder J, Diricx B, Kimpe T, Simoens P, Dhoedt B. Leveraging the Bhattacharyya coefficient for uncertainty quantification in deep neural networks. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05789-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
Abstract
Modern deep learning models achieve state-of-the-art results for many tasks in computer vision, such as image classification and segmentation. However, their adoption into high-risk applications, e.g. automated medical diagnosis systems, happens at a slow pace. One of the main reasons for this is that regular neural networks do not capture uncertainty. To assess uncertainty in classification, several techniques have been proposed that cast neural network approaches in a Bayesian setting. Amongst these techniques, Monte Carlo dropout is by far the most popular. This technique estimates the moments of the output distribution through sampling with different dropout masks; the output uncertainty of a neural network is then approximated as the sample variance. In this paper, we highlight the limitations of such a variance-based uncertainty metric and propose a novel approach based on the overlap between the output distributions of different classes. We show that our technique leads to a better approximation of the inter-class output confusion. We illustrate the advantages of our method using benchmark datasets. In addition, we apply our metric to skin lesion classification, a real-world use case, and show that this yields promising results.
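The two ingredients of this approach, Monte Carlo dropout sampling and an overlap measure between per-class output samples, can be sketched as follows. This is not the authors' code: the histogram-based Bhattacharyya estimate and all names below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mc_dropout_samples(model, x, n=50):
    """Run n stochastic forward passes with dropout kept active and return
    softmax outputs stacked as (n, batch, n_classes)."""
    model.train()  # keeps dropout layers sampling at inference time
    with torch.no_grad():
        return torch.stack([F.softmax(model(x), dim=-1) for _ in range(n)])

def bhattacharyya(a, b, bins=20):
    """Crude histogram estimate of the Bhattacharyya coefficient between two
    1-D samples of scores in [0, 1]; 1 means total overlap (high confusion)."""
    pa = torch.histc(a, bins=bins, min=0.0, max=1.0) / len(a)
    pb = torch.histc(b, bins=bins, min=0.0, max=1.0) / len(b)
    return torch.sum(torch.sqrt(pa * pb)).item()

# Usage sketch, assuming `net` maps a (1, d) input to (1, n_classes) logits
# and c1, c2 are the two most probable classes for that input:
# s = mc_dropout_samples(net, x)                  # (n, 1, n_classes)
# u = bhattacharyya(s[:, 0, c1], s[:, 0, c2])     # overlap-based uncertainty
```

Unlike the sample variance, this overlap score directly reflects how often the dropout-perturbed model confuses the two competing classes, which is the paper's central argument.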
Collapse
|
37
|
Pei W, Xue B, Shang L, Zhang M. Genetic programming for development of cost-sensitive classifiers for binary high-dimensional unbalanced classification. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2020.106989] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
38
|
Artificial intelligence applications in medical imaging: A review of the medical physics research in Italy. Phys Med 2021; 83:221-241. [DOI: 10.1016/j.ejmp.2021.04.010] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Revised: 03/31/2021] [Accepted: 04/03/2021] [Indexed: 02/06/2023] Open
|
39
|
Abdelrahman L, Al Ghamdi M, Collado-Mesa F, Abdel-Mottaleb M. Convolutional neural networks for breast cancer detection in mammography: A survey. Comput Biol Med 2021; 131:104248. [PMID: 33631497 DOI: 10.1016/j.compbiomed.2021.104248] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2020] [Revised: 01/08/2021] [Accepted: 01/25/2021] [Indexed: 12/17/2022]
Abstract
Despite its proven record as a breast cancer screening tool, mammography remains labor-intensive and has recognized limitations, including low sensitivity in women with dense breast tissue. In the last ten years, neural network advances have been applied to mammography to help radiologists increase their efficiency and accuracy. This survey aims to present, in an organized and structured manner, the current knowledge base of convolutional neural networks (CNNs) in mammography. The survey first discusses traditional computer-assisted detection (CAD) and more recently developed CNN-based models for computer vision in mammography. It then presents and discusses the literature on available mammography training datasets. The survey then presents and discusses the current literature on CNNs for four distinct mammography tasks: (1) breast density classification, (2) breast asymmetry detection and classification, (3) calcification detection and classification, and (4) mass detection and classification, including presenting and comparing the reported quantitative results for each task and the pros and cons of the different CNN-based approaches. It then offers real-world applications of CNN CAD algorithms by discussing current Food and Drug Administration (FDA) approved models. Finally, this survey highlights potential opportunities for future work in this field. The material presented and discussed in this survey could serve as a road map for developing CNN-based solutions to further improve mammographic detection of breast cancer.
Collapse
Affiliation(s)
- Leila Abdelrahman
- University of Miami, Department of Electrical and Computer Engineering, Memorial Dr, Coral Gables, FL, 33146, USA
| | - Manal Al Ghamdi
- Umm Al-Qura University, Department of Computer Science, Alawali, Mecca, 24381, Saudi Arabia
| | - Fernando Collado-Mesa
- University of Miami Miller School of Medicine, Department of Radiology, 1115 NW 14th Street Miami, FL, 33136, USA
| | - Mohamed Abdel-Mottaleb
- University of Miami, Department of Electrical and Computer Engineering, Memorial Dr, Coral Gables, FL, 33146, USA.
| |
Collapse
|
40
|
Gao C, Ye H, Cao F, Wen C, Zhang Q, Zhang F. Multiscale fused network with additive channel–spatial attention for image segmentation. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.106754] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|