1. Tan J, Yuan J, Fu X, Bai Y. Colonoscopy polyp classification via enhanced scattering wavelet Convolutional Neural Network. PLoS One 2024; 19:e0302800. [PMID: 39392783; PMCID: PMC11469526; DOI: 10.1371/journal.pone.0302800]
Abstract
Colorectal cancer (CRC) is among the most common cancers and has a high death rate. Colonoscopy is the best screening method for CRC and has been shown to lower the risk of the disease, so computer-aided polyp classification techniques are applied to help identify colorectal cancer. Visually categorizing polyps is difficult, however, because different polyps are imaged under different lighting conditions. Unlike previous works, this article presents the Enhanced Scattering Wavelet Convolutional Neural Network (ESWCNN), a polyp classification technique that combines a Convolutional Neural Network (CNN) with the Scattering Wavelet Transform (SWT) to improve polyp classification performance. The method concatenates learnable image filters and wavelet filters on each input channel: the scattering wavelet filters extract common spectral features at various scales and orientations, while the learnable filters capture spatial image features that wavelet filters may miss. A network architecture for ESWCNN was designed on these principles and trained and tested on colonoscopy datasets (two public datasets and one private dataset). An n-fold cross-validation experiment achieved a classification accuracy of 96.4% for three classes (adenoma, hyperplastic, serrated) and 94.8% for two-class polyp classification (positive and negative). In the three-class classification, correct classification rates of 96.2% for adenomas, 98.71% for hyperplastic polyps, and 97.9% for serrated polyps were achieved. In the two-class experiment the proposed method reached an average sensitivity of 96.7% with 93.1% specificity. Furthermore, we compared the performance of our model with state-of-the-art general classification models and commonly used CNNs: six end-to-end CNN-based models were trained on two datasets of video sequences.
The experimental results demonstrate that the proposed ESWCNN method can effectively classify polyps with higher accuracy and efficacy compared to the state-of-the-art CNN models. These findings can provide guidance for future research in polyp classification.
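The core ESWCNN idea, concatenating fixed wavelet-filter responses with learnable convolution responses on each input channel, can be sketched in NumPy. This is a conceptual illustration only: the two fixed derivative kernels below merely stand in for a genuine scattering-wavelet filter bank, and the "learnable" filters are random placeholders rather than trained weights.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2-D valid cross-correlation of a single channel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Fixed filter bank standing in for scattering-wavelet filters:
# two orientations of a simple derivative filter.
FIXED_BANK = [
    np.array([[-1.0, 0.0, 1.0]] * 3) / 3.0,    # horizontal gradient
    (np.array([[-1.0, 0.0, 1.0]] * 3) / 3.0).T # vertical gradient
]

def hybrid_features(img, learnable_bank):
    """Concatenate fixed-filter and learnable-filter responses
    along the channel axis, as in the ESWCNN design."""
    responses = [conv2d_valid(img, k) for k in FIXED_BANK]
    responses += [conv2d_valid(img, k) for k in learnable_bank]
    return np.stack(responses)  # (n_fixed + n_learnable, H', W')

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
learnable = [rng.standard_normal((3, 3)) for _ in range(4)]
feats = hybrid_features(img, learnable)
```

The fixed responses are identical for every trained model, while the random bank is where gradient descent would do its work in the real network.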
Affiliation(s)
- Jun Tan
- School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Guangdong Province Key Laboratory of Computational Science, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Jiamin Yuan
- Health Construction Administration Center, Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, Guangdong, China
- The Second Affiliated Hospital of Guangzhou University of Traditional Chinese Medicine (TCM), Guangzhou, Guangdong, China
- Xiaoyong Fu
- School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Yilin Bai
- School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- China Southern Airlines, Guangzhou, Guangdong, China
2. Coskun A. Diagnosis Based on Population Data versus Personalized Data: The Evolving Paradigm in Laboratory Medicine. Diagnostics (Basel) 2024; 14:2135. [PMID: 39410539; PMCID: PMC11475514; DOI: 10.3390/diagnostics14192135]
Abstract
The diagnosis of diseases is a complex process involving the integration of multiple parameters obtained from various sources, including laboratory findings. The interpretation of laboratory data is inherently comparative, necessitating reliable references for accurate assessment. Different types of references, such as reference intervals, decision limits, action limits, and reference change values, are essential tools in the interpretation of laboratory data. Although these references are used to interpret individual laboratory data, they are typically derived from population data, which raises concerns about their reliability and consequently the accuracy of interpretation of individuals' laboratory data. The accuracy of diagnosis is critical to all subsequent steps in medical practice, making the estimate of reliable references a priority. For more precise interpretation, references should ideally be derived from an individual's own data rather than from population averages. This manuscript summarizes the current sources of references used in laboratory data interpretation, examines the references themselves, and discusses the transition from population-based laboratory medicine to personalized laboratory medicine.
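The reference change value mentioned above has a classical closed form, RCV = 2^(1/2) · z · (CV_A² + CV_I²)^(1/2), combining analytical variation (CV_A) and within-subject biological variation (CV_I). A minimal sketch follows; the numeric CVs are illustrative, not taken from the article.

```python
import math

def reference_change_value(cv_analytical, cv_within_subject, z=1.96):
    """Classical reference change value (RCV, %): the smallest
    difference between two serial results in the same person that
    exceeds combined analytical and within-subject biological
    variation at confidence level z."""
    return math.sqrt(2) * z * math.hypot(cv_analytical, cv_within_subject)

# Illustrative example: CV_A = 3%, CV_I = 5% -> two serial results
# must differ by roughly 16% before the change is significant.
rcv = reference_change_value(3.0, 5.0)
```

This formula is one concrete instance of the manuscript's point: the interpretation reference is derived from the individual's own variation rather than from a population interval.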
Affiliation(s)
- Abdurrahman Coskun
- Department of Medical Biochemistry, School of Medicine, Acıbadem Mehmet Ali Aydinlar University, 34752 Istanbul, Turkey
3. Raymond MJ, Biswal B, Pipaliya RM, Rowley MA, Meyer TA. Convolutional Neural Network-Based Deep Learning Engine for Mastoidectomy Instrument Recognition and Movement Tracking. Otolaryngol Head Neck Surg 2024; 170:1555-1560. [PMID: 38520201; DOI: 10.1002/ohn.733]
Abstract
OBJECTIVE To develop a convolutional neural network-based computer vision model to recognize and track two mastoidectomy surgical instruments, the drill and the suction-irrigator, from intraoperative video recordings of mastoidectomies. STUDY DESIGN Technological development and model validation. SETTING Academic center. METHODS Ten 1-minute videos of mastoidectomies done for cochlear implantation by resident surgeons of varying levels were collected. For each video, containing 900 frames, an open-access computer vision annotation tool was used to annotate the drill and suction-irrigator class images with bounding boxes. A mastoidectomy instrument tracking module, which extracts the center coordinates of bounding boxes, was developed using a feature pyramid network and layered with DETECTRON, an open-access faster-region-based convolutional neural network. Eight videos were used to train the model, and two videos were used for testing. Outcome measures included the Intersection over Union (IoU) ratio, accuracy, and average precision. RESULTS For an IoU of 0.5, the mean average precision was 99% for the drill and 86% for the suction-irrigator. The model proved capable of generating maps of drill and suction-irrigator stroke direction and distance for the entirety of each video. CONCLUSIONS This computer vision model can identify and track the drill and suction-irrigator from videos of intraoperative mastoidectomies performed by residents with excellent precision. It can now be employed to retrospectively study objective mastoidectomy measures of expert and resident surgeons, such as drill and suction-irrigator stroke concentration, economy of motion, speed, and coordination, setting the stage for characterization of objective expectations for safe and efficient mastoidectomies.
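The IoU outcome measure used above is a straightforward ratio of box areas: intersection over union of the predicted and ground-truth bounding boxes. A minimal sketch with corner-format boxes; the sample coordinates are invented for illustration.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes
    given as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping in a 5x5 region:
# intersection 25, union 175, IoU = 25/175.
score = iou((0, 0, 10, 10), (5, 5, 15, 15))
```

Under the paper's evaluation convention, a detection with IoU of at least 0.5 against the annotated box counts as correct.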
Affiliation(s)
- Mallory J Raymond
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, USA
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic Florida, Jacksonville, Florida, USA
- Biswajit Biswal
- Computer Science and Mathematics, South Carolina State University, Orangeburg, South Carolina, USA
- Royal M Pipaliya
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, USA
- Department of Otolaryngology-Head and Neck Surgery, University of Arizona, Tucson, Arizona, USA
- Mark A Rowley
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, USA
- Ted A Meyer
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, USA
4. Al-Otaibi S, Rehman A, Mujahid M, Alotaibi S, Saba T. Efficient-gastro: optimized EfficientNet model for the detection of gastrointestinal disorders using transfer learning and wireless capsule endoscopy images. PeerJ Comput Sci 2024; 10:e1902. [PMID: 38660212; PMCID: PMC11041956; DOI: 10.7717/peerj-cs.1902]
Abstract
Gastrointestinal diseases cause around two million deaths globally. Wireless capsule endoscopy is a recent advancement in medical imaging, but manual diagnosis is challenging due to the large number of images generated. This has led to research into computer-assisted methodologies for diagnosing these images. Endoscopy produces thousands of frames for each patient, making manual examination difficult, laborious, and error-prone. An automated approach is essential to speed up the diagnosis process, reduce costs, and potentially save lives. This study proposes transfer learning-based efficient deep learning methods for detecting gastrointestinal disorders from multiple modalities, aiming to detect gastrointestinal diseases with superior accuracy and to reduce the efforts and costs of medical experts. The Kvasir eight-class dataset was used for the experiment, where endoscopic images were preprocessed and enriched with augmentation techniques. An EfficientNet model was optimized via transfer learning and fine-tuning, and the model was compared to the most widely used pre-trained deep learning models. The model's efficacy was tested on another independent endoscopic dataset to prove its robustness and reliability.
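The abstract mentions enriching the endoscopic images with augmentation techniques. A sketch of the simplest geometric augmentations, flips and right-angle rotations, is below; the study's actual augmentation pipeline is not specified in the abstract, so these transforms are only representative.

```python
import numpy as np

def augment(image):
    """Return a list of simple geometric variants of one image,
    of the kind commonly used to enrich endoscopy training sets:
    the original, two flips, and two right-angle rotations."""
    return [
        image,
        np.fliplr(image),    # horizontal flip
        np.flipud(image),    # vertical flip
        np.rot90(image),     # 90-degree rotation
        np.rot90(image, 2),  # 180-degree rotation
    ]

# A toy 4x4 "image" for shape checking.
img = np.arange(16, dtype=float).reshape(4, 4)
batch = augment(img)
```

Each variant keeps the label of the source frame, so a dataset of N images yields 5N training samples under this scheme.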
Affiliation(s)
- Shaha Al-Otaibi
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Muhammad Mujahid
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Sarah Alotaibi
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
5. Modi B, Sharma M, Hemani H, Joshi H, Kumar P, Narayanan S, Shah R. Analysis of Vocal Signatures of COVID-19 in Cough Sounds: A Newer Diagnostic Approach Using Artificial Intelligence. Cureus 2024; 16:e56412. [PMID: 38638791; PMCID: PMC11024064; DOI: 10.7759/cureus.56412]
Abstract
BACKGROUND Artificial intelligence (AI)-based models are increasingly explored in the medical field. During the highly contagious coronavirus disease 2019 (COVID-19) pandemic, availability of diagnostic tools such as high-resolution computed tomography (HRCT) and real-time reverse transcriptase polymerase chain reaction (RT-PCR) was very limited, and testing was costly and time-consuming. The use of AI to diagnose COVID-19 from cough sounds could therefore be an efficacious and cost-effective screening tool in clinics or hospitals and could help in early diagnosis and further management of patients. OBJECTIVES To develop an accurate and fast voice-processing AI software to determine voice-based signatures discriminating COVID-19 from non-COVID-19 cough sounds for COVID-19 screening. METHODOLOGY A prospective study involving 117 participants was performed, based on online and/or offline collection of cough sounds from COVID-19 patients in the isolation ward of a tertiary care teaching hospital and from non-COVID-19 participants, recorded with a smartphone. A website-based AI software was developed to classify the cough sounds as COVID-19 or non-COVID-19. The data were divided into three segments: a training set, a validation set, and a test set. A preprocessing algorithm was combined with a Short-Time Fourier Transform feature representation and a logistic regression model to identify vocal signatures, and k-fold cross-validation was carried out. RESULTS A total of 117 audio recordings of cough sounds were collected through the developed website after applying the inclusion and exclusion criteria, of which 52 were marked as COVID-19 positive, while 65 were marked as COVID-19 negative/unsure/never had COVID-19 and were assumed to be COVID-19 negative based on RT-PCR test results.
The mean and standard error of the accuracies attained in the training, validation, and test sets were 67.34%±0.22, 58.57%±1.11, and 64.60%±1.79, respectively. The weight values contributing toward predicting samples as COVID-19 positive were positive, with large spikes around 7.5 kHz, 7.8 kHz, 8.6 kHz, and 11 kHz, which can be used for classification. CONCLUSION The proposed AI-based approach can be a helpful screening tool for COVID-19 using cough sounds. It can help the health system by reducing the cost burden and improving overall diagnosis and management of the disease.
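The Short-Time Fourier Transform feature representation described above amounts to FFT magnitudes over overlapping windowed frames. A minimal NumPy sketch follows; the frame length, hop size, Hann window, and 8 kHz sample rate are illustrative assumptions, not parameters reported by the study.

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=128):
    """Short-Time Fourier Transform magnitudes: split the signal
    into overlapping Hann-windowed frames and take the magnitude
    of the real FFT of each frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    # Shape: (n_frames, frame_len // 2 + 1)
    return np.abs(np.fft.rfft(frames, axis=1))

# A 1-second synthetic tone at an assumed 8 kHz sample rate,
# standing in for a cough recording, just to check shapes.
sig = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)
spec = stft_magnitude(sig)
```

Flattening such a spectrogram (or averaging it over frames) yields the fixed-length feature vector a logistic regression model needs; peaks in the learned weights then correspond to the discriminative frequency bands the study reports.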
Affiliation(s)
- Bhavesh Modi
- Department of Community and Family Medicine, All India Institute of Medical Sciences, Rajkot, IND
- Manika Sharma
- Department of Atomic Energy, Institute of Plasma Research, Gandhinagar, IND
- Harsh Hemani
- Department of Atomic Energy, Bhabha Atomic Research Centre, Visakhapatnam, IND
- Hemant Joshi
- Department of Atomic Energy, Institute of Plasma Research, Gandhinagar, IND
- Prashant Kumar
- Department of Atomic Energy, Institute of Plasma Research, Gandhinagar, IND
- Sakthivel Narayanan
- Department of Atomic Energy, Bhabha Atomic Research Centre, Visakhapatnam, IND
- Rima Shah
- Department of Pharmacology, All India Institute of Medical Sciences, Rajkot, IND
6. Vishwanathaiah S, Fageeh HN, Khanagar SB, Maganur PC. Artificial Intelligence Its Uses and Application in Pediatric Dentistry: A Review. Biomedicines 2023; 11:788. [PMID: 36979767; PMCID: PMC10044793; DOI: 10.3390/biomedicines11030788]
Abstract
Oral health problems significantly impact a large population of children worldwide. The key to a child’s optimal health is early diagnosis, prevention, and treatment of these disorders. In recent years, the field of artificial intelligence (AI) has progressed at a tremendous pace. As a result, AI’s infiltration is witnessed even in those areas that were traditionally thought to be best left to human specialists. The ability to improve patient care and make precise diagnoses of illnesses has revolutionized the world of healthcare. In the field of dentistry, the competence to execute treatment measures while still providing appropriate patient behavior counseling is in high demand, particularly in pediatric dental care. We therefore conducted this review specifically to examine the applications of AI models in pediatric dentistry. A comprehensive search was performed across a wide range of databases for studies published in peer-reviewed journals from inception until 31 December 2022. After applying the criteria, only 25 of the 351 articles were considered for this review. According to the literature, AI is frequently used in pediatric dentistry to make accurate diagnoses and to assist clinicians, dentists, and pediatric dentists in clinical decision making, developing preventive strategies, and establishing appropriate treatment plans.
Affiliation(s)
- Satish Vishwanathaiah
- Department of Preventive Dental Sciences, Division of Pediatric Dentistry, College of Dentistry, Jazan University, Jazan 45142, Saudi Arabia
- Correspondence: (S.V.); (P.C.M.); Tel.: +966-542635434 (S.V.); +966-505916621 (P.C.M.)
- Hytham N. Fageeh
- Department of Preventive Dental Sciences, Division of Periodontics, College of Dentistry, Jazan University, Jazan 45142, Saudi Arabia
- Sanjeev B. Khanagar
- Preventive Dental Science Department, College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Prabhadevi C. Maganur
- Department of Preventive Dental Sciences, Division of Pediatric Dentistry, College of Dentistry, Jazan University, Jazan 45142, Saudi Arabia
- Correspondence: (S.V.); (P.C.M.); Tel.: +966-542635434 (S.V.); +966-505916621 (P.C.M.)
7. Ghaffar Nia N, Kaplanoglu E, Nasab A. Evaluation of artificial intelligence techniques in disease diagnosis and prediction. Discover Artificial Intelligence 2023. [PMCID: PMC9885935; DOI: 10.1007/s44163-023-00049-5]
Abstract
A broad range of medical diagnoses is based on analyzing disease images obtained through high-tech digital devices. The application of artificial intelligence (AI) in the assessment of medical images has allowed accurate evaluations to be performed automatically, which in turn has reduced the workload of physicians, decreased errors and time in diagnosis, and improved performance in the prediction and detection of various diseases. AI techniques based on medical image processing are an essential area of research that uses advanced computer algorithms for prediction, diagnosis, and treatment planning, with a remarkable impact on decision-making procedures. Machine Learning (ML) and Deep Learning (DL), as advanced AI techniques, are the two main subfields applied in the healthcare system to diagnose diseases, discover medication, and identify patient risk factors. The advancement of electronic medical records and big data technologies in recent years has accompanied the success of ML and DL algorithms. ML includes neural networks and fuzzy logic algorithms with various applications in automating forecasting and diagnosis processes. DL is an ML technique that, unlike classical neural network algorithms, does not rely on expert feature extraction. DL algorithms with high-performance calculations give promising results in medical image analysis, such as fusion, segmentation, registration, and classification. The Support Vector Machine (SVM), as an ML method, and the Convolutional Neural Network (CNN), as a DL method, are among the most widely used techniques for analyzing and diagnosing diseases. This review aims to cover recent AI techniques for diagnosing and predicting numerous diseases, such as cancers and heart, lung, skin, genetic, and neural disorders, which can perform more precisely than specialists and without human error. AI's existing challenges and limitations in the medical area are also discussed and highlighted.
Affiliation(s)
- Nafiseh Ghaffar Nia
- College of Engineering and Computer Science, The University of Tennessee at Chattanooga, Chattanooga, TN 37403 USA
- Erkan Kaplanoglu
- College of Engineering and Computer Science, The University of Tennessee at Chattanooga, Chattanooga, TN 37403 USA
- Ahad Nasab
- College of Engineering and Computer Science, The University of Tennessee at Chattanooga, Chattanooga, TN 37403 USA
8. Imran SMA, Saleem MW, Hameed MT, Hussain A, Naqvi RA, Lee SW. Feature preserving mesh network for semantic segmentation of retinal vasculature to support ophthalmic disease analysis. Front Med (Lausanne) 2023; 9:1040562. [PMID: 36714120; PMCID: PMC9880050; DOI: 10.3389/fmed.2022.1040562]
Abstract
Introduction Ophthalmic diseases are reaching an alarming count across the globe. Typically, ophthalmologists depend on manual methods for the analysis of different ophthalmic diseases such as glaucoma, sickle cell retinopathy (SCR), diabetic retinopathy, and hypertensive retinopathy. These manual assessments are unreliable, time-consuming, tedious, and prone to error, so automatic methods are desirable to replace conventional approaches. Retinal vessels are regarded as a potential biomarker for the diagnosis of many ophthalmic diseases, and the accuracy of automated vessel segmentation directly depends on the quality of the fundus images. Many ophthalmic diseases first manifest as minor changes in the vasculature, which makes early detection and analysis of disease a critical task. Method Several artificial-intelligence-based methods have suggested intelligent solutions for automated retinal vessel detection. However, existing methods exhibit significant limitations in segmentation performance, complexity, and computational efficiency; in particular, most fail to detect small vessels owing to vanishing-gradient problems. To overcome these problems, an automated shallow network with high performance and low cost, named Feature Preserving Mesh Network (FPM-Net), is designed for the accurate segmentation of retinal vessels. FPM-Net employs a feature-preserving block that preserves spatial features and helps maintain better segmentation performance. Similarly, the FPM-Net architecture uses a series of feature concatenations that also boost overall segmentation performance. Finally, preserved features, low-level input image information, and up-sampled spatial features are aggregated at the final concatenation stage for improved pixel-prediction accuracy. The technique is reliable, performing well on the DRIVE, CHASE-DB1, and STARE datasets.
Results and discussion Experimental outcomes confirm that FPM-Net outperforms state-of-the-art techniques with superior computational efficiency, and the presented results are achieved without any preprocessing or postprocessing scheme. FPM-Net achieves sensitivity (Se), specificity (Sp), and accuracy (Acc) of 0.8285, 0.9827, and 0.9292 on DRIVE; 0.8219, 0.9840, and 0.9728 on CHASE-DB1; and 0.8618, 0.9819, and 0.9727 on STARE, respectively, a remarkable improvement over conventional methods achieved with only 2.45 million trainable parameters.
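The Se/Sp/Acc figures quoted above follow from pixel-level binary confusion counts in the usual way. A minimal sketch; the counts below are made up for illustration.

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Sensitivity (Se), specificity (Sp), and accuracy (Acc)
    from binary confusion counts, e.g. vessel vs. background
    pixels in a retinal segmentation mask."""
    se = tp / (tp + fn)                     # recall on vessel pixels
    sp = tn / (tn + fp)                     # recall on background pixels
    acc = (tp + tn) / (tp + fp + tn + fn)   # overall fraction correct
    return se, sp, acc

# Invented counts for a tiny 200-pixel patch.
se, sp, acc = segmentation_metrics(tp=80, fp=10, tn=95, fn=15)
```

Because background pixels vastly outnumber vessel pixels in fundus images, specificity tends to run much higher than sensitivity, which matches the pattern in the reported numbers.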
Affiliation(s)
- Abida Hussain
- Faculty of CS and IT, Superior University, Lahore, Pakistan
- Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul, Republic of Korea (correspondence)
- Seung Won Lee
- School of Medicine, Sungkyunkwan University, Suwon, Republic of Korea (correspondence)
9. Cuevas-Rodriguez EO, Galvan-Tejada CE, Maeda-Gutiérrez V, Moreno-Chávez G, Galván-Tejada JI, Gamboa-Rosales H, Luna-García H, Moreno-Baez A, Celaya-Padilla JM. Comparative study of convolutional neural network architectures for gastrointestinal lesions classification. PeerJ 2023; 11:e14806. [PMID: 36945355; PMCID: PMC10024900; DOI: 10.7717/peerj.14806]
Abstract
The gastrointestinal (GI) tract can be affected by different diseases or lesions such as esophagitis, ulcers, hemorrhoids, and polyps, among others. Some of them, such as polyps, can be precursors of cancer. Endoscopy is the standard procedure for the detection of these lesions. The main drawback of this procedure is that the diagnosis depends on the expertise of the doctor, which means that some important findings may be missed. In recent years, this problem has been addressed by deep learning (DL) techniques. Endoscopic studies use digital images, and the most widely used DL technique for image processing is the convolutional neural network (CNN) due to its high accuracy in modeling complex phenomena. Different CNNs are characterized by their architecture. In this article, four architectures are compared: AlexNet, DenseNet-201, Inception-v3, and ResNet-101. To determine which architecture best classifies GI tract lesions, a set of metrics was used: accuracy, precision, sensitivity, specificity, F1-score, and area under the curve (AUC). These architectures were trained and tested on the HyperKvasir dataset, from which a total of 6,792 images corresponding to 10 findings were used. A transfer learning approach and a data augmentation technique were applied. The best performing architecture was DenseNet-201, with 97.11% accuracy, 96.3% sensitivity, 99.67% specificity, and 95% AUC.
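The AUC reported above can be computed without plotting an ROC curve, via the Mann-Whitney formulation: the probability that a randomly chosen positive scores higher than a randomly chosen negative. A small sketch with invented scores.

```python
def auc_from_scores(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) score pairs ranked
    correctly, with ties counted as half-correct."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Invented classifier scores: 3 of the 4 pairs are ranked
# correctly, so AUC = 0.75.
auc = auc_from_scores([0.9, 0.4], [0.3, 0.6])
```

For multi-class problems like the 10-finding HyperKvasir setup, a per-class (one-vs-rest) AUC is computed this way and then averaged.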
10. Parkash O, Siddiqui ATS, Jiwani U, Rind F, Padhani ZA, Rizvi A, Hoodbhoy Z, Das JK. Diagnostic accuracy of artificial intelligence for detecting gastrointestinal luminal pathologies: A systematic review and meta-analysis. Front Med (Lausanne) 2022; 9:1018937. [PMID: 36405592; PMCID: PMC9672666; DOI: 10.3389/fmed.2022.1018937]
Abstract
Background Artificial intelligence (AI) holds considerable promise for diagnostics in the field of gastroenterology. This systematic review and meta-analysis assesses the diagnostic accuracy of AI models compared with the gold standard of experts and histopathology for the diagnosis of various gastrointestinal (GI) luminal pathologies, including polyps, neoplasms, and inflammatory bowel disease. Methods We searched the PubMed, CINAHL, Wiley Cochrane Library, and Web of Science electronic databases to identify studies assessing the diagnostic performance of AI models for GI luminal pathologies. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. We performed a meta-analysis and hierarchical summary receiver operating characteristic (HSROC) curve analysis. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Subgroup analyses were conducted based on the type of GI luminal disease, AI model, reference standard, and type of data used for analysis. This study is registered with PROSPERO (CRD42021288360). Findings We included 73 studies, of which 31 were externally validated and provided sufficient information for inclusion in the meta-analysis. The overall sensitivity of AI for detecting GI luminal pathologies was 91.9% (95% CI: 89.0–94.1) and the specificity was 91.7% (95% CI: 87.4–94.7). Deep learning models (sensitivity: 89.8%, specificity: 91.9%) and ensemble methods (sensitivity: 95.4%, specificity: 90.9%) were the most commonly used models in the included studies. The majority of studies (n = 56, 76.7%) had a high risk of selection bias, while 74% (n = 54) were at low risk on the reference standard and 67% (n = 49) were at low risk for flow and timing bias. Interpretation The review suggests high sensitivity and specificity of AI models for the detection of GI luminal pathologies.
There is a need for large, multi-center trials in both high-income and low- and middle-income countries to assess the performance of these AI models in real clinical settings and their impact on diagnosis and prognosis. Systematic review registration [https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=288360], identifier [CRD42021288360].
Affiliation(s)
- Om Parkash
- Department of Medicine, Aga Khan University, Karachi, Pakistan
- Uswa Jiwani
- Center of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan
- Fahad Rind
- Head and Neck Oncology, The Ohio State University, Columbus, OH, United States
- Zahra Ali Padhani
- Institute for Global Health and Development, Aga Khan University, Karachi, Pakistan
- Arjumand Rizvi
- Center of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan
- Zahra Hoodbhoy
- Department of Pediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- Jai K. Das
- Institute for Global Health and Development, Aga Khan University, Karachi, Pakistan
- Department of Pediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- Correspondence: Jai K. Das
11. Narasimha Raju AS, Jayavel K, Rajalakshmi T. Dexterous Identification of Carcinoma through ColoRectalCADx with Dichotomous Fusion CNN and UNet Semantic Segmentation. Computational Intelligence and Neuroscience 2022; 2022:4325412. [PMID: 36262620; PMCID: PMC9576362; DOI: 10.1155/2022/4325412]
Abstract
Human colorectal disorders in the digestive tract are recognized by reference colonoscopy. The current system recognizes cancer through a three-stage pipeline that utilizes two sets of colonoscopy data, but identifying polyps by visualization has not been addressed. The proposed system, ColoRectalCADx, is a five-stage system that takes three publicly accessible datasets as input data for cancer detection: CVC Clinic DB, Kvasir2, and Hyper Kvasir. After the image preprocessing stages, experiments were performed with seven prominent end-to-end convolutional neural networks (CNNs) and nine fusion CNN models to extract spatial features. The end-to-end CNN and fusion features were then combined with Discrete Wavelet Transform (DWT) features, which capture time- and spatial-frequency information, and classified with a Support Vector Machine (SVM). Experimentally, results were obtained for the five stages. For the three datasets, from stage 1 to stage 3, the end-to-end CNN DenseNet-201 obtained the best testing accuracies (98%, 87%, 84%), ((98%, 97%), (87%, 87%), (84%, 84%)), and ((99.03%, 99%), (88.45%, 88%), (83.61%, 84%)). From stage 2, the fusion CNN DaRD-22 obtained the optimal test accuracies ((93%, 97%), (82%, 84%), (69%, 57%)), and in stage 4 the ADaRDEV2-22 fusion achieved the best test accuracies ((95.73%, 94%), (81.20%, 81%), (72.56%, 58%)). For the input image segmentation datasets CVC Clinic-Seg, Kvasir-Seg, and Hyper Kvasir, malignant polyps were identified with the UNet CNN model, with loss scores of 0.7842 for CVC Clinic DB, 0.6977 for Kvasir2, and 0.6910 for Hyper Kvasir.
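The DWT-plus-SVM stage described above feeds wavelet coefficients to a classifier. As a minimal illustration, here is one level of the orthonormal Haar DWT; the abstract does not state which wavelet family ColoRectalCADx uses, so Haar is only the simplest stand-in.

```python
import numpy as np

def haar_dwt_1d(x):
    """One level of the orthonormal Haar discrete wavelet transform:
    scaled pairwise sums give the approximation (low-frequency)
    coefficients, scaled pairwise differences give the detail
    (high-frequency) coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

approx, detail = haar_dwt_1d([4.0, 6.0, 10.0, 12.0])
```

The transform is orthonormal, so signal energy is preserved across the two coefficient bands; concatenating such coefficients (row- and column-wise for images) yields the feature vector an SVM would consume.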
Affiliation(s)
- Akella S. Narasimha Raju
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Kayalvizhi Jayavel
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Thulasi Rajalakshmi
- Department of Electronics and Communication Engineering, School of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
12
Pavlov V, Fyodorov S, Zavjalov S, Pervunina T, Govorov I, Komlichenko E, Deynega V, Artemenko V. Simplified Convolutional Neural Network Application for Cervix Type Classification via Colposcopic Images. Bioengineering (Basel) 2022; 9:bioengineering9060240. [PMID: 35735482 PMCID: PMC9219648 DOI: 10.3390/bioengineering9060240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 05/14/2022] [Accepted: 05/26/2022] [Indexed: 11/16/2022] Open
Abstract
The inner parts of the human body are usually inspected endoscopically using special equipment. For instance, each part of the female reproductive system can be examined endoscopically (laparoscopy, hysteroscopy, and colposcopy). The primary purpose of colposcopy is the early detection of malignant lesions of the cervix. Cervical cancer (CC) is one of the most common cancers in women worldwide, especially in middle- and low-income countries. Therefore, there is a growing demand for approaches that aim to detect precancerous lesions, ideally without quality loss. Despite its high efficiency, colposcopy has some disadvantages, including subjectivity and a pronounced dependence on the operator’s experience. The objective of the current work is to propose an alternative that overcomes these limitations by utilizing a neural network approach. The classifier is trained to recognize and classify lesions, and it combines high recognition accuracy with low computational complexity. The classification accuracies for the classes normal, LSIL, HSIL, and suspicious for invasion were 95.46%, 79.78%, 94.16%, and 97.09%, respectively. We argue that the proposed architecture is simpler than those discussed in other articles due to its use of a global average pooling layer. The classifier can therefore be implemented on low-power computing platforms at a reasonable cost.
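The architectural simplification credited above to global average pooling is easy to see in isolation: each H×W channel map collapses to its spatial mean, so the classifier head sees one value per channel regardless of input resolution. A minimal sketch with plain Python lists (not the authors' network):

```python
def global_average_pool(feature_maps):
    """feature_maps: list of C channel maps, each an HxW list of lists.
    Returns one scalar per channel (the spatial mean)."""
    pooled = []
    for fmap in feature_maps:
        total = sum(sum(row) for row in fmap)
        count = len(fmap) * len(fmap[0])
        pooled.append(total / count)
    return pooled

# Two 2x2 channel maps -> 2 pooled values, independent of spatial size
maps = [[[1.0, 2.0], [3.0, 4.0]],
        [[0.0, 0.0], [0.0, 8.0]]]
print(global_average_pool(maps))  # [2.5, 2.0]
```

Compared with flattening, the dense layer that follows needs C weights per output unit instead of H·W·C, which is where the computational saving comes from.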
Affiliation(s)
- Vitalii Pavlov
- Higher School of Applied Physics and Space Technologies, Peter the Great St. Petersburg Polytechnic University, 195251 St. Petersburg, Russia; (S.F.); (S.Z.)
- Personalised Medicine Centre, 197341 St. Petersburg, Russia; (T.P.); (I.G.); (E.K.); (V.D.); (V.A.)
- Correspondence:
- Stanislav Fyodorov
- Higher School of Applied Physics and Space Technologies, Peter the Great St. Petersburg Polytechnic University, 195251 St. Petersburg, Russia; (S.F.); (S.Z.)
- Sergey Zavjalov
- Higher School of Applied Physics and Space Technologies, Peter the Great St. Petersburg Polytechnic University, 195251 St. Petersburg, Russia; (S.F.); (S.Z.)
- Tatiana Pervunina
- Personalised Medicine Centre, 197341 St. Petersburg, Russia; (T.P.); (I.G.); (E.K.); (V.D.); (V.A.)
- Igor Govorov
- Personalised Medicine Centre, 197341 St. Petersburg, Russia; (T.P.); (I.G.); (E.K.); (V.D.); (V.A.)
- Eduard Komlichenko
- Personalised Medicine Centre, 197341 St. Petersburg, Russia; (T.P.); (I.G.); (E.K.); (V.D.); (V.A.)
- Viktor Deynega
- Personalised Medicine Centre, 197341 St. Petersburg, Russia; (T.P.); (I.G.); (E.K.); (V.D.); (V.A.)
- Veronika Artemenko
- Personalised Medicine Centre, 197341 St. Petersburg, Russia; (T.P.); (I.G.); (E.K.); (V.D.); (V.A.)
13
Fati SM, Senan EM, Azar AT. Hybrid and Deep Learning Approach for Early Diagnosis of Lower Gastrointestinal Diseases. SENSORS (BASEL, SWITZERLAND) 2022; 22:4079. [PMID: 35684696 PMCID: PMC9185306 DOI: 10.3390/s22114079] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 05/21/2022] [Accepted: 05/24/2022] [Indexed: 05/27/2023]
Abstract
Every year, nearly two million people die as a result of gastrointestinal (GI) disorders, and lower gastrointestinal tract tumors are one of the leading causes of death worldwide. Thus, early detection of the type of tumor is of great importance for patient survival. Additionally, removing benign tumors in their early stages has more risks than benefits. Video endoscopy technology is essential for imaging the GI tract and identifying disorders such as bleeding, ulcers, polyps, and malignant tumors. A recording generates around 5000 frames, which require extensive analysis, and following all frames takes a long time. Artificial intelligence techniques, which can assist physicians in making accurate diagnostic decisions, address these challenges. In this study, several methodologies were developed, with the work divided into four proposed systems, each containing more than one diagnostic method. The first proposed system utilizes artificial neural network (ANN) and feed-forward neural network (FFNN) algorithms based on hybrid features extracted by three algorithms: local binary pattern (LBP), gray-level co-occurrence matrix (GLCM), and fuzzy color histogram (FCH). The second proposed system uses the pre-trained CNN models GoogLeNet and AlexNet, based on extracting deep feature maps and classifying them with high accuracy. The third proposed system uses hybrid techniques consisting of two blocks: the first block uses CNN models (GoogLeNet and AlexNet) to extract feature maps; the second block is a support vector machine (SVM) algorithm for classifying the deep feature maps. The fourth proposed system uses ANN and FFNN based on hybrid features combining the CNN models (GoogLeNet and AlexNet) with the LBP, GLCM, and FCH algorithms. All the proposed systems achieved superior results in diagnosing endoscopic images for the early detection of lower gastrointestinal diseases. All systems produced promising results; the FFNN classifier based on the hybrid features extracted by GoogLeNet, LBP, GLCM, and FCH achieved an accuracy of 99.3%, precision of 99.2%, sensitivity of 99%, specificity of 100%, and AUC of 99.87%.
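Of the handcrafted descriptors named in this abstract, LBP is the simplest to sketch. A minimal 3×3 version is shown below; the exact LBP variant used in the paper (radius, neighbor count, uniformity) is not specified, so this basic form is an assumption:

```python
def lbp_code(img, r, c):
    """Basic 3x3 local binary pattern at pixel (r, c).
    The eight neighbors are thresholded against the centre pixel and
    read clockwise from the top-left corner to form an 8-bit code."""
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << (7 - bit)
    return code

# Toy grayscale patch (invented values)
img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
# Neighbors clockwise from top-left: 10,20,30,60,90,80,70,40
# Thresholded against 50: 0,0,0,1,1,1,1,0 -> binary 00011110 = 30
print(lbp_code(img, 1, 1))  # 30
```

In practice the codes from every interior pixel are collected into a histogram, and that histogram is the texture feature vector.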
Affiliation(s)
- Suliman Mohamed Fati
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia;
- Ebrahim Mohammed Senan
- Department of Computer Science & Information Technology, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad 431004, India;
- Ahmad Taher Azar
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia;
- Faculty of Computers and Artificial Intelligence, Benha University, Benha 13518, Egypt
14
Attention-based residual improved U-Net model for continuous blood pressure monitoring by using photoplethysmography signal. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103581] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
15
Zaborowicz M, Zaborowicz K, Biedziak B, Garbowski T. Deep Learning Neural Modelling as a Precise Method in the Assessment of the Chronological Age of Children and Adolescents Using Tooth and Bone Parameters. SENSORS 2022; 22:s22020637. [PMID: 35062599 PMCID: PMC8777593 DOI: 10.3390/s22020637] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Revised: 01/07/2022] [Accepted: 01/12/2022] [Indexed: 02/06/2023]
Abstract
Dental age is one of the most reliable indicators for determining a patient’s age. The timing of teething, the period of tooth replacement, and the degree of tooth attrition are important diagnostic factors in the assessment of an individual’s developmental age. It is used in orthodontics, pediatric dentistry, endocrinology, forensic medicine, and pathomorphology, but also in scenarios regarding international adoptions and illegal immigration. The methods used to date are time-consuming and not very precise. For this reason, artificial intelligence methods are increasingly used to estimate the age of a patient. The present work is a continuation of the work of Zaborowicz et al. In the presented research, a set of 21 original indicators was used to create deep neural network models. The aim of this study was to verify whether a more accurate deep neural network model could be generated compared to models produced previously. The quality parameters of the produced models were as follows: depending on the learning set used, the MAE was between 2.34 and 4.61 months and the RMSE between 5.58 and 7.49 months, while the coefficient of determination R2 ranged from 0.92 to 0.96.
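The quoted quality parameters have standard definitions; the sketch below computes MAE, RMSE, and R² on hypothetical age estimates in months (the data are invented for illustration):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical chronological ages in months vs. model estimates
ages = [120.0, 132.0, 145.0, 160.0]
est = [123.0, 130.0, 148.0, 156.0]
print(mae(ages, est), rmse(ages, est), r2(ages, est))
```

Note that RMSE penalizes large individual errors more heavily than MAE, which is why the two ranges reported above differ.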
Affiliation(s)
- Maciej Zaborowicz
- Department of Biosystems Engineering, Poznan University of Life Sciences, Wojska Polskiego 50, 60-627 Poznan, Poland;
- Correspondence: (M.Z.); (K.Z.)
- Katarzyna Zaborowicz
- Department of Orthodontics and Craniofacial Anomalies, Poznan University of Medical Sciences, Collegium Maius, Fredry 10, 61-701 Poznan, Poland;
- Correspondence: (M.Z.); (K.Z.)
- Barbara Biedziak
- Department of Orthodontics and Craniofacial Anomalies, Poznan University of Medical Sciences, Collegium Maius, Fredry 10, 61-701 Poznan, Poland;
- Tomasz Garbowski
- Department of Biosystems Engineering, Poznan University of Life Sciences, Wojska Polskiego 50, 60-627 Poznan, Poland;
16
Kumar Y, Koul A, Singla R, Ijaz MF. Artificial intelligence in disease diagnosis: a systematic literature review, synthesizing framework and future research agenda. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING 2022; 14:8459-8486. [PMID: 35039756 PMCID: PMC8754556 DOI: 10.1007/s12652-021-03612-z] [Citation(s) in RCA: 147] [Impact Index Per Article: 73.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Accepted: 11/18/2021] [Indexed: 05/03/2023]
Abstract
Artificial intelligence can assist providers across a variety of patient-care settings and intelligent health systems. Artificial intelligence techniques ranging from machine learning to deep learning are prevalent in healthcare for disease diagnosis, drug discovery, and patient risk identification. Numerous medical data sources are required to accurately diagnose diseases using artificial intelligence techniques, such as ultrasound, magnetic resonance imaging, mammography, genomics, computed tomography scans, etc. Furthermore, artificial intelligence has enhanced the in-hospital experience and sped up preparing patients to continue their rehabilitation at home. This article presents a comprehensive survey of artificial intelligence techniques for diagnosing numerous diseases such as Alzheimer's disease, cancer, diabetes, chronic heart disease, tuberculosis, stroke and cerebrovascular disease, hypertension, skin disease, and liver disease. We conducted an extensive survey covering the medical imaging datasets used and their feature extraction and classification processes for prediction. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used to select articles published up to October 2020 on Web of Science, Scopus, Google Scholar, PubMed, the Excerpta Medica Database, and PsycINFO for early prediction of distinct kinds of diseases using artificial intelligence-based techniques. Based on the study of different articles on disease diagnosis, the results are also compared using various quality parameters such as prediction rate, accuracy, sensitivity, specificity, area under the curve, precision, recall, and F1-score.
Affiliation(s)
- Yogesh Kumar
- Department of Computer Engineering, Indus Institute of Technology and Engineering, Indus University, Ahmedabad, 382115 India
- Ruchi Singla
- Department of Research, Innovations, Sponsored Projects and Entrepreneurship, CGC Landran, Mohali, India
- Muhammad Fazal Ijaz
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, 05006 South Korea
17
Ayyaz MS, Lali MIU, Hussain M, Rauf HT, Alouffi B, Alyami H, Wasti S. Hybrid Deep Learning Model for Endoscopic Lesion Detection and Classification Using Endoscopy Videos. Diagnostics (Basel) 2021; 12:diagnostics12010043. [PMID: 35054210 PMCID: PMC8775223 DOI: 10.3390/diagnostics12010043] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 12/22/2021] [Accepted: 12/23/2021] [Indexed: 02/06/2023] Open
Abstract
In medical imaging, the detection and classification of stomach diseases are challenging due to the resemblance between different symptoms, image contrast, and complex backgrounds. Computer-aided diagnosis (CAD) plays a vital role in the medical imaging field, allowing accurate results to be obtained in minimal time. This article proposes a new hybrid method to detect and classify stomach diseases using endoscopy videos. The proposed methodology comprises seven significant steps: data acquisition, data preprocessing, transfer learning of deep models, feature extraction, feature selection, hybridization, and classification. We selected two different CNN models (VGG19 and AlexNet) to extract features, applying transfer learning techniques before using them as feature extractors. We used a genetic algorithm (GA) for feature selection, due to its adaptive nature. We fused the selected features of both models using a serial-based approach. Finally, the best features were provided to multiple machine learning classifiers for detection and classification. The proposed approach was evaluated on a personally collected dataset of five classes: gastritis, ulcer, esophagitis, bleeding, and healthy. The proposed technique performed best with a cubic SVM, reaching 99.8% accuracy. To assess the proposed technique, we considered the following statistical measures: classification accuracy, recall, precision, False Negative Rate (FNR), Area Under the Curve (AUC), and time. In addition, we provide a fair comparison of our proposed technique with existing state-of-the-art techniques, demonstrating its merit.
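Serial-based fusion, as used above, is plain concatenation of the two deep-feature vectors, after which a GA chromosome can be read as a keep/drop mask over the fused vector. A toy sketch (feature values and chromosome are invented; this is not the authors' GA):

```python
def serial_fuse(feat_a, feat_b):
    """Serial fusion: concatenate the two deep-feature vectors."""
    return feat_a + feat_b

def apply_selection(features, mask):
    """Keep only the features whose mask bit is 1
    (the mask plays the role of one GA chromosome)."""
    return [f for f, keep in zip(features, mask) if keep]

vgg19_feats = [0.1, 0.9, 0.3]     # hypothetical VGG19 features
alexnet_feats = [0.7, 0.2]        # hypothetical AlexNet features
fused = serial_fuse(vgg19_feats, alexnet_feats)
chromosome = [1, 0, 1, 1, 0]      # one GA candidate solution
print(apply_selection(fused, chromosome))  # [0.1, 0.3, 0.7]
```

A GA would evolve many such chromosomes, scoring each by the accuracy of a classifier trained on the surviving features.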
Affiliation(s)
- M Shahbaz Ayyaz
- Department of Computer Science, University of Gujrat, Gujrat 50700, Pakistan; (M.S.A.); (M.H.)
- Muhammad Ikram Ullah Lali
- Department of Information Sciences, University of Education Lahore, Lahore 41000, Pakistan; (M.I.U.L.); (S.W.)
- Mubbashar Hussain
- Department of Computer Science, University of Gujrat, Gujrat 50700, Pakistan; (M.S.A.); (M.H.)
- Hafiz Tayyab Rauf
- Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
- Correspondence:
- Bader Alouffi
- Department of Computer Science, College of Computers and Information Technology, Taif University, P. O. Box 11099, Taif 21944, Saudi Arabia; (B.A.); (H.A.)
- Hashem Alyami
- Department of Computer Science, College of Computers and Information Technology, Taif University, P. O. Box 11099, Taif 21944, Saudi Arabia; (B.A.); (H.A.)
- Shahbaz Wasti
- Department of Information Sciences, University of Education Lahore, Lahore 41000, Pakistan; (M.I.U.L.); (S.W.)
18
Gastrointestinal Disease Classification in Endoscopic Images Using Attention-Guided Convolutional Neural Networks. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app112311136] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Gastrointestinal (GI) diseases constitute a leading problem in the human digestive system. Consequently, several studies have explored automatic classification of GI diseases as a means of minimizing the burden on clinicians and improving patient outcomes, for both diagnostic and treatment purposes. The challenge in using deep learning-based (DL) approaches, specifically a convolutional neural network (CNN), is that spatial information is not fully utilized due to the inherent mechanism of CNNs. This paper proposes the application of spatial factors to improve classification performance. Specifically, we propose a deep CNN-based spatial attention mechanism for the classification of GI diseases, implemented with encoder–decoder layers. To overcome the data imbalance problem, we adapt data-augmentation techniques. A total of 12,147 multi-sited, multi-diseased GI images, drawn from publicly available and private sources, were used to validate the proposed approach. Furthermore, a five-fold cross-validation approach was adopted to minimize inconsistencies in intra- and inter-class variability and to ensure that results were robustly assessed. Our results, compared with other state-of-the-art models in terms of mean accuracy (ResNet50 = 90.28%, GoogLeNet = 91.38%, DenseNets = 91.60%, and baseline = 92.84%), demonstrated better outcomes (precision = 92.8%, recall = 92.7%, F1-score = 92.8%, and accuracy = 93.19%). We also implemented t-distributed stochastic neighbor embedding (t-SNE) and confusion-matrix analysis techniques for better visualization and performance validation. Overall, the results showed that the attention mechanism improved the automatic classification of multi-sited GI disease images. We validated clinical tests based on the proposed method by overcoming previous limitations, with the goal of improving automatic classification accuracy in future work.
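The five-fold protocol mentioned above can be sketched as a simple index partition: every sample lands in exactly one validation fold and in the training set of the other four folds (illustrative only; the paper's stratification details are not specified):

```python
import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Shuffle indices once and yield (train, val) index lists per fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield train, val

splits = list(k_fold_indices(10, k=5))
# Every sample appears in exactly one validation fold
all_val = sorted(i for _, val in splits for i in val)
print(all_val)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Reporting the mean metric across the five folds, as the abstract does, reduces the variance that a single train/test split would introduce.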
19
Zaborowicz K, Biedziak B, Olszewska A, Zaborowicz M. Tooth and Bone Parameters in the Assessment of the Chronological Age of Children and Adolescents Using Neural Modelling Methods. SENSORS (BASEL, SWITZERLAND) 2021; 21:6008. [PMID: 34577221 PMCID: PMC8473021 DOI: 10.3390/s21186008] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Revised: 09/03/2021] [Accepted: 09/06/2021] [Indexed: 12/13/2022]
Abstract
The analog methods used in the clinical assessment of a patient's chronological age are subjective and characterized by low accuracy. When using those methods, there is a noticeable discrepancy between the chronological age and the age estimated on the basis of relevant scientific studies. Innovations in the field of information technology are increasingly used in medicine, with particular emphasis on artificial intelligence methods. This paper presents research aimed at developing a new, effective methodology for the assessment of chronological age using modern IT methods. A study was conducted to determine the features of pantomographic images that support the determination of metric age, and neural models were produced to support the process of identifying the age of children and adolescents; taken together, this work constitutes a new methodology of metric age assessment. The results of the study are a set of 21 original indicators necessary for the assessment of chronological age with the use of computer image analysis and neural modelling, as well as three non-linear radial basis function (RBF) network models that determine chronological age with an accuracy ranging from 96 to 99%.
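An RBF network of the kind produced above computes Gaussian responses to learned centres followed by a linear readout. A tiny forward-pass sketch with invented centres and weights (not the trained models from the study):

```python
import math

def rbf_forward(x, centres, widths, weights, bias):
    """Forward pass of a tiny RBF network: Gaussian hidden units over
    squared distance to each centre, then a linear output layer."""
    hidden = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * s ** 2))
              for c, s in zip(centres, widths)]
    return bias + sum(w * h for w, h in zip(weights, hidden))

# One 2-D input and two hidden units (all values invented)
x = [1.0, 0.0]
centres = [[1.0, 0.0], [0.0, 1.0]]
widths = [1.0, 1.0]
weights = [2.0, 1.0]
out = rbf_forward(x, centres, widths, weights, bias=0.5)
# The first unit fires fully (distance 0); the second is attenuated.
```

Training such a network amounts to choosing the centres and widths (often by clustering) and then fitting the linear output weights.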
Affiliation(s)
- Katarzyna Zaborowicz
- Department of Craniofacial Anomalies, Poznań University of Medical Sciences, Collegium Maius, Fredry 10, 61-701 Poznań, Poland
- Barbara Biedziak
- Department of Craniofacial Anomalies, Poznań University of Medical Sciences, Collegium Maius, Fredry 10, 61-701 Poznań, Poland
- Aneta Olszewska
- Department of Craniofacial Anomalies, Poznań University of Medical Sciences, Collegium Maius, Fredry 10, 61-701 Poznań, Poland
- Maciej Zaborowicz
- Department of Biosystems Engineering, Poznan University of Life Sciences, Wojska Polskiego 50, 60-637 Poznań, Poland
20
Sultan H, Owais M, Park C, Mahmood T, Haider A, Park KR. Artificial Intelligence-Based Recognition of Different Types of Shoulder Implants in X-ray Scans Based on Dense Residual Ensemble-Network for Personalized Medicine. J Pers Med 2021; 11:482. [PMID: 34072079 PMCID: PMC8229063 DOI: 10.3390/jpm11060482] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 05/15/2021] [Accepted: 05/24/2021] [Indexed: 01/10/2023] Open
Abstract
Re-operations and revisions are often performed in patients who have undergone total shoulder arthroplasty (TSA) and reverse total shoulder arthroplasty (RTSA). This necessitates accurate recognition of the implant model and manufacturer to select the correct apparatus and procedure for the patient's anatomy, in keeping with personalized medicine. Owing to the unavailability and ambiguity of a patient's medical data, expert surgeons identify implants through visual comparison of X-ray images. Missteps cause carelessness, morbidity, extra monetary burden, and wasted time. Despite significant advancements in pattern recognition and deep learning in the medical field, extremely limited research has been conducted on classifying shoulder implants. To overcome these problems, we propose a robust deep learning-based framework comprising an ensemble of convolutional neural networks (CNNs) to classify shoulder implants in X-ray images of different patients. Through our rotational-invariant augmentation, the size of the training dataset is increased 36-fold. The modified ResNet and DenseNet are then combined deeply to form a dense residual ensemble-network (DRE-Net). To evaluate DRE-Net, experiments were executed with 10-fold cross-validation on the openly available shoulder implant X-ray dataset. The experimental results showed that DRE-Net achieved an accuracy, F1-score, precision, and recall of 85.92%, 84.69%, 85.33%, and 84.11%, respectively, which were higher than those of state-of-the-art methods. Moreover, we confirmed the generalization capability of our network by testing it in an open-world configuration, as well as the effectiveness of the rotational-invariant augmentation.
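One common way to fuse an ensemble is to average the members' class probabilities; the sketch below shows that idea with invented logits for four implant classes. DRE-Net itself combines the modified ResNet and DenseNet more deeply, so this is only a simplified stand-in:

```python
import math

def softmax(logits):
    """Convert raw logits to class probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ensemble_predict(logits_a, logits_b):
    """Average the class probabilities of two member networks and
    return (predicted class, averaged probabilities)."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    probs = [(a + b) / 2 for a, b in zip(pa, pb)]
    return probs.index(max(probs)), probs

# Hypothetical logits for 4 implant classes from the two members
cls, probs = ensemble_predict([2.0, 0.1, 0.0, -1.0], [1.5, 1.4, 0.0, -2.0])
print(cls)  # 0
```

Probability averaging tends to smooth out individual member errors, which is the usual motivation for ensembling two architecturally different CNNs.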
Affiliation(s)
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea; (H.S.); (M.O.); (C.P.); (T.M.); (A.H.)
21
Naz J, Sharif M, Yasmin M, Raza M, Khan MA. Detection and Classification of Gastrointestinal Diseases using Machine Learning. Curr Med Imaging 2021; 17:479-490. [PMID: 32988355 DOI: 10.2174/1573405616666200928144626] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2020] [Revised: 07/07/2020] [Accepted: 07/23/2020] [Indexed: 12/22/2022]
Abstract
BACKGROUND Traditional endoscopy is an invasive and painful method of examining the gastrointestinal tract (GIT), and it is not favored by physicians or patients. To handle this issue, video endoscopy (VE) or wireless capsule endoscopy (WCE) is recommended and utilized for GIT examination. However, manual assessment of the captured images is not feasible even for an expert physician, because thoroughly analyzing thousands of images is time-consuming. Hence the need for a Computer-Aided Diagnosis (CAD) method to help doctors analyze the images. Many researchers have proposed techniques for the automated recognition and classification of abnormalities in captured images. METHODS In this article, existing methods for the automated classification, segmentation, and detection of several GI diseases are discussed, and comprehensive detail is given about these state-of-the-art methods. The literature is divided into subsections covering preprocessing techniques, segmentation techniques, handcrafted-feature-based techniques, and deep-learning-based techniques. Finally, issues, challenges, and limitations are also addressed. RESULTS A comparative analysis of different approaches for the detection and classification of GI infections is presented. CONCLUSION This comprehensive review combines information on a number of GI disease diagnosis methods in one place. It will help researchers develop new algorithms and approaches for the early detection of GI diseases with more promising results than those in the existing literature.
Affiliation(s)
- Javeria Naz
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
22
Li Y, Zhou D, Liu TT, Shen XZ. Application of deep learning in image recognition and diagnosis of gastric cancer. Artif Intell Gastrointest Endosc 2021; 2:12-24. [DOI: 10.37126/aige.v2.i2.12] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 03/30/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
In recent years, artificial intelligence has been extensively applied to the diagnosis of gastric cancer based on medical imaging; in particular, deep learning, one of the mainstream approaches in image processing, has made remarkable progress. In this paper, we provide a comprehensive literature survey of four electronic databases (PubMed, EMBASE, Web of Science, and Cochrane), with the search performed up to November 2020. This article summarizes the existing image recognition algorithms, reviews the available datasets used in gastric cancer diagnosis and the current trends in applying deep learning to image recognition of gastric cancer, and covers the theory of deep learning for endoscopic image recognition. We further evaluate the advantages and disadvantages of the current algorithms, summarize the characteristics of the existing image datasets, and, drawing on the latest progress in deep learning theory, propose suggestions for applying optimization algorithms. Based on existing research and applications, the labels, quantity, size, resolution, and other aspects of the image datasets are also discussed. The future development of this field is analyzed from two perspectives, algorithm optimization and data support, with the aim of improving diagnostic accuracy and reducing the risk of misdiagnosis.
Affiliation(s)
- Yu Li
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
- Da Zhou
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
- Tao-Tao Liu
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
- Xi-Zhong Shen
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
23
Attallah O, Sharkas M. GASTRO-CADx: a three stages framework for diagnosing gastrointestinal diseases. PeerJ Comput Sci 2021; 7:e423. [PMID: 33817058 PMCID: PMC7959662 DOI: 10.7717/peerj-cs.423] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 02/11/2021] [Indexed: 05/04/2023]
Abstract
Gastrointestinal (GI) diseases are common illnesses that affect the GI tract. Diagnosing these GI diseases is quite expensive, complicated, and challenging. A computer-aided diagnosis (CADx) system based on deep learning (DL) techniques could considerably lower examination costs and increase the speed and quality of diagnosis. Therefore, this article proposes a CADx system called Gastro-CADx to classify several GI diseases using DL techniques. Gastro-CADx involves three progressive stages. Initially, four different CNNs are used as feature extractors to extract spatial features. Most of the related work based on DL approaches extracts spatial features only. However, in the following phase of Gastro-CADx, the features extracted in the first stage are applied to the discrete wavelet transform (DWT) and the discrete cosine transform (DCT), which are used to extract temporal-frequency and spatial-frequency features. Additionally, a feature reduction procedure is performed in this stage. Finally, in the third stage of Gastro-CADx, several combinations of features are fused in a concatenated manner to inspect the effect of feature combination on the output results of the CADx and to select the best-fused feature set. Two datasets, referred to as Dataset I and Dataset II, are utilized to evaluate the performance of Gastro-CADx. Results indicate that Gastro-CADx achieved an accuracy of 97.3% and 99.7% on Datasets I and II, respectively. The results were compared with recent related works; the comparison showed that the proposed approach is capable of classifying GI diseases with higher accuracy than other work. Thus, it can be used to reduce medical complications and death rates, in addition to the cost of treatment, and can help gastroenterologists produce more accurate diagnoses while lowering inspection time.
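The DCT stage of Gastro-CADx can be illustrated with a 1-D DCT-II over a feature vector; the unnormalized textbook form is used below, and the paper's exact transform settings are assumptions here:

```python
import math

def dct2(x):
    """Unnormalized 1-D DCT-II of a vector (the variant usually
    meant by 'the DCT'): X_k = sum_i x_i * cos(pi*k*(2i+1)/(2n))."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

coeffs = dct2([1.0, 1.0, 1.0, 1.0])
# For a constant input, all energy lands in the k = 0 (DC)
# coefficient; the remaining coefficients are ~0.
print(coeffs)
```

For smoothly varying feature vectors the DCT concentrates energy in the first few coefficients, which is what makes truncating the tail a natural feature-reduction step.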
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
- Maha Sharkas
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
24
Jheng YC, Wang YP, Lin HE, Sung KY, Chu YC, Wang HS, Jiang JK, Hou MC, Lee FY, Lu CL. A novel machine learning-based algorithm to identify and classify lesions and anatomical landmarks in colonoscopy images. Surg Endosc 2021; 36:640-650. [PMID: 33591447 DOI: 10.1007/s00464-021-08331-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 01/13/2021] [Indexed: 02/06/2023]
Abstract
OBJECTIVES Computer-aided diagnosis (CAD)-based artificial intelligence (AI) has been shown to be highly accurate for detecting and characterizing colon polyps. However, the application of AI to identify normal colon landmarks and differentiate multiple colon diseases has not yet been established. We aimed to develop a convolutional neural network (CNN)-based algorithm (GUTAID) to recognize different colon lesions and anatomical landmarks. METHODS Colonoscopic images were obtained to train and validate the AI classifiers. An independent dataset was collected for verification. The architecture of GUTAID contains two major sub-models: the Normal, Polyp, Diverticulum, Cecum and CAncer (NPDCCA) and Narrow-Band Imaging for Adenomatous/Hyperplastic polyps (NBI-AH) models. The development of GUTAID was based on the 16-layer Visual Geometry Group (VGG16) architecture and implemented on the Google Cloud Platform. RESULTS In total, 7838 colonoscopy images were used for developing and validating the AI model. An additional 1273 images were independently applied to verify GUTAID. The accuracy of GUTAID in detecting various colon lesions/landmarks is 93.3% for polyps, 93.9% for diverticula, 91.7% for cecum, 97.5% for cancer, and 83.5% for adenomatous/hyperplastic polyps. CONCLUSIONS A CNN-based algorithm (GUTAID) to identify colonic abnormalities and landmarks was successfully established with high accuracy. The GUTAID system can further characterize polyps for optical diagnosis. We demonstrated that an AI classification methodology is feasible for identifying multiple, distinct colon diseases.
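GUTAID's two sub-models imply a two-stage control flow: the NPDCCA model assigns a broad class, and only images classified as polyps are forwarded to the NBI-AH model for adenomatous/hyperplastic sub-typing. A sketch of that dispatch with stub models (function names and stubs are hypothetical; only the control flow reflects the abstract):

```python
def gutaid_dispatch(image, npdcca_model, nbi_ah_model):
    """Two-stage classification in the spirit of GUTAID: a first model
    picks the lesion/landmark class; polyps are then sub-typed by a
    second model (illustrative control flow only)."""
    label = npdcca_model(image)
    if label == "polyp":
        subtype = nbi_ah_model(image)   # "adenomatous" or "hyperplastic"
        return f"polyp/{subtype}"
    return label

# Stub models standing in for the trained CNNs
npdcca = lambda img: "polyp" if img["has_polyp"] else "normal"
nbi_ah = lambda img: "adenomatous"
print(gutaid_dispatch({"has_polyp": True}, npdcca, nbi_ah))   # polyp/adenomatous
print(gutaid_dispatch({"has_polyp": False}, npdcca, nbi_ah))  # normal
```

Splitting the task this way lets each sub-model specialize: the second stage only ever sees narrow-band images of confirmed polyps.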
Affiliation(s)
- Ying-Chun Jheng
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Yen-Po Wang
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Institute of Brain Science, National Yang-Ming University School of Medicine, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Hung-En Lin
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Kuang-Yi Sung
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Yuan-Chia Chu
- Information Management Office, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Huann-Sheng Wang
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Colon and Rectum Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Jeng-Kai Jiang
- Division of Colon and Rectum Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Ming-Chih Hou
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Fa-Yauh Lee
- Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
- Ching-Liang Lu
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taipei, Taiwan; Division of Gastroenterology, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; Institute of Brain Science, National Yang-Ming University School of Medicine, Taipei, Taiwan; Faculty of Medicine, National Yang-Ming University School of Medicine, Taipei, Taiwan
|
25
|
Wavelet Transform and Deep Convolutional Neural Network-Based Smart Healthcare System for Gastrointestinal Disease Detection. Interdiscip Sci 2021; 13:212-228. [PMID: 33566337] [DOI: 10.1007/s12539-021-00417-8]
Abstract
This work presents a smart healthcare system for the detection of various abnormalities in the gastrointestinal (GI) region with the help of time-frequency analysis and a convolutional neural network. In this regard, the KVASIR V2 dataset, comprising eight classes of GI-tract images (Normal cecum, Normal pylorus, Normal Z-line, Esophagitis, Polyps, Ulcerative Colitis, Dyed and lifted polyp, and Dyed resection margins), is used for training and validation. The initial phase of the work involves an image pre-processing step, followed by the extraction of approximate discrete wavelet transform coefficients. Each class of decomposed images is then given as input to two convolutional neural network (CNN) models for training and testing at two classification levels. Afterward, classification performance is measured with the following indices: accuracy, precision, recall, specificity, and F1 score. The experimental results show 97.25% and 93.75% accuracy at the first and second classification levels, respectively. Lastly, a comparative analysis against several previously published works on a similar dataset shows that the proposed approach outperforms its contemporaries.
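The pre-processing step described here, extracting approximate discrete wavelet transform coefficients before feeding a CNN, can be illustrated with a hand-rolled single-level 2D Haar transform. This is a sketch of the general technique only; the abstract does not name the wavelet family actually used:

```python
import numpy as np

def haar_approx(img):
    """Single-level 2D Haar approximation coefficients (cA).

    Low-pass filters along rows and then columns, so each value is the
    sum of a non-overlapping 2x2 block divided by 2. This low-low
    subband is the kind of "approximate DWT coefficient" image that can
    be used as CNN input. Assumes even height and width.
    """
    img = np.asarray(img, dtype=float)
    rows = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)   # low-pass rows
    return (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2)  # low-pass cols

x = np.arange(16, dtype=float).reshape(4, 4)
cA = haar_approx(x)
print(cA.shape)  # (2, 2): each level halves both spatial dimensions
```

In practice a library such as PyWavelets (`pywt.dwt2`) would be used instead of this hand-written version; the point is only that the CNN sees a half-resolution, denoised approximation band.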
|
26
|
Li YD, Zhu SW, Yu JP, Ruan RW, Cui Z, Li YT, Lv MC, Wang HG, Chen M, Jin CH, Wang S. Intelligent detection endoscopic assistant: An artificial intelligence-based system for monitoring blind spots during esophagogastroduodenoscopy in real-time. Dig Liver Dis 2021; 53:216-223. [PMID: 33272862] [DOI: 10.1016/j.dld.2020.11.017]
Abstract
BACKGROUND Observation of the entire stomach during esophagogastroduodenoscopy (EGD) is important; however, there is a lack of effective evaluation tools. AIMS To develop an artificial intelligence (AI)-assisted EGD system able to automatically monitor blind spots in real-time. METHODS An AI-based system, called the Intelligent Detection Endoscopic Assistant (IDEA), was developed using a deep convolutional neural network (DCNN) and long short-term memory (LSTM). The performance of IDEA for recognition of gastric sites in images and videos was evaluated. Primary outcomes included diagnostic accuracy, sensitivity, and specificity. RESULTS A total of 170,297 images and 5779 endoscopic videos were collected to develop the system. As the test group, 3100 EGD images were acquired to evaluate the performance of DCNN in recognition of gastric sites in images. The sensitivity, specificity, and accuracy of DCNN were determined as 97.18%, 99.91%, and 99.83%, respectively. To assess the performance of IDEA in recognition of gastric sites in EGD videos, 129 videos were used as the test group. The sensitivity, specificity, and accuracy of IDEA were 96.29%, 93.32%, and 95.30%, respectively. CONCLUSIONS IDEA achieved high accuracy for recognition of gastric sites in real-time. The system can be applied as a powerful assistant tool for monitoring blind spots during EGD.
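IDEA's pairing of a DCNN with an LSTM reduces, at its core, to running a recurrent cell over a sequence of per-frame feature vectors. A minimal numpy sketch of one LSTM layer's forward pass (random weights for illustration; the real system learns these end-to-end and this is not the authors' code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(xs, W, b, h0, c0):
    """Run a single-layer LSTM over a sequence of feature vectors.

    xs: (T, d_in) per-frame CNN features; W: (4*d_h, d_in + d_h);
    b: (4*d_h,). Gate order in the stacked weights: input, forget,
    cell candidate, output.
    """
    h, c = h0, c0
    d_h = h0.shape[0]
    for x in xs:
        z = W @ np.concatenate([x, h]) + b
        i = sigmoid(z[0*d_h:1*d_h])      # input gate
        f = sigmoid(z[1*d_h:2*d_h])      # forget gate
        g = np.tanh(z[2*d_h:3*d_h])      # cell candidate
        o = sigmoid(z[3*d_h:4*d_h])      # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h  # final state summarizes the frame sequence

rng = np.random.default_rng(0)
T, d_in, d_h = 5, 8, 4
xs = rng.normal(size=(T, d_in))          # stand-in for DCNN features
W = rng.normal(scale=0.1, size=(4*d_h, d_in + d_h))
b = np.zeros(4*d_h)
h = lstm_forward(xs, W, b, np.zeros(d_h), np.zeros(d_h))
print(h.shape)  # (4,)
```

A classifier head over the final (or per-step) hidden state then yields the per-site prediction for the video segment.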
Affiliation(s)
- Yan-Dong Li
- Department of Endoscopy, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, Zhejiang, China
- Shu-Wen Zhu
- Department of Endoscopy, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, Zhejiang, China
- Jiang-Ping Yu
- Department of Endoscopy, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, Zhejiang, China
- Rong-Wei Ruan
- Department of Endoscopy, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, Zhejiang, China
- Zhao Cui
- Department of Endoscopy, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, Zhejiang, China
- Yi-Ting Li
- Department of Internal Medicine, Seton Hall University School of Health and Medical Sciences, Saint Francis Medical Center, Trenton, NJ, United States
- Mei-Chao Lv
- Hithink RoyalFlush Information Network Co., Ltd, Hangzhou, China
- Huo-Gen Wang
- Hithink RoyalFlush Information Network Co., Ltd, Hangzhou, China
- Ming Chen
- Hithink RoyalFlush Information Network Co., Ltd, Hangzhou, China
- Chao-Hui Jin
- Hithink RoyalFlush Information Network Co., Ltd, Hangzhou, China
- Shi Wang
- Department of Endoscopy, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, Zhejiang, China
|
27
|
Öztürk Ş, Özkaya U. Residual LSTM layered CNN for classification of gastrointestinal tract diseases. J Biomed Inform 2020; 113:103638. [PMID: 33271341] [DOI: 10.1016/j.jbi.2020.103638]
Abstract
Nowadays, given the number of patients per specialist doctor, the need for automatic medical image analysis methods is clear. These systems, which are highly advantageous over manual workflows in both cost and time, benefit from artificial intelligence (AI). AI mechanisms that mimic the decision-making process of a specialist improve their diagnostic performance day by day with technological developments. In this study, an AI method is proposed to effectively classify gastrointestinal (GI) tract image datasets containing a small number of labeled samples. The proposed method uses the convolutional neural network (CNN) architecture, currently the most successful automatic classification approach, as a backbone. In our approach, a shallowly trained CNN architecture is supported by a strong classifier to classify unbalanced datasets robustly. For this purpose, the features from each pooling layer of the CNN architecture are transmitted to an LSTM layer, and the final classification is made by combining all LSTM layers. All experiments are carried out using AlexNet, GoogLeNet, and ResNet to fairly evaluate the contribution of the proposed residual LSTM structure. In addition, three experiments with 2000, 4000, and 6000 samples are carried out to determine the effect of sample count on the proposed method. The performance of the proposed method is higher than that of other state-of-the-art methods.
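The central idea above, tapping features at every pooling depth instead of only the final layer, can be illustrated with plain average pooling at several scales followed by concatenation. This is a schematic stand-in only: the paper feeds each depth's features to an LSTM rather than concatenating them directly:

```python
import numpy as np

def avg_pool2d(fmap, k):
    """Non-overlapping k x k average pooling; assumes both spatial
    dimensions are divisible by k."""
    h, w = fmap.shape
    return fmap.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def multiscale_descriptor(fmap, ks=(2, 4, 8)):
    """Concatenate flattened pooled maps from several scales, mimicking
    how features from successive pooling layers are gathered before a
    (here omitted) recurrent classifier."""
    return np.concatenate([avg_pool2d(fmap, k).ravel() for k in ks])

fmap = np.arange(64, dtype=float).reshape(8, 8)  # toy feature map
d = multiscale_descriptor(fmap)
print(d.shape)  # 16 + 4 + 1 coefficients -> (21,)
```

Descriptors built from multiple depths retain both fine texture (shallow layers) and global context (deep layers), which is what makes the layered classifier robust on small, unbalanced datasets.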
Affiliation(s)
- Şaban Öztürk
- Amasya University, Technology Faculty, Electrical and Electronics Engineering, Amasya 05100, Turkey
- Umut Özkaya
- Konya Technical University, Engineering and Natural Science Faculty, Electrical and Electronics Engineering, Konya, Turkey
|
28
|
Owais M, Arsalan M, Mahmood T, Kang JK, Park KR. Automated Diagnosis of Various Gastrointestinal Lesions Using a Deep Learning-Based Classification and Retrieval Framework With a Large Endoscopic Database: Model Development and Validation. J Med Internet Res 2020; 22:e18563. [PMID: 33242010] [PMCID: PMC7728528] [DOI: 10.2196/18563]
Abstract
Background The early diagnosis of various gastrointestinal diseases can lead to effective treatment and reduce the risk of many life-threatening conditions. Unfortunately, various small gastrointestinal lesions are undetectable during early-stage examination by medical experts. In previous studies, various deep learning–based computer-aided diagnosis tools have been used to make a significant contribution to the effective diagnosis and treatment of gastrointestinal diseases. However, most of these methods were designed to detect a limited number of gastrointestinal diseases, such as polyps, tumors, or cancers, in a specific part of the human gastrointestinal tract. Objective This study aimed to develop a comprehensive computer-aided diagnosis tool to assist medical experts in diagnosing various types of gastrointestinal diseases. Methods Our proposed framework comprises a deep learning–based classification network followed by a retrieval method. In the first step, the classification network predicts the disease type for the current medical condition. Then, the retrieval part of the framework shows the relevant cases (endoscopic images) from the previous database. These past cases help the medical expert validate the current computer prediction subjectively, which ultimately results in better diagnosis and treatment. Results All the experiments were performed using 2 endoscopic data sets with a total of 52,471 frames and 37 different classes. The optimal performances obtained by our proposed method in accuracy, F1 score, mean average precision, and mean average recall were 96.19%, 96.99%, 98.18%, and 95.86%, respectively. The overall performance of our proposed diagnostic framework substantially outperformed state-of-the-art methods. Conclusions This study provides a comprehensive computer-aided diagnosis framework for identifying various types of gastrointestinal diseases. 
The results show the superiority of our proposed method over various other recent methods and illustrate its potential for clinical diagnosis and treatment. Our proposed network can be applicable to other classification domains in medical imaging, such as computed tomography scans, magnetic resonance imaging, and ultrasound sequences.
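The retrieval half of such a framework is typically a nearest-neighbor search over stored case embeddings. A minimal cosine-similarity sketch (the embedding dimensions and data here are synthetic, not the paper's features):

```python
import numpy as np

def retrieve_top_k(query, database, k=3):
    """Return indices of the k most similar stored feature vectors by
    cosine similarity: the retrieval step that surfaces past cases for
    the clinician to compare against the current prediction."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity to each case
    return np.argsort(-sims)[:k]       # highest similarity first

rng = np.random.default_rng(1)
database = rng.normal(size=(100, 16))            # stored case embeddings
query = database[42] + 0.01 * rng.normal(size=16)  # query near case 42
print(retrieve_top_k(query, database, k=3))
```

At scale, the brute-force matrix product would be replaced by an approximate nearest-neighbor index, but the ranking principle is the same.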
Affiliation(s)
- Muhammad Owais
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Muhammad Arsalan
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Tahir Mahmood
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Jin Kyu Kang
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
|
29
|
Mahmood T, Owais M, Noh KJ, Yoon HS, Haider A, Sultan H, Park KR. Artificial Intelligence-based Segmentation of Nuclei in Multi-organ Histopathology Images: Model Development and Validation (Preprint). JMIR Med Inform 2020. [DOI: 10.2196/24394]
|
30
|
Borgli H, Thambawita V, Smedsrud PH, Hicks S, Jha D, Eskeland SL, Randel KR, Pogorelov K, Lux M, Nguyen DTD, Johansen D, Griwodz C, Stensland HK, Garcia-Ceja E, Schmidt PT, Hammer HL, Riegler MA, Halvorsen P, de Lange T. HyperKvasir, a comprehensive multi-class image and video dataset for gastrointestinal endoscopy. Sci Data 2020; 7:283. [PMID: 32859981] [PMCID: PMC7455694] [DOI: 10.1038/s41597-020-00622-y]
Abstract
Artificial intelligence is currently a hot topic in medicine. However, medical data is often sparse and hard to obtain due to legal restrictions and the lack of medical personnel for the cumbersome, tedious process of manually labeling training data. These constraints make it difficult to develop systems for automatic analysis, like detecting disease or other lesions. In this respect, this article presents HyperKvasir, the largest image and video dataset of the gastrointestinal tract available today. The data were collected during real gastro- and colonoscopy examinations at Bærum Hospital in Norway and partly labeled by experienced gastrointestinal endoscopists. The dataset contains 110,079 images and 374 videos, and represents anatomical landmarks as well as pathological and normal findings. The total number of images and video frames together is around 1 million. Initial experiments demonstrate the potential benefits of artificial intelligence-based computer-assisted diagnosis systems. The HyperKvasir dataset can play a valuable role in developing better algorithms and computer-assisted examination systems not only for gastro- and colonoscopy, but also for other fields in medicine.
Affiliation(s)
- Hanna Borgli
- SimulaMet, Oslo, Norway; University of Oslo, Oslo, Norway
- Pia H Smedsrud
- SimulaMet, Oslo, Norway; University of Oslo, Oslo, Norway; Augere Medical AS, Oslo, Norway
- Steven Hicks
- SimulaMet, Oslo, Norway; Oslo Metropolitan University, Oslo, Norway
- Debesh Jha
- SimulaMet, Oslo, Norway; UIT The Arctic University of Norway, Tromsø, Norway
- Dag Johansen
- UIT The Arctic University of Norway, Tromsø, Norway
- Håkon K Stensland
- University of Oslo, Oslo, Norway; Simula Research Laboratory, Oslo, Norway
- Peter T Schmidt
- Department of Medicine (Solna), Karolinska Institutet, Stockholm, Sweden; Department of Medicine, Ersta hospital, Stockholm, Sweden
- Hugo L Hammer
- SimulaMet, Oslo, Norway; Oslo Metropolitan University, Oslo, Norway
- Pål Halvorsen
- SimulaMet, Oslo, Norway; Oslo Metropolitan University, Oslo, Norway
- Thomas de Lange
- Department of Medical Research, Bærum Hospital, Bærum, Norway; Augere Medical AS, Oslo, Norway; Medical Department, Sahlgrenska University Hospital-Mölndal, Mölndal, Sweden
|
31
|
Boers T, van der Putten J, Struyvenberg M, Fockens K, Jukema J, Schoon E, van der Sommen F, Bergman J, de With P. Improving Temporal Stability and Accuracy for Endoscopic Video Tissue Classification Using Recurrent Neural Networks. Sensors (Basel) 2020; 20:E4133. [PMID: 32722344] [PMCID: PMC7436238] [DOI: 10.3390/s20154133]
Abstract
Early Barrett's neoplasia is often missed due to subtle visual features and the inexperience of non-expert endoscopists with such lesions. While promising results have been reported on the automated detection of this type of early cancer in still endoscopic images, video-based detection that exploits the temporal domain remains an open problem. The temporally stable nature of video data in endoscopic examinations enables the development of a framework that can diagnose the imaged tissue class over time, thereby yielding a more robust and improved model for spatial predictions. We show that the introduction of Recurrent Neural Network nodes offers a more stable and accurate model for tissue classification, compared to classification on individual images. We have developed a customized ResNet18 feature extractor with four types of classifiers: Fully Connected (FC), Fully Connected with an averaging filter (FC Avg(n = 5)), Long Short Term Memory (LSTM) and a Gated Recurrent Unit (GRU). Experimental results are based on 82 pullback videos of the esophagus from 46 patients with high-grade dysplasia. Our results demonstrate that the LSTM classifier outperforms the FC, FC Avg(n = 5) and GRU classifiers, with an average accuracy of 85.9% compared to 82.2%, 83.0% and 85.6%, respectively. The benefit of our novel implementation for endoscopic tissue classification is the inclusion of spatio-temporal information for improved and robust decision making, and it is a first step towards full temporal learning of esophageal cancer detection in endoscopic video.
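The FC Avg(n = 5) baseline mentioned above is just a moving-average filter over per-frame predictions. A short sketch (with invented probability values) shows how such temporal smoothing suppresses single-frame misclassifications:

```python
import numpy as np

def smooth_probs(frame_probs, n=5):
    """Causal moving-average filter over per-frame class scores, in the
    spirit of the paper's FC Avg(n = 5) baseline: each frame's score is
    replaced by the mean of the last n frames (fewer at the start)."""
    frame_probs = np.asarray(frame_probs, dtype=float)
    out = np.empty_like(frame_probs)
    for t in range(len(frame_probs)):
        out[t] = frame_probs[max(0, t - n + 1): t + 1].mean(axis=0)
    return out

# A noisy single-class probability track: one spurious spike at t = 3.
p = np.array([0.1, 0.1, 0.1, 0.9, 0.1, 0.1, 0.1])
print(smooth_probs(p, n=5))
```

After smoothing, the isolated spike never crosses a 0.5 decision threshold, which is exactly the temporal-stability effect the paper measures (the recurrent classifiers achieve a learned, rather than fixed, version of this filtering).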
Affiliation(s)
- Tim Boers
- Department of Electrical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
- Joost van der Putten
- Department of Electrical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
- Maarten Struyvenberg
- Amsterdam University Medical Center, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands
- Kiki Fockens
- Amsterdam University Medical Center, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands
- Jelmer Jukema
- Amsterdam University Medical Center, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands
- Erik Schoon
- Catharina Hospital, Michelangelolaan 2, 5623 EJ Eindhoven, The Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
- Jacques Bergman
- Amsterdam University Medical Center, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands
- Peter de With
- Department of Electrical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands
|
32
|
Arsalan M, Baek NR, Owais M, Mahmood T, Park KR. Deep Learning-Based Detection of Pigment Signs for Analysis and Diagnosis of Retinitis Pigmentosa. Sensors (Basel) 2020; 20:E3454. [PMID: 32570943] [PMCID: PMC7349531] [DOI: 10.3390/s20123454]
Abstract
Ophthalmological analysis plays a vital role in the diagnosis of various eye diseases, such as glaucoma, retinitis pigmentosa (RP), and diabetic and hypertensive retinopathy. RP is a genetic retinal disorder that leads to progressive vision degeneration and initially causes night blindness. Currently, the most commonly applied method for diagnosing retinal diseases is optical coherence tomography (OCT)-based disease analysis. In contrast, fundus imaging-based disease diagnosis is considered a low-cost diagnostic solution for retinal diseases. This study focuses on the detection of RP from the fundus image, which is a crucial task because of the low quality of fundus images and non-cooperative image acquisition conditions. Automatic detection of pigment signs in fundus images can help ophthalmologists and medical practitioners in diagnosing and analyzing RP disorders. To accurately segment pigment signs for diagnostic purposes, we present an automatic RP segmentation network (RPS-Net), a deep learning-based semantic segmentation network specifically designed to accurately detect and segment pigment signs with fewer trainable parameters. Compared with conventional deep learning methods, the proposed method applies a feature enhancement policy through multiple dense connections between the convolutional layers, which enables the network to discriminate between normal and diseased eyes, and accurately segment the diseased area from the background. Because pigment spots can be very small and consist of very few pixels, RPS-Net provides fine segmentation, even in the case of degraded images, by importing high-frequency information from the preceding layers through concatenation inside and outside the encoder-decoder. To evaluate the proposed RPS-Net, experiments were performed based on 4-fold cross-validation using the publicly available Retinal Images for Pigment Signs (RIPS) dataset for detection and segmentation of retinal pigments. Experimental results show that RPS-Net achieved superior segmentation performance for RP diagnosis compared with state-of-the-art methods.
Affiliation(s)
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
|
33
|
Adamian N, Naunheim MR, Jowett N. An Open-Source Computer Vision Tool for Automated Vocal Fold Tracking From Videoendoscopy. Laryngoscope 2020; 131:E219-E225. [PMID: 32356903] [DOI: 10.1002/lary.28669]
Abstract
OBJECTIVES Contemporary clinical assessment of vocal fold adduction and abduction is qualitative and subjective. Herein is described a novel computer vision tool for automated quantitative tracking of vocal fold motion from videolaryngoscopy. The potential of this software as a diagnostic aid in unilateral vocal fold paralysis is demonstrated. STUDY DESIGN Case-control. METHODS A deep-learning algorithm was trained for vocal fold localization from videoendoscopy for automated frame-wise estimation of glottic opening angles. Algorithm accuracy was compared against manual expert markings. Maximum glottic opening angles between adults with normal movements (N = 20) and those with unilateral vocal fold paralysis (N = 20) were characterized. RESULTS Algorithm angle estimations demonstrated a correlation coefficient of 0.97 (P < .001) and mean absolute difference of 3.72° (standard deviation [SD], 3.49°) in comparison to manual expert markings. In comparison to those with normal movements, patients with unilateral vocal fold paralysis demonstrated significantly lower maximal glottic opening angles (mean 68.75° ± 11.82° vs. 49.44° ± 10.42°; difference, 19.31°; 95% confidence interval [CI] [12.17°-26.44°]; P < .001). Maximum opening angle less than 58.65° predicted unilateral vocal fold paralysis with a sensitivity of 0.85 and specificity of 0.85, with an area under the receiver operating characteristic curve of 0.888 (95% CI [0.784-0.991]; P < .001). CONCLUSION A user-friendly software tool for automated quantification of vocal fold movements from previously recorded videolaryngoscopy examinations is presented, termed automated glottic action tracking by artificial intelligence (AGATI). This tool may prove useful for diagnosis and outcomes tracking of vocal fold movement disorders. LEVEL OF EVIDENCE IV Laryngoscope, 131:E219-E225, 2021.
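Frame-wise glottic opening angles like those estimated here come down to the angle between the two vocal-fold lines meeting at the anterior commissure. A small sketch from three annotated 2-D points (the coordinates are invented; AGATI localizes such points with a deep network):

```python
import math

def glottic_angle(anterior, left_tip, right_tip):
    """Angle in degrees at the anterior commissure between the two
    vocal-fold lines, given three 2-D landmark points."""
    ax, ay = anterior
    v1 = (left_tip[0] - ax, left_tip[1] - ay)
    v2 = (right_tip[0] - ax, right_tip[1] - ay)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Folds opening symmetrically 30 degrees to each side of the midline:
left = (-math.sin(math.radians(30)), math.cos(math.radians(30)))
right = (math.sin(math.radians(30)), math.cos(math.radians(30)))
print(round(glottic_angle((0.0, 0.0), left, right), 6))  # 60.0
```

With the paper's reported cutoff, a maximum opening angle below 58.65 degrees would flag suspected unilateral vocal fold paralysis.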
Affiliation(s)
- Nat Adamian
- Surgical Photonics & Engineering Laboratory, Department of Otolaryngology - Head and Neck Surgery, Massachusetts Eye and Ear and Harvard Medical School, Boston, Massachusetts, U.S.A.
- Matthew R Naunheim
- Division of Laryngology, Department of Otolaryngology - Head and Neck Surgery, Massachusetts Eye and Ear and Harvard Medical School, Boston, Massachusetts, U.S.A.
- Nate Jowett
- Surgical Photonics & Engineering Laboratory, Department of Otolaryngology - Head and Neck Surgery, Massachusetts Eye and Ear and Harvard Medical School, Boston, Massachusetts, U.S.A.
|
34
|
Artificial Intelligence-Based Diagnosis of Cardiac and Related Diseases. J Clin Med 2020; 9:871. [PMID: 32209991] [PMCID: PMC7141544] [DOI: 10.3390/jcm9030871]
Abstract
Automatic chest anatomy segmentation plays a key role in computer-aided disease diagnosis, such as for cardiomegaly, pleural effusion, emphysema, and pneumothorax. Among these diseases, cardiomegaly is considered a perilous disease, involving a high risk of sudden cardiac death. It can be diagnosed early by an expert medical practitioner using a chest X-Ray (CXR) analysis. The cardiothoracic ratio (CTR) and transverse cardiac diameter (TCD) are the clinical criteria used to estimate the heart size for diagnosing cardiomegaly. Manual estimation of CTR and other diseases is a time-consuming process and requires significant work by the medical expert. Cardiomegaly and related diseases can be automatically estimated by accurate anatomical semantic segmentation of CXRs using artificial intelligence. Automatic segmentation of the lungs and heart from the CXRs is considered an intensive task owing to inferior quality images and intensity variations using nonideal imaging conditions. Although there are a few deep learning-based techniques for chest anatomy segmentation, most of them only consider single class lung segmentation with deep complex architectures that require a lot of trainable parameters. To address these issues, this study presents two multiclass residual mesh-based CXR segmentation networks, X-RayNet-1 and X-RayNet-2, which are specifically designed to provide fine segmentation performance with a few trainable parameters compared to conventional deep learning schemes. The proposed methods utilize semantic segmentation to support the diagnostic procedure of related diseases. To evaluate X-RayNet-1 and X-RayNet-2, experiments were performed with a publicly available Japanese Society of Radiological Technology (JSRT) dataset for multiclass segmentation of the lungs, heart, and clavicle bones; two other publicly available datasets, Montgomery County (MC) and Shenzhen X-Ray sets (SC), were evaluated for lung segmentation. 
The experimental results showed that X-RayNet-1 achieved fine performance for all datasets and X-RayNet-2 achieved competitive performance with a 75% parameter reduction.
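The cardiothoracic ratio (CTR) mentioned above follows directly from the segmentation masks such networks produce: the heart's maximal horizontal extent divided by the thoracic extent. A toy sketch with binary masks (here the thoracic width is approximated by the lung mask's extent, and the masks are synthetic):

```python
import numpy as np

def cardiothoracic_ratio(heart_mask, thorax_mask):
    """CTR from binary segmentation masks: widest horizontal extent of
    the heart divided by the widest horizontal extent of the thorax."""
    def width(mask):
        cols = np.flatnonzero(mask.any(axis=0))  # columns containing mask
        return cols[-1] - cols[0] + 1
    return width(heart_mask) / width(thorax_mask)

# Toy masks: heart spans 5 columns, thorax spans 10.
heart = np.zeros((1, 12), dtype=bool); heart[0, 3:8] = True
thorax = np.zeros((1, 12), dtype=bool); thorax[0, 1:11] = True
print(cardiothoracic_ratio(heart, thorax))  # 0.5
```

A CTR above roughly 0.5 is the conventional radiographic criterion for cardiomegaly, which is why accurate multiclass segmentation of heart and lungs feeds directly into the diagnosis.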
|
35
|
Arsalan M, Owais M, Mahmood T, Cho SW, Park KR. Aiding the Diagnosis of Diabetic and Hypertensive Retinopathy Using Artificial Intelligence-Based Semantic Segmentation. J Clin Med 2019; 8:E1446. [PMID: 31514466] [PMCID: PMC6780110] [DOI: 10.3390/jcm8091446]
Abstract
Automatic segmentation of retinal images is an important task in computer-assisted medical image analysis for the diagnosis of diseases such as hypertension, diabetic and hypertensive retinopathy, and arteriosclerosis. Among these diseases, diabetic retinopathy, a leading cause of vision loss, can be diagnosed early through the detection of retinal vessels. The manual detection of these retinal vessels is a time-consuming process that can be automated with the help of artificial intelligence with deep learning. The detection of vessels is difficult due to intensity variation and noise from non-ideal imaging. Although there are deep learning approaches for vessel segmentation, these methods require many trainable parameters, which increase the network complexity. To address these issues, this paper presents a dual-residual-stream-based vessel segmentation network (Vess-Net), which is not as deep as conventional semantic segmentation networks, but provides good segmentation with few trainable parameters and layers. The method takes advantage of artificial intelligence for semantic segmentation to aid the diagnosis of retinopathy. To evaluate the proposed Vess-Net method, experiments were conducted with three publicly available datasets for vessel segmentation: digital retinal images for vessel extraction (DRIVE), the Child Heart and Health Study in England (CHASE-DB1), and structured analysis of the retina (STARE). Experimental results show that Vess-Net achieved superior performance for all datasets, with sensitivity (Se), specificity (Sp), area under the curve (AUC), and accuracy (Acc) of 80.22%, 98.1%, 98.2%, and 96.55% for DRIVE; 82.06%, 98.41%, 98.0%, and 97.26% for CHASE-DB1; and 85.26%, 97.91%, 98.83%, and 96.97% for the STARE dataset.
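The reported Se/Sp/Acc figures are standard pixel-wise confusion-matrix metrics. For reference, a tiny helper (the counts in the example are invented, not from the paper):

```python
def se_sp_acc(tp, fp, tn, fn):
    """Pixel-level sensitivity, specificity, and accuracy from a binary
    confusion matrix, the metrics reported for vessel segmentation."""
    se = tp / (tp + fn)                    # recall on vessel pixels
    sp = tn / (tn + fp)                    # recall on background pixels
    acc = (tp + tn) / (tp + fp + tn + fn)  # overall fraction correct
    return se, sp, acc

print(se_sp_acc(tp=80, fp=5, tn=900, fn=15))
```

Because background pixels vastly outnumber vessel pixels in fundus images, accuracy alone can look high even for poor masks; reporting Se and Sp separately (and AUC over thresholds) guards against that imbalance.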
Affiliation(s)
- Muhammad Arsalan
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
- Muhammad Owais
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
- Tahir Mahmood
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
- Se Woon Cho
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
|