1
Ning Y, Li J, Sun S. Advancing Visual Perception Through VCANet-Crossover Osprey Algorithm: Integrating Visual Technologies. J Imaging Inform Med 2025. PMID: 40180632; DOI: 10.1007/s10278-025-01467-w.
Abstract
Diabetic retinopathy (DR) is a significant vision-threatening condition, necessitating accurate and efficient automated screening methods. Traditional deep learning (DL) models struggle to detect subtle lesions and also suffer from high computational complexity. Existing models primarily mimic the primary visual cortex (V1) of the human visual system, neglecting other higher-order processing regions. To overcome these limitations, this research introduces the vision core-adapted network-based crossover osprey algorithm (VCANet-COP) for subtle lesion recognition with better computational efficiency. The model integrates sparse autoencoders (SAEs) to extract vascular structures and lesion-specific features at a pixel level for improved abnormality detection. The front-end network in the VCANet emulates the V1, V2, V4, and inferotemporal (IT) regions to derive subtle lesions effectively and improve lesion detection accuracy. Additionally, the COP algorithm, leveraging the osprey optimization algorithm (OOA) with a crossover strategy, optimizes hyperparameters and network configurations to ensure better computational efficiency, faster convergence, and enhanced performance in lesion recognition. The experimental assessment of the VCANet-COP model on multiple DR datasets, namely Diabetic_Retinopathy_Data (DR-Data), the Structured Analysis of the Retina (STARE) dataset, the Indian Diabetic Retinopathy Image Dataset (IDRiD), the Digital Retinal Images for Vessel Extraction (DRIVE) dataset, and the Retinal Fundus Multi-disease Image Dataset (RFMID), demonstrates superior performance over baseline works (EDLDR, FFU_Net, LSTM_MFORG, fundus-DeepNet, and CNN_SVD), achieving average outcomes of 98.14% accuracy, 97.9% sensitivity, 98.08% specificity, 98.4% precision, 98.1% F1-score, 96.2% kappa coefficient, 2.0% false positive rate (FPR), 2.1% false negative rate (FNR), and 1.5-s execution time. By addressing critical limitations, VCANet-COP provides a scalable and robust solution for real-world DR screening and clinical decision support.
Affiliation(s)
- Yuwen Ning
- Teaching and Research Support Center, Air Force Medical University, Xi'an, 710032, China.
- Jiaxin Li
- Information Management Department, 986th Hospital of PLAAF, Xi'an, 710054, Shaanxi, China
- Shuyi Sun
- Network and Data Center, Northwest University, Xi'an, 710127, Shaanxi, China
2
Chen C, Mat Isa NA, Liu X. A review of convolutional neural network based methods for medical image classification. Comput Biol Med 2025; 185:109507. PMID: 39631108; DOI: 10.1016/j.compbiomed.2024.109507.
Abstract
This study systematically reviews CNN-based medical image classification methods. We surveyed 149 of the latest and most important papers published to date and conducted an in-depth analysis of the methods used therein. Based on the selected literature, we organized this review systematically. First, the development and evolution of CNN in the field of medical image classification are analyzed. Subsequently, we provide an in-depth overview of the main techniques of CNN applied to medical image classification, which is also the current research focus in this field, including data preprocessing, transfer learning, CNN architectures, and explainability, and their role in improving classification accuracy and efficiency. In addition, this overview summarizes the main public datasets for various diseases. Although CNN has great potential in medical image classification tasks and has achieved good results, clinical application is still difficult. Therefore, we conclude by discussing the main challenges faced by CNNs in medical image analysis and pointing out future research directions to address these challenges. This review will help researchers with their future studies and can promote the successful integration of deep learning into clinical practice and smart medical systems.
Affiliation(s)
- Chao Chen
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia; School of Automation and Information Engineering, Sichuan University of Science and Engineering, Yibin, 644000, China
- Nor Ashidi Mat Isa
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
- Xin Liu
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
3
Banumathy D, Angamuthu S, Balaji P, Ajay Chaurasia M. Revolutionizing diabetic eye disease detection: retinal image analysis with cutting-edge deep learning techniques. PeerJ Comput Sci 2024; 10:e2186. PMID: 39650355; PMCID: PMC11623275; DOI: 10.7717/peerj-cs.2186.
Abstract
Globally, glaucoma is a leading cause of visual impairment and vision loss, emphasizing the critical need for early diagnosis and intervention. This research explores the application of deep learning for automated glaucoma diagnosis using retinal fundus photographs. We introduce a novel cross-sectional optic nerve head (ONH) feature derived from optical coherence tomography (OCT) images to enhance existing diagnostic procedures. Our approach leverages deep learning to automatically detect key optic disc characteristics, eliminating the need for manual feature engineering. The deep learning classifier then categorizes images as normal or abnormal, streamlining the diagnostic process. Deep learning techniques have proven effective in classifying and segmenting retinal fundus images, enabling the analysis of a growing number of images. This study introduces a novel mixed loss function that combines the strengths of focal loss and correntropy loss to handle complex biomedical data with class imbalance and outliers, particularly in OCT images. We further refine a multi-task deep learning model that capitalizes on similarities across major eye-fundus activities and metrics for glaucoma detection. The model is rigorously evaluated on a real-world ophthalmic dataset, achieving impressive accuracy, specificity, and sensitivity of 100%, 99.8%, and 99.2%, respectively, surpassing state-of-the-art methods. These promising results underscore the potential of our deep learning algorithm for automated glaucoma diagnosis, with significant implications for clinical applications. By simultaneously addressing segmentation and classification challenges, our approach demonstrates its effectiveness in accurately identifying ocular diseases, paving the way for improved glaucoma diagnosis and early intervention.
Affiliation(s)
- Banumathy D
- Department of Computer Science and Engineering, Paavai Engineering College, Namakkal, Tamilnadu, India
- Swathi Angamuthu
- Department of Mathematics, Faculty of Science, University of Hradec Kralove, Hradec Kralove, Czech Republic
- Prasanalakshmi Balaji
- Department of Computer Science, College of Computer Science, King Khalid University, Abha, Saudi Arabia
4
Madduri VK, Rao BS. Detection and diagnosis of diabetic eye diseases using two phase transfer learning approach. PeerJ Comput Sci 2024; 10:e2135. PMID: 39314692; PMCID: PMC11419640; DOI: 10.7717/peerj-cs.2135.
Abstract
Background: Early diagnosis and treatment of diabetic eye disease (DED) improve prognosis and lessen the possibility of permanent vision loss. Screening of retinal fundus images is a significant process widely employed for diagnosing patients with DED or other eye problems. However, considerable time and effort are required to screen these images manually. Methods: Deep learning approaches in machine learning have attained superior performance for the binary classification of healthy and pathological retinal fundus images. In contrast, multi-class retinal eye disease classification is still a difficult task. Therefore, a two-phase transfer learning approach is developed in this research for automated classification and segmentation of multi-class DED pathologies. Results: In the first step, a Modified ResNet-50 model pre-trained on the ImageNet dataset was transferred and trained to classify normal, diabetic macular edema (DME), diabetic retinopathy, glaucoma, and cataracts. In the second step, the defective region of multiple eye diseases is segmented using the transfer learning-based DenseUNet model. The suggested model is assessed using several retinal fundus images from the publicly accessible dataset. Our proposed model for multi-class classification achieves a maximum specificity of 99.73%, a sensitivity of 99.54%, and an accuracy of 99.67%.
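The two-phase pipeline summarized above follows a standard transfer-learning recipe. A minimal PyTorch sketch of the first (classification) phase is given below; the dataset path, the five class labels, and the training hyperparameters are illustrative assumptions rather than the paper's actual configuration, and the second (DenseUNet segmentation) phase is omitted.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 5                     # normal, DME, DR, glaucoma, cataract (assumed labels)
DATA_DIR = "fundus/train"           # hypothetical folder-per-class dataset

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet stats
])
train_ds = datasets.ImageFolder(DATA_DIR, transform=transform)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Phase 1: transfer an ImageNet-pretrained ResNet-50 and replace its classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                                   # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)      # new, trainable head

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):                                        # short warm-up of the new head
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```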
Affiliation(s)
- Vamsi Krishna Madduri
- School of Computer Science and Engineering (SCOPE), Vellore Institute of Technology-AP University, Amaravati, Andhra Pradesh, India
- Battula Srinivasa Rao
- School of Computer Science and Engineering (SCOPE), Vellore Institute of Technology-AP University, Amaravati, Andhra Pradesh, India
5
Jeribi F, Nazir T, Nawaz M, Javed A, Alhameed M, Tahir A. Recognition of diabetic retinopathy and macular edema using deep learning. Med Biol Eng Comput 2024; 62:2687-2701. PMID: 38684593; DOI: 10.1007/s11517-024-03105-z.
Abstract
Diabetic retinopathy (DR) and diabetic macular edema (DME) are both serious eye conditions associated with diabetes and, if left untreated, they can lead to permanent blindness. Traditional methods for screening these conditions rely on manual image analysis by experts, which can be time-consuming and costly due to the scarcity of such experts. To overcome the aforementioned challenges, we present the Modified CornerNet approach with DenseNet-100. This system aims to localize and classify lesions associated with DR and DME. To train our model, we first generate annotations for input samples. These annotations include information about the location and type of lesions within the retinal images. DenseNet-100 is a deep CNN used for feature extraction, and CornerNet is a one-stage object detection model. CornerNet is known for its ability to accurately localize small objects, which makes it suitable for detecting lesions in retinal images. We assessed our technique on two challenging datasets, EyePACS and IDRiD. These datasets contain a diverse range of retinal images, which is important to estimate the performance of our model. Further, the proposed model is also tested in the cross-corpus scenario on two challenging datasets named APTOS-2019 and Diaretdb1 to assess the generalizability of our system. According to the accomplished analysis, our method outperformed the latest approaches in terms of both qualitative and quantitative results. The ability to effectively localize small abnormalities and handle overfitting challenges is highlighted as a key strength of the suggested framework, which can assist practitioners in the timely recognition of such eye ailments.
Affiliation(s)
- Fathe Jeribi
- College of Engineering and Computer Science, Jazan University, 45142, Jazan, Saudi Arabia
- Tahira Nazir
- Department of Computer Science, Riphah International University, Gulberg Green Campus, Islamabad, Pakistan
- Marriam Nawaz
- Department of Software Engineering, University of Engineering and Technology-Taxila, Punjab, 47050, Pakistan
- Ali Javed
- Department of Software Engineering, University of Engineering and Technology-Taxila, Punjab, 47050, Pakistan
- Mohammed Alhameed
- College of Engineering and Computer Science, Jazan University, 45142, Jazan, Saudi Arabia
- Ali Tahir
- College of Engineering and Computer Science, Jazan University, 45142, Jazan, Saudi Arabia
6
Rafay A, Asghar Z, Manzoor H, Hussain W. EyeCNN: exploring the potential of convolutional neural networks for identification of multiple eye diseases through retinal imagery. Int Ophthalmol 2023; 43:3569-3586. PMID: 37291412; DOI: 10.1007/s10792-023-02764-5.
Abstract
BACKGROUND: The eyes are the most important part of the human body, as they are directly connected to the brain and help us perceive imagery in daily life, whereas eye diseases are mostly ignored and underestimated until it is too late. Diagnosing eye disorders through manual examination by a physician can be very costly and time-consuming. OBJECTIVE: Thus, to tackle this, a novel method, namely EyeCNN, is proposed for identifying eye diseases from retinal images using EfficientNet B3. METHODS: A dataset of retinal imagery of three diseases, i.e., diabetic retinopathy, glaucoma, and cataract, was used to train 12 convolutional networks, with EfficientNet B3 being the top-performing model of all 12 with a testing accuracy of 94.30%. RESULTS: After preprocessing of the dataset and training of the models, various experiments were performed to see where our model stands. The evaluation was performed using well-defined measures, and the final model was deployed on the Streamlit server as a prototype for public usage. The proposed model has the potential to help diagnose eye diseases early, which can facilitate timely treatment. CONCLUSION: The use of EyeCNN for classifying eye diseases has the potential to aid ophthalmologists in diagnosing conditions accurately and efficiently. This research may also lead to a deeper understanding of these diseases and may lead to new treatments. The webserver of EyeCNN can be accessed at https://abdulrafay97-eyecnn-app-rd9wgz.streamlit.app/ .
Affiliation(s)
- Abdul Rafay
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
- Zaeem Asghar
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
- Hamza Manzoor
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
- Waqar Hussain
- Department of Artificial Intelligence, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
7
Li Z, Han Y, Yang X. Multi-Fundus Diseases Classification Using Retinal Optical Coherence Tomography Images with Swin Transformer V2. J Imaging 2023; 9:203. PMID: 37888310; PMCID: PMC10607340; DOI: 10.3390/jimaging9100203.
Abstract
Fundus diseases cause damage to any part of the retina. Untreated fundus diseases can lead to severe vision loss and even blindness. Analyzing optical coherence tomography (OCT) images using deep learning methods can provide early screening and diagnosis of fundus diseases. In this paper, a deep learning model based on Swin Transformer V2 was proposed to diagnose fundus diseases rapidly and accurately. In this method, calculating self-attention within local windows was used to reduce computational complexity and improve its classification efficiency. Meanwhile, the PolyLoss function was introduced to further improve the model's accuracy, and heat maps were generated to visualize the predictions of the model. Two independent public datasets, OCT 2017 and OCT-C8, were applied to train the model and evaluate its performance, respectively. The results showed that the proposed model achieved an average accuracy of 99.9% on OCT 2017 and 99.5% on OCT-C8, performing well in the automatic classification of multi-fundus diseases using retinal OCT images.
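The PolyLoss mentioned above, in its simplest Poly-1 form, adds a polynomial correction term to standard cross-entropy. The short PyTorch sketch below illustrates the idea; the epsilon coefficient, class count, and the suggested `timm` backbone name are assumptions for illustration rather than the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def poly1_cross_entropy(logits: torch.Tensor, targets: torch.Tensor,
                        epsilon: float = 1.0) -> torch.Tensor:
    """Poly-1 loss: cross-entropy plus epsilon * (1 - p_t), where p_t is the
    softmax probability assigned to the true class."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = F.softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return (ce + epsilon * (1.0 - pt)).mean()

# A Swin Transformer V2 backbone could be plugged in from timm, e.g.:
#   import timm
#   model = timm.create_model("swinv2_tiny_window8_256", pretrained=True, num_classes=8)
logits = torch.randn(4, 8)                 # dummy batch: 4 OCT images, 8 classes (as in OCT-C8)
targets = torch.tensor([0, 3, 7, 2])
loss = poly1_cross_entropy(logits, targets, epsilon=1.0)
print(loss.item())
```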
Affiliation(s)
- Zhenwei Li
- College of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang 471023, China; (Y.H.); (X.Y.)
8
Albahli S, Nazir T. A Circular Box-Based Deep Learning Model for the Identification of Signet Ring Cells from Histopathological Images. Bioengineering (Basel) 2023; 10:1147. PMID: 37892876; PMCID: PMC10604551; DOI: 10.3390/bioengineering10101147.
Abstract
Signet ring cell (SRC) carcinoma is a particularly serious type of cancer that is a leading cause of death all over the world. SRC carcinoma has a more deceptive onset than other carcinomas and is mostly encountered in its later stages. Thus, the recognition of SRCs at their initial stages is a challenge because of different variants and sizes and illumination changes. The recognition process of SRCs at their early stages is costly because of the requirement for medical experts. A timely diagnosis is important because the level of the disease determines the severity, cure, and survival rate of victims. To tackle the current challenges, a deep learning (DL)-based methodology is proposed in this paper, i.e., custom CircleNet with ResNet-34 for SRC recognition and classification. We chose this method because of the circular shapes of SRCs and achieved better performance due to the CircleNet method. We utilized a challenging dataset for experimentation and performed augmentation to increase the dataset samples. The experiments were conducted using 35,000 images and attained 96.40% accuracy. We performed a comparative analysis and confirmed that our method outperforms the other methods.
Affiliation(s)
- Saleh Albahli
- Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
- Tahira Nazir
- Faculty of Computing, Riphah International University, Islamabad 44600, Pakistan
9
Liu YF, Ji YK, Fei FQ, Chen NM, Zhu ZT, Fei XZ. Research progress in artificial intelligence assisted diabetic retinopathy diagnosis. Int J Ophthalmol 2023; 16:1395-1405. PMID: 37724288; PMCID: PMC10475636; DOI: 10.18240/ijo.2023.09.05.
Abstract
Diabetic retinopathy (DR) is one of the most common retinal vascular diseases and one of the main causes of blindness worldwide. Early detection and treatment can effectively delay vision decline and even blindness in patients with DR. In recent years, artificial intelligence (AI) models constructed by machine learning and deep learning (DL) algorithms have been widely used in ophthalmology research, especially in diagnosing and treating ophthalmic diseases, particularly DR. Regarding DR, AI has mainly been used in its diagnosis, grading, and lesion recognition and segmentation, and good research and application results have been achieved. This study summarizes the research progress in AI models based on machine learning and DL algorithms for DR diagnosis and discusses some limitations and challenges in AI research.
Affiliation(s)
- Yun-Fang Liu
- Department of Ophthalmology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
- Yu-Ke Ji
- Eye Hospital, Nanjing Medical University, Nanjing 210000, Jiangsu Province, China
- Fang-Qin Fei
- Department of Endocrinology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
- Nai-Mei Chen
- Department of Ophthalmology, Huai'an Hospital of Huai'an City, Huai'an 223000, Jiangsu Province, China
- Zhen-Tao Zhu
- Department of Ophthalmology, Huai'an Hospital of Huai'an City, Huai'an 223000, Jiangsu Province, China
- Xing-Zhen Fei
- Department of Endocrinology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
10
Chen LY, Hsu SM, Wang JC, Yang TH, Chuang HS. Photonic crystal enhanced immunofluorescence biosensor integrated with a lateral flow microchip: Toward rapid tear-based diabetic retinopathy screening. Biomicrofluidics 2023; 17:044102. PMID: 37484814; PMCID: PMC10361775; DOI: 10.1063/5.0158780.
Abstract
Diabetic retinopathy (DR) has accounted for major loss of vision in chronic diabetes. Although clinical statistics have shown that early screening can delay or improve the deterioration of the disease, the screening rate remains low worldwide because of the great inconvenience of conventional ophthalmoscopic examination. Instead, tear fluid, which contains rich proteins owing to direct contact with the eyeball, is an ideal substitute for monitoring vision health. Herein, an immunofluorescence biosensor enhanced by a photonic crystal (PhC) is presented to handle the trace proteins suspended in the tear fluid. The PhC was constructed from self-assembled nanoparticles with a thin layer of gold coated on top. The PhC substrate was then conjugated with antibodies and placed in a microchannel. When the capillary-driven tear sample flowed over the PhC substrate, the immunoassay enabled the formation of a sandwich antibody-antigen-antibody configuration for PhC-enhanced immunofluorescence. The use of the PhC resulted in a concentration enhancement of more than tenfold compared to non-PhC, while achieving an equivalent signal intensity. The limit of detection for the target biomarker, lipocalin-1 (LCN-1), reached nearly 3 μg/ml, and the turnaround time of each detection was 15 min. Finally, a preclinical evaluation was conducted using ten tear samples. A clear trend was observed, showing that the concentrations of LCN-1 were at least twofold higher in individuals with chronic diabetes or DR than in healthy individuals. This trend was consistent with their medical conditions. The results provided a direct proof-of-concept for the proposed PhC biosensor in rapid tear-based DR screening.
Affiliation(s)
- Li-Ying Chen
- Department of Biomedical Engineering, National Cheng Kung University, Tainan, Taiwan
- Sheng-Min Hsu
- Department of Ophthalmology, National Cheng Kung University Hospital, Tainan 701, Taiwan
11
Feng H, Chen J, Zhang Z, Lou Y, Zhang S, Yang W. A bibliometric analysis of artificial intelligence applications in macular edema: exploring research hotspots and frontiers. Front Cell Dev Biol 2023; 11:1174936. PMID: 37255600; PMCID: PMC10225517; DOI: 10.3389/fcell.2023.1174936.
Abstract
Background: Artificial intelligence (AI) is used in ophthalmological disease screening and diagnostics, medical image diagnostics, and predicting late-disease progression rates. We reviewed all AI publications associated with macular edema (ME) research between 2011 and 2022 and performed modeling, quantitative, and qualitative investigations. Methods: On 1st February 2023, we screened the Web of Science Core Collection for AI applications related to ME, from which 297 studies were identified and analyzed (2011-2022). We collected information on: publications, institutions, country/region, keywords, journal name, references, and research hotspots. Literature clustering networks and frontier knowledge bases were investigated using bibliometrix-BiblioShiny, VOSviewer, and CiteSpace bibliometric platforms. We used the R "bibliometrix" package to synopsize our observations, enumerate keywords, visualize collaboration networks between countries/regions, and generate a topic trends plot. VOSviewer was used to examine cooperation between institutions and identify citation relationships between journals. We used CiteSpace to identify clustering keywords over the timeline and identify keywords with the strongest citation bursts. Results: In total, 47 countries published AI studies related to ME; the United States had the highest H-index, thus the greatest influence. China and the United States cooperated most closely of all countries. Also, 613 institutions generated publications; the Medical University of Vienna had the highest number of studies. This publication record and H-index meant the university was the most influential in the ME field. Reference clusters were also categorized into 10 headings: retinal Optical Coherence Tomography (OCT) fluid detection, convolutional network models, deep learning (DL)-based single-shot predictions, retinal vascular disease, diabetic retinopathy (DR), convolutional neural networks (CNNs), automated macular pathology diagnosis, dry age-related macular degeneration (DARMD), class weight, and advanced DL architecture systems. Frontier keywords were represented by diabetic macular edema (DME) (2021-2022). Conclusion: Our review of the AI-related ME literature was comprehensive, systematic, and objective, and identified future trends and current hotspots. With increased DL outputs, the ME research focus has gradually shifted from manual ME examinations to automatic ME detection and associated symptoms. In this review, we present a comprehensive and dynamic overview of AI in ME and identify future research areas.
Affiliation(s)
- Haiwen Feng
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang, Liaoning, China
- Jiaqi Chen
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang, Liaoning, China
- Zhichang Zhang
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Yan Lou
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Shaochong Zhang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Weihua Yang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
12
Manikandan S, Raman R, Rajalakshmi R, Tamilselvi S, Surya RJ. Deep learning-based detection of diabetic macular edema using optical coherence tomography and fundus images: A meta-analysis. Indian J Ophthalmol 2023; 71:1783-1796. PMID: 37203031; PMCID: PMC10391382; DOI: 10.4103/ijo.ijo_2614_22.
Abstract
Diabetic macular edema (DME) is an important cause of visual impairment in the working-age group. Deep learning methods have been developed to detect DME from two-dimensional retinal images and also from optical coherence tomography (OCT) images. The performances of these algorithms vary and often create doubt regarding their clinical utility. In resource-constrained health-care systems, these algorithms may play an important role in determining referral and treatment. The survey provides a diversified overview of macular edema detection methods, including cutting-edge research, with the objective of providing pertinent information to research groups, health-care professionals, and diabetic patients about the applications of deep learning in the retinal image detection and classification process. Electronic databases such as PubMed, IEEE Xplore, BioMed, and Google Scholar were searched from inception to March 31, 2022, and the reference lists of published papers were also searched. The study followed the preferred reporting items for systematic review and meta-analysis (PRISMA) reporting guidelines. Various deep learning models were examined with respect to their precision, epochs, capacity to detect anomalies with less training data, underlying concepts, and application challenges. A total of 53 studies were included that evaluated the performance of deep learning models in a total of 1,414,169 OCT volumes, B-scans, and patients, and 472,328 fundus images. The overall area under the receiver operating characteristic curve (AUROC) was 0.9727. The overall sensitivity for detecting DME using OCT images was 96% (95% confidence interval [CI]: 0.94-0.98). The overall sensitivity for detecting DME using fundus images was 94% (95% CI: 0.90-0.96).
Affiliation(s)
- Suchetha Manikandan
- Professor & Deputy Director, Centre for Healthcare Advancement, Innovation & Research, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- Rajiv Raman
- Senior Consultant, Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, Tamil Nadu, India
- Ramachandran Rajalakshmi
- Head Medical Retina, Dr. Mohan's Diabetes Specialties Centre and Madras Diabetes Research Foundation, Chennai, Tamil Nadu, India
- S Tamilselvi
- Junior Research Fellow, Centre for Healthcare Advancement, Innovation & Research, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- R Janani Surya
- Research Associate, Vision Research Foundation, Chennai, Tamil Nadu, India
13
Heo SP, Choi H. Development of a robust eye exam diagnosis platform with a deep learning model. Technol Health Care 2023; 31:423-428. PMID: 37066941; DOI: 10.3233/thc-236036.
Abstract
BACKGROUND: Eye exam diagnosis is one of the early detection methods. However, such a method is dependent on expensive and unpredictable optical equipment. OBJECTIVE: The eye exam can instead be performed through an optometric lens attached to a smartphone, with the diseases read automatically. Therefore, this study aims to provide a stable and predictable model with a given dataset representing the target group domain and to develop a new method to identify eye disease with accurate and stable performance. METHODS: ResNet-18 models pre-trained on ImageNet data composed of 1,000 everyday objects were employed to learn the dataset's features and were validated on a test dataset separated from the training dataset. RESULTS: The proposed model showed high training and validation accuracy values of 99.1% and 96.9%, respectively. CONCLUSION: The designed model could produce robust and stable eye disease discrimination performance.
Affiliation(s)
- Sung-Phil Heo
- Department of Information and Telecommunication Engineering, Gangneung-Wonju National University, Wonju, Korea
- Hojong Choi
- Department of Electronic Engineering, Gachon University, Seongnam, Korea
14
Diabetic Retinopathy and Diabetic Macular Edema Detection Using Ensemble Based Convolutional Neural Networks. Diagnostics (Basel) 2023; 13:1001. PMID: 36900145; PMCID: PMC10000375; DOI: 10.3390/diagnostics13051001.
Abstract
Diabetic retinopathy (DR) and diabetic macular edema (DME) are forms of eye illness caused by diabetes that affect the blood vessels in the eyes, with the area occupied by lesions of varied extent determining the disease burden. This is among the most common causes of visual impairment in the working population. Various factors have been discovered to play an important role in a person's development of this condition. Among the essential elements at the top of the list are anxiety and long-term diabetes. If not detected early, this illness might result in permanent eyesight loss. The damage can be reduced or avoided if it is recognized ahead of time. Unfortunately, due to the time-consuming and arduous nature of the diagnostic process, it is harder to identify the prevalence of this condition. Skilled doctors manually review digital color images to look for damage produced by vascular anomalies, the most common complication of diabetic retinopathy. Even though this procedure is reasonably accurate, it is quite pricey. The delays highlight the necessity for diagnosis to be automated, which will have a considerable positive impact on the health sector. The use of AI in diagnosing the disease has yielded promising and dependable findings in recent years, which is the impetus for this publication. This article used an ensemble convolutional neural network (ECNN) to diagnose DR and DME automatically, with an accuracy of 99 percent. This result was achieved using preprocessing, blood vessel segmentation, feature extraction, and classification. For contrast enhancement, the Harris hawks optimization (HHO) technique is presented. Finally, the experiments were conducted on two datasets, IDRiD and Messidor, evaluating accuracy, precision, recall, F-score, computational time, and error rate.
15
Zheng X, Tang P, Ai L, Liu D, Zhang Y, Wang B. White blood cell detection using saliency detection and CenterNet: A two-stage approach. J Biophotonics 2023; 16:e202200174. PMID: 36101492; DOI: 10.1002/jbio.202200174.
Abstract
White blood cell (WBC) detection plays a vital role in peripheral blood smear analysis. However, cell detection remains a challenging task due to multi-cell adhesion and varying staining and imaging conditions. Owing to the powerful feature extraction capability of deep learning, object detection methods based on convolutional neural networks (CNNs) have been widely applied in medical image analysis. Nevertheless, CNN training is time-consuming and inaccurate, especially for large-scale blood smear images, where most of the image is background. To address the problem, we propose a two-stage approach that treats WBC detection as a small salient object detection task. In the first saliency detection stage, we use Itti's visual attention model to locate the regions of interest (ROIs), based on the proposed adaptive center-surround difference (ACSD) operator. In the second WBC detection stage, the modified CenterNet model is applied to ROI sub-images to obtain a more accurate localization and classification result for each WBC. Experimental results showed that our method exceeds the performance of several existing methods on two different data sets, and achieves a state-of-the-art mAP of over 98.8%.
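As a rough illustration of the first stage, a generic center-surround saliency map can be built from a difference of Gaussians and thresholded into candidate ROIs. This is only a simplified stand-in for the paper's adaptive center-surround difference (ACSD) operator, whose exact formulation is not reproduced here; the kernel scales and threshold are arbitrary.

```python
import cv2
import numpy as np

def center_surround_rois(image_bgr: np.ndarray, thresh: float = 0.5):
    """Crude saliency stage: a difference of Gaussians ("center" minus "surround"),
    thresholded and turned into connected-component bounding boxes."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    center = cv2.GaussianBlur(gray, (0, 0), sigmaX=2)        # fine scale
    surround = cv2.GaussianBlur(gray, (0, 0), sigmaX=16)     # coarse scale
    saliency = np.abs(center - surround)
    saliency /= saliency.max() + 1e-8
    mask = (saliency > thresh).astype(np.uint8)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    # Each stats row holds (x, y, w, h, area); label 0 is the background.
    return [tuple(int(v) for v in stats[i, :4]) for i in range(1, n)]

# Each ROI crop would then be passed to a CenterNet-style detector (the second stage)
# for precise WBC localization and classification.
```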
Affiliation(s)
- Xin Zheng
- School of Computer and Information, Anqing Normal University, Anqing, China
- The University Key Laboratory of Intelligent Perception and Computing of Anhui Province, Anqing Normal University, Anqing, China
- Pan Tang
- School of Computer and Information, Anqing Normal University, Anqing, China
- Liefu Ai
- School of Computer and Information, Anqing Normal University, Anqing, China
- The University Key Laboratory of Intelligent Perception and Computing of Anhui Province, Anqing Normal University, Anqing, China
- Deyang Liu
- School of Computer and Information, Anqing Normal University, Anqing, China
- The University Key Laboratory of Intelligent Perception and Computing of Anhui Province, Anqing Normal University, Anqing, China
- Youzhi Zhang
- School of Computer and Information, Anqing Normal University, Anqing, China
- The University Key Laboratory of Intelligent Perception and Computing of Anhui Province, Anqing Normal University, Anqing, China
- Boyang Wang
- School of Computer and Information, Anqing Normal University, Anqing, China
16
Sebastian A, Elharrouss O, Al-Maadeed S, Almaadeed N. A Survey on Deep-Learning-Based Diabetic Retinopathy Classification. Diagnostics (Basel) 2023; 13:345. PMID: 36766451; PMCID: PMC9914068; DOI: 10.3390/diagnostics13030345.
Abstract
The number of people who suffer from diabetes in the world has been considerably increasing recently. It affects people of all ages. People who have had diabetes for a long time are affected by a condition called Diabetic Retinopathy (DR), which damages the eyes. Automatic detection using new technologies for early detection can help avoid complications such as the loss of vision. Currently, with the development of Artificial Intelligence (AI) techniques, especially Deep Learning (DL), DL-based methods are widely preferred for developing DR detection systems. For this purpose, this study surveyed the existing literature on diabetic retinopathy diagnoses from fundus images using deep learning and provides a brief description of the current DL techniques that are used by researchers in this field. After that, this study lists some of the commonly used datasets. This is followed by a performance comparison of these reviewed methods with respect to some commonly used metrics in computer vision tasks.
17
Nawaz M, Nazir T, Baili J, Khan MA, Kim YJ, Cha JH. CXray-EffDet: Chest Disease Detection and Classification from X-ray Images Using the EfficientDet Model. Diagnostics (Basel) 2023; 13:248. PMID: 36673057; PMCID: PMC9857576; DOI: 10.3390/diagnostics13020248.
Abstract
The competence of machine learning approaches to carry out clinical expertise tasks has recently gained a lot of attention, particularly in the field of medical-imaging examination. Among the most frequently used clinical-imaging modalities in the healthcare profession is chest radiography, which calls for prompt reporting of the existence of potential anomalies and illness diagnostics in images. Automated frameworks for the recognition of chest abnormalities employing X-rays are being introduced in health departments. However, the reliable detection and classification of particular illnesses in chest X-ray samples is still a complicated issue because of the complex structure of radiographs, e.g., the large exposure dynamic range. Moreover, the incidence of various image artifacts and extensive inter- and intra-category resemblances further increases the difficulty of chest disease recognition procedures. The aim of this study was to resolve these existing problems. We propose a deep learning (DL) approach to the detection of chest abnormalities with the X-ray modality using the EfficientDet (CXray-EffDet) model. More clearly, we employed the EfficientNet-B0-based EfficientDet-D0 model to compute a reliable set of sample features and accomplish the detection and classification task by categorizing eight categories of chest abnormalities using X-ray images. The effective feature computation power of the CXray-EffDet model enhances the power of chest abnormality recognition due to its high recall rate, and it presents a lightweight and computationally robust approach. A large test of the model employing a standard database from the National Institutes of Health (NIH) was conducted to demonstrate the chest disease localization and categorization performance of the CXray-EffDet model. We attained an AUC score of 0.9080, along with an IOU of 0.834, which clearly determines the competency of the introduced model.
Affiliation(s)
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
- Department of Software Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
- Tahira Nazir
- Faculty of Computing, Department of Computer Science, Riphah International University Gulberg Green Campus, Islamabad 04403, Pakistan
- Jamel Baili
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Higher Institute of Applied Science and Technology of Sousse (ISSATS), Cité Taffala (Ibn Khaldoun) 4003 Sousse, University of Sousse, Sousse 4000, Tunisia
- Ye Jin Kim
- Department of Computer Science, Hanyang University, Seoul 04763, Republic of Korea
- Jae-Hyuk Cha
- Department of Computer Science, Hanyang University, Seoul 04763, Republic of Korea
18
Image-to-image translation with Generative Adversarial Networks via retinal masks for realistic Optical Coherence Tomography imaging of Diabetic Macular Edema disorders. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104098.
19
An Efficient Approach to Predict Eye Diseases from Symptoms Using Machine Learning and Ranker-Based Feature Selection Methods. Bioengineering (Basel) 2022; 10:25. PMID: 36671598; PMCID: PMC9854513; DOI: 10.3390/bioengineering10010025.
Abstract
The eye is generally considered to be the most important sensory organ of humans. Diseases and other degenerative conditions of the eye are therefore of great concern as they affect the function of this vital organ. With proper early diagnosis by experts and with optimal use of medicines and surgical techniques, these diseases or conditions can in many cases be either cured or greatly mitigated. Experts that perform the diagnosis are in high demand and their services are expensive, hence the appropriate identification of the cause of vision problems is either postponed or not done at all such that corrective measures are either not done or done too late. An efficient model to predict eye diseases using machine learning (ML) and ranker-based feature selection (r-FS) methods is therefore proposed which will aid in obtaining a correct diagnosis. The aim of this model is to automatically predict one or more of five common eye diseases namely, Cataracts (CT), Acute Angle-Closure Glaucoma (AACG), Primary Congenital Glaucoma (PCG), Exophthalmos or Bulging Eyes (BE) and Ocular Hypertension (OH). We have used efficient data collection methods, data annotations by professional ophthalmologists, applied five different feature selection methods, two types of data splitting techniques (train-test and stratified k-fold cross validation), and applied nine ML methods for the overall prediction approach. While applying ML methods, we have chosen suitable classic ML methods, such as Decision Tree (DT), Random Forest (RF), Naive Bayes (NB), AdaBoost (AB), Logistic Regression (LR), k-Nearest Neighbour (k-NN), Bagging (Bg), Boosting (BS) and Support Vector Machine (SVM). We have performed a symptomatic analysis of the prominent symptoms of each of the five eye diseases. The results of the analysis and comparison between methods are shown separately. While comparing the methods, we have adopted traditional performance indices, such as accuracy, precision, sensitivity, F1-Score, etc. Finally, SVM outperformed other models obtaining the highest accuracy of 99.11% for 10-fold cross-validation and LR obtained 98.58% for the split ratio of 80:20.
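The pipeline described above, ranker-based feature selection followed by classic classifiers under stratified k-fold cross-validation, can be sketched with scikit-learn as follows. The synthetic symptom data, the number of selected features, and the SVM settings are placeholders rather than the study's actual configuration.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the annotated symptom dataset (5 disease classes).
X, y = make_classification(n_samples=1000, n_features=40, n_informative=12,
                           n_classes=5, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(score_func=mutual_info_classif, k=15)),   # ranker-based selection
    ("clf", SVC(kernel="rbf", C=10.0)),
])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```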
20
Lu Z, Miao J, Dong J, Zhu S, Wang X, Feng J. Automatic classification of retinal diseases with transfer learning-based lightweight convolutional neural network. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.104365.
21
Nawaz M, Nazir T, Masood M, Ali F, Khan MA, Tariq U, Sahar N, Damaševičius R. Melanoma segmentation: A framework of improved DenseNet77 and UNET convolutional neural network. Int J Imaging Syst Technol 2022; 32:2137-2153. DOI: 10.1002/ima.22750.
Abstract
Melanoma is the most fatal type of skin cancer, which can cause the death of victims at the advanced stage. Extensive work has been presented by researchers on computer vision for skin lesion localization. However, correct and effective melanoma segmentation is still a tough job because of the extensive variations found in the shape, color, and sizes of skin moles. Moreover, the presence of light and brightness variations further complicates the segmentation task. We have presented an improved deep learning (DL)-based approach, namely, the DenseNet77-based UNET model. More clearly, we have introduced the DenseNet77 network at the encoder unit of the UNET approach to compute a more representative set of image features. The calculated keypoints are later segmented by the decoder of the UNET model. We have used two standard datasets, namely, ISIC-2017 and ISIC-2018, to evaluate the performance of the proposed approach and acquired segmentation accuracies of 99.21% and 99.51% for the ISIC-2017 and ISIC-2018 datasets, respectively. We have confirmed through both quantitative and qualitative results that the proposed improved UNET approach is robust to skin lesion segmentation and can accurately recognize moles of varying colors and sizes.
Affiliation(s)
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Tahira Nazir
- Department of Computing, Riphah International University, Islamabad, Pakistan
- Momina Masood
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Farooq Ali
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Naveera Sahar
- Department of Computer Science, University of Wah, Wah Cantt, Pakistan
22
Nawaz M, Nazir T, Khan MA, Alhaisoni M, Kim JY, Nam Y. MSeg-Net: A Melanoma Mole Segmentation Network Using CornerNet and Fuzzy K-Means Clustering. Comput Math Methods Med 2022; 2022:7502504. PMID: 36276999; PMCID: PMC9586776; DOI: 10.1155/2022/7502504.
Abstract
Melanoma is a dangerous form of skin cancer that results in the demise of patients at the developed stage. Researchers have attempted to develop automated systems for the timely recognition of this deadly disease. However, reliable and precise identification of melanoma moles is a tedious and complex activity as there exist huge differences in the mass, structure, and color of the skin lesions. Additionally, the incidence of noise, blurring, and chrominance changes in the suspected images further enhance the complexity of the detection procedure. In the proposed work, we try to overcome the limitations of the existing work by presenting a deep learning (DL) model. Descriptively, after accomplishing the preprocessing task, we have utilized an object detection approach named CornerNet model to detect melanoma lesions. Then the localized moles are passed as input to the fuzzy K-means (FLM) clustering approach to perform the segmentation task. To assess the segmentation power of the proposed approach, two standard databases named ISIC-2017 and ISIC-2018 are employed. Extensive experimentation has been conducted to demonstrate the robustness of the proposed approach through both numeric and pictorial results. The proposed approach is capable of detecting and segmenting the moles of arbitrary shapes and orientations. Furthermore, the presented work can tackle the presence of noise, blurring, and brightness variations as well. We have attained the segmentation accuracy values of 99.32% and 99.63% over the ISIC-2017 and ISIC-2018 databases correspondingly which clearly depicts the effectiveness of our model for the melanoma mole segmentation.
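The second (segmentation) step resembles standard fuzzy c-means clustering of pixel intensities inside each detected lesion box. A minimal sketch using the scikit-fuzzy package is shown below, assuming `roi_gray` is a grayscale crop returned by the detector; the cluster count, fuzzifier, and the darker-cluster heuristic are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
import skfuzzy as fuzz

def fuzzy_kmeans_mask(roi_gray: np.ndarray, n_clusters: int = 2, m: float = 2.0) -> np.ndarray:
    """Cluster the pixel intensities of a detected lesion crop and return a binary
    mask for the darker cluster, assumed here to correspond to the mole."""
    h, w = roi_gray.shape
    data = roi_gray.reshape(1, -1).astype(np.float64)         # skfuzzy expects (features, samples)
    cntr, u, *_ = fuzz.cluster.cmeans(data, c=n_clusters, m=m, error=1e-4, maxiter=200)
    labels = np.argmax(u, axis=0).reshape(h, w)
    lesion_cluster = int(np.argmin(cntr[:, 0]))               # darker-cluster heuristic
    return (labels == lesion_cluster).astype(np.uint8)

# Example with a dummy crop standing in for a CornerNet-detected lesion box:
roi = (np.random.rand(64, 64) * 255).astype(np.float64)
mask = fuzzy_kmeans_mask(roi)
```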
Affiliation(s)
- Marriam Nawaz
- Department of Software Engineering, University of Engineering and Technology Taxila, 47050, Pakistan
- Department of Computer Science, University of Engineering and Technology Taxila, 47050, Pakistan
- Tahira Nazir
- Department of Computing, Riphah International University, Islamabad, Pakistan
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Jung-Yeon Kim
- Department of ICT Convergence, Soonchunhyang University, Asan 31538, Republic of Korea
- Yunyoung Nam
- Department of ICT Convergence, Soonchunhyang University, Asan 31538, Republic of Korea
23
OHGCNet: Optimal feature selection-based hybrid graph convolutional network model for joint DR-DME classification. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103952.
24
Blaivas M, Blaivas LN, Campbell K, Thomas J, Shah S, Yadav K, Liu YT. Making Artificial Intelligence Lemonade Out of Data Lemons: Adaptation of a Public Apical Echo Database for Creation of a Subxiphoid Visual Estimation Automatic Ejection Fraction Machine Learning Algorithm. J Ultrasound Med 2022; 41:2059-2069. PMID: 34820867; DOI: 10.1002/jum.15889.
Abstract
OBJECTIVES: A paucity of point-of-care ultrasound (POCUS) databases limits machine learning (ML). Assess feasibility of training ML algorithms to visually estimate left ventricular ejection fraction (EF) from a subxiphoid (SX) window using only apical 4-chamber (A4C) images. METHODS: Researchers used a long short-term memory algorithm for image analysis. Using the Stanford EchoNet-Dynamic database of 10,036 A4C videos with calculated exact EF, researchers tested 3 ML training permutations. First, training on unaltered Stanford A4C videos, then unaltered and 90° clockwise (CW) rotated videos, and finally unaltered, 90° rotated, and horizontally flipped videos. As a real-world test, we obtained 615 SX videos from Harbor-UCLA (HUCLA) with EF calculations in 5% ranges. Researchers performed 1000 randomizations of EF point estimation within HUCLA EF ranges to compensate for the ML and HUCLA EF mismatch, obtaining a mean value for absolute error (MAE) comparison, and performed Bland-Altman analyses. RESULTS: The ML algorithm EF mean MAE was estimated at 23.0, with a range of 22.8-23.3, using unaltered A4C video; mean MAE was 16.7, with a range of 16.5-16.9, using unaltered and 90° CW rotated video; and mean MAE was 16.6, with a range of 16.3-16.8, using unaltered, 90° CW rotated, and horizontally flipped video training. Bland-Altman showed weakest agreement at 40-45% EF. CONCLUSIONS: Researchers successfully adapted unrelated ultrasound window data to train a POCUS ML algorithm with fair MAE using data manipulation to simulate a different ultrasound examination. This may be important for future POCUS algorithm design to help overcome the paucity of POCUS databases.
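The training trick described above, simulating a subxiphoid view by rotating and flipping apical 4-chamber frames, reduces to a simple augmentation step. The sketch below shows the three training permutations on a dummy clip; frame loading and the LSTM model itself are omitted, and the array shapes are assumptions.

```python
import numpy as np

def augment_views(frames: np.ndarray, permutation: int) -> list:
    """Build the training variants described in the study.
    frames: one A4C clip with shape (T, H, W).
    permutation 1: unaltered; 2: adds a 90-degree clockwise rotation;
    3: adds the rotation and a horizontal flip."""
    variants = [frames]
    if permutation >= 2:
        variants.append(np.rot90(frames, k=-1, axes=(1, 2)))  # k=-1 rotates clockwise in the image plane
    if permutation >= 3:
        variants.append(np.flip(frames, axis=2))              # horizontal flip
    return variants

clip = np.random.rand(32, 112, 112)                           # dummy clip standing in for an EchoNet video
training_variants = augment_views(clip, permutation=3)        # unaltered + rotated + flipped
```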
Affiliation(s)
- Michael Blaivas
- Department of Medicine, University of South Carolina School of Medicine, Columbia, SC, USA
- Department of Emergency Medicine, St. Francis Hospital, Columbus, GA, USA
- Kendra Campbell
- Department of Emergency Medicine, Harbor-UCLA Medical Center, Torrance, CA, USA
- Joseph Thomas
- Department of Cardiology, Harbor-UCLA Medical Center, Torrance, CA, USA
- David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Sonia Shah
- Department of Cardiology, Harbor-UCLA Medical Center, Torrance, CA, USA
- David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Kabir Yadav
- Department of Emergency Medicine, Harbor-UCLA Medical Center, Torrance, CA, USA
- David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Yiju Teresa Liu
- Department of Emergency Medicine, Harbor-UCLA Medical Center, Torrance, CA, USA
- David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
25
Yaghy A, Lee AY, Keane PA, Keenan TDL, Mendonca LSM, Lee CS, Cairns AM, Carroll J, Chen H, Clark J, Cukras CA, de Sisternes L, Domalpally A, Durbin MK, Goetz KE, Grassmann F, Haines JL, Honda N, Hu ZJ, Mody C, Orozco LD, Owsley C, Poor S, Reisman C, Ribeiro R, Sadda SR, Sivaprasad S, Staurenghi G, Ting DS, Tumminia SJ, Zalunardo L, Waheed NK. Artificial intelligence-based strategies to identify patient populations and advance analysis in age-related macular degeneration clinical trials. Exp Eye Res 2022; 220:109092. PMID: 35525297; PMCID: PMC9405680; DOI: 10.1016/j.exer.2022.109092.
Affiliation(s)
- Antonio Yaghy
- New England Eye Center, Tufts University Medical Center, Boston, MA, USA
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA; Karalis Johnson Retina Center, Seattle, WA, USA
- Pearse A Keane
- Moorfields Eye Hospital & UCL Institute of Ophthalmology, London, UK
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA; Karalis Johnson Retina Center, Seattle, WA, USA
- Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, 925 N 87th Street, Milwaukee, WI, 53226, USA
- Hao Chen
- Genentech, South San Francisco, CA, USA
- Catherine A Cukras
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Amitha Domalpally
- Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA
- Kerry E Goetz
- Office of the Director, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Jonathan L Haines
- Department of Population and Quantitative Health Sciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA; Cleveland Institute of Computational Biology, Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Zhihong Jewel Hu
- Doheny Eye Institute, University of California, Los Angeles, CA, USA
- Luz D Orozco
- Department of Bioinformatics, Genentech, South San Francisco, CA, 94080, USA
- Cynthia Owsley
- Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA
- Stephen Poor
- Department of Ophthalmology, Novartis Institutes for Biomedical Research, Cambridge, MA, USA
- Srinivas R Sadda
- Doheny Eye Institute, David Geffen School of Medicine, University of California-Los Angeles, Los Angeles, CA, USA
- Sobha Sivaprasad
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
- Giovanni Staurenghi
- Department of Biomedical and Clinical Sciences Luigi Sacco, Luigi Sacco Hospital, University of Milan, Italy
- Daniel Sw Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore
- Santa J Tumminia
- Office of the Director, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Nadia K Waheed
- New England Eye Center, Tufts University Medical Center, Boston, MA, USA
| |
Collapse
|
26
|
A Comprehensive Performance Analysis of Transfer Learning Optimization in Visual Field Defect Classification. Diagnostics (Basel) 2022; 12:diagnostics12051258. [PMID: 35626413 PMCID: PMC9140208 DOI: 10.3390/diagnostics12051258] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2022] [Revised: 05/16/2022] [Accepted: 05/17/2022] [Indexed: 02/05/2023] Open
Abstract
Numerous studies have demonstrated that Convolutional Neural Network (CNN) models are capable of classifying visual field (VF) defects with high accuracy. In this study, we evaluated the performance of different pre-trained models (VGG-Net, MobileNet, ResNet, and DenseNet) in classifying VF defects and produced a comprehensive comparative analysis of their performance before and after hyperparameter tuning and fine-tuning. Using a batch size of 32, 50 epochs, and the Adam optimizer to update the weights and biases with an adaptive learning rate, VGG-16 obtained the highest accuracy of 97.63%. Subsequently, Bayesian optimization was used to automate hyperparameter tuning and the selection of fine-tuning layers of the pre-trained models, in order to determine the optimal hyperparameters and fine-tuning layers for classifying multiple VF defects with the highest accuracy. We found that the combination of hyperparameter choices and fine-tuning of the pre-trained models significantly impacts the performance of deep learning models for this classification task. We also found that automated selection of optimal hyperparameters and fine-tuning layers by Bayesian optimization significantly enhanced the performance of the pre-trained models. The best performance was observed for the DenseNet-121 model, with a validation accuracy of 98.46% and a test accuracy of 99.57% on the tested datasets.
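As a rough illustration of the automated tuning strategy described above, the sketch below runs Bayesian optimization over a learning rate and the number of unfrozen backbone layers for a pretrained DenseNet-121 classifier. The search space, class count, and dataset handles are assumptions for illustration, not the configuration reported in the paper.

```python
# Hedged sketch: Bayesian hyperparameter and fine-tuning-depth search with keras_tuner.
# NUM_CLASSES, the search ranges, and the commented dataset names are assumptions.
import keras_tuner as kt
import tensorflow as tf

NUM_CLASSES = 6  # assumed number of visual-field defect classes

def build_model(hp):
    base = tf.keras.applications.DenseNet121(include_top=False, weights="imagenet",
                                             input_shape=(224, 224, 3), pooling="avg")
    # Freeze everything, then unfreeze only the last `n_unfrozen` layers for fine-tuning.
    n_unfrozen = hp.Int("n_unfrozen", min_value=0, max_value=60, step=20)
    for layer in base.layers:
        layer.trainable = False
    for layer in base.layers[len(base.layers) - n_unfrozen:]:
        layer.trainable = True
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, outputs)
    lr = hp.Float("learning_rate", 1e-5, 1e-3, sampling="log")
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

tuner = kt.BayesianOptimization(build_model, objective="val_accuracy",
                                max_trials=15, overwrite=True, directory="vf_tuning")
# tuner.search(train_images, train_labels, validation_split=0.2, epochs=50, batch_size=32)
```

The same pattern extends to VGG, MobileNet, or ResNet backbones by swapping the application model inside build_model.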
Collapse
|
27
|
Diagnosis of Retinal Diseases Based on Bayesian Optimization Deep Learning Network Using Optical Coherence Tomography Images. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:8014979. [PMID: 35463234 PMCID: PMC9033334 DOI: 10.1155/2022/8014979] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 03/17/2022] [Indexed: 02/08/2023]
Abstract
Retinal abnormalities have emerged as a serious public health concern in recent years and can develop gradually and without warning. These diseases can affect any part of the retina, causing vision impairment and, in extreme cases, blindness. This necessitates the development of automated approaches that detect retinal diseases more precisely and, preferably, earlier. In this paper, we examine transfer learning with pretrained convolutional neural networks (CNNs) to detect retinal problems from Optical Coherence Tomography (OCT) images. Pretrained CNN models, namely VGG16, DenseNet201, InceptionV3, and Xception, are used to classify seven different retinal diseases from a dataset of images with and without retinal disease. In addition, Bayesian optimization is applied to choose optimal hyperparameter values, and image augmentation is used to increase the generalization capability of the developed models. This research also provides a comparison and analysis of the proposed models. The accuracy achieved by DenseNet201 on the Retinal OCT Image dataset exceeds 99%, offering a high level of accuracy in classifying retinal diseases compared to other approaches, which detect only a small number of retinal diseases.
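A minimal sketch of the transfer-learning-plus-augmentation recipe summarized above is given below. It freezes a DenseNet201 backbone as a feature extractor behind a small augmentation pipeline; the image size, augmentation strengths, and class count are assumptions for illustration, and the Bayesian hyperparameter search step is omitted for brevity.

```python
# Hedged sketch: pretrained backbone + on-the-fly augmentation for OCT classification.
# Input size, augmentation settings, and NUM_CLASSES are illustrative assumptions.
import tensorflow as tf

NUM_CLASSES = 7  # seven retinal disease categories, per the abstract

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.05),
    tf.keras.layers.RandomZoom(0.1),
])

base = tf.keras.applications.DenseNet201(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # start as a fixed feature extractor; layers can be unfrozen later

inputs = tf.keras.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.densenet.preprocess_input(x)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

Hyperparameters such as the learning rate and the number of trainable layers would then be chosen by the Bayesian optimization step described in the abstract.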
Collapse
|
28
|
Tandon A, Guha SK, Rashid J, Kim J, Gahlan M, Shabaz M, Anjum N. Graph based CNN Algorithm to Detect Spammer Activity Over Social Media. IETE JOURNAL OF RESEARCH 2022. [DOI: 10.1080/03772063.2022.2061610] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Affiliation(s)
- Aditya Tandon
- Krishna Engineering College, Ghaziabad, Uttar Pradesh, India
| | - Shouvik Kumar Guha
- The West Bengal National University of Juridical Sciences, Kolkata, India
| | - Junaid Rashid
- Department of Computer Science and Engineering, Kongju National University, Cheonan 31080, Korea
| | - Jungeun Kim
- Department of Computer Science and Engineering, Kongju National University, Cheonan 31080, Korea
| | - Mamta Gahlan
- Maharaja Surajmal Institute of Technology, Delhi, India
| | - Mohammad Shabaz
- Model Institute of Engineering and Technology, Jammu, J&K, India
| | - Nasreen Anjum
- Department of Computing and Engineering, University of Gloucestershire, Gloucestershire, UK
| |
Collapse
|
29
|
Rashid J, Batool S, Kim J, Wasif Nisar M, Hussain A, Juneja S, Kushwaha R. An Augmented Artificial Intelligence Approach for Chronic Diseases Prediction. Front Public Health 2022; 10:860396. [PMID: 35433587 PMCID: PMC9008324 DOI: 10.3389/fpubh.2022.860396] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Accepted: 02/22/2022] [Indexed: 12/23/2022] Open
Abstract
Chronic diseases are increasing in prevalence and mortality worldwide. Early diagnosis has therefore become an important research area for improving patient survival rates. Several studies have reported classification approaches for predicting specific diseases. In this paper, we propose a novel augmented artificial intelligence approach that uses an artificial neural network (ANN) with particle swarm optimization (PSO) to predict five prevalent chronic diseases: breast cancer, diabetes, heart attack, hepatitis, and kidney disease. Seven classification algorithms are compared to evaluate the proposed model's prediction performance. The ANN prediction model constructed with the PSO-based feature extraction approach outperforms other state-of-the-art classification approaches when evaluated on accuracy, achieving the highest accuracy of 99.67%. However, the classification model's performance is found to depend on the attributes of the data used for classification. Our results are compared across several chronic disease datasets and shown to outperform other benchmark approaches. In addition, our optimized ANN requires less processing time than random forest (RF), deep learning, and support vector machine (SVM) based methods. Our study could support the early diagnosis of chronic diseases in hospitals, including through the development of online diagnosis systems.
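To illustrate the PSO-plus-ANN idea described above, the sketch below implements a simple binary particle swarm that selects feature subsets and scores them with a small neural-network classifier under cross-validation. The swarm constants, network size, and fitness definition are assumptions for illustration, not the settings used in the paper.

```python
# Hedged sketch: binary PSO feature selection wrapped around an MLP classifier.
# X (n_samples, n_features) and y are assumed to be a tabular chronic-disease dataset.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def pso_feature_selection(X, y, n_particles=20, iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    pos = rng.random((n_particles, n_features)) > 0.5            # binary feature masks
    vel = rng.normal(0.0, 0.1, size=(n_particles, n_features))

    def fitness(mask):
        if not mask.any():
            return 0.0
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()  # CV accuracy as fitness

    scores = np.array([fitness(p) for p in pos])
    pbest, pbest_scores = pos.copy(), scores.copy()
    gbest = pbest[np.argmax(pbest_scores)].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = (w * vel
               + c1 * r1 * (pbest.astype(float) - pos.astype(float))
               + c2 * r2 * (gbest.astype(float) - pos.astype(float)))
        prob = 1.0 / (1.0 + np.exp(-vel))                        # sigmoid transfer function
        pos = rng.random(pos.shape) < prob
        scores = np.array([fitness(p) for p in pos])
        improved = scores > pbest_scores
        pbest[improved], pbest_scores[improved] = pos[improved], scores[improved]
        gbest = pbest[np.argmax(pbest_scores)].copy()
    return gbest  # boolean mask of the selected features
```

The selected mask would then be used to train the final ANN on the reduced feature set.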
Collapse
Affiliation(s)
- Junaid Rashid
- Department of Computer Science and Engineering, Kongju National University, Cheonan, South Korea
| | - Saba Batool
- Department of Computer Science, COMSATS University Islamabad, Islamabad, Pakistan
| | - Jungeun Kim
- Department of Computer Science and Engineering, Kongju National University, Cheonan, South Korea
- *Correspondence: Jungeun Kim
| | - Muhammad Wasif Nisar
- Department of Computer Science, COMSATS University Islamabad, Islamabad, Pakistan
| | - Amir Hussain
- Data Science and Cyber Analytics Research Group, Edinburgh Napier University, Edinburgh, United Kingdom
| | - Sapna Juneja
- Department of Computer Science, KIET Group of Institutions, Ghaziabad, India
| | - Riti Kushwaha
- Department of Computer Science, Bennett University, Greater Noida, India
| |
Collapse
|
30
|
Faheem Saleem M, Muhammad Adnan Shah S, Nazir T, Mehmood A, Nawaz M, Attique Khan M, Kadry S, Majumdar A, Thinnukool O. Signet Ring Cell Detection from Histological Images Using Deep Learning. COMPUTERS, MATERIALS & CONTINUA 2022; 72:5985-5997. [DOI: 10.32604/cmc.2022.023101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/27/2021] [Accepted: 11/29/2021] [Indexed: 08/25/2024]
|
31
|
Atteia G, Abdel Samee N, Zohair Hassan H. DFTSA-Net: Deep Feature Transfer-Based Stacked Autoencoder Network for DME Diagnosis. ENTROPY 2021; 23:e23101251. [PMID: 34681974 PMCID: PMC8534911 DOI: 10.3390/e23101251] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/28/2021] [Revised: 09/14/2021] [Accepted: 09/23/2021] [Indexed: 12/13/2022]
Abstract
Diabetic macular edema (DME) is the most common cause of irreversible vision loss in patients with diabetes. Early diagnosis of DME is necessary for effective treatment of the disease. Visual detection of DME in retinal screening images by ophthalmologists is a time-consuming process. Recently, many computer-aided diagnosis systems have been developed to assist doctors by detecting DME automatically. In this paper, a new deep feature transfer-based stacked autoencoder neural network system is proposed for the automatic diagnosis of DME in fundus images. The proposed system combines pretrained convolutional neural networks, used as automatic feature extractors, with stacked autoencoders for feature selection and classification. Moreover, the system enables the extraction of a large set of features from a small input dataset using four standard pretrained deep networks: ResNet-50, SqueezeNet, Inception-v3, and GoogLeNet. The most informative features are then selected by a stacked autoencoder neural network. The stacked network is trained in a semi-supervised manner and is used for the classification of DME. The introduced system achieves a maximum classification accuracy of 96.8%, sensitivity of 97.5%, and specificity of 95.5%. The proposed system shows superior performance over the original pretrained network classifiers and previously reported state-of-the-art results.
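The pipeline described above can be approximated by the sketch below: a frozen pretrained network supplies deep features, a stacked autoencoder is trained to reconstruct them, and its encoder is reused with a softmax head for classification. Layer sizes, the choice of ResNet-50 as the extractor, and the commented training calls are assumptions for illustration, not the exact DFTSA-Net configuration.

```python
# Hedged sketch: pretrained-CNN features -> stacked autoencoder -> softmax classifier.
# Input size, layer widths, and the commented training calls are illustrative assumptions.
import tensorflow as tf

# 1) Deep feature extraction with a frozen pretrained network.
backbone = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3), pooling="avg")

def extract_features(images):
    # `images` is assumed to be a NumPy array of preprocessed 224x224 RGB fundus images.
    x = tf.keras.applications.resnet50.preprocess_input(images.astype("float32"))
    return backbone.predict(x, verbose=0)                 # shape (n_samples, 2048)

# 2) Stacked autoencoder trained to reconstruct the extracted features (unsupervised step).
inputs = tf.keras.Input(shape=(2048,))
encoded = tf.keras.layers.Dense(512, activation="relu")(inputs)
encoded = tf.keras.layers.Dense(128, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(512, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(2048, activation="linear")(decoded)
autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(features, features, epochs=100, batch_size=32)

# 3) Reuse the trained encoder with a softmax head for DME vs. normal classification.
encoder = tf.keras.Model(inputs, encoded)
clf_out = tf.keras.layers.Dense(2, activation="softmax")(encoder.output)
classifier = tf.keras.Model(encoder.input, clf_out)
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
# classifier.fit(labeled_features, labels, epochs=50, batch_size=32)
```

Because the autoencoder can be trained on features from both labeled and unlabeled images before the supervised head is attached, the overall training is semi-supervised, as described in the abstract.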
Collapse
Affiliation(s)
- Ghada Atteia
- Information Technology Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11461, Saudi Arabia;
- Correspondence
| | - Nagwan Abdel Samee
- Information Technology Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11461, Saudi Arabia;
- Computer Engineering Department, Misr University for Science and Technology, Giza 12511, Egypt
| | - Hassan Zohair Hassan
- Department of Mechanical Engineering, College of Engineering, Alfaisal University, Takhassusi Street, P.O. Box 50927, Riyadh 11533, Saudi Arabia;
| |
Collapse
|