1
Zhong Y, Liu Z, Zhang X, Liang Z, Chen W, Dai C, Qi L. Unsupervised adversarial neural network for enhancing vasculature in photoacoustic tomography images using optical coherence tomography angiography. Comput Med Imaging Graph 2024; 117:102425. [PMID: 39216343] [DOI: 10.1016/j.compmedimag.2024.102425]
Abstract
Photoacoustic tomography (PAT) is a powerful imaging modality for visualizing tissue physiology and exogenous contrast agents. However, PAT faces challenges in visualizing deep-seated vascular structures due to light scattering, absorption, and reduced signal intensity with depth. Optical coherence tomography angiography (OCTA) offers high-contrast visualization of vasculature networks, yet its imaging depth is limited to a millimeter scale. Herein, we propose OCPA-Net, a novel unsupervised deep learning method that utilizes the rich vascular features of OCTA to enhance PAT images. Trained on unpaired OCTA and PAT images, OCPA-Net incorporates a vessel-aware attention module to enhance deep-seated vessel details captured from OCTA. It leverages a domain-adversarial loss function to enforce structural consistency and a novel identity-invariant loss to mitigate excessive image content generation. We validate the structural fidelity of OCPA-Net in simulation experiments, and then demonstrate its vascular enhancement performance in in vivo imaging experiments on tumor-bearing mice and contrast-enhanced pregnant mice. The results show the promise of our method for comprehensive vessel-related image analysis in preclinical research applications.
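The identity-invariant loss described above (penalizing the generator for changing an image that is already in the target domain) pairs naturally with an adversarial term. Below is a minimal numpy sketch of such a combination; it assumes a least-squares adversarial loss and an illustrative weight `lam_id`, neither of which is confirmed by the abstract, and the function names are mine:

```python
import numpy as np

def lsgan_g_loss(d_fake):
    # Least-squares adversarial term: push the discriminator's scores on
    # generated images toward the "real" label 1.
    return float(np.mean((np.asarray(d_fake, float) - 1.0) ** 2))

def identity_loss(gen_fn, target_imgs):
    # Identity-invariance term: a target-domain image fed through the
    # generator should come back (almost) unchanged, which discourages
    # the network from hallucinating new image content.
    return float(np.mean(np.abs(gen_fn(target_imgs) - target_imgs)))

def generator_objective(d_fake, gen_fn, target_imgs, lam_id=5.0):
    # Combined objective; lam_id balances realism against content fidelity.
    return lsgan_g_loss(d_fake) + lam_id * identity_loss(gen_fn, target_imgs)
```

Here `gen_fn` stands in for the generator and `d_fake` for discriminator outputs on generated images; the actual OCPA-Net losses operate on network tensors during training, not numpy arrays.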
Affiliation(s)
- Yutian Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Zhenyang Liu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China; Department of Radiotherapy, The Second Hospital of Nanjing, Nanjing University of Chinese Medicine, Nanjing, 210003, China
- Xiaoming Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Zhaoyong Liang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Wufan Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Cuixia Dai
- College of Science, Shanghai Institute of Technology, Shanghai, 201418, China
- Li Qi
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China.
2
Romero-Oraá R, Herrero-Tudela M, López MI, Hornero R, García M. Attention-based deep learning framework for automatic fundus image processing to aid in diabetic retinopathy grading. Comput Methods Programs Biomed 2024; 249:108160. [PMID: 38583290] [DOI: 10.1016/j.cmpb.2024.108160]
Abstract
BACKGROUND AND OBJECTIVE Early detection and grading of diabetic retinopathy (DR) is essential to determine an adequate treatment and prevent severe vision loss. However, the manual analysis of fundus images is time-consuming, and DR screening programs are challenged by the limited availability of human graders. Current automatic approaches for DR grading attempt to detect all signs jointly. However, the classification can be optimized if red lesions and bright lesions are processed independently, since the task is divided and simplified. Furthermore, clinicians would greatly benefit from explainable artificial intelligence (XAI) to support the automatic model predictions, especially when the type of lesion is specified. As a novelty, we propose an end-to-end deep learning framework for automatic DR grading (5 severity degrees) based on separating the attention paid to the dark structures of the retina from that paid to the bright structures. As the main contribution, this approach allowed us to generate independent, interpretable attention maps for red lesions, such as microaneurysms and hemorrhages, and bright lesions, such as hard exudates, while using image-level labels only. METHODS Our approach is based on a novel attention mechanism that focuses separately on the dark and the bright structures of the retina via a prior image decomposition. This mechanism can be seen as an XAI approach that generates independent attention maps for red lesions and bright lesions. The framework includes an image quality assessment stage and deep learning techniques such as data augmentation, transfer learning, and fine-tuning. We used the Xception architecture as a feature extractor and the focal loss function to deal with data imbalance. RESULTS The Kaggle DR detection dataset was used for method development and validation. The proposed approach achieved 83.7% accuracy and a quadratic weighted kappa of 0.78 in classifying DR among 5 severity degrees, outperforming several state-of-the-art approaches. The main result of this work, however, is the generated attention maps, which reveal the pathological regions of the image while distinguishing red lesions from bright lesions. These maps provide explainability for the model predictions. CONCLUSIONS Our results suggest that our framework is effective for automatic DR grading. The separate attention approach has proven useful for optimizing the classification. Moreover, the obtained attention maps facilitate visual interpretation for clinicians. Therefore, the proposed method could serve as a diagnostic aid for the early detection and grading of DR.
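The focal loss mentioned above for handling class imbalance is a standard component and easy to illustrate. A numpy sketch of its binary form (the exact variant and gamma used in the paper are not specified by the abstract; gamma=2 is merely a common default):

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, eps=1e-12):
    # Binary focal loss: the factor (1 - p_t)^gamma down-weights examples
    # the model already classifies confidently, so training focuses on
    # hard cases from under-represented classes. gamma=0 recovers plain
    # cross-entropy.
    probs = np.asarray(probs, float)
    p_t = np.where(np.asarray(labels) == 1, probs, 1.0 - probs)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps)))
```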
Affiliation(s)
- Roberto Romero-Oraá
- Biomedical Engineering Group, University of Valladolid, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain.
- María Herrero-Tudela
- Biomedical Engineering Group, University of Valladolid, Valladolid, 47011, Spain
- María I López
- Biomedical Engineering Group, University of Valladolid, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- Roberto Hornero
- Biomedical Engineering Group, University of Valladolid, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- María García
- Biomedical Engineering Group, University of Valladolid, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
3
Ran J, Zhang G, Xia F, Zhang X, Xie J, Zhang H. Source-free active domain adaptation for diabetic retinopathy grading based on ultra-wide-field fundus images. Comput Biol Med 2024; 174:108418. [PMID: 38593641] [DOI: 10.1016/j.compbiomed.2024.108418]
Abstract
Domain adaptation (DA) is commonly employed in diabetic retinopathy (DR) grading with unannotated fundus images, allowing knowledge transfer from labeled color fundus images. Existing DA methods often struggle with domain disparities, hindering DR grading performance compared to clinical diagnosis. To improve DR diagnostic accuracy, we propose a source-free active domain adaptation method (SFADA) that generates color fundus image features from noise, selects valuable ultra-wide-field (UWF) fundus images through local representation matching, and adapts the model using DR lesion prototypes. Importantly, SFADA enhances data security and patient privacy by excluding source domain data. It also reduces image resolution and speeds up model training by modeling DR grade relationships directly. Experiments show that SFADA significantly improves DR grading performance, increasing accuracy by 20.90% and quadratic weighted kappa by 18.63% over baseline, reaching 85.36% and 92.38%, respectively. These results suggest SFADA's promise for real clinical applications.
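The quadratic weighted kappa reported above is a standard agreement metric for ordinal grades. For reference, a small numpy sketch (function name mine, not the authors'):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    # Agreement metric for ordinal DR grades: disagreements are penalized
    # by the squared distance between predicted and true grade, normalized
    # by the agreement expected from the marginal distributions alone.
    conf = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        conf[t, p] += 1
    i, j = np.indices((n_classes, n_classes))
    w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic penalty weights
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0)) / conf.sum()
    return float(1.0 - (w * conf).sum() / (w * expected).sum())
```

A value of 1 means perfect agreement, 0 means chance-level agreement, and negative values mean worse than chance.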
Affiliation(s)
- Jinye Ran
- College of Computer and Information Science, Southwest University, Chongqing 400700, China
- Guanghua Zhang
- School of Big Data Intelligent Diagnosis and Treatment Industry, Taiyuan University, Taiyuan 030002, China; College of Biomedical Engineering, Taiyuan University of Technology, Taiyuan 030600, China
- Fan Xia
- Reading Academy, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Ximei Zhang
- College of Biomedical Engineering, Taiyuan University of Technology, Taiyuan 030600, China
- Juan Xie
- Shanxi Eye Hospital, Taiyuan 030002, China
- Hao Zhang
- College of Chemistry and Chemical Engineering, Southwest University, Chongqing 400700, China.
4
Yuan H, Dai M, Shi C, Li M, Li H. A generative adversarial neural network with multi-attention feature extraction for fundus lesion segmentation. Int Ophthalmol 2023; 43:5079-5090. [PMID: 37851139] [DOI: 10.1007/s10792-023-02911-y]
Abstract
PURPOSE Fundus lesion segmentation determines the location and size of diabetic retinopathy lesions in fundus images, which assists doctors in developing the best eye treatment plan. However, owing to the scattered distribution and mutual similarity of lesions, it is extremely difficult to extract representative lesion features and accurately segment lesion areas. METHODS To address this problem, a generative adversarial network with multi-attention feature extraction is developed to segment diabetic retinopathy regions. The main contributions are as follows: (1) An improved residual U-Net combined with a self-attention mechanism is designed as the generator to fully extract local and global lesion features while reducing the loss of key feature information. Considering the correlation between the same lesion features across different samples, an external attention mechanism is introduced into the residual U-Net to focus on the relevant features of the same lesions in different samples throughout the entire dataset. (2) A discriminator based on the PatchGAN structure is designed to further enhance the segmentation ability of the generator by discriminating between real and fake samples. RESULTS The proposed network was evaluated on the public IDRiD dataset, achieving Dice coefficients of 75.7%, 76.53%, 50.06%, and 45.89% for EX, SE, MA, and HE, respectively. CONCLUSION The experimental results show that the generative adversarial network is well suited to accurate segmentation of diabetic retinopathy lesions from fundus images.
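The Dice coefficients reported above measure the overlap between predicted and ground-truth lesion masks. For reference, a minimal numpy implementation:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    # Dice overlap between binary lesion masks: 2|A∩B| / (|A| + |B|).
    # eps keeps the score defined (and equal to 1) when both masks are empty.
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))
```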
Affiliation(s)
- Haiying Yuan
- Faculty of Information Technology, Beijing University of Technology, No.100 Pingleyuan, Chaoyang District, Beijing, 100124, People's Republic of China.
- Mengfan Dai
- Faculty of Information Technology, Beijing University of Technology, No.100 Pingleyuan, Chaoyang District, Beijing, 100124, People's Republic of China
- Cheng Shi
- Faculty of Information Technology, Beijing University of Technology, No.100 Pingleyuan, Chaoyang District, Beijing, 100124, People's Republic of China
- Minghao Li
- Faculty of Information Technology, Beijing University of Technology, No.100 Pingleyuan, Chaoyang District, Beijing, 100124, People's Republic of China
- Haihang Li
- Faculty of Information Technology, Beijing University of Technology, No.100 Pingleyuan, Chaoyang District, Beijing, 100124, People's Republic of China
5
Shao Y, Guo J, Wang J, Huang Y, Gan W, Zhang X, Wu G, Sun D, Gu Y, Gu Q, Yue NJ, Yang G, Xie G, Xu Z. Novel in-house knowledge-based automated planning system for lung cancer treated with intensity-modulated radiotherapy. Strahlenther Onkol 2023. [PMID: 37603050] [DOI: 10.1007/s00066-023-02126-1]
Abstract
PURPOSE The goal of this study was to propose a knowledge-based planning system that automatically designs plans for lung cancer patients treated with intensity-modulated radiotherapy (IMRT). METHODS AND MATERIALS From May 2018 to June 2020, 612 IMRT treatment plans of lung cancer patients were retrospectively selected to construct a planning database. A knowledge-based planning (KBP) architecture named αDiar is proposed in this study. It consists of two parts separated by a firewall: an in-hospital workstation and a search engine in the cloud. Based on our previous study, A‑Net in the in-hospital workstation was used to generate predicted virtual dose images. A search engine including a three-dimensional convolutional neural network (3D CNN) was constructed to derive the feature vectors of dose images. By comparing the similarity between the features of the virtual dose image and those of the clinical dose images in the database, the most similar feature was found. The optimization parameters (OPs) of the treatment plan corresponding to the most similar feature were assigned to the new plan, completing the design of the new treatment plan automatically. After αDiar was developed, we performed two studies. The first, a retrospective study involving 96 patients, validated whether this architecture was qualified for clinical practice. The second, a comparative study involving 26 patients, investigated whether αDiar could help dosimetrists improve planning quality: two dosimetrists designed plans for these patients both with and without αDiar.
RESULTS The first study showed that about 54% (52/96) of the automatically generated plans would achieve the dosimetric constraints of the Radiation Therapy Oncology Group (RTOG) and about 93% (89/96) of the automatically generated plans would achieve the dosimetric constraints of the National Comprehensive Cancer Network (NCCN). The second study showed that the quality of treatment planning designed by junior dosimetrists was improved with the help of αDiar. CONCLUSIONS Our results showed that αDiar was an effective tool to improve planning quality. Over half of the patients' plans could be designed automatically. For the remaining patients, although the automatically designed plans did not fully meet the clinical requirements, their quality was also better than that of manual plans.
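The retrieval step described above, matching a predicted dose image's feature vector against the database to reuse a stored plan's optimization parameters, is essentially a nearest-neighbour search in feature space. A numpy sketch under the assumption of cosine similarity (the abstract does not specify the similarity measure, and the function name is mine):

```python
import numpy as np

def most_similar_plan(query_feat, db_feats):
    # Return the index of the database dose-image feature vector closest
    # to the query under cosine similarity; the optimization parameters
    # of that stored plan would then be assigned to the new plan.
    q = np.asarray(query_feat, float)
    db = np.asarray(db_feats, float)
    q = q / np.linalg.norm(q)
    db = db / np.linalg.norm(db, axis=1, keepdims=True)
    return int(np.argmax(db @ q))
```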
Affiliation(s)
- Yan Shao
- Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jindong Guo
- Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jiyong Wang
- Shanghai Pulse Medical Technology Inc., Shanghai, China
- Ying Huang
- Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Wutian Gan
- School of Physics and Technology, University of Wuhan, Wuhan, China
- Xiaoying Zhang
- School of Information Science and Engineering, Xiamen University, Xiamen, China
- Ge Wu
- Ping An Healthcare Technology Co. Ltd., Shanghai, China
- Dong Sun
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Yu Gu
- School of Engineering, Hong Kong University of Science and Technology, Hong Kong SAR, China
- Qingtao Gu
- School of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Ning Jeff Yue
- Department of Radiation Oncology, Rutgers Cancer Institute of New Jersey, Rutgers University, New Brunswick, NJ, USA
- Guanli Yang
- Radiotherapy Department, Shandong Second Provincial General Hospital, Shandong University, Jinan, China.
- Guotong Xie
- Ping An Healthcare Technology Co. Ltd., Shanghai, China.
- Ping An Health Cloud Company Limited, Shanghai, China.
- Ping An International Smart City Technology Co., Ltd., Shanghai, China.
- Zhiyong Xu
- Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China.
6
Zhang X, Ma Y, Gong Q, Yao J. Automatic detection of microaneurysms in fundus images based on multiple preprocessing fusion to extract features. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104879]
7
Alarood AA, Faheem M, Al‐Khasawneh MA, Alzahrani AIA, Alshdadi AA. Secure medical image transmission using deep neural network in e-health applications. Healthc Technol Lett 2023; 10:87-98. [PMID: 37529409] [PMCID: PMC10388229] [DOI: 10.1049/htl2.12049]
Abstract
Recently, medical technologies have advanced, and the diagnosis of diseases from medical images has become very important. Medical images often pass through many branches of a network from one end to the other, so a high level of security is required; problems arise from unauthorized use of the data in the image. Encryption is one of the most effective techniques for securing the data in an image. Confusion and diffusion are the two main steps addressed here. The contribution of this work is twofold: the adaptation of a deep neural network using the weights that have the highest impact on the output, whether an intermediate or a semi-final output, and a chaotic system that is not detectable by a deep neural network algorithm. Color and grayscale images were used in the proposed method: the deep neural network divides each image according to its region of interest, and the algorithm then generates random numbers to create a chaotic system that replaces columns and rows and randomly distributes the pixels over the designated area. The proposed algorithm was evaluated in several ways and compared with existing methods to demonstrate its worth.
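The confusion step described above, chaotically keyed shuffling of rows and columns, can be illustrated with a toy logistic-map permutation cipher. This is a generic sketch and not the paper's scheme: the seed values stand in for a key, and the deep-network partitioning and diffusion steps are omitted:

```python
import numpy as np

def logistic_sequence(n, x0=0.37, r=3.99):
    # Logistic map x -> r*x*(1-x) in its chaotic regime; (x0, r) act as
    # the secret key, since tiny changes yield a different sequence.
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def permute_image(img, x0=0.37, r=3.99, inverse=False):
    # Confusion only: reorder rows, then columns, by chaotic key streams.
    h, w = img.shape
    row_p = np.argsort(logistic_sequence(h, x0, r))
    col_p = np.argsort(logistic_sequence(w, x0 + 0.11, r))
    if not inverse:
        return img[row_p][:, col_p]
    # Invert: argsort of a permutation gives its inverse permutation.
    return img[np.argsort(row_p)][:, np.argsort(col_p)]
```

Permutation alone only scrambles positions (pixel statistics are unchanged), which is why real schemes, including the one reviewed here, add a diffusion step on the pixel values as well.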
Affiliation(s)
| | - Muhammad Faheem
- School of Technology and InnovationsUniversity of VaasaVaasaFinland
| | - Mahmoud Ahmad Al‐Khasawneh
- School of Information TechnologySkyline University CollegeUniversity City SharjahSharjahUnited Arab Emirates
| | - Abdullah I. A. Alzahrani
- Department of Computer Science, Collage of Science and Humanities in Al QuwaiiyahShaqra UniversityShaqraSaudi Arabia
| | - Abdulrahman A. Alshdadi
- Department of Information Systems and Technology, College of Computer Science and EngineeringUniversity of JeddahJeddahSaudi Arabia
8
Bar-David D, Bar-David L, Shapira Y, Leibu R, Dori D, Gebara A, Schneor R, Fischer A, Soudry S. Elastic Deformation of Optical Coherence Tomography Images of Diabetic Macular Edema for Deep-Learning Models Training: How Far to Go? IEEE J Transl Eng Health Med 2023; 11:487-494. [PMID: 37817823] [PMCID: PMC10561735] [DOI: 10.1109/jtehm.2023.3294904]
Abstract
OBJECTIVE To explore the clinical validity of elastic deformation of optical coherence tomography (OCT) images for data augmentation in the development of deep-learning models for detection of diabetic macular edema (DME). METHODS Prospective evaluation of OCT images of DME (n = 320) subject to elastic transformation, with the deformation intensity represented by ([Formula: see text]). Three sets of images, each comprising 100 pairs of scans (100 original and 100 modified), were grouped according to the range of ([Formula: see text]): low-, medium-, and high-degree augmentation, i.e., ([Formula: see text] = 1-6), ([Formula: see text] = 7-12), and ([Formula: see text] = 13-18), respectively. Three retina specialists evaluated all datasets in a blinded manner and designated each image as 'original' or 'modified'. The rate of assignment of 'original' to modified images (false negatives) was determined for each grader in each dataset. RESULTS The false-negative rates ranged between 71-77% for the low-, 63-76% for the medium-, and 50-75% for the high-augmentation categories. The corresponding rates of correct identification of original images ranged between 75-85% ([Formula: see text]0.05) in the low-, 73-85% ([Formula: see text]0.05 for graders 1 & 2, p = 0.01 for grader 3) in the medium-, and 81-91% ([Formula: see text]) in the high-augmentation categories. In the subcategory ([Formula: see text] = 7-9), the false-negative rates were 83-93%, whereas the rates of correctly identifying original images ranged between 89-99% ([Formula: see text]0.05 for all graders). CONCLUSIONS Deformation of low-to-medium intensity ([Formula: see text] = 1-9) may be applied without compromising OCT image representativeness in DME.
Clinical and Translational Impact Statement: Elastic deformation may efficiently augment the size, robustness, and diversity of training datasets without altering their clinical value, enhancing the development of high-accuracy algorithms for automated interpretation of OCT images.
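Elastic deformation of this kind is typically implemented by smoothing a random displacement field and resampling the image through it. A deliberately simplified numpy sketch (separable box blur standing in for the usual Gaussian filter, nearest-neighbour instead of bilinear resampling; `alpha` is only an analogue of the study's deformation-intensity parameter, not the same quantity):

```python
import numpy as np

def elastic_deform(img, alpha=8.0, sigma=4.0, seed=0):
    # Build a smooth random displacement field, scale it by alpha, and
    # resample the image through it with nearest-neighbour lookup.
    rng = np.random.default_rng(seed)
    h, w = img.shape
    k = int(2 * sigma + 1)
    kernel = np.ones(k) / k  # box blur as a cheap smoothing stand-in

    def smooth(field):
        field = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, field)
        return np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, field)

    dy = smooth(rng.uniform(-1.0, 1.0, (h, w))) * alpha
    dx = smooth(rng.uniform(-1.0, 1.0, (h, w))) * alpha
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(ys + dy), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xs + dx), 0, w - 1).astype(int)
    return img[src_y, src_x]
```

With alpha = 0 the transform is the identity; increasing alpha corresponds to the progressively stronger distortions the graders were asked to detect.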
Affiliation(s)
- Daniel Bar-David
- Faculty of Mechanical Engineering, Technion Israel Institute of Technology, Haifa 3200003, Israel
- Laura Bar-David
- Department of Ophthalmology, Rambam Health Care Campus, Haifa 3109601, Israel
- Yinon Shapira
- Department of Ophthalmology, Carmel Medical Center, Haifa 3436212, Israel
- Rina Leibu
- Department of Ophthalmology, Rambam Health Care Campus, Haifa 3109601, Israel
- Dalia Dori
- Department of Ophthalmology, Rambam Health Care Campus, Haifa 3109601, Israel
- Aseel Gebara
- Department of Ophthalmology, Rambam Health Care Campus, Haifa 3109601, Israel
- Ronit Schneor
- Faculty of Mechanical Engineering, Technion Israel Institute of Technology, Haifa 3200003, Israel
- Anath Fischer
- Faculty of Mechanical Engineering, Technion Israel Institute of Technology, Haifa 3200003, Israel
- Shiri Soudry
- Department of Ophthalmology, Rambam Health Care Campus, Haifa 3109601, Israel
- Clinical Research Institute at Rambam, Rambam Health Care Campus, Haifa 3109601, Israel
- The Ruth and Bruce Rappaport Faculty of Medicine, Technion Israel Institute of Technology, Haifa 3525433, Israel
9
Estaji M, Hosseini B, Bozorg-Qomi S, Ebrahimi B. Pathophysiology and diagnosis of diabetic retinopathy: a narrative review. J Investig Med 2023; 71:265-278. [PMID: 36718824] [DOI: 10.1177/10815589221145040]
Abstract
Diabetes is an endocrine disorder characterized by abnormally high blood glucose levels. There are two main categories of diabetes: type I (10%-15%) and type II (85%-90%). Although type II is more common overall, type I is the most common form in children. Diabetic retinopathy (DR), which remains the foremost cause of vision loss in working-age populations, can be considered the main complication of diabetes mellitus. Choosing the best method for diagnosing, tracking, and treating DR is therefore vital to enhance quality of life and decrease medical expenses. Each diagnostic method has its own advantages, and the most suitable one must be selected according to the clinical question at hand. To prepare this manuscript, we compiled a list of relevant keywords, including diabetes, DR, pathophysiology, ultrawide-field imaging, fluorescein angiography, optical coherence tomography, and optical coherence tomography angiography, and searched the PubMed, Scopus, and Web of Science databases. This review covers the pathophysiology of DR and the medical imaging techniques used to monitor it: we first introduce DR and its pathophysiology, and then present the imaging techniques.
Affiliation(s)
- Mohadese Estaji
- Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Bita Hosseini
- Bioscience Research Group, School of Health and Life Sciences, Aston University, Birmingham, UK
- Saeed Bozorg-Qomi
- Department of Medical Genetics, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Babak Ebrahimi
- Department of Anatomy, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
10
Chen Y, Feng X, Huang Y, Zhao L, Chen X, Qin S, Sun J, Jing J, Zhang X, Wang Y. Blood flow perfusion in visual pathway detected by arterial spin labeling magnetic resonance imaging for differential diagnosis of ocular ischemic syndrome. Front Neurosci 2023; 17:1121490. [PMID: 36860621] [PMCID: PMC9969084] [DOI: 10.3389/fnins.2023.1121490]
Abstract
Background Ocular ischemic syndrome (OIS), attributable to chronic hypoperfusion caused by marked carotid stenosis, is one of the important factors leading to ocular neurodegenerative diseases such as optic atrophy. The current study aimed to detect blood flow perfusion in the visual pathway by arterial spin labeling (ASL) magnetic resonance imaging (MRI) for the differential diagnosis of OIS. Methods This diagnostic, cross-sectional study at a single institution was performed to detect blood flow perfusion in the visual pathway based on 3D pseudocontinuous ASL (3D-pCASL) using 3.0T MRI. A total of 91 participants (91 eyes), consisting of 30 eyes with OIS and 61 eyes with noncarotid artery stenosis-related retinal vascular diseases (39 eyes with diabetic retinopathy and 22 eyes with high myopic retinopathy), were consecutively included. Blood flow perfusion values in the visual pathway derived from regions of interest in ASL images, including the retinal-choroidal complex, the intraorbital segments of the optic nerve, the optic tracts, and the visual center, were obtained and compared with arm-retinal circulation time and retinal circulation time derived from fundus fluorescein angiography (FFA). Receiver operating characteristic (ROC) curve analyses and the intraclass correlation coefficient (ICC) were used to evaluate accuracy and consistency. Results Patients with OIS had the lowest blood flow perfusion values in the visual pathway (all p < 0.05). The relative intraorbital optic nerve blood flow values at a post-labeling delay (PLD) of 1.5 s (area under the curve, AUC = 0.832) and the relative retinal-choroidal complex blood flow values at a PLD of 2.5 s (AUC = 0.805) were effective for the differential diagnosis of OIS. The ICC of the blood flow values derived from the retinal-choroidal complex and the intraorbital segments of the optic nerve between the two observers showed satisfactory concordance (all ICC > 0.932, p < 0.001).
The adverse reaction rates of ASL and FFA were 2.20% and 3.30%, respectively. Conclusion 3D-pCASL showed that participants with OIS had lower blood flow perfusion values in the visual pathway, with satisfactory accuracy, reproducibility, and safety. It is a noninvasive and comprehensive tool for assessing blood flow perfusion in the visual pathway for the differential diagnosis of OIS.
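The AUC values above come from ROC analysis. For reference, AUC can be computed directly from the rank statistic, without tracing the curve, since it equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (function name mine):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    # AUC via the Mann-Whitney U formulation: fraction of (positive,
    # negative) pairs where the positive scores higher; ties count half.
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))
```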
Affiliation(s)
- Yanan Chen
- Department of Ophthalmology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Xue Feng
- Department of Ophthalmology, Beijing Jishuitan Hospital, The Fourth Clinical Medical College of Peking University, Beijing, China
- Yingxiang Huang
- Department of Ophthalmology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Lu Zhao
- Department of Ophthalmology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Xi Chen
- Department of Ophthalmology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Shuqi Qin
- Department of Ophthalmology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Jiao Sun
- Department of Ophthalmology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Jing Jing
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Xiaolei Zhang (corresponding author)
- Department of Ophthalmology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Yanling Wang (corresponding author)
- Department of Ophthalmology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
11
Attention-Driven Cascaded Network for Diabetic Retinopathy Grading from Fundus Images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104370]
12
Shahriari MH, Sabbaghi H, Asadi F, Hosseini A, Khorrami Z. Artificial intelligence in screening, diagnosis, and classification of diabetic macular edema: A systematic review. Surv Ophthalmol 2023; 68:42-53. [PMID: 35970233] [DOI: 10.1016/j.survophthal.2022.08.004]
Abstract
We review the application of artificial intelligence (AI) techniques in the screening, diagnosis, and classification of diabetic macular edema (DME) by searching six databases (PubMed, Scopus, Web of Science, ScienceDirect, IEEE, and ACM) from January 1, 2005 to July 4, 2021. A total of 879 articles were extracted, and after applying inclusion and exclusion criteria, 38 articles were selected for further evaluation. The methodological quality of the included studies was evaluated using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. We provide an overview of the current state of various AI techniques for DME screening, diagnosis, and classification using retinal imaging modalities such as optical coherence tomography (OCT) and color fundus photography (CFP). Based on our findings, deep learning models have an extraordinary capacity to provide accurate and efficient systems for DME screening and diagnosis: applying AI-based decision support systems and applications to retinal images acquired with OCT and CFP leads to a significant increase in sensitivity and specificity for DME screening and detection.
Affiliation(s)
- Mohammad Hasan Shahriari
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Hamideh Sabbaghi
- Ophthalmic Epidemiology Research Center, Research Institute for Ophthalmology and Vision Science, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Department of Optometry, School of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Farkhondeh Asadi
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Azamosadat Hosseini
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Zahra Khorrami
- Ophthalmic Epidemiology Research Center, Research Institute for Ophthalmology and Vision Science, Shahid Beheshti University of Medical Sciences, Tehran, Iran
13
Selvachandran G, Quek SG, Paramesran R, Ding W, Son LH. Developments in the detection of diabetic retinopathy: a state-of-the-art review of computer-aided diagnosis and machine learning methods. Artif Intell Rev 2023; 56:915-964. [PMID: 35498558] [PMCID: PMC9038999] [DOI: 10.1007/s10462-022-10185-6]
Abstract
The exponential increase in the number of diabetics around the world has led to an equally large increase in the number of cases of diabetic retinopathy (DR), one of the major complications of diabetes. Left unattended, DR worsens vision and can lead to partial or complete blindness. As the number of diabetics continues to increase exponentially in the coming years, the number of qualified ophthalmologists needs to increase in tandem to meet the demand for screening the growing number of diabetic patients. This makes it pertinent to automate the detection of DR; a computer-aided diagnosis system has the potential to significantly reduce the burden currently placed on ophthalmologists. Hence, this review summarizes, classifies, and analyzes the recent developments in automated DR detection using fundus images from 2015 onward, which should deepen understanding of recent studies on automated DR detection, particularly those that deploy machine learning algorithms. First, a comprehensive state-of-the-art review of the methods introduced for DR detection is presented, with a focus on machine learning models such as convolutional neural networks (CNN), artificial neural networks (ANN), and various hybrid models. Each model is then classified according to its type (e.g., CNN, ANN, SVM) and its specific task(s) in DR detection. In particular, the models that deploy CNNs are further analyzed and classified according to important properties of their respective CNN architectures. A total of 150 research articles published in the last 5 years were used in this review to provide a comprehensive overview of the latest developments in the detection of DR. Supplementary information is available online at 10.1007/s10462-022-10185-6.
Affiliation(s)
- Ganeshsree Selvachandran
- Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
- Shio Gai Quek
- Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
- Raveendran Paramesran
- Institute of Computer Science and Digital Innovation, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
- Weiping Ding
- School of Information Science and Technology, Nantong University, Nantong, 226019, People’s Republic of China
- Le Hoang Son
- VNU Information Technology Institute, Vietnam National University, Hanoi, Vietnam
14
Medhi JP, S.R. N, Choudhury S, Dandapat S. Improved detection and analysis of Macular Edema using modified guided image filtering with modified level set spatial fuzzy clustering on Optical Coherence Tomography images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104149]
15
Alwakid G, Gouda W, Humayun M, Jhanjhi NZ. Enhancing diabetic retinopathy classification using deep learning. Digit Health 2023; 9:20552076231203676. [PMID: 37766903] [PMCID: PMC10521302] [DOI: 10.1177/20552076231203676]
Abstract
Prolonged hyperglycemia can cause diabetic retinopathy (DR), a major contributor to blindness. Numerous incidences of DR could be avoided if it were identified and addressed promptly. In recent years, many deep learning (DL)-based algorithms have been proposed to facilitate psychometric testing. In this study, DR and its stages were identified from retinal scans of the "Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 Blindness Detection" dataset using a DL model that encompassed four scenarios. Augmentation strategies then produced a comprehensive dataset with consistent hyperparameters across all test cases, and a convolutional neural network model was used for classification. Different enhancement methods were applied to improve image quality. The proposed approach detected DR with a best accuracy of 97.83%, a top-2 accuracy of 99.31%, and a top-3 accuracy of 99.88% across all five severity stages of the APTOS 2019 evaluation, employing CLAHE and ESRGAN techniques for image enhancement. In addition, we used APTOS 2019 to compute a set of evaluation metrics (precision, recall, and F1-score) for analyzing the efficacy of the suggested model. The proposed approach also proved more efficient at detecting DR than both state-of-the-art technology and conventional DL approaches.
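The CLAHE step used above is, at its core, histogram equalization applied per tile with a clip limit. As a rough illustration only (not the authors' pipeline), a global histogram-equalization sketch in NumPy shows the underlying gray-level remapping; the image values here are synthetic:

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization of an 8-bit grayscale image.

    A simplified stand-in for CLAHE: CLAHE applies the same remapping
    per tile with a clip limit, while this sketch equalizes the whole
    image at once.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative histogram.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Low-contrast synthetic fundus patch: values squeezed into [100, 140].
rng = np.random.default_rng(0)
patch = rng.integers(100, 141, size=(64, 64)).astype(np.uint8)
enhanced = equalize_hist(patch)
print(enhanced.min(), enhanced.max())  # intensities now span the full 0-255 range
```

The occupied gray levels are stretched across the full dynamic range, which is the contrast gain that such preprocessing gives a downstream classifier.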
Affiliation(s)
- Ghadah Alwakid
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakakah, Al Jouf, Saudi Arabia
- Walaa Gouda
- Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo, Egypt
- Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakakah, Al Jouf, Saudi Arabia
- NZ Jhanjhi
- School of Computer Science and Engineering (SCE), Taylor's University, Subang Jaya, Malaysia
16
Iqbal S, Khan TM, Naveed K, Naqvi SS, Nawaz SJ. Recent trends and advances in fundus image analysis: A review. Comput Biol Med 2022; 151:106277. [PMID: 36370579] [DOI: 10.1016/j.compbiomed.2022.106277]
Abstract
Automated retinal image analysis holds prime significance in the accurate diagnosis of various critical eye diseases, including diabetic retinopathy (DR), age-related macular degeneration (AMD), atherosclerosis, and glaucoma. Manual diagnosis of retinal diseases by ophthalmologists takes time, effort, and financial resources, and is prone to error, in comparison with computer-aided diagnosis systems. In this context, robust classification and segmentation of retinal images are primary operations that aid clinicians in the early screening of patients to ensure the prevention and/or treatment of these diseases. This paper conducts an extensive review of the state-of-the-art methods for the detection and segmentation of retinal image features. Existing notable techniques for the detection of retinal features are categorized into essential groups and compared in depth. Additionally, a summary of quantifiable performance measures for various important stages of retinal image analysis, such as image acquisition and preprocessing, is provided. Finally, the datasets widely used in the literature for analyzing retinal images are described and their significance is emphasized.
Affiliation(s)
- Shahzaib Iqbal
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Tariq M Khan
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
- Khuram Naveed
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan; Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
- Syed S Naqvi
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Syed Junaid Nawaz
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
17
Mohammadi H, Gupta S, Sharma S. A large-scale performance study of entropy-based image thresholding techniques using new SAD metric. Pattern Anal Appl 2022. [DOI: 10.1007/s10044-022-01121-z]
18
Ezhei M, Plonka G, Rabbani H. Retinal optical coherence tomography image analysis by a restricted Boltzmann machine. Biomed Opt Express 2022; 13:4539-4558. [PMID: 36187262] [PMCID: PMC9484437] [DOI: 10.1364/boe.458753]
Abstract
Optical coherence tomography (OCT) is an emerging imaging technique for ophthalmic disease diagnosis. Two major problems in OCT image analysis are image enhancement and image segmentation. Deep learning methods have achieved excellent performance in image analysis, but most deep learning-based models are supervised approaches that need a high volume of training data (e.g., reference clean images for image enhancement and accurately annotated images for segmentation). Acquiring reference clean images for OCT image enhancement and accurately annotating a high volume of OCT images for segmentation is hard, so it is difficult to extend these deep learning methods to OCT image analysis. We propose an unsupervised learning-based approach for OCT image enhancement and abnormality segmentation in which the model can be trained without reference images. The image is reconstructed by a restricted Boltzmann machine (RBM) by defining a target function and minimizing it. For OCT image enhancement, each image is independently learned by the RBM network and eventually reconstructed; in the reconstruction phase, we use the ReLU function instead of the sigmoid function. Reconstruction of images by the RBM network improves image contrast in comparison to other competitive methods in terms of contrast-to-noise ratio (CNR). For anomaly detection, hyper-reflective foci (HF), one of the first signs in retinal OCTs of patients with diabetic macular edema (DME), are identified based on image reconstruction by the RBM and post-processing that removes HF candidates outside the area between the first and last retinal layers. Our anomaly detection method achieves a high ability to detect abnormalities.
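The reconstruction step described above can be sketched as a single visible-hidden-visible RBM pass. This is a minimal NumPy illustration with random, untrained weights (the paper's actual network, target function, and training procedure are not reproduced); following the paper's variant, the reconstruction phase uses ReLU while hidden activations stay sigmoidal:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rbm_reconstruct(v, W, b_h, b_v):
    """One visible -> hidden -> visible pass of an RBM."""
    h = sigmoid(v @ W + b_h)                 # hidden activations (sigmoid)
    recon = np.maximum(0.0, h @ W.T + b_v)   # ReLU reconstruction of the patch
    return recon

# Toy weights for a 16-pixel patch and 8 hidden units
# (illustrative random values, not weights from the paper).
W = rng.normal(scale=0.1, size=(16, 8))
b_h = np.zeros(8)
b_v = np.zeros(16)
patch = rng.random(16)   # noisy OCT patch, intensities in [0, 1]
recon = rbm_reconstruct(patch, W, b_h, b_v)
print(recon.shape)  # (16,)
```

In the paper, one RBM is fitted per image and the reconstruction itself serves as the enhanced output; the ReLU keeps reconstructed intensities non-negative.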
Affiliation(s)
- Mansooreh Ezhei
- Medical Image & Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, 8174673461, Iran
- Gerlind Plonka
- Institute for Numerical and Applied Mathematics, Georg-August-University Göttingen, Göttingen, Germany
- Hossein Rabbani
- Medical Image & Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, 8174673461, Iran
19
Oltu B, Karaca BK, Erdem H, Özgür A. A systematic review of transfer learning-based approaches for diabetic retinopathy detection. Gazi Univ J Sci 2022. [DOI: 10.35378/gujs.1081546]
Abstract
Cases of diabetes and related diabetic retinopathy (DR) have been increasing at an alarming rate in modern times. Early detection of DR is important, since the disease may cause permanent blindness in its late stages. In the last two decades, many different approaches have been applied to DR detection. A review of the academic literature shows that deep neural networks (DNNs) have become the most preferred approach, and among these, convolutional neural network (CNN) models are the most used in medical image classification. Designing a new CNN architecture is a tedious and time-consuming task, and training its enormous number of parameters is also difficult. For this reason, instead of training CNNs from scratch, using pre-trained models has been suggested in recent years as a transfer learning approach. Accordingly, this review focuses on DNN- and transfer learning-based applications for DR detection, considering 43 publications between 2015 and 2021. The published papers are summarized in 3 figures and 10 tables, giving information about 29 pre-trained CNN models, 13 DR datasets, and standard performance metrics.
Affiliation(s)
- Burcu Oltu
- Başkent University, Faculty of Engineering
20
Toledo-Cortés S, Useche DH, Müller H, González FA. Grading diabetic retinopathy and prostate cancer diagnostic images with deep quantum ordinal regression. Comput Biol Med 2022; 145:105472. [DOI: 10.1016/j.compbiomed.2022.105472]
21
Hervella ÁS, Rouco J, Novo J, Ortega M. Multimodal image encoding pre-training for diabetic retinopathy grading. Comput Biol Med 2022; 143:105302. [PMID: 35219187] [DOI: 10.1016/j.compbiomed.2022.105302]
Abstract
Diabetic retinopathy is an increasingly prevalent eye disorder that can lead to severe vision impairment, and grading its severity from retinal images is key to providing adequate treatment. However, in order to learn the diverse patterns and complex relations required for grading, deep neural networks require very large annotated datasets that are not always available. This has typically been addressed by reusing networks pre-trained for natural image classification, hence relying on additional annotated data from a different domain. In contrast, we propose a novel pre-training approach that takes advantage of unlabeled multimodal visual data commonly available in ophthalmology. The use of multimodal visual data for pre-training has previously been explored by training a network to predict one image modality from another. However, that approach does not ensure a broad understanding of the retinal images, given that the network may focus exclusively on the similarities between modalities while ignoring the differences. Thus, we propose a novel self-supervised pre-training that explicitly teaches the networks to learn the common characteristics between modalities as well as the characteristics that are exclusive to the input modality. This provides a complete comprehension of the input domain and facilitates the training of downstream tasks that require a broad understanding of the retinal images, such as the grading of diabetic retinopathy. To validate and analyze the proposed approach, we performed exhaustive experimentation on different public datasets. The transfer learning performance for the grading of diabetic retinopathy is evaluated under different settings and compared against previous state-of-the-art pre-training approaches, as well as against relevant state-of-the-art works for the detection and grading of diabetic retinopathy. The results show a satisfactory performance of the proposed approach, which outperforms previous pre-training alternatives in the grading of diabetic retinopathy.
Affiliation(s)
- Álvaro S Hervella
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
- José Rouco
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Jorge Novo
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Marcos Ortega
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
22

23
Nunez do Rio JM, Nderitu P, Bergeles C, Sivaprasad S, Tan GSW, Raman R. Evaluating a Deep Learning Diabetic Retinopathy Grading System Developed on Mydriatic Retinal Images When Applied to Non-Mydriatic Community Screening. J Clin Med 2022; 11:614. [PMID: 35160065] [PMCID: PMC8836386] [DOI: 10.3390/jcm11030614]
Abstract
Artificial intelligence has showcased clear capabilities to automatically grade diabetic retinopathy (DR) on mydriatic retinal images captured by clinical experts on fixed table-top retinal cameras within hospital settings. However, in many low- and middle-income countries, screening for DR revolves around minimally trained field workers using handheld non-mydriatic cameras in community settings. This prospective study evaluated the diagnostic accuracy of a deep learning algorithm developed using mydriatic retinal images by the Singapore Eye Research Institute, commercially available as Zeiss VISUHEALTH-AI DR, on images captured by field workers on a Zeiss Visuscout® 100 non-mydriatic handheld camera from people with diabetes in a house-to-house cross-sectional study across 20 regions in India. A total of 20,489 patient eyes from 11,199 patients were used to evaluate algorithm performance in identifying referable DR, non-referable DR, and gradability. For each category, the algorithm achieved precision values of 29.60% (95% CI 27.40-31.88), 92.56% (92.13-92.97), and 58.58% (56.97-60.19); recall values of 62.69% (59.17-66.12), 85.65% (85.11-86.18), and 65.06% (63.40-66.69); and F-score values of 40.22% (38.25-42.21), 88.97% (88.62-89.31), and 61.65% (60.50-62.80), respectively. Model performance reached 91.22% (90.79-91.64) sensitivity and 65.06% (63.40-66.69) specificity at detecting gradability, and 72.08% (70.68-73.46) sensitivity and 85.65% (85.11-86.18) specificity for the detection of all referable eyes. Algorithm accuracy depends on the quality of the acquired retinal images, and this is a major limiting step for global implementation in community non-mydriatic DR screening using handheld cameras. This study highlights the need to develop and train deep learning-based screening tools in such conditions before implementation.
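The precision, recall/sensitivity, specificity, and F-score figures above all follow from standard confusion-matrix arithmetic. A minimal sketch with made-up counts (tp/fp/fn/tn are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Return precision, recall (sensitivity), specificity and F1, in %."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return tuple(round(100 * m, 2) for m in (precision, recall, specificity, f1))

# Hypothetical referable-DR screening outcome on 1000 eyes.
print(diagnostic_metrics(tp=180, fp=70, fn=20, tn=730))  # (72.0, 90.0, 91.25, 80.0)
```

Note how precision can stay low (many false positives) even when sensitivity is high, which mirrors the gap between the algorithm's referable-DR precision and recall reported above.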
Affiliation(s)
- Joan M. Nunez do Rio
- Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- Section of Ophthalmology, King’s College London, London WC2R 2LS, UK (corresponding author)
- Paul Nderitu
- Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- Section of Ophthalmology, King’s College London, London WC2R 2LS, UK
- Christos Bergeles
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
- Sobha Sivaprasad
- Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London EC1V 2PD, UK
- Gavin S. W. Tan
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore 169856, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore 169857, Singapore
- Rajiv Raman
- Sankara Nethralaya, 18, College Road, Chennai 600006, India
24
Gour N, Tanveer M, Khanna P. Challenges for ocular disease identification in the era of artificial intelligence. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06770-5]
25
Guo Y, Peng Y. CARNet: Cascade attentive RefineNet for multi-lesion segmentation of diabetic retinopathy images. Complex Intell Syst 2022. [DOI: 10.1007/s40747-021-00630-4]
Abstract
Diabetic retinopathy is the leading cause of blindness in the working-age population. Lesion segmentation from fundus images helps ophthalmologists accurately diagnose and grade diabetic retinopathy. However, lesion segmentation is challenging due to the complex structure, varied sizes, and interclass similarity of lesions with other fundus tissues. To address this, this paper proposes a cascade attentive RefineNet (CARNet) for automatic and accurate multi-lesion segmentation of diabetic retinopathy that makes full use of fine local details and coarse global information from the fundus image. CARNet is composed of a global image encoder, a local image encoder, and an attention refinement decoder. We take the whole image and the patch image as dual inputs and feed them to ResNet50 and ResNet101, respectively, for downsampling to extract lesion features. The high-level refinement decoder uses a dual attention mechanism to integrate the same-level features from the two encoders with the output of the low-level attention refinement module for multiscale information fusion, which focuses the model on the lesion area to generate accurate predictions. We evaluated the segmentation performance of the proposed CARNet on the IDRiD, E-ophtha, and DDR datasets. Extensive comparison experiments and ablation studies demonstrate that the proposed framework outperforms state-of-the-art approaches with better accuracy and robustness. It not only overcomes the interference of similar tissues and noise to achieve accurate multi-lesion segmentation, but also preserves the contour details and shape features of small lesions without overloading GPU memory.
26
Zia F, Irum I, Nawaz Qadri N, Nam Y, Khurshid K, Ali M, Ashraf I, Attique Khan M. A Multilevel Deep Feature Selection Framework for Diabetic Retinopathy Image Classification. Comput Mater Contin 2022; 70:2261-2276. [DOI: 10.32604/cmc.2022.017820]
27
Kuo MT, Hsu BWY, Lin YS, Fang PC, Yu HJ, Chen A, Yu MS, Tseng VS. Comparisons of deep learning algorithms for diagnosing bacterial keratitis via external eye photographs. Sci Rep 2021; 11:24227. [PMID: 34930952] [PMCID: PMC8688438] [DOI: 10.1038/s41598-021-03572-6]
Abstract
Bacterial keratitis (BK), a painful and fulminant bacterial infection of the cornea, is the most common type of vision-threatening infectious keratitis (IK). A rapid clinical diagnosis by an ophthalmologist may often prevent BK patients from progressing to corneal melting or even perforation, but many rural areas cannot afford an ophthalmologist. Thanks to the rapid development of deep learning (DL) algorithms, image-based artificial intelligence could provide immediate screening and recommendations for patients with red and painful eyes. This study therefore aims to elucidate the potential of different DL algorithms for diagnosing BK via external eye photos. External eye photos of clinically suspected IK were consecutively collected from five referral centers. The candidate DL frameworks, including ResNet50, ResNeXt50, DenseNet121, SE-ResNet50, and EfficientNets B0, B1, B2, and B3, were trained to recognize BK from the photos toward the greatest area under the receiver operating characteristic curve (AUROC). Via five-fold cross-validation, EfficientNet B3 showed the best average AUROC, with average sensitivity, specificity, positive predictive value, and negative predictive value of 74%, 64%, 77%, and 61%, respectively. There was no statistical difference in diagnostic accuracy or AUROC between any two of these DL frameworks. The diagnostic accuracy of these models (ranging from 69% to 72%) is comparable to that of ophthalmologists (66% to 74%). Therefore, all these models are promising tools for diagnosing BK in first-line medical care units without ophthalmologists.
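The AUROC used above to compare frameworks equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney statistic). A dependency-free sketch with toy scores (illustrative values, not the study's predictions):

```python
def auroc(scores, labels):
    """Empirical AUROC; a tie between a positive and a negative counts 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predicted probabilities for 4 BK-positive and 4 BK-negative photos.
scores = [0.9, 0.8, 0.75, 0.3, 0.6, 0.4, 0.2, 0.1]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(auroc(scores, labels))  # 0.875
```

Because AUROC only depends on the ranking of scores, it is threshold-free, which is why it is a common target when comparing CNN backbones as in the study above.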
Affiliation(s)
- Ming-Tse Kuo
- Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, No. 123, Dapi Rd., Niaosong Dist., Kaohsiung City, 833, Taiwan (R.O.C.); School of Medicine, Chang Gung University, Taoyuan City, 33302, Taiwan
- Benny Wei-Yun Hsu
- Department of Computer Science, National Yang Ming Chiao Tung University, No. 1001, Daxue Rd., East Dist., Hsinchu City, 300, Taiwan (R.O.C.)
- Yi-Sheng Lin
- Department of Computer Science, National Yang Ming Chiao Tung University, No. 1001, Daxue Rd., East Dist., Hsinchu City, 300, Taiwan (R.O.C.)
- Po-Chiung Fang
- Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, No. 123, Dapi Rd., Niaosong Dist., Kaohsiung City, 833, Taiwan (R.O.C.); School of Medicine, Chang Gung University, Taoyuan City, 33302, Taiwan
- Hun-Ju Yu
- Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, No. 123, Dapi Rd., Niaosong Dist., Kaohsiung City, 833, Taiwan (R.O.C.)
- Alexander Chen
- Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, No. 123, Dapi Rd., Niaosong Dist., Kaohsiung City, 833, Taiwan (R.O.C.)
- Meng-Shan Yu
- Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, No. 123, Dapi Rd., Niaosong Dist., Kaohsiung City, 833, Taiwan (R.O.C.)
- Vincent S Tseng
- Department of Computer Science, National Yang Ming Chiao Tung University, No. 1001, Daxue Rd., East Dist., Hsinchu City, 300, Taiwan (R.O.C.)
28
Classification of diabetic retinopathy using unlabeled data and knowledge distillation. Artif Intell Med 2021; 121:102176. [PMID: 34763798] [DOI: 10.1016/j.artmed.2021.102176]
Abstract
Over the last decade, advances in machine learning and artificial intelligence have highlighted their potential as diagnostic tools in the healthcare domain. Despite the widespread availability of medical images, their usefulness is severely hampered by a lack of access to labeled data. For example, while convolutional neural networks (CNNs) have emerged as an essential analytical tool in image processing, their impact is curtailed by training limitations due to insufficient labeled data. Transfer learning enables models developed for one task to be reused for a second task, but it suffers from the limitation that the two models need to be architecturally similar. Knowledge distillation addresses some of these shortcomings by generalizing a complex model to a lighter model, although some parts of the knowledge may not be sufficiently distilled. In this paper, a novel knowledge distillation approach using transfer learning is proposed that transfers the complete knowledge of a model to a new, smaller one. Unlabeled data are used in an unsupervised manner to transfer the maximum amount of knowledge to the new smaller model. The proposed method can be beneficial in medical image analysis, where labeled data are typically scarce. The approach is evaluated on classifying images for diagnosing diabetic retinopathy on two publicly available datasets, Messidor and EyePACS. Simulation results demonstrate that the approach effectively transfers knowledge from a complex model to a lighter one, and experimental results illustrate that the performance of different small models is improved significantly using unlabeled data and knowledge distillation.
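The distillation mechanism the paper builds on can be sketched as a soft-target loss between temperature-softened teacher and student outputs, computable on unlabeled images since no ground-truth label is involved. A minimal NumPy illustration with toy logits (Hinton-style distillation in general, not the paper's exact objective):

```python
import numpy as np

def softmax(z, t=1.0):
    z = np.asarray(z, dtype=float) / t
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=3.0):
    """Cross-entropy between temperature-softened teacher and student
    distributions; gradients w.r.t. the student pull it toward the teacher."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(-(p_teacher * np.log(p_student + 1e-12)).sum())

# Toy 5-class DR-grade logits for one unlabeled fundus image.
teacher = [4.0, 1.0, 0.5, -1.0, -2.0]
aligned = [3.8, 1.1, 0.4, -0.9, -2.1]    # student close to the teacher
diverged = [-2.0, 4.0, 0.0, 1.0, 0.5]    # student far from the teacher
print(distillation_loss(aligned, teacher) < distillation_loss(diverged, teacher))  # True
```

The temperature softens the teacher's distribution so that the relative scores of non-top classes ("dark knowledge") also guide the smaller student.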
Collapse
|
29
|
Mezni I, Ben Slama A, Mbarki Z, Seddik H, Trabelsi H. Automated identification of SD-optical coherence tomography-derived macular diseases by combining 3D-block-matching and deep learning techniques. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2021. [DOI: 10.1080/21681163.2021.1926329] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Affiliation(s)
- Ilhem Mezni
- ISTMT, Laboratory of Biophysics and Medical Technologies (LRBTM), LR13ES07, University of Tunis El Manar, Tunis, Tunisia
| | - Amine Ben Slama
- ISTMT, Laboratory of Biophysics and Medical Technologies (LRBTM), LR13ES07, University of Tunis El Manar, Tunis, Tunisia
| | | | | | - Hedi Trabelsi
- ISTMT, Laboratory of Biophysics and Medical Technologies (LRBTM), LR13ES07, University of Tunis El Manar, Tunis, Tunisia
| |
Collapse
|
30
|
An interpretable multiple-instance approach for the detection of referable diabetic retinopathy in fundus images. Sci Rep 2021; 11:14326. [PMID: 34253799 PMCID: PMC8275626 DOI: 10.1038/s41598-021-93632-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Accepted: 06/16/2021] [Indexed: 11/09/2022] Open
Abstract
Diabetic retinopathy (DR) is one of the leading causes of vision loss across the world. Yet despite its wide prevalence, the majority of affected people lack access to the specialized ophthalmologists and equipment required for monitoring their condition. This can lead to delays in the start of treatment, thereby lowering their chances for a successful outcome. Machine learning systems that automatically detect the disease in eye fundus images have been proposed as a means of facilitating access to retinopathy severity estimates for patients in remote regions, or even of complementing the human expert’s diagnosis. Here we propose a machine learning system for the detection of referable diabetic retinopathy in fundus images, which is based on the paradigm of multiple-instance learning. Our method extracts local information independently from multiple rectangular image patches and combines it efficiently through an attention mechanism that focuses on the abnormal regions of the eye (i.e. those that contain DR-induced lesions), thus resulting in a final image representation that is suitable for classification. Furthermore, by leveraging the attention mechanism our algorithm can seamlessly produce informative heatmaps that highlight the regions where the lesions are located. We evaluate our approach on the publicly available Kaggle, Messidor-2 and IDRiD retinal image datasets, in which it exhibits near state-of-the-art classification performance (AUC of 0.961 in Kaggle and 0.976 in Messidor-2), while also producing valid lesion heatmaps (AUPRC of 0.869 in the 81 images of IDRiD that contain pixel-level lesion annotations). Our results suggest that the proposed approach provides an efficient and interpretable solution to the problem of automated diabetic retinopathy grading.
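The core of such a multiple-instance approach is an attention pooling step that turns per-patch embeddings into one bag-level representation while exposing per-patch weights for heatmaps. The following is a minimal NumPy sketch under assumed shapes; the projection matrices, dimensions, and random features are illustrative, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_mil_pool(patch_feats, V, w):
    """Attention-based MIL pooling over patch embeddings.

    patch_feats: (n_patches, d) embeddings of image patches.
    Returns the bag embedding and per-patch attention weights,
    which can be reshaped over the patch grid into a lesion heatmap.
    """
    scores = np.tanh(patch_feats @ V) @ w      # (n_patches,) raw scores
    a = np.exp(scores - scores.max())
    a = a / a.sum()                            # softmax attention weights
    bag = a @ patch_feats                      # weighted average embedding
    return bag, a

d, k, n = 16, 8, 12                  # embed dim, attention dim, patches
V = rng.normal(size=(d, k))          # learned projection (stand-in values)
w = rng.normal(size=k)               # learned scoring vector (stand-in)
feats = rng.normal(size=(n, d))      # stand-in patch embeddings
bag, attn = attention_mil_pool(feats, V, w)
```

Because the weights sum to one and are computed per patch, the same quantities that drive classification double as an interpretability map, which is the property the abstract exploits.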
Collapse
|
31
|
Li W, Jin L, Cui Y, Nie A, Xie N, Liang G. Bone marrow mesenchymal stem cells-induced exosomal microRNA-486-3p protects against diabetic retinopathy through TLR4/NF-κB axis repression. J Endocrinol Invest 2021; 44:1193-1207. [PMID: 32979189 DOI: 10.1007/s40618-020-01405-3] [Citation(s) in RCA: 51] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Accepted: 08/23/2020] [Indexed: 02/08/2023]
Abstract
AIM Diabetic retinopathy (DR) is a chronic disease that imposes health and economic burdens on individuals and society. This study was conducted to elucidate the mechanisms of bone marrow mesenchymal stem cell (BMSC)-induced exosomal microRNA-486-3p (miR-486-3p) in DR. METHODS The putative miR-486-3p binding site in the 3' untranslated region of Toll-like receptor 4 (TLR4) was verified by luciferase reporter assay. High glucose (HG)-treated Müller cells were transfected with miR-486-3p- or TLR4-related oligonucleotides and plasmids to explore their functions in DR. Additionally, HG-treated Müller cells were co-cultured with BMSC-derived exosomes, including exosomes collected from BMSCs that had been transfected with miR-486-3p- or TLR4-related oligonucleotides and plasmids, to explore their functions in DR. MiR-486-3p, TLR4 and nuclear factor-kappaB (NF-κB) expression, angiogenesis-related factors, oxidative stress factors, viability and apoptosis in HG-treated Müller cells were detected by RT-qPCR, western blot analysis, ELISA, MTT assay and flow cytometry, respectively. RESULTS MiR-486-3p was poorly expressed, while TLR4 and NF-κB were highly expressed, in HG-treated Müller cells. TLR4 was a target of miR-486-3p. Up-regulating miR-486-3p or down-regulating TLR4 inhibited oxidative stress, inflammation and apoptosis, and promoted proliferation of HG-treated Müller cells. BMSC-derived exosomes likewise inhibited oxidative stress, inflammation and apoptosis, and promoted proliferation of HG-treated Müller cells. Restoring miR-486-3p further enhanced, while up-regulating TLR4 reversed, the improvement conferred by exosome treatment. CONCLUSION Our study highlights that up-regulation of miR-486-3p induced by BMSC-derived exosomes played a protective role in DR mice via TLR4/NF-κB axis repression.
Collapse
Affiliation(s)
- W Li
- Department of Ophthalmology, The Second Clinical Medical College of Jinan University, Shenzhen People's Hospital, 1017 Dongmen North Road, Luohu District, Shenzhen, 518000, Guangdong, China
| | - L Jin
- Department of Ophthalmology, The Second Clinical Medical College of Jinan University, Shenzhen People's Hospital, 1017 Dongmen North Road, Luohu District, Shenzhen, 518000, Guangdong, China
| | - Y Cui
- Department of Ophthalmology, The Second Clinical Medical College of Jinan University, Shenzhen People's Hospital, 1017 Dongmen North Road, Luohu District, Shenzhen, 518000, Guangdong, China
| | - A Nie
- Department of Ophthalmology, The Second Clinical Medical College of Jinan University, Shenzhen People's Hospital, 1017 Dongmen North Road, Luohu District, Shenzhen, 518000, Guangdong, China
| | - N Xie
- Department of Ophthalmology, The Second Clinical Medical College of Jinan University, Shenzhen People's Hospital, 1017 Dongmen North Road, Luohu District, Shenzhen, 518000, Guangdong, China.
| | - G Liang
- Department of Ophthalmology, The Affiliated Hospital of Youjiang Medical University for Nationalities, Baise, 53300, Guangxi, China.
| |
Collapse
|
32
|
Musulin J, Štifanić D, Zulijani A, Ćabov T, Dekanić A, Car Z. An Enhanced Histopathology Analysis: An AI-Based System for Multiclass Grading of Oral Squamous Cell Carcinoma and Segmenting of Epithelial and Stromal Tissue. Cancers (Basel) 2021; 13:1784. [PMID: 33917952 PMCID: PMC8068326 DOI: 10.3390/cancers13081784] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2021] [Revised: 04/02/2021] [Accepted: 04/07/2021] [Indexed: 12/12/2022] Open
Abstract
Oral squamous cell carcinoma is the most frequent histological neoplasm among head and neck cancers, and although it is localized in a region that is easily accessible to visual inspection and can be detected very early, this usually does not occur. The standard procedure for the diagnosis of oral cancer is based on histopathological examination; however, the main problem with this kind of procedure is tumor heterogeneity, where the subjective component of the examination could directly impact patient-specific treatment intervention. For this reason, artificial intelligence (AI) algorithms are widely used as a computational aid in diagnosis for the classification and segmentation of tumors, in order to reduce inter- and intra-observer variability. In this research, a two-stage AI-based system for automatic multiclass grading (the first stage) and segmentation of the epithelial and stromal tissue (the second stage) from oral histopathological images is proposed, in order to assist the clinician in oral squamous cell carcinoma diagnosis. The integration of Xception and SWT resulted in the highest classification values of 0.963 (σ = 0.042) AUCmacro and 0.966 (σ = 0.027) AUCmicro, while using DeepLabv3+ with Xception_65 as the backbone and data preprocessing, semantic segmentation prediction resulted in 0.878 (σ = 0.027) mIoU and a 0.955 (σ = 0.014) F1 score. The obtained results reveal that the proposed AI-based system has great potential in the diagnosis of OSCC.
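The segmentation scores quoted above (mIoU and F1) are standard metrics that can be computed directly from predicted and ground-truth label maps. The following is a minimal NumPy sketch of mean intersection-over-union; the toy arrays are illustrative, not data from the study.

```python
import numpy as np

def miou(pred, target, n_classes):
    """Mean intersection-over-union, averaged over classes that appear
    in either the prediction or the ground truth."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2-class example (0 = stroma, 1 = epithelium, labels assumed):
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
pred   = np.array([[0, 0, 1, 0],
                   [0, 0, 1, 1]])
score = miou(pred, target, 2)   # class IoUs: 0.8 and 0.75 -> 0.775
```

Averaging per-class IoU rather than per-pixel accuracy is what keeps the metric honest when one tissue class dominates the slide.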
Collapse
Affiliation(s)
- Jelena Musulin
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia; (J.M.); (Z.C.)
| | - Daniel Štifanić
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia; (J.M.); (Z.C.)
| | - Ana Zulijani
- Department of Oral Surgery, Clinical Hospital Center Rijeka, Krešimirova Ul. 40, 51000 Rijeka, Croatia;
| | - Tomislav Ćabov
- Faculty of Dental Medicine, University of Rijeka, Krešimirova Ul. 40, 51000 Rijeka, Croatia
| | - Andrea Dekanić
- Department of Pathology and Cytology, Clinical Hospital Center Rijeka, Krešimirova Ul. 42, 51000 Rijeka, Croatia;
- Faculty of Medicine, University of Rijeka, Ul. Braće Branchetta 20/1, 51000 Rijeka, Croatia
| | - Zlatan Car
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia; (J.M.); (Z.C.)
| |
Collapse
|
33
|
Javaid I, Zhang S, Isselmou AEK, Kamhi S, Ahmad IS, Kulsum U. Brain Tumor Classification & Segmentation by Using Advanced DNN, CNN & ResNet-50 Neural Networks. International Journal of Circuits, Systems and Signal Processing 2020; 14:1011-1029. [DOI: 10.46300/9106.2020.14.129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
Abstract
In the medical domain, brain image classification is an extremely challenging task. Medical images play a vital role in a doctor's precise diagnosis and in the surgical process. Adopting intelligent algorithms makes it feasible to detect lesions in medical images quickly, and extracting features from medical images is especially necessary. Several studies have applied multiple algorithms to the medical imaging domain. In feature extraction from medical images, a vast amount of data is analyzed to achieve processing results that help physicians deliver more precise case diagnoses. Image processing is used extensively in medical science to advance early detection and treatment. Accordingly, this paper takes tumor and healthy brain images as its research objects and first performs image preprocessing and data augmentation to feed the dataset to the neural networks. Deep neural networks (DNNs) have, to date, shown outstanding achievement in classification and segmentation tasks. With this in mind, this study adopts the pre-trained model ResNet-50 for image analysis. The paper proposes three diverse neural networks, namely a DNN, a CNN, and ResNet-50, and the split dataset is individually assigned to each network. Once an image is accurately classified as containing a tumor, Otsu segmentation is employed to extract the tumor alone. The experimental outcomes show that the ResNet-50 model achieves a high accuracy of 0.996, a precision of 1.00 with a best F1 score of 1.0, and a minimum test loss of 0.0269 for brain tumor classification. Extensive experiments demonstrate the efficiency and accuracy of the proposed tumor detection and segmentation approach. The approach is sufficiently comprehensive and requires only minimal pre- and post-processing, which allows its adoption in various medical image classification and segmentation tasks.
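The Otsu segmentation step mentioned in this abstract picks a global gray-level threshold by maximizing between-class variance. The following is a minimal NumPy sketch of the method on a synthetic bimodal image; the image values are illustrative, not from the study.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the gray level maximizing the
    between-class variance of the resulting two-class split."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256, dtype=float)
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (prob[:t] * levels[:t]).sum() / w0  # class means
        mu1 = (prob[t:] * levels[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic bimodal "scan": dark background with a bright blob.
img = np.full((32, 32), 30, dtype=np.uint8)
img[10:20, 10:20] = 200
t = otsu_threshold(img)
mask = img >= t   # foreground mask isolating the bright region
```

Running the classifier first and thresholding only tumor-positive images, as the paper describes, keeps this simple global threshold from being fooled by tumor-free scans.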
Collapse
Affiliation(s)
- Imran Javaid
- Hebei University of Technology, 8 Dingzigu 1st Rd, Hongqiao Qu, China
| | - Shuai Zhang
- Hebei University of Technology, 8 Dingzigu 1st Rd, Hongqiao Qu, China
| | | | - Souha Kamhi
- Hebei University of Technology, 8 Dingzigu 1st Rd, Hongqiao Qu, China
| | - Isah Salim Ahmad
- Hebei University of Technology, 8 Dingzigu 1st Rd, Hongqiao Qu, China
| | - Ummay Kulsum
- Hebei University of Technology, 8 Dingzigu 1st Rd, Hongqiao Qu, China
| |
Collapse
|
34
|
ICA-RD: The Regional Domination Policy for Imperialist Competitive Algorithm from Imperialism to Internationalism. Arabian Journal for Science and Engineering 2020. [DOI: 10.1007/s13369-020-04787-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
35
|
Liu F, Wang K, Liu D, Yang X, Tian J. Deep pyramid local attention neural network for cardiac structure segmentation in two-dimensional echocardiography. Med Image Anal 2020; 67:101873. [PMID: 33129143 DOI: 10.1016/j.media.2020.101873] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Revised: 10/12/2020] [Accepted: 10/13/2020] [Indexed: 02/07/2023]
Abstract
Automatic semantic segmentation in 2D echocardiography is vital in clinical practice for assessing various cardiac functions and improving the diagnosis of cardiac diseases. However, two distinct problems have persisted in automatic segmentation in 2D echocardiography, namely the lack of an effective feature enhancement approach for contextual feature capture and lack of label coherence in category prediction for individual pixels. Therefore, in this study, we propose a deep learning model, called deep pyramid local attention neural network (PLANet), to improve the segmentation performance of automatic methods in 2D echocardiography. Specifically, we propose a pyramid local attention module to enhance features by capturing supporting information within compact and sparse neighboring contexts. We also propose a label coherence learning mechanism to promote prediction consistency for pixels and their neighbors by guiding the learning with explicit supervision signals. The proposed PLANet was extensively evaluated on the dataset of cardiac acquisitions for multi-structure ultrasound segmentation (CAMUS) and sub-EchoNet-Dynamic, which are two large-scale and public 2D echocardiography datasets. The experimental results show that PLANet performs better than traditional and deep learning-based segmentation methods on geometrical and clinical metrics. Moreover, PLANet can complete the segmentation of heart structures in 2D echocardiography in real time, indicating a potential to assist cardiologists accurately and efficiently.
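The local attention idea underlying this abstract can be sketched at a single scale: each position re-weights features from a small neighborhood by similarity. The NumPy sketch below is a simplified stand-in under assumed shapes, not the paper's pyramid local attention module, which applies such aggregation over multiple neighborhood sizes with learned projections.

```python
import numpy as np

def local_attention(feat, win=3):
    """Single-scale local attention over a (H, W, C) feature map.

    Each position attends to its win x win neighborhood, weighting
    neighbors by dot-product similarity (zero padding at the borders).
    """
    H, W, C = feat.shape
    r = win // 2
    pad = np.pad(feat, ((r, r), (r, r), (0, 0)))
    out = np.empty_like(feat)
    for i in range(H):
        for j in range(W):
            nb = pad[i:i + win, j:j + win].reshape(-1, C)  # neighbors
            q = feat[i, j]
            s = nb @ q / np.sqrt(C)                        # similarities
            a = np.exp(s - s.max())
            a /= a.sum()                                   # softmax weights
            out[i, j] = a @ nb                             # weighted sum
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 8, 4))   # stand-in feature map
y = local_attention(x)
```

Restricting attention to compact neighborhoods, rather than attending globally, is what keeps such a module cheap enough for the real-time segmentation the abstract reports.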
Collapse
Affiliation(s)
- Fei Liu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; Department of the Artificial Intelligence Technology, University of Chinese Academy of Sciences, Beijing, 100049, China
| | - Kun Wang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; Department of the Artificial Intelligence Technology, University of Chinese Academy of Sciences, Beijing, 100049, China; Zhuhai Precision Medical Center, Zhuhai People's Hospital (affiliated with Jinan University), Zhuhai, 519000, China
| | - Dan Liu
- Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, 330008, China
| | - Xin Yang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; Department of the Artificial Intelligence Technology, University of Chinese Academy of Sciences, Beijing, 100049, China
| | - Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; Zhuhai Precision Medical Center, Zhuhai People's Hospital (affiliated with Jinan University), Zhuhai, 519000, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, 100191, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, 100191, China.
| |
Collapse
|