1. Seoni S, Shahini A, Meiburger KM, Marzola F, Rotunno G, Acharya UR, Molinari F, Salvi M. All you need is data preparation: A systematic review of image harmonization techniques in multi-center/device studies for medical support systems. Comput Methods Programs Biomed 2024; 250:108200. PMID: 38677080. DOI: 10.1016/j.cmpb.2024.108200.
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) models trained on multi-centric and multi-device studies can provide more robust insights and research findings compared to single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes image appearance to enable reliable AI analysis of multi-source medical imaging. METHODS A literature search following PRISMA guidelines was conducted to identify papers published between 2013 and 2023 that analyzed multi-centric and multi-device medical imaging studies using image harmonization approaches. RESULTS Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42%), resampling (increasing the percentage of robust radiomics features from 59.5% to 89.25%), and color normalization (improving AUC by up to 0.25 on external test sets). Initially, mathematical and statistical methods dominated, but machine and deep learning adoption has risen recently. Color imaging modalities such as digital pathology and dermatology have remained prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. In all the modalities covered by this review, image harmonization improved AI performance, with increases of up to 24.42% in classification accuracy and 47% in segmentation Dice scores. CONCLUSIONS Continued progress in image harmonization represents a promising strategy for advancing healthcare by enabling large-scale, reliable AI analysis of integrated multi-source datasets. Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.
Affiliations
- Silvia Seoni
  Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Alen Shahini
  Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Kristen M Meiburger
  Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Francesco Marzola
  Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Giulia Rotunno
  Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya
  School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia; Centre for Health Research, University of Southern Queensland, Australia
- Filippo Molinari
  Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Massimo Salvi
  Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
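A minimal, editorial sketch of the grayscale-normalization and histogram-matching family of harmonization techniques surveyed in the entry above: per-image z-score normalization and histogram matching to a reference scan with scikit-image. This is a generic illustration under assumed inputs (the synthetic arrays stand in for scans from two centers), not the protocol of any study reviewed here.

```python
# Generic grayscale-harmonization sketch: z-score normalization and histogram
# matching to a reference scan. Illustrative only; the arrays below are
# synthetic stand-ins for scans acquired at two different centers.
import numpy as np
from skimage.exposure import match_histograms

def zscore_normalize(img: np.ndarray) -> np.ndarray:
    """Rescale an image to zero mean and unit variance (per image)."""
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-8)

def harmonize_to_reference(img: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Match the grayscale histogram of `img` to a chosen reference scan."""
    return match_histograms(img, reference)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.5, 0.10, size=(128, 128)).clip(0, 1)  # stand-in, center A
    moving = rng.normal(0.3, 0.05, size=(128, 128)).clip(0, 1)     # stand-in, center B
    z_norm = zscore_normalize(moving)                    # option 1: per-image z-score
    matched = harmonize_to_reference(moving, reference)  # option 2: histogram matching
    print(round(float(z_norm.std()), 3), round(float(matched.mean()), 3))
```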
2. Duan H, Yan W. Visual fatigue: a comprehensive review of mechanisms of occurrence, animal model design and nutritional intervention strategies. Crit Rev Food Sci Nutr 2023:1-25. PMID: 38153314. DOI: 10.1080/10408398.2023.2298789.
Abstract
Intensive visual work readily causes eye discomfort such as blurred vision, soreness, dryness, and tearing, collectively termed visual fatigue. Visual fatigue not only reduces work and study efficiency; when prolonged, it can also harm physical and mental health. In recent years, the spread of electronic devices, while convenient for work and study, has made visual fatigue more frequent among their users. Studies report that the number of affected people is increasing year by year and spans a broad range, especially students, people who work for long periods at computers or fine instruments (such as microscopes), and older adults with age-related decline in ocular function. A growing number of studies propose that supplementation with appropriate nutrients can effectively relieve visual fatigue and promote eye health. This review discusses the physiological mechanisms of visual fatigue and the design of animal experiments from the perspective of modern nutritional science. Functional food ingredients with the ability to alleviate visual fatigue are discussed in detail.
Affiliations
- Hao Duan
  College of Biochemical Engineering, Beijing Key Laboratory of Bioactive Substances and Functional Food, Beijing Union University, Beijing, China
- Wenjie Yan
  College of Biochemical Engineering, Beijing Key Laboratory of Bioactive Substances and Functional Food, Beijing Union University, Beijing, China
3. Arslan S, Kaya MK, Tasci B, Kaya S, Tasci G, Ozsoy F, Dogan S, Tuncer T. Attention TurkerNeXt: Investigations into Bipolar Disorder Detection Using OCT Images. Diagnostics (Basel) 2023; 13:3422. PMID: 37998558. PMCID: PMC10669998. DOI: 10.3390/diagnostics13223422.
Abstract
Background and Aim: In the era of deep learning, numerous models have emerged across the literature and various application domains. Transformer architectures in particular have gained popularity, spawning diverse transformer-based computer vision algorithms. Attention convolutional neural networks (CNNs) have been introduced to enhance image classification capabilities. In this context, we propose a novel attention convolutional model with the primary objective of detecting bipolar disorder using optical coherence tomography (OCT) images. Materials and Methods: To facilitate our study, we curated a unique OCT image dataset, initially comprising two distinct cases. For the development of an automated OCT image detection system, we introduce a new attention convolutional neural network named "TurkerNeXt". The proposed Attention TurkerNeXt comprises four key modules: (i) the patchify stem block, (ii) the Attention TurkerNeXt block, (iii) the patchify downsampling block, and (iv) the output block. In line with the Swin transformer, we employ a patchify operation in this study. The design of the attention block draws inspiration from ConvNeXt, with an added shortcut operation to mitigate the vanishing gradient problem. The overall architecture is influenced by ResNet18. Results: The dataset comprises two distinctive cases: (i) top to bottom and (ii) left to right. Each case contains 987 training and 328 test images. Our newly proposed Attention TurkerNeXt achieved 100% test and validation accuracies for both cases. Conclusions: We curated a novel OCT dataset and introduced a new CNN, named TurkerNeXt, in this research. Based on the research findings and classification results, the proposed TurkerNeXt model demonstrated excellent classification performance. This investigation distinctly underscores the potential of OCT images as a biomarker for bipolar disorder.
Affiliation(s)
| | | | - Burak Tasci
- Vocational School of Technical Sciences, Firat University, 23119 Elazig, Turkey
| | - Suheda Kaya
- Department of Psychiatry, Elazig Fethi Sekin City Hospital, 23100 Elazig, Turkey; (S.K.); (G.T.)
| | - Gulay Tasci
- Department of Psychiatry, Elazig Fethi Sekin City Hospital, 23100 Elazig, Turkey; (S.K.); (G.T.)
| | - Filiz Ozsoy
- Department of Psychiatry, School of Medicine, Tokat Gaziosmanpasa University, 60100 Tokat, Turkey;
| | - Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, 23119 Elazig, Turkey; (S.D.); (T.T.)
| | - Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, 23119 Elazig, Turkey; (S.D.); (T.T.)
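The entry above names two architectural ingredients, a patchify stem and a ConvNeXt-inspired attention block with a residual shortcut, without publishing the full layer specification. The sketch below is therefore only a hedged PyTorch illustration of those two generic building blocks; the layer sizes are assumptions and this is not the published TurkerNeXt architecture.

```python
# Hedged sketch of two building blocks named above: a "patchify" stem
# (non-overlapping strided convolution, as in Swin/ConvNeXt) and a
# ConvNeXt-style block with a residual shortcut. Not the TurkerNeXt model;
# channel sizes and patch size are illustrative.
import torch
import torch.nn as nn

class PatchifyStem(nn.Module):
    def __init__(self, in_ch: int = 3, embed_dim: int = 96, patch: int = 4):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)
        self.norm = nn.BatchNorm2d(embed_dim)

    def forward(self, x):
        return self.norm(self.proj(x))

class ConvNeXtLikeBlock(nn.Module):
    def __init__(self, dim: int = 96, expansion: int = 4):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.pw1 = nn.Conv2d(dim, dim * expansion, kernel_size=1)
        self.act = nn.GELU()
        self.pw2 = nn.Conv2d(dim * expansion, dim, kernel_size=1)

    def forward(self, x):
        # Residual shortcut helps mitigate vanishing gradients in deep stacks.
        return x + self.pw2(self.act(self.pw1(self.dwconv(x))))

if __name__ == "__main__":
    x = torch.randn(1, 3, 224, 224)                 # dummy OCT-like input
    y = ConvNeXtLikeBlock()(PatchifyStem()(x))
    print(y.shape)                                  # torch.Size([1, 96, 56, 56])
```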
4. Ait Hammou B, Antaki F, Boucher MC, Duval R. MBT: Model-Based Transformer for retinal optical coherence tomography image and video multi-classification. Int J Med Inform 2023; 178:105178. PMID: 37657204. DOI: 10.1016/j.ijmedinf.2023.105178.
Abstract
BACKGROUND AND OBJECTIVE The detection of retinal diseases using optical coherence tomography (OCT) images and videos is a concrete example of a data classification problem. In recent years, transformer architectures have been successfully applied to a variety of real-world classification problems. Although they have shown impressive discriminative abilities compared to other state-of-the-art models, improving their performance is essential, especially in healthcare-related problems. METHODS This paper presents an effective technique named model-based transformer (MBT). It is based on popular pre-trained transformer models, particularly the vision transformer and Swin transformer for OCT image classification and the multiscale vision transformer for OCT video classification. The proposed approach represents OCT data by taking advantage of an approximate sparse representation technique, then estimates the optimal features and performs data classification. RESULTS The experiments are carried out using three real-world retinal datasets. The experimental results on OCT image and OCT video datasets show that the proposed method outperforms existing state-of-the-art deep learning approaches in terms of classification accuracy, precision, recall, F1-score, kappa, AUC-ROC, and AUC-PR. It can also boost the performance of existing transformer models, including the vision transformer and Swin transformer for OCT image classification and multiscale vision transformers for OCT video classification. CONCLUSIONS This work presents an approach for the automated detection of retinal diseases. Although deep neural networks have shown great potential in ophthalmology applications, our findings demonstrate for the first time a new way to identify retinal pathologies using OCT videos instead of images. Moreover, our proposal can help researchers enhance the discriminative capacity of a variety of powerful deep learning models presented in published papers. This can be valuable for future directions in medical research and clinical practice.
Affiliations
- Badr Ait Hammou
  Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est-de-l'Île-de-Montréal, Montréal, Québec, Canada
- Fares Antaki
  Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est-de-l'Île-de-Montréal, Montréal, Québec, Canada; Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
- Marie-Carole Boucher
  Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est-de-l'Île-de-Montréal, Montréal, Québec, Canada
- Renaud Duval
  Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est-de-l'Île-de-Montréal, Montréal, Québec, Canada
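The MBT method above builds on pre-trained vision and Swin transformers; its approximate sparse-representation step is specific to the paper and is not reproduced here. As a hedged illustration of the common starting point only, the sketch below loads a pre-trained ViT from torchvision, freezes it as a feature extractor, and attaches a new classification head (the number of OCT classes is a placeholder).

```python
# Hedged illustration of the generic starting point for transformer-based OCT
# classifiers: a pre-trained ViT backbone with a new classification head.
# The sparse-representation step of MBT is not reproduced; NUM_CLASSES is a
# placeholder.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

NUM_CLASSES = 4  # placeholder, e.g. normal vs. three pathology classes

model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pre-trained backbone
    p.requires_grad = False
model.heads = nn.Linear(model.hidden_dim, NUM_CLASSES)  # trainable head

x = torch.randn(2, 3, 224, 224)       # dummy batch of OCT B-scans (RGB-tiled)
logits = model(x)
print(logits.shape)                   # torch.Size([2, 4])
```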
5. Durai DBJ, Jaya T. Automatic severity grade classification of diabetic retinopathy using deformable ladder Bi attention U-net and deep adaptive CNN. Med Biol Eng Comput 2023. PMID: 37338737. DOI: 10.1007/s11517-023-02860-9.
Abstract
Long-term exposure to diabetes mellitus leads to the formation of diabetic retinopathy (DR), which can cause vision loss in working-age adults. Early-stage diagnosis of DR is essential for preventing vision loss and preserving vision in people with diabetes. The motivation behind severity grade classification of DR is to develop an automated system that can assist ophthalmologists and healthcare professionals in the diagnosis and management of DR. However, existing methods suffer from variability in image quality, similar structures of normal and lesion regions, high-dimensional features, variability in disease manifestations, small datasets, high training loss, model complexity, and overfitting, which lead to high misclassification errors in severity grading. Hence, there is a need for an automated system using improved deep learning techniques to provide reliable and consistent grading of DR severity with high classification accuracy from fundus images. To address these issues, we propose a Deformable Ladder Bi-attention U-shaped encoder-decoder network and Deep Adaptive Convolutional Neural Network (DLBUnet-DACNN) for accurate severity classification of DR. The DLBUnet performs lesion segmentation and can be divided into three parts: the encoder, the central processing module, and the decoder. In the encoder, deformable convolution is used instead of standard convolution to learn different lesion shapes by understanding the offset locations. Ladder Atrous Spatial Pyramidal Pooling (LASPP) with variable dilation rates is then introduced in the central processing module. LASPP enhances tiny lesion features, while the variable dilation rates avoid gridding effects and capture better global context information. The decoder then uses a bi-attention layer containing spatial and channel attention, which learns lesion contours and edges accurately. Finally, the severity of DR is classified by a DACNN that extracts discriminative features from the segmentation results. Experiments are conducted on the Messidor-2, Kaggle, and Messidor datasets. Our proposed DLBUnet-DACNN achieves better results than existing methods, with an accuracy of 98.2%, recall of 0.987, kappa coefficient of 0.993, precision of 0.98, F1-score of 0.981, Matthews correlation coefficient (MCC) of 0.93, and classification success index (CSI) of 0.96.
Affiliations
- D Binny Jeba Durai
  Department of Electronics and Communication Engineering, Udaya School of Engineering, Vellamodi, India
- T Jaya
  Department of Electronics and Communication Engineering, C.S.I. Institute of Technology, Thovalai, India
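The LASPP module described above is an atrous spatial pyramid pooling variant with variable dilation rates; its exact ladder wiring is not given in the abstract. The following is a hedged sketch of a plain ASPP block with several dilation rates in PyTorch, with the rates chosen purely for illustration.

```python
# Hedged sketch of an atrous spatial pyramid pooling (ASPP) block with several
# dilation rates, the general idea behind the LASPP module named above. The
# specific "ladder" wiring of the paper is not reproduced; rates are placeholders.
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r,
                          bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # 1x1 projection after concatenating all dilation branches
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)        # dummy encoder feature map
    print(SimpleASPP(64, 64)(feats).shape)    # torch.Size([1, 64, 32, 32])
```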
6. Sut SK, Koc M, Zorlu G, Serhatlioglu I, Barua PD, Dogan S, Baygin M, Tuncer T, Tan RS, Acharya UR. Automated Adrenal Gland Disease Classes Using Patch-Based Center Symmetric Local Binary Pattern Technique with CT Images. J Digit Imaging 2023; 36:879-892. PMID: 36658376. PMCID: PMC10287607. DOI: 10.1007/s10278-022-00759-9.
Abstract
Incidental adrenal masses are seen in 5% of abdominal computed tomography (CT) examinations. Accurate discrimination of the possible differential diagnoses has important therapeutic and prognostic significance. A new handcrafted machine learning method has been developed for the automated and accurate classification of adrenal gland CT images. A new dataset comprising 759 adrenal gland CT image slices from 96 subjects was analyzed. Experts labeled the collected images into four classes: normal, pheochromocytoma, lipid-poor adenoma, and metastasis. The images were preprocessed and resized, and image features were extracted using the center symmetric local binary pattern (CS-LBP) method. CT images were then divided into 16 × 16 fixed-size patches, and further CS-LBP feature extraction was performed on these patches. Next, the extracted features were selected using neighborhood component analysis (NCA) to obtain the most meaningful ones for downstream classification. Finally, the selected features were classified using k-nearest neighbor (kNN), support vector machine (SVM), and neural network (NN) classifiers to identify the best-performing model. Our proposed method obtained accuracies of 99.87%, 99.21%, and 98.81% with the kNN, SVM, and NN classifiers, respectively. Hence, the kNN classifier yielded the highest classification results, with no pathological image misclassified as normal. Our fixed-patch CS-LBP-based automatic classification of adrenal gland pathologies on CT images is highly accurate and has low time complexity. It has the potential to be used for screening of adrenal gland disease classes with CT images.
Affiliations
- Suat Kamil Sut
  Department of Radiology, Adiyaman Training and Research Hospital, Adiyaman, Turkey
- Mustafa Koc
  Department of Radiology, Faculty of Medicine, Firat University, Elazig, Turkey
- Gokhan Zorlu
  Department of Biophysics, Faculty of Medicine, Firat University, Elazig, Turkey
- Ihsan Serhatlioglu
  Department of Biophysics, Faculty of Medicine, Firat University, Elazig, Turkey
- Prabal Datta Barua
  School of Business (Information System), University of Southern Queensland, Toowoomba, QLD 4350, Australia
  Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Sengul Dogan
  Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Mehmet Baygin
  Department of Computer Engineering, College of Engineering, Ardahan University, Ardahan, Turkey
- Turker Tuncer
  Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Ru-San Tan
  Department of Cardiology, National Heart Centre Singapore, Singapore
  Duke-NUS Medical School, Singapore
- U Rajendra Acharya
  Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
  Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
  Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
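A hedged sketch of the patch-based center-symmetric LBP (CS-LBP) feature extraction described in the entry above: a 4-bit CS-LBP code image is computed for the whole slice and for fixed 16 × 16 patches, and the histograms are concatenated. The comparison threshold and bin count are assumptions, and the NCA/kNN stages are omitted; this is not the authors' implementation.

```python
# Editorial sketch of CS-LBP histograms computed on the whole image and on
# fixed 16x16 patches. Threshold and bin count are assumptions; no NCA/kNN
# stage is shown, and the input below is a synthetic stand-in for a CT slice.
import numpy as np

def cs_lbp(img: np.ndarray, thr: float = 0.01) -> np.ndarray:
    """4-bit CS-LBP codes from the 8-neighborhood (opposite-pixel comparisons)."""
    img = img.astype(np.float32)
    center = img[1:-1, 1:-1]                              # defines the output shape
    pairs = [                                             # (neighbor, opposite neighbor)
        (img[:-2, :-2],  img[2:, 2:]),    # NW vs SE
        (img[:-2, 1:-1], img[2:, 1:-1]),  # N  vs S
        (img[:-2, 2:],   img[2:, :-2]),   # NE vs SW
        (img[1:-1, 2:],  img[1:-1, :-2]), # E  vs W
    ]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > thr).astype(np.uint8) << bit
    return code                                           # values in [0, 15]

def patch_cslbp_features(img: np.ndarray, patch: int = 16) -> np.ndarray:
    """Concatenate the global CS-LBP histogram with per-patch histograms."""
    feats = [np.bincount(cs_lbp(img).ravel(), minlength=16)]
    h, w = img.shape
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            block = img[r:r + patch, c:c + patch]
            feats.append(np.bincount(cs_lbp(block).ravel(), minlength=16))
    return np.concatenate(feats).astype(np.float32)

if __name__ == "__main__":
    dummy_ct = np.random.rand(128, 128)       # placeholder CT slice
    print(patch_cslbp_features(dummy_ct).shape)
```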
7. Bhattacharya A, Saha B, Chattopadhyay S, Sarkar R. Deep feature selection using adaptive β-Hill Climbing aided whale optimization algorithm for lung and colon cancer detection. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104692.
8. Açış B, Güney S. Classification of human movements by using Kinect sensor. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104417.
9. Qiu S, Li C, Feng Y, Zuo S, Liang H, Xu A. GFANet: Gated Fusion Attention Network for skin lesion segmentation. Comput Biol Med 2023; 155:106462. PMID: 36857942. DOI: 10.1016/j.compbiomed.2022.106462.
Abstract
Automatic segmentation of skin lesions is crucial for diagnosing and treating skin diseases. Although current medical image segmentation methods have significantly improved skin lesion segmentation results, the following major challenges still affect segmentation performance: (i) segmentation targets have irregular shapes and diverse sizes, and (ii) there is low contrast or blurred boundaries between lesions and background. To address these issues, this study proposes a Gated Fusion Attention Network (GFANet) that uses two progressive relation decoders to accurately segment skin lesion images. First, a Context Features Gated Fusion Decoder (CGFD) fuses multiple levels of contextual features and generates a prediction result as the initial guide map. This map is then optimized by a prediction decoder consisting of a shape flow and a final Gated Convolution Fusion (GCF) module: a set of Channel Reverse Attention (CRA) modules and GCF modules is used iteratively in the shape flow to combine the features of the current layer with the prediction results of the adjacent next layer and gradually extract boundary information. Finally, to speed up network convergence and improve segmentation accuracy, GCF fuses low-level features from the encoder with the final output of the shape flow. To verify the effectiveness and advantages of the proposed GFANet, we conduct extensive experiments on four publicly available skin lesion datasets (International Skin Imaging Collaboration [ISIC] 2016, ISIC 2017, ISIC 2018, and PH2) and compare it with state-of-the-art methods. The experimental results show that GFANet achieves excellent segmentation performance on commonly used evaluation metrics, and the segmentation results are stable. The source code is available at https://github.com/ShiHanQ/GFANet.
Affiliations
- Shihan Qiu
  Department of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China
- Chengfei Li
  Department of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China
- Yue Feng
  Department of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China
- Song Zuo
  Department of Hemangioma and Vascular Malformation, Henan Provincial People's Hospital, People's Hospital of Zhengzhou University, Zhengzhou, Henan, 450003, China
- Huijie Liang
  Department of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China
- Ao Xu
  Department of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China
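GFANet's CRA and GCF modules are specific to the paper above and are not reproduced here. The block below only sketches the generic reverse-attention idea they build on: encoder features are re-weighted by the complement of an upsampled coarse prediction so that refinement concentrates on the regions the coarse map missed.

```python
# Hedged sketch of the generic "reverse attention" idea referenced above:
# encoder features are weighted by (1 - sigmoid(coarse prediction)) so the
# refinement stage focuses on missed regions. Not the GFANet modules themselves.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttentionRefine(nn.Module):
    def __init__(self, in_ch: int):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, 1, kernel_size=1),   # residual correction map
        )

    def forward(self, feat, coarse_pred):
        # Upsample the coarse prediction to the feature resolution.
        coarse = F.interpolate(coarse_pred, size=feat.shape[2:],
                               mode="bilinear", align_corners=False)
        rev = 1.0 - torch.sigmoid(coarse)         # reverse-attention weights
        correction = self.refine(feat * rev)      # refine the missed regions
        return coarse + correction                # updated prediction logits

if __name__ == "__main__":
    feat = torch.randn(1, 32, 64, 64)             # encoder feature map
    coarse = torch.randn(1, 1, 16, 16)            # coarse segmentation logits
    print(ReverseAttentionRefine(32)(feat, coarse).shape)  # torch.Size([1, 1, 64, 64])
```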
10. Tuncer I, Barua PD, Dogan S, Baygin M, Tuncer T, Tan RS, Yeong CH, Acharya UR. Swin-textural: A novel textural features-based image classification model for COVID-19 detection on chest computed tomography. Inform Med Unlocked 2023; 36:101158. PMID: 36618887. PMCID: PMC9804964. DOI: 10.1016/j.imu.2022.101158.
Abstract
Background Chest computed tomography (CT) has a high sensitivity for detecting COVID-19 lung involvement and is widely used for diagnosis and disease monitoring. We propose a new image classification model, swin-textural, that combines swin-based patch division with textural feature extraction for automated diagnosis of COVID-19 on chest CT images. The main objective of this work is to evaluate the performance of the swin architecture in feature engineering. Material and method We used a public dataset comprising 2167, 1247, and 757 (total 4171) transverse chest CT images belonging to 80, 80, and 50 (total 210) subjects with COVID-19, other non-COVID lung conditions, and normal lung findings, respectively. In our model, resized 420 × 420 input images were divided using uniform square patches of incremental dimensions, which yielded ten feature extraction layers. At each layer, local binary pattern and local phase quantization operations extracted textural features from individual patches as well as the undivided input image. Iterative neighborhood component analysis was used to select the most informative set of features to form ten selected feature vectors, and also to select an 11th vector from among the top selected feature vectors with accuracy >97.5%. The downstream kNN classifier calculated 11 prediction vectors, from which iterative hard majority voting generated another nine voted prediction vectors. Finally, the best result among the twenty was determined using a greedy algorithm. Results Swin-textural attained 98.71% three-class classification accuracy, outperforming published deep learning models trained on the same dataset. The model has linear time complexity. Conclusions Our handcrafted, computationally lightweight swin-textural model can detect COVID-19 accurately on chest CT images with low misclassification rates. The model can be implemented in hospitals for efficient automated screening of COVID-19 on chest CT images. Moreover, the findings demonstrate that swin-textural is a self-organized, highly accurate, and lightweight image classification model that outperforms the compared deep learning models on this dataset.
Affiliations
- Ilknur Tuncer
  Elazig Governorship, Interior Ministry, Elazig, Turkey
- Prabal Datta Barua
  School of Business (Information System), University of Southern Queensland, Toowoomba, QLD 4350, Australia
  Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Sengul Dogan
  Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Mehmet Baygin
  Department of Computer Engineering, Faculty of Engineering, Ardahan University, Ardahan, Turkey
- Turker Tuncer
  Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Ru-San Tan
  Department of Cardiology, National Heart Centre Singapore, Singapore
  Duke-NUS Medical School, Singapore
- Chai Hong Yeong
  School of Medicine, Faculty of Health and Medical Sciences, Taylor's University, 47500 Subang Jaya, Malaysia
- U Rajendra Acharya
  Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
  Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
  Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
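A hedged sketch of the iterative hard majority voting step described above: prediction vectors are sorted by accuracy, the top-k (k = 3..n) are combined by column-wise majority vote, and the best result is kept greedily. The synthetic labels and the accuracy-based ordering are assumptions made for illustration; the texture-extraction and feature-selection stages are omitted.

```python
# Hedged sketch of iterative hard majority voting (IHMV) over prediction
# vectors. Synthetic labels stand in for the kNN outputs; not the authors'
# exact pipeline.
import numpy as np
from sklearn.metrics import accuracy_score

def hard_vote(pred_matrix: np.ndarray) -> np.ndarray:
    """Column-wise majority label over a stack of prediction vectors."""
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, pred_matrix)

def iterative_hard_majority_vote(pred_vectors, y_true):
    """Sort prediction vectors by accuracy, vote over the top-k (k = 3..n),
    and greedily keep the most accurate result."""
    accs = [accuracy_score(y_true, p) for p in pred_vectors]
    order = np.argsort(accs)[::-1]                       # best-first ordering
    sorted_preds = [pred_vectors[i] for i in order]
    best_labels, best_acc = sorted_preds[0], max(accs)
    for k in range(3, len(sorted_preds) + 1):
        voted = hard_vote(np.stack(sorted_preds[:k]))
        acc = accuracy_score(y_true, voted)
        if acc > best_acc:                               # greedy: keep the best so far
            best_labels, best_acc = voted, acc
    return best_labels, best_acc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 3, size=200)                     # synthetic 3-class labels
    preds = [np.where(rng.random(200) < 0.85, y, rng.integers(0, 3, size=200))
             for _ in range(11)]                         # 11 noisy prediction vectors
    labels, acc = iterative_hard_majority_vote(preds, y)
    print(round(acc, 3))
```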
11. Tajmirriahi M, Rostamian R, Amini Z, Hamidi A, Zam A, Rabbani H. Mixture of Symmetric Stable Distributions for Macular Pathology Detection in Optical Coherence Tomography Scans. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:3866-3869. PMID: 36086049. DOI: 10.1109/embc48229.2022.9871357.
Abstract
Optical coherence tomography (OCT) is widely used to detect retinal disorders. In this study, a new methodology is proposed for automatic detection of macular pathologies in OCT images. Our approach models normal and abnormal OCT images with an α-stable mixture model represented by stochastic differential equations (SDE), and the model parameters are used to detect abnormal OCT images. The α-stable mixture model is created after applying a fractional Laplacian operator to the image, and an expectation-maximization (EM) algorithm is applied to estimate its parameters. An OCT image is classified as normal or abnormal by training an SVM classifier on the estimated parameters of the mixture model. The method is evaluated for detecting macular abnormalities such as AMD, DME, and MH, and achieves a maximum accuracy of 97.8%. Clinical Relevance - This study establishes an automatic method for anomaly detection in OCT images, providing fast and accurate OCT interpretation in clinical applications.
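The entry above fits an α-stable mixture model (via EM, after fractional-Laplacian filtering) and classifies the estimated parameters with an SVM. The EM fit for α-stable mixtures is involved and is not reproduced here; as a hedged stand-in, the sketch below keeps only the surrounding pipeline, simple distributional statistics of a Laplacian-filtered image fed to an SVM, using synthetic data.

```python
# Hedged sketch of the "fit a statistical model to filtered OCT images, then
# classify the fitted parameters with an SVM" pipeline. The alpha-stable
# mixture EM step is NOT reproduced; a plain Laplacian filter and summary
# statistics stand in for it, and the images are synthetic.
import numpy as np
from scipy.ndimage import laplace
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def sparse_domain_features(img: np.ndarray) -> np.ndarray:
    """Summary statistics of a high-pass (Laplacian) filtered image."""
    d = laplace(img.astype(np.float32)).ravel()
    return np.array([d.std(), skew(d), kurtosis(d), np.mean(np.abs(d))])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X, y = [], []
    for label in (0, 1):                       # 0 = "normal", 1 = "abnormal" stand-ins
        for _ in range(40):
            img = rng.normal(size=(64, 64))
            if label == 1:
                img[20:40, 20:40] += 3.0       # crude lesion-like perturbation
            X.append(sparse_domain_features(img))
            y.append(label)
    scores = cross_val_score(SVC(kernel="rbf"), np.array(X), np.array(y), cv=5)
    print(round(scores.mean(), 3))
```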