1. Aboelkhir HAB, Elomri A, ElMekkawy TY, Kerbache L, Elakkad MS, Al-Ansari A, Aboumarzouk OM, El Omri A. A Bibliometric Analysis and Visualization of Decision Support Systems for Healthcare Referral Strategies. International Journal of Environmental Research and Public Health 2022; 19:16952. [PMID: 36554837] [PMCID: PMC9778793] [DOI: 10.3390/ijerph192416952] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/06/2022] [Revised: 10/24/2022] [Accepted: 11/14/2022] [Indexed: 06/17/2023]
Abstract
BACKGROUND The referral process is an important research focus because of the potential consequences of delays, especially for patients with serious medical conditions that need immediate care, such as those with metastatic cancer. Thus, a systematic literature review of recent and influential manuscripts is critical to understanding the current methods and future directions in order to improve the referral process. METHODS A hybrid bibliometric-structured review was conducted using both quantitative and qualitative methodologies. Searches were conducted of three databases, Web of Science, Scopus, and PubMed, in addition to the references from the eligible papers. The papers were considered to be eligible if they were relevant English articles or reviews that were published from January 2010 to June 2021. The searches were conducted using three groups of keywords, and bibliometric analysis was performed, followed by content analysis. RESULTS A total of 163 papers that were published in impactful journals between January 2010 and June 2021 were selected. These papers were then reviewed, analyzed, and categorized as follows: descriptive analysis (n = 77), cause and effect (n = 12), interventions (n = 50), and quality management (n = 24). Six future research directions were identified. CONCLUSIONS Minimal attention was given to the study of the primary referral of blood cancer cases versus those with solid cancer types, which is a gap that future studies should address. More research is needed in order to optimize the referral process, specifically for suspected hematological cancer patients.
Affiliation(s)
- Adel Elomri: College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Tarek Y. ElMekkawy: Department of Mechanical and Industrial Engineering, College of Engineering, Qatar University, Doha 2713, Qatar
- Laoucine Kerbache: College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Mohamed S. Elakkad: Surgical Research Section, Department of Surgery, Hamad Medical Corporation, Doha 3050, Qatar
- Abdulla Al-Ansari: Surgical Research Section, Department of Surgery, Hamad Medical Corporation, Doha 3050, Qatar
- Omar M. Aboumarzouk: Surgical Research Section, Department of Surgery, Hamad Medical Corporation, Doha 3050, Qatar; College of Medicine, QU-Health, Qatar University, Doha 2713, Qatar; School of Medicine, Dentistry and Nursing, The University of Glasgow, Glasgow G12 8QQ, UK
- Abdelfatteh El Omri: Surgical Research Section, Department of Surgery, Hamad Medical Corporation, Doha 3050, Qatar
2. Google’s new AI technology detects cardiac issues using retinal scan. Applied Nanoscience 2022. [DOI: 10.1007/s13204-021-02208-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/02/2022]
3. Narhari BB, Murlidhar BK, Sayyad AD, Sable GS. Automated diagnosis of diabetic retinopathy enabled by optimized thresholding-based blood vessel segmentation and hybrid classifier. Bio-Algorithms and Med-Systems 2020. [DOI: 10.1515/bams-2020-0053] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Indexed: 01/14/2023]
Abstract
Objectives
The focus of this paper is to introduce an automated early Diabetic Retinopathy (DR) detection scheme from colour fundus images through enhanced segmentation and classification strategies by analyzing blood vessels.
Methods
The prevalence of DR has been increasing in recent years; the disease damages the eyes as a result of elevated blood glucose levels. Worldwide, a substantial proportion of people under the age of 70 are severely affected by diabetes. Patients with DR lose vision when the disease is not recognized early and treated appropriately, so early detection and timely treatment are essential to limit disease progression and the resulting vision loss. Deep learning models have recently shown strong performance in DR detection from retinal images. In this work, the input retinal fundus images first undergo pre-processing, with contrast enhancement by Contrast Limited Adaptive Histogram Equalization (CLAHE) followed by average filtering. Optimized binary thresholding is then applied to segment the blood vessels. The segmented image is decomposed with a tri-level Discrete Wavelet Transform (Tri-DWT). In the feature extraction phase, Local Binary Pattern (LBP) and Gray-Level Co-occurrence Matrix (GLCM) features are extracted. Classification then combines two algorithms: a Neural Network (NN), which receives the extracted features, and a Convolutional Neural Network (CNN), which receives the Tri-DWT-decomposed image. Both the segmentation and classification phases are tuned by an improved meta-heuristic, the Fitness Rate-based Crow Search Algorithm (FR-CSA), in which a few parameters are optimized to maximize detection accuracy.
Results
The proposed DR detection model was implemented in MATLAB 2018a, and the analysis was done using three datasets, HRF, Messidor, and DIARETDB.
Conclusions
The developed FR-CSA algorithm has the best detection accuracy in diagnosing DR.
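The Methods section above describes a multi-stage pipeline: CLAHE contrast enhancement and average filtering, thresholding-based vessel segmentation, wavelet decomposition, LBP and GLCM texture features, and a hybrid NN/CNN classifier tuned by FR-CSA. The fragment below is a minimal sketch of the pre-processing and texture-feature front end only, written in Python with OpenCV and scikit-image rather than the authors' MATLAB code; the Otsu threshold stands in for the optimized thresholding step, and the Tri-DWT, the classifiers, and the FR-CSA optimizer are omitted. All parameter values are illustrative assumptions.

    import cv2
    import numpy as np
    from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

    def vessel_mask_and_texture_features(path):
        # Green channel carries most of the vessel contrast in fundus images
        green = cv2.imread(path)[:, :, 1]

        # CLAHE contrast enhancement followed by average filtering
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = cv2.blur(clahe.apply(green), (3, 3))

        # Otsu threshold as a stand-in for the optimized thresholding-based segmentation
        _, vessels = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

        # Texture descriptors: uniform LBP histogram and GLCM statistics
        lbp = local_binary_pattern(enhanced, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        glcm = graycomatrix(enhanced, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        glcm_feats = [graycoprops(glcm, p)[0, 0]
                      for p in ("contrast", "homogeneity", "energy", "correlation")]
        return vessels, np.concatenate([lbp_hist, glcm_feats])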
Affiliation(s)
- Bansode Balbhim Narhari: Department of Electronics & Telecommunication Engineering, MIT College of Engineering, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, India
- Bakwad Kamlakar Murlidhar: Department of Electronics Engineering, Puranmal Lahoti Govt. Polytechnic College, MSBTE, Latur, Mumbai, India
- Ajij Dildar Sayyad: Department of Electronics & Telecommunication Engineering, MIT College of Engineering, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, India
- Ganesh Shahubha Sable: Department of Electronics & Telecommunication Engineering, MIT College of Engineering, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, India
4. Gossec L, Guyard F, Leroy D, Lafargue T, Seiler M, Jacquemin C, Molto A, Sellam J, Foltz V, Gandjbakhch F, Hudry C, Mitrovic S, Fautrel B, Servy H. Detection of Flares by Decrease in Physical Activity, Collected Using Wearable Activity Trackers in Rheumatoid Arthritis or Axial Spondyloarthritis: An Application of Machine Learning Analyses in Rheumatology. Arthritis Care Res (Hoboken) 2020; 71:1336-1343. [PMID: 30242992] [DOI: 10.1002/acr.23768] [Citation(s) in RCA: 77] [Impact Index Per Article: 15.4] [Received: 01/30/2018] [Accepted: 09/18/2018] [Indexed: 12/14/2022]
Abstract
OBJECTIVE Flares in rheumatoid arthritis (RA) and axial spondyloarthritis (SpA) may influence physical activity. The aim of this study was to assess longitudinally the association between patient-reported flares and activity-tracker-provided steps per minute, using machine learning. METHODS This prospective observational study (ActConnect) included patients with definite RA or axial SpA. For a 3-month time period, physical activity was assessed continuously by number of steps/minute, using a consumer grade activity tracker, and flares were self-assessed weekly. Machine-learning techniques were applied to the data set. After intrapatient normalization of the physical activity data, multiclass Bayesian methods were used to calculate sensitivities, specificities, and predictive values of the machine-generated models of physical activity in order to predict patient-reported flares. RESULTS Overall, 155 patients (1,339 weekly flare assessments and 224,952 hours of physical activity assessments) were analyzed. The mean ± SD age for patients with RA (n = 82) was 48.9 ± 12.6 years and was 41.2 ± 10.3 years for those with axial SpA (n = 73). The mean ± SD disease duration was 10.5 ± 8.8 years for patients with RA and 10.8 ± 9.1 years for those with axial SpA. Fourteen patients with RA (17.1%) and 41 patients with axial SpA (56.2%) were male. Disease was well-controlled (Disease Activity Score in 28 joints mean ± SD 2.2 ± 1.2; Bath Ankylosing Spondylitis Disease Activity Index score mean ± SD 3.1 ± 2.0), but flares were frequent (22.7% of all weekly assessments). The model generated by machine learning performed well against patient-reported flares (mean sensitivity 96% [95% confidence interval (95% CI) 94-97%], mean specificity 97% [95% CI 96-97%], mean positive predictive value 91% [95% CI 88-96%], and negative predictive value 99% [95% CI 98-100%]). Sensitivity analyses were confirmatory. CONCLUSION Although these pilot findings will have to be confirmed, the correct detection of flares by machine-learning processing of activity tracker data provides a framework for future studies of remote-control monitoring of disease activity, with great precision and minimal patient burden.
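As a rough illustration of the analysis described above (intra-patient normalization of activity data followed by a Bayesian classifier evaluated with sensitivity, specificity, PPV and NPV), the sketch below uses synthetic weekly activity summaries and a simple Gaussian naive Bayes model from scikit-learn. The feature set, the patient grouping and the in-sample evaluation are placeholder assumptions, not the study's actual multiclass Bayesian models or its validation protocol.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)

    # Placeholder data: one row per patient-week of summarized step activity
    # (mean steps/min, std, active minutes); labels = self-reported flare (1) or not (0).
    patients = np.repeat(np.arange(20), 12)          # 20 patients x 12 weeks
    X = rng.normal(size=(240, 3))
    y = rng.integers(0, 2, size=240)

    # Intra-patient normalization of the physical activity data, as in the abstract
    Xn = X.copy()
    for p in np.unique(patients):
        idx = patients == p
        Xn[idx] = (X[idx] - X[idx].mean(axis=0)) / (X[idx].std(axis=0) + 1e-9)

    # A binary naive Bayes classifier stands in for the study's Bayesian models;
    # in-sample prediction is used here only to keep the sketch short.
    pred = GaussianNB().fit(Xn, y).predict(Xn)

    tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
    tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
    print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
    print("PPV", tp / (tp + fp), "NPV", tn / (tn + fn))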
Affiliation(s)
- Laure Gossec: Sorbonne Université and Pitié Salpêtrière Hospital, AP-HP, Paris, France
- Anna Molto: Cochin Hospital, AP-HP, INSERM U1153, PRES Sorbonne Paris-Cité, Paris Descartes University, Paris, France
- Jérémie Sellam: Sorbonne Université, INSERM UMRS 938, Paris, France; St. Antoine Hospital, AP-HP, DHU i2B, Paris, France
- Violaine Foltz: Sorbonne Université and Pitié Salpêtrière Hospital, AP-HP, Paris, France
- Stéphane Mitrovic: Sorbonne Université and Pitié Salpêtrière Hospital, AP-HP, Paris, France
- Bruno Fautrel: Sorbonne Université and Pitié Salpêtrière Hospital, AP-HP, Paris, France
5. Tennakoon R, Bortsova G, Orting S, Gostar AK, Wille MMW, Saghir Z, Hoseinnezhad R, de Bruijne M, Bab-Hadiashar A. Classification of Volumetric Images Using Multi-Instance Learning and Extreme Value Theorem. IEEE Transactions on Medical Imaging 2020; 39:854-865. [PMID: 31425069] [DOI: 10.1109/tmi.2019.2936244] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Indexed: 06/10/2023]
Abstract
Volumetric imaging is an essential diagnostic tool for medical practitioners. The use of popular techniques such as convolutional neural networks (CNNs) for the analysis of volumetric images is constrained by the availability of detailed (locally annotated) training data and GPU memory. In this paper, the volumetric image classification problem is posed as a multi-instance classification problem, and a novel method is proposed to adaptively select positive instances from positive bags during the training phase. This method uses extreme value theory to model the feature distribution of images without a pathology and uses it to identify positive instances of an imaged pathology. Experimental results on three separate image classification tasks (classifying retinal OCT images according to the presence or absence of fluid build-ups, detecting emphysema in pulmonary 3D-CT images, and detecting cancerous regions in 2D histopathology images) show that the proposed method produces classifiers with performance similar to fully supervised methods and achieves state-of-the-art performance in all examined test cases.
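The core idea above (model the feature distribution of pathology-free images with extreme value theory, then pick out instances in positive bags that are unlikely under that model) can be illustrated with a toy sketch. The snippet below fits a generalized extreme value distribution to per-bag maxima of instance anomaly scores from negative bags and flags instances in a positive bag that exceed a high quantile; the scores, bag sizes and the 0.95 quantile are synthetic assumptions, and the snippet does not reproduce the paper's adaptive training loop.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(1)

    # Instance-level anomaly scores (e.g. distances to a model of healthy tissue).
    # 50 negative bags with 30 instances each; one positive bag hides 3 outliers.
    neg_scores = rng.normal(0.0, 1.0, size=(50, 30))
    pos_bag = np.concatenate([rng.normal(0.0, 1.0, 27), rng.normal(4.0, 0.5, 3)])

    # Model the per-bag maxima of pathology-free images with a GEV distribution
    shape, loc, scale = genextreme.fit(neg_scores.max(axis=1))

    # Instances that are unlikely under the healthy extreme-value model are kept as positives
    threshold = genextreme.ppf(0.95, shape, loc=loc, scale=scale)
    print("selected instance indices:", np.where(pos_bag > threshold)[0])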
6. Loukas C, Sgouros NP. Multi-instance multi-label learning for surgical image annotation. Int J Med Robot 2020; 16:e2058. [DOI: 10.1002/rcs.2058] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Received: 09/20/2019] [Revised: 10/30/2019] [Accepted: 11/06/2019] [Indexed: 12/23/2022]
Affiliation(s)
- Constantinos Loukas: Laboratory of Medical Physics, Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Nicholas P. Sgouros: Department of Informatics, National and Kapodistrian University of Athens, Athens, Greece
7. Gómez-Correa JE, Torres-Treviño LM, Moragrega-Adame E, Mayorquin-Ruiz M, Villalobos-Ojeda C, Velasco-Barona C, Chávez-Cerda S. Intelligent-assistant system for scleral spur location. Applied Optics 2020; 59:3026-3032. [PMID: 32400579] [DOI: 10.1364/ao.384440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/26/2019] [Accepted: 03/03/2020] [Indexed: 06/11/2023]
Abstract
This paper presents a system based on two artificial neural networks (ANNs) that determines the location of the scleral spur of the human eye in ocular images generated by ultrasound biomicroscopy. The two ANNs establish a relationship between the distances of four manually placed landmarks in an ocular image and the coordinates of the scleral spur; the latter coordinates are provided by the expert knowledge of a subject-matter specialist. Trained ANNs that produce good scleral spur locations are incorporated into a software system. Statistical indicators show an efficiency performance above 95%.
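A minimal sketch of the regression set-up described above follows: one small feed-forward network per coordinate maps the four landmark distances to the expert-marked scleral spur position. The data, network sizes and training settings are placeholders, and scikit-learn's MLPRegressor is used rather than the authors' ANNs.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)

    # Placeholder training set: four landmark distances per UBM image -> expert (x, y) of the spur
    distances = rng.uniform(0.5, 5.0, size=(200, 4))
    spur_xy = distances @ rng.normal(size=(4, 2)) + rng.normal(0, 0.05, size=(200, 2))

    # One small network per coordinate, mirroring the two-ANN design described above
    net_x = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net_y = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net_x.fit(distances, spur_xy[:, 0])
    net_y.fit(distances, spur_xy[:, 1])

    new_image = rng.uniform(0.5, 5.0, size=(1, 4))
    print("predicted scleral spur:", net_x.predict(new_image)[0], net_y.predict(new_image)[0])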
8. Porwal P, Pachade S, Kokare M, Deshmukh G, Son J, Bae W, Liu L, Wang J, Liu X, Gao L, Wu T, Xiao J, Wang F, Yin B, Wang Y, Danala G, He L, Choi YH, Lee YC, Jung SH, Li Z, Sui X, Wu J, Li X, Zhou T, Toth J, Baran A, Kori A, Chennamsetty SS, Safwan M, Alex V, Lyu X, Cheng L, Chu Q, Li P, Ji X, Zhang S, Shen Y, Dai L, Saha O, Sathish R, Melo T, Araújo T, Harangi B, Sheng B, Fang R, Sheet D, Hajdu A, Zheng Y, Mendonça AM, Zhang S, Campilho A, Zheng B, Shen D, Giancardo L, Quellec G, Mériaudeau F. IDRiD: Diabetic Retinopathy - Segmentation and Grading Challenge. Med Image Anal 2019; 59:101561. [PMID: 31671320] [DOI: 10.1016/j.media.2019.101561] [Citation(s) in RCA: 86] [Impact Index Per Article: 14.3] [Received: 01/11/2019] [Revised: 09/09/2019] [Accepted: 09/16/2019] [Indexed: 02/07/2023]
Abstract
Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy for avoiding vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach to such a large-scale screening effort. Recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state of the art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization and segmentation of retinal landmarks. These multiple tasks allow the generalizability of algorithms to be tested, which distinguishes this challenge from existing ones. It received a positive response from the scientific community, with 148 submissions from 495 registrations effectively entered in the challenge. This paper outlines the challenge, its organization, the dataset used, the evaluation methods, and the results of the top-performing participating solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.
Affiliation(s)
- Prasanna Porwal: Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
- Samiksha Pachade: Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
- Manesh Kokare: Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Lihong Liu: Ping An Technology (Shenzhen) Co., Ltd, China
- Xinhui Liu: Ping An Technology (Shenzhen) Co., Ltd, China
- TianBo Wu: Ping An Technology (Shenzhen) Co., Ltd, China
- Jing Xiao: Ping An Technology (Shenzhen) Co., Ltd, China
- Yunzhi Wang: School of Electrical and Computer Engineering, University of Oklahoma, USA
- Gopichandh Danala: School of Electrical and Computer Engineering, University of Oklahoma, USA
- Linsheng He: School of Electrical and Computer Engineering, University of Oklahoma, USA
- Yoon Ho Choi: Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Yeong Chan Lee: Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Sang-Hyuk Jung: Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Zhongyu Li: Department of Computer Science, University of North Carolina at Charlotte, USA
- Xiaodan Sui: School of Information Science and Engineering, Shandong Normal University, China
- Junyan Wu: Cleerly Inc., New York, United States
- Ting Zhou: University at Buffalo, New York, United States
- Janos Toth: University of Debrecen, Faculty of Informatics, 4002 Debrecen, POB 400, Hungary
- Agnes Baran: University of Debrecen, Faculty of Informatics, 4002 Debrecen, POB 400, Hungary
- Xingzheng Lyu: College of Computer Science and Technology, Zhejiang University, Hangzhou, China; Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore
- Li Cheng: Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore; Department of Electrical and Computer Engineering, University of Alberta, Canada
- Qinhao Chu: School of Computing, National University of Singapore, Singapore
- Pengcheng Li: School of Computing, National University of Singapore, Singapore
- Xin Ji: Beijing Shanggong Medical Technology Co., Ltd., China
- Sanyuan Zhang: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Yaxin Shen: Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Ling Dai: Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Tânia Melo: INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
- Teresa Araújo: INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Balazs Harangi: University of Debrecen, Faculty of Informatics, 4002 Debrecen, POB 400, Hungary
- Bin Sheng: Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Ruogu Fang: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, USA
- Andras Hajdu: University of Debrecen, Faculty of Informatics, 4002 Debrecen, POB 400, Hungary
- Yuanjie Zheng: School of Information Science and Engineering, Shandong Normal University, China
- Ana Maria Mendonça: INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Shaoting Zhang: Department of Computer Science, University of North Carolina at Charlotte, USA
- Aurélio Campilho: INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Bin Zheng: School of Electrical and Computer Engineering, University of Oklahoma, USA
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Luca Giancardo: School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
- Fabrice Mériaudeau: Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Malaysia; ImViA/IFTIM, Université de Bourgogne, Dijon, France
9. Data Driven Approach for Eye Disease Classification with Machine Learning. Applied Sciences (Basel) 2019. [DOI: 10.3390/app9142789] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Indexed: 01/24/2023]
Abstract
Medical health systems have been concentrating on artificial intelligence techniques for speedy diagnosis. However, the recording of health data in a standard form still requires attention so that machine learning can be more accurate and reliable by considering multiple features. The aim of this study is to develop a general framework for recording diagnostic data in an international standard format to facilitate prediction of disease diagnosis based on symptoms using machine learning algorithms. Efforts were made to ensure error-free data entry by developing a user-friendly interface. Furthermore, multiple machine learning algorithms including Decision Tree, Random Forest, Naive Bayes and Neural Network algorithms were used to analyze patient data based on multiple features, including age, illness history and clinical observations. This data was formatted according to structured hierarchies designed by medical experts, whereas diagnosis was made as per the ICD-10 coding developed by the American Academy of Ophthalmology. Furthermore, the system is designed to evolve through self-learning by adding new classifications for both diagnosis and symptoms. The classification results from tree-based methods demonstrated that the proposed framework performs satisfactorily, given a sufficient amount of data. Owing to a structured data arrangement, the random forest and decision tree algorithms’ prediction rate is more than 90% as compared to more complex methods such as neural networks and the naïve Bayes algorithm.
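The framework described above records structured fields (age, illness history, clinical observations) and trains tree-based classifiers against ICD-10 diagnosis codes. The sketch below shows that general pattern with scikit-learn: one-hot encoding of categorical fields feeding a random forest. The field names, example rows and ICD-10 codes are purely illustrative placeholders, not the study's actual schema.

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    # Illustrative records; field names and ICD-10 codes are placeholders only
    records = pd.DataFrame({
        "age": [62, 35, 70, 28, 55, 47],
        "history": ["diabetes", "none", "diabetes", "none", "hypertension", "diabetes"],
        "symptom": ["blurred_vision", "red_eye", "floaters", "itching", "blurred_vision", "floaters"],
        "diagnosis": ["E11.3", "H10.9", "E11.3", "H10.1", "H40.9", "E11.3"],
    })

    model = Pipeline([
        ("encode", ColumnTransformer(
            [("cat", OneHotEncoder(handle_unknown="ignore"), ["history", "symptom"])],
            remainder="passthrough")),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ])
    model.fit(records[["age", "history", "symptom"]], records["diagnosis"])
    print(model.predict(pd.DataFrame([{"age": 60, "history": "diabetes", "symptom": "blurred_vision"}])))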
10. Astorino A, Fuduli A, Veltri P, Vocaturo E. Melanoma Detection by Means of Multiple Instance Learning. Interdiscip Sci 2019; 12:24-31. [PMID: 31292853] [DOI: 10.1007/s12539-019-00341-y] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Received: 10/29/2018] [Revised: 05/20/2019] [Accepted: 06/28/2019] [Indexed: 10/26/2022]
Abstract
We present an application of a multiple instance learning (MIL) approach to melanoma detection; in the binary case, the objective of MIL is to discriminate between positive and negative sets of items. In MIL terminology these sets are called bags, and the items inside the bags are called instances. Under the hypothesis that a bag is positive if at least one of its instances is positive and negative if all of its instances are negative, the MIL paradigm fits image classification very well, since an image (bag) is generally classified on the basis of some of its subregions (instances). In this work we applied a MIL algorithm to clinical data consisting of color dermoscopic images, with the aim of discriminating between melanomas (positive images) and common nevi (negative images). In comparison with standard classification approaches, such as the well-known support vector machine, our method performs very well in terms of both accuracy and sensitivity. In particular, using leave-one-out validation on a data set consisting of 80 melanomas and 80 common nevi, we obtained the following results: accuracy = 92.50%, sensitivity = 97.50%, and specificity = 87.50%. Since the results appear promising, we conclude that a MIL technique could form the basis of more sophisticated tools to support physicians in melanoma detection.
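The bag/instance assumption spelled out above (a dermoscopic image is positive if at least one of its subregions is) and the leave-one-out evaluation can be illustrated with a simple baseline: propagate each image's label to its patches, train an instance-level classifier, and call a held-out image a melanoma if any patch is predicted positive. This is a generic MIL baseline on synthetic patch descriptors, not the authors' algorithm; bag sizes, features and the SVM are placeholder choices.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)

    # Synthetic bags: 40 images with 20 patch descriptors each;
    # positive bags (melanomas) hide two lesion-like patches.
    def make_bag(positive):
        patches = rng.normal(0.0, 1.0, size=(20, 8))
        if positive:
            patches[:2] += 2.5
        return patches

    labels = np.array([1] * 20 + [0] * 20)
    bags = [make_bag(lab == 1) for lab in labels]

    # Leave-one-out over bags: train an instance scorer with the bag label propagated
    # to every instance, then call the held-out image positive if any patch is positive.
    pred = np.zeros_like(labels)
    for i in range(len(bags)):
        X = np.vstack([b for j, b in enumerate(bags) if j != i])
        y = np.concatenate([[labels[j]] * len(b) for j, b in enumerate(bags) if j != i])
        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
        pred[i] = int(clf.predict(bags[i]).max() == 1)

    tp = np.sum((pred == 1) & (labels == 1)); tn = np.sum((pred == 0) & (labels == 0))
    print("accuracy", (tp + tn) / len(labels))
    print("sensitivity", tp / labels.sum(), "specificity", tn / (len(labels) - labels.sum()))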
Affiliation(s)
- Antonio Fuduli: Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
- Pierangelo Veltri: Bioinformatics Laboratory, Surgical and Medical Science Department - DSMC, University Magna Graecia, Catanzaro, Italy
- Eugenio Vocaturo: Department of Computer Engineering, Modeling, Electronics and Systems - DIMES, University of Calabria, Rende, Italy
11. Recent Development on Detection Methods for the Diagnosis of Diabetic Retinopathy. Symmetry (Basel) 2019. [DOI: 10.3390/sym11060749] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Indexed: 12/19/2022]
Abstract
Diabetic retinopathy (DR) is a complication of diabetes that exists throughout the world. DR occurs due to a high level of glucose in the blood, which causes alterations in the retinal microvasculature. Because DR often presents without early warning symptoms, it can lead to complete vision loss. However, early screening through computer-assisted diagnosis (CAD) tools, followed by proper treatment, can control the prevalence of DR. Manual inspection of morphological changes in retinal anatomic parts is a tedious and challenging task. Therefore, many CAD systems have been developed to assist ophthalmologists in assessing inter- and intra-variations. In this paper, a recent review of state-of-the-art CAD systems for the diagnosis of DR is presented. We describe the CAD systems that have been developed with various computational intelligence and image processing techniques. The limitations and future trends of current CAD systems are also described in detail to help researchers. Moreover, potential CAD systems are compared in terms of statistical parameters to evaluate them quantitatively. The comparison results indicate that there is still a need for accurate CAD systems to assist in the clinical diagnosis of diabetic retinopathy.
12. Pires R, Avila S, Wainer J, Valle E, Abramoff MD, Rocha A. A data-driven approach to referable diabetic retinopathy detection. Artif Intell Med 2019; 96:93-106. [PMID: 31164214] [DOI: 10.1016/j.artmed.2019.03.009] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Received: 10/17/2018] [Revised: 03/23/2019] [Accepted: 03/26/2019] [Indexed: 01/17/2023]
Abstract
Prior art on automated screening of diabetic retinopathy and direct referral decision shows promising performance; yet most methods build upon complex hand-crafted features whose performance often fails to generalize. OBJECTIVE We investigate data-driven approaches that extract powerful abstract representations directly from retinal images to provide a reliable referable diabetic retinopathy detector. METHODS We gradually build the solution based on convolutional neural networks, adding data augmentation, multi-resolution training, robust feature-extraction augmentation, and a patient-basis analysis, testing the effectiveness of each improvement. RESULTS The proposed method achieved an area under the ROC curve of 98.2% (95% CI: 97.4-98.9%) under a strict cross-dataset protocol designed to test the ability to generalize - training on the Kaggle competition dataset and testing using the Messidor-2 dataset. With a 5 × 2-fold cross-validation protocol, similar results are achieved for Messidor-2 and DR2 datasets, reducing the classification error by over 44% when compared to most published studies in existing literature. CONCLUSION Additional boost strategies can improve performance substantially, but it is important to evaluate whether the additional (computation- and implementation-) complexity of each improvement is worth its benefits. We also corroborate that novel families of data-driven methods are the state of the art for diabetic retinopathy screening. SIGNIFICANCE By learning powerful discriminative patterns directly from available training retinal images, it is possible to perform referral diagnostics without detecting individual lesions.
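The evaluation protocol emphasized above (train on one dataset, test on a completely different one, and report AUC with a confidence interval) is easy to scaffold. The sketch below uses random feature vectors and a gradient-boosting classifier as stand-ins for the CNN and the Kaggle/Messidor-2 images, and computes a bootstrap 95% CI for the cross-dataset AUC; everything apart from the protocol itself is a placeholder assumption.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)

    # Random feature vectors stand in for image-level descriptors from two distinct datasets
    X_train, y_train = rng.normal(size=(500, 32)), rng.integers(0, 2, 500)   # "training dataset"
    X_test, y_test = rng.normal(size=(300, 32)), rng.integers(0, 2, 300)     # "held-out dataset"

    # Strict cross-dataset protocol: fit on one source, score only on the other
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    auc = roc_auc_score(y_test, scores)

    # Bootstrap 95% confidence interval over the test set
    boot = []
    for _ in range(1000):
        idx = rng.integers(0, len(y_test), len(y_test))
        if len(np.unique(y_test[idx])) == 2:          # AUC needs both classes present
            boot.append(roc_auc_score(y_test[idx], scores[idx]))
    print(f"cross-dataset AUC {auc:.3f} "
          f"(95% CI {np.percentile(boot, 2.5):.3f}-{np.percentile(boot, 97.5):.3f})")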
Affiliation(s)
- Ramon Pires: Institute of Computing, University of Campinas (Unicamp), Campinas 13083-852, Brazil
- Sandra Avila: Institute of Computing, University of Campinas (Unicamp), Campinas 13083-852, Brazil
- Jacques Wainer: Institute of Computing, University of Campinas (Unicamp), Campinas 13083-852, Brazil
- Eduardo Valle: School of Electrical and Computing Engineering, University of Campinas (Unicamp), Campinas 13083-852, Brazil
- Michael D Abramoff: Stephen R. Wynn Institute for Vision Research, Department of Electrical and Computer Engineering, and Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52242, USA; VA Medical Center, Iowa City, IA 52246, USA; IDx LLC, Iowa City, IA, USA
- Anderson Rocha: Institute of Computing, University of Campinas (Unicamp), Campinas 13083-852, Brazil
14. Orlando JI, Prokofyeva E, Del Fresno M, Blaschko MB. An ensemble deep learning based approach for red lesion detection in fundus images. Computer Methods and Programs in Biomedicine 2018; 153:115-127. [PMID: 29157445] [DOI: 10.1016/j.cmpb.2017.10.017] [Citation(s) in RCA: 83] [Impact Index Per Article: 11.9] [Received: 06/23/2017] [Revised: 09/06/2017] [Accepted: 10/12/2017] [Indexed: 05/23/2023]
Abstract
BACKGROUND AND OBJECTIVES Diabetic retinopathy (DR) is one of the leading causes of preventable blindness in the world. Its earliest signs are red lesions, a general term that groups both microaneurysms (MAs) and hemorrhages (HEs). In daily clinical practice, these lesions are manually detected by physicians using fundus photographs. However, this task is tedious and time consuming, and requires an intensive effort due to the small size of the lesions and their lack of contrast. Computer-assisted diagnosis of DR based on red lesion detection is being actively explored because it improves both the consistency and the accuracy of clinicians. Moreover, it provides comprehensive feedback that is easy for physicians to assess. Several methods for detecting red lesions have been proposed in the literature, most of them based on characterizing lesion candidates using hand-crafted features and classifying them into true or false positive detections. Deep learning based approaches, by contrast, are scarce in this domain due to the high expense of annotating the lesions manually. METHODS In this paper we propose a novel method for red lesion detection that combines deep learned features and domain knowledge. Features learned by a convolutional neural network (CNN) are augmented with hand-crafted features. This ensemble vector of descriptors is then used to identify true lesion candidates with a Random Forest classifier. RESULTS We empirically observed that combining both sources of information significantly improves results with respect to using each approach separately. Furthermore, our method reported the highest per-lesion performance on DIARETDB1 and e-ophtha, and the highest performance for screening and need for referral on MESSIDOR compared to a second human expert. CONCLUSIONS The results highlight that integrating manually engineered approaches with deep learned features is relevant for improving results when networks are trained from lesion-level annotated data. An open source implementation of our system is publicly available at https://github.com/ignaciorlando/red-lesion-detection.
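The METHODS paragraph above combines CNN-learned descriptors of lesion candidates with hand-crafted features and classifies the concatenated vector with a Random Forest. The sketch below shows only that fusion-and-classification step with random placeholder features; the candidate detection, the CNN, and the actual feature definitions are not reproduced.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)

    n_candidates = 400
    cnn_features = rng.normal(size=(n_candidates, 64))   # stand-in for CNN patch embeddings
    hand_crafted = rng.normal(size=(n_candidates, 12))   # stand-in for shape/intensity measures
    labels = rng.integers(0, 2, n_candidates)            # true red lesion vs. spurious candidate

    # Ensemble descriptor: concatenate learned and engineered features per candidate
    X = np.hstack([cnn_features, hand_crafted])
    forest = RandomForestClassifier(n_estimators=300, random_state=0)
    print("cross-validated accuracy:", cross_val_score(forest, X, labels, cv=5).mean())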
Affiliation(s)
- José Ignacio Orlando: Pladema Institute, UNCPBA, Gral. Pinto 399, Tandil, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Argentina
- Elena Prokofyeva: Scientific Institute of Public Health (WIV-ISP), Brussels, Belgium; Federal Agency for Medicines and Health Products (FAMHP), Brussels, Belgium
- Mariana Del Fresno: Pladema Institute, UNCPBA, Gral. Pinto 399, Tandil, Argentina; Comisión de Investigaciones Científicas de la Provincia de Buenos Aires (CIC-PBA), Buenos Aires, Argentina
15. Stark Assessment of Lifestyle Based Human Disorders Using Data Mining Based Learning Techniques. Ing Rech Biomed 2017. [DOI: 10.1016/j.irbm.2017.09.002] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Indexed: 11/20/2022]
16. Noyel G, Thomas R, Bhakta G, Crowder A, Owens D, Boyle P. Superimposition of eye fundus images for longitudinal analysis from large public health databases. Biomed Phys Eng Express 2017. [DOI: 10.1088/2057-1976/aa7d16] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Indexed: 11/12/2022]
17. Quellec G, Charrière K, Boudi Y, Cochener B, Lamard M. Deep image mining for diabetic retinopathy screening. Med Image Anal 2017; 39:178-193. [PMID: 28511066] [DOI: 10.1016/j.media.2017.04.012] [Citation(s) in RCA: 162] [Impact Index Per Article: 20.3] [Received: 10/16/2016] [Revised: 04/18/2017] [Accepted: 04/27/2017] [Indexed: 01/29/2023]
Abstract
Deep learning is quickly becoming the leading methodology for medical image analysis. Given a large medical archive, where each image is associated with a diagnosis, efficient pathology detectors or classifiers can be trained with virtually no expert knowledge about the target pathologies. However, deep learning algorithms, including the popular ConvNets, are black boxes: little is known about the local patterns analyzed by ConvNets to make a decision at the image level. A solution is proposed in this paper to create heatmaps showing which pixels in images play a role in the image-level predictions. In other words, a ConvNet trained for image-level classification can be used to detect lesions as well. A generalization of the backpropagation method is proposed in order to train ConvNets that produce high-quality heatmaps. The proposed solution is applied to diabetic retinopathy (DR) screening in a dataset of almost 90,000 fundus photographs from the 2015 Kaggle Diabetic Retinopathy competition and a private dataset of almost 110,000 photographs (e-ophtha). For the task of detecting referable DR, very good detection performance was achieved: Az=0.954 in Kaggle's dataset and Az=0.949 in e-ophtha. Performance was also evaluated at the image level and at the lesion level in the DiaretDB1 dataset, where four types of lesions are manually segmented: microaneurysms, hemorrhages, exudates and cotton-wool spots. For the task of detecting images containing these four lesion types, the proposed detector, which was trained to detect referable DR, outperforms recent algorithms trained to detect those lesions specifically, with pixel-level supervision. At the lesion level, the proposed detector outperforms heatmap generation algorithms for ConvNets. This detector is part of the Messidor® system for mobile eye pathology screening. Because it does not rely on expert knowledge or manual segmentation for detecting relevant patterns, the proposed solution is a promising image mining tool, which has the potential to discover new biomarkers in images.
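The heatmap idea described above (use an image-level ConvNet to highlight which pixels drive its prediction) can be approximated, in its simplest form, with plain input gradients. The PyTorch sketch below computes a gradient saliency map for a randomly initialized ResNet-18 on a random image; it is a generic stand-in, not the paper's generalized backpropagation, and the network, input and class choice are arbitrary assumptions.

    import torch
    from torchvision import models

    # Any image-level classifier will do for the sketch; pretrained weights are not required
    model = models.resnet18(weights=None)
    model.eval()

    image = torch.rand(1, 3, 224, 224, requires_grad=True)   # placeholder fundus photograph
    score = model(image)[0].max()                            # image-level score of the top class
    score.backward()

    # Pixel-wise saliency: magnitude of the gradient of the image-level score
    heatmap = image.grad.abs().max(dim=1)[0].squeeze()       # shape (224, 224)
    print(heatmap.shape, float(heatmap.max()))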
Affiliation(s)
- Gwenolé Quellec: Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France
- Katia Charrière: IMT Atlantique, Département ITI, Technopôle Brest-Iroise, CS 83818, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France
- Yassine Boudi: IMT Atlantique, Département ITI, Technopôle Brest-Iroise, CS 83818, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France
- Béatrice Cochener: Université de Bretagne Occidentale, 3 rue des Archives, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France; Service d'Ophtalmologie, CHRU Brest, 2 avenue Foch, Brest F-29200, France
- Mathieu Lamard: Université de Bretagne Occidentale, 3 rue des Archives, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France
18. Costa P, Campilho A. Convolutional bag of words for diabetic retinopathy detection from eye fundus images. IPSJ Transactions on Computer Vision and Applications 2017. [DOI: 10.1186/s41074-017-0023-6] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Indexed: 11/10/2022]
Abstract
This paper describes a methodology for diabetic retinopathy detection from eye fundus images using a generalization of the bag-of-visual-words (BoVW) method. We formulate the BoVW as two neural networks that can be trained jointly. Unlike the standard BoVW, our model is able to learn how to perform feature extraction, feature encoding, and classification guided by the classification error. The model achieves 0.97 area under the curve (AUC) on the DR2 dataset, while the standard BoVW approach achieves 0.94 AUC. It also performs at the same level as the state of the art on the Messidor dataset, with 0.90 AUC.
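The abstract above formulates the bag of visual words as two jointly trainable neural networks: one extracts local features and one encodes them against a codebook before classification. The PyTorch sketch below shows a generic differentiable soft-assignment BoVW layer in that spirit; the layer design, feature extractor and dimensions are assumptions for illustration and do not reproduce the paper's exact formulation.

    import torch
    import torch.nn as nn

    class SoftBoVW(nn.Module):
        """Differentiable bag of visual words: soft-assign local features to a learnable codebook."""
        def __init__(self, feat_dim=64, n_words=32):
            super().__init__()
            self.codebook = nn.Parameter(torch.randn(n_words, feat_dim))

        def forward(self, local_feats):                            # (batch, n_regions, feat_dim)
            diffs = local_feats.unsqueeze(2) - self.codebook       # (batch, n_regions, n_words, feat_dim)
            assign = torch.softmax(-(diffs ** 2).sum(-1), dim=-1)  # soft assignment of each region
            return assign.mean(dim=1)                              # pooled histogram, (batch, n_words)

    # Toy pipeline: a small convolutional extractor, the BoVW encoder, and a linear classifier
    extractor = nn.Sequential(nn.Conv2d(3, 64, kernel_size=5, stride=4), nn.ReLU())
    encoder, classifier = SoftBoVW(), nn.Linear(32, 1)

    images = torch.rand(2, 3, 128, 128)
    fmap = extractor(images)                                       # (2, 64, 31, 31)
    regions = fmap.flatten(2).transpose(1, 2)                      # (2, 961, 64) local descriptors
    logits = classifier(encoder(regions))
    print(logits.shape)                                            # torch.Size([2, 1])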
19. Quellec G, Lamard M, Cazuguel G, Erginay A, Cochener B. Mapping the retinas of a patient using a mixed set of fundus photographs from both eyes. Annu Int Conf IEEE Eng Med Biol Soc 2016:3239-3242. [PMID: 28268998] [DOI: 10.1109/embc.2016.7591419] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/08/2022]
Abstract
With the increased prevalence of retinal pathologies, automating the detection and progression measurement of these pathologies is becoming more and more relevant. Color fundus photography is the leading modality for assessing retinal pathologies. Because eye fundus cameras have a limited field of view, multiple photographs are taken from each retina during an eye fundus examination. However, operators usually do not indicate which photographs are from the left retina and which are from the right retina. This paper presents a novel algorithm that automatically assigns each photograph to one retina and builds a composite image (or "mosaic") per retina, which is expected to push the performance of automated diagnosis forward. The algorithm starts by jointly forming two mosaics, one per retina, using a novel graph-theoretic approach. Then, in order to determine which mosaic corresponds to the left retina and which one corresponds to the right retina, two retinal landmarks are detected robustly in each mosaic: the main vessel arch surrounding the macula and the optic disc. The laterality of each mosaic is derived from the relative locations of these landmarks. Experiments on 2790 manually annotated images validate the very good performance of the proposed framework, even for highly pathological images.
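The final step described above, deciding the laterality of each mosaic from the relative location of the optic disc and the macular vessel arch, reduces to a simple comparison once the landmarks have been detected. The sketch below shows only that decision rule, assuming a standard (non-mirrored) fundus orientation and landmark x-coordinates supplied by some upstream detector; the mosaicking and landmark detection are not shown.

    def mosaic_laterality(optic_disc_x, macula_x):
        """Infer eye laterality from landmark positions in a fundus mosaic.

        In a conventionally displayed fundus view the optic disc lies nasal to the
        macula, so a disc to the right of the macula indicates a right eye (OD) and
        a disc to the left indicates a left eye (OS).  Coordinates are assumed to
        come from an upstream landmark detector.
        """
        return "right eye (OD)" if optic_disc_x > macula_x else "left eye (OS)"

    print(mosaic_laterality(optic_disc_x=812, macula_x=540))   # right eye (OD)
    print(mosaic_laterality(optic_disc_x=300, macula_x=590))   # left eye (OS)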
20. Quellec G, Cazuguel G, Cochener B, Lamard M. Multiple-Instance Learning for Medical Image and Video Analysis. IEEE Rev Biomed Eng 2017; 10:213-234. [DOI: 10.1109/rbme.2017.2651164] [Citation(s) in RCA: 86] [Impact Index Per Article: 10.8] [Indexed: 11/08/2022]