1.
Rashad M, Afifi I, Abdelfatah M. RbQE: An Efficient Method for Content-Based Medical Image Retrieval Based on Query Expansion. J Digit Imaging 2023; 36:1248-1261. PMID: 36702987; PMCID: PMC10287886; DOI: 10.1007/s10278-022-00769-7.
Abstract
Systems for retrieving and managing content-based medical images are becoming more important, especially as medical imaging technology advances and medical image databases grow. Beyond diagnosis, these systems can also be used to gain a deeper understanding of the causes and treatments of different diseases. Achieving these purposes requires an efficient and accurate content-based medical image retrieval (CBMIR) method. This paper proposes an efficient method (RbQE) for the retrieval of computed tomography (CT) and magnetic resonance (MR) images. RbQE is based on expanding the query features and exploiting the pre-trained models AlexNet and VGG-19 to extract compact, deep, high-level features from medical images. RbQE comprises two search procedures: a rapid search and a final search. In the rapid search, the original query is expanded by retrieving the top-ranked images from each class; the query is then reformulated by computing the mean of the deep features of those top-ranked images, yielding a new query for each class. In the final search, the new query most similar to the original query is used to retrieve from the database. The performance of the proposed method was compared with state-of-the-art methods on four publicly available standard databases: TCIA-CT, EXACT09-CT, NEMA-CT, and OASIS-MRI. Experimental results show that the proposed method exceeds the compared methods by 0.84%, 4.86%, 1.24%, and 14.34% in average retrieval precision (ARP) on the TCIA-CT, EXACT09-CT, NEMA-CT, and OASIS-MRI databases, respectively.
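The rapid- and final-search steps described in this abstract can be sketched as follows. This is an illustrative reconstruction over generic feature vectors using Euclidean distance; the function and parameter names are ours, not the paper's implementation:

```python
import numpy as np

def expand_query(query_feat, db_feats, db_labels, top_k=5):
    """Rapid search (sketch): for each class, take the top_k database
    images closest to the query and average their deep features to
    form a class-specific expanded query."""
    expanded = {}
    for cls in np.unique(db_labels):
        cls_feats = db_feats[db_labels == cls]
        # rank this class's images by distance to the original query
        dists = np.linalg.norm(cls_feats - query_feat, axis=1)
        top = cls_feats[np.argsort(dists)[:top_k]]
        expanded[cls] = top.mean(axis=0)  # mean of top-ranked deep features
    return expanded

def final_query(query_feat, expanded):
    """Final search (sketch): keep the expanded query most similar to
    the original query; it is then used to retrieve from the database."""
    best = min(expanded, key=lambda c: np.linalg.norm(expanded[c] - query_feat))
    return best, expanded[best]
```

In practice the feature vectors would come from the AlexNet/VGG-19 extractors the paper names; here any fixed-length vectors work.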
Affiliation(s)
- Metwally Rashad
- Department of Computer Science, Faculty of Computers & Artificial Intelligence, Benha University, Benha, Egypt
- Faculty of Artificial Intelligence, Delta University for Science and Technology, Gamasa, Egypt
- Ibrahem Afifi
- Department of Information System, Faculty of Computers & Artificial Intelligence, Benha University, Benha, Egypt
- Mohammed Abdelfatah
- Department of Information System, Faculty of Computers & Artificial Intelligence, Benha University, Benha, Egypt
2.
Schulze MM, Ng A, Yang M, Panjwani F, Srinivasan S, Jones LW, Senchyna M. Bulbar Redness and Dry Eye Disease: Comparison of a Validated Subjective Grading Scale and an Objective Automated Method. Optom Vis Sci 2021; 98:113-120. PMID: 33534379; DOI: 10.1097/opx.0000000000001638.
Abstract
SIGNIFICANCE In this study, assessments of conjunctival redness were performed to evaluate whether patients with or without dry eye disease (DED) could be discriminated based on this measure. Our findings suggest that subjectively grading redness by quadrant, as opposed to automated en face measurements, may be more suitable for this purpose. PURPOSE This study aimed to quantify bulbar redness using the validated bulbar redness (VBR) grading scale and an automated objective method (Oculus Keratograph 5M; K5M) in participants with DED and non-DED controls. METHODS Participants with DED (Ocular Surface Disease Index score ≥20 and Oxford scale corneal staining ≥2) and controls (Ocular Surface Disease Index score ≤10 and corneal staining ≤1) attended two study visits. In part 1A of visit 1, baseline bulbar redness was graded with the VBR scale in each conjunctival quadrant of both eyes, followed by automated measurements of temporal and nasal redness with the K5M. This was immediately followed by part 1B, during which a topical vasoconstrictor was instilled into both eyes. Redness assessments were repeated 5 and 30 minutes after instillation with both instruments. Participants returned 14 days later for visit 2, where the same assessments as for visit 1A were repeated. RESULTS Seventy-four participants (50 DED and 24 controls) completed the study. There were statistically significant differences in redness between the DED and control groups when assessed with the VBR scale (14/16 comparisons; all, P < .05), whereas no significant differences in K5M-derived redness between the DED and non-DED groups were found at any location or time point. Both subjective and objective instruments detected statistically significant reductions in redness 5 and 30 minutes after instillation of the vasoconstrictor (all, P < .01). 
CONCLUSIONS Although both the subjective and objective instruments were sensitive enough to detect changes in redness induced by vasoconstriction, statistically significant differences in redness between the DED and control groups were found only with the VBR scale.
Affiliation(s)
- Marc-Matthias Schulze
- Centre for Ocular Research & Education (CORE), School of Optometry and Vision Science, University of Waterloo, Waterloo, Ontario, Canada
- Alison Ng
- Centre for Ocular Research & Education (CORE), School of Optometry and Vision Science, University of Waterloo, Waterloo, Ontario, Canada
- Mike Yang
- Centre for Ocular Research & Education (CORE), School of Optometry and Vision Science, University of Waterloo, Waterloo, Ontario, Canada
- Lyndon W Jones
- Centre for Ocular Research & Education (CORE), School of Optometry and Vision Science, University of Waterloo, Waterloo, Ontario, Canada
3.
Alarcón-Paredes A, Guzmán-Guzmán IP, Hernández-Rosales DE, Navarro-Zarza JE, Cantillo-Negrete J, Cuevas-Valencia RE, Alonso GA. Computer-aided diagnosis based on hand thermal, RGB images, and grip force using artificial intelligence as screening tool for rheumatoid arthritis in women. Med Biol Eng Comput 2021; 59:287-300. PMID: 33420616; DOI: 10.1007/s11517-020-02294-7.
Abstract
Rheumatoid arthritis (RA) is an autoimmune disorder that typically affects people between 23 and 60 years old, causing chronic synovial inflammation, symmetrical polyarthritis, destruction of large and small joints, and chronic disability. Clinical diagnosis of RA is established by the current ACR-EULAR criteria and is crucial for starting conventional therapy in order to minimize damage progression. The 2010 ACR-EULAR criteria include the presence of swollen joints, elevated levels of rheumatoid factor or anti-citrullinated protein antibodies (ACPA), elevated acute-phase reactants, and the duration of symptoms. In this paper, a computer-aided system to assist in RA diagnosis, based on quantitative and easy-to-acquire variables, is presented. The participants in this study were all female, grouped into two classes: class I, patients diagnosed with RA (n = 100), and class II, controls without RA (n = 100). The approach acquires thermal and RGB images of the hands and records hand grip strength (gripping force); weight, height, and age were also obtained from all participants. Color layout descriptors (CLD) were computed from each image to obtain a compact representation. A wrapper forward-selection method was then run over a range of classification algorithms included in WEKA. In the feature-selection process, variables such as the hand images, grip force, and age were found to be relevant, whereas weight and height did not contribute useful information to the classification. The system achieves an area under the ROC curve greater than 0.94 for both thermal and RGB images using the RandomForest classifier. Thirty-eight subjects were considered for an external test to evaluate and validate the model implementation. In this test, an accuracy of 94.7% was obtained using RGB images; the confusion matrix revealed that the system provided a correct diagnosis for all but two participants (5.3%).
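The wrapper forward-selection step can be illustrated with a minimal greedy sketch. The scoring function below is a toy nearest-centroid stand-in for whichever WEKA classifier is being wrapped; all names are illustrative, not the study's code:

```python
import numpy as np

def nearest_centroid_score(Xs, y):
    """Toy stand-in for the wrapped classifier (the study used WEKA
    classifiers such as RandomForest): training accuracy of a
    nearest-centroid rule on the candidate feature subset."""
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

def forward_select(X, y, score_fn, max_feats=None):
    """Greedy wrapper forward selection: repeatedly add the feature
    that most improves the classifier's score; stop when no remaining
    feature helps."""
    remaining = list(range(X.shape[1]))
    selected, best_score = [], -np.inf
    while remaining and (max_feats is None or len(selected) < max_feats):
        score, f = max((score_fn(X[:, selected + [f]], y), f) for f in remaining)
        if score <= best_score:
            break  # no remaining feature improves the wrapper score
        selected.append(f)
        remaining.remove(f)
        best_score = score
    return selected, best_score
```

The wrapper evaluates each candidate subset with the classifier itself, which is why uninformative variables such as weight and height can be dropped even if they correlate weakly with the label.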
Affiliation(s)
- Iris P Guzmán-Guzmán
- Facultad de Ciencias Químico-Biológicas, Universidad Autónoma de Guerrero, Chilpancingo, Mexico
- Jessica Cantillo-Negrete
- Division of Medical Engineering Research, Instituto Nacional de Rehabilitación "Luis Guillermo Ibarra Ibarra", Mexico City, Mexico
- Gustavo A Alonso
- Facultad de Ingeniería, Universidad Autónoma de Guerrero, Chilpancingo, Mexico.
4.
Wu X, Liu L, Zhao L, Guo C, Li R, Wang T, Yang X, Xie P, Liu Y, Lin H. Application of artificial intelligence in anterior segment ophthalmic diseases: diversity and standardization. Ann Transl Med 2020; 8:714. PMID: 32617334; PMCID: PMC7327317; DOI: 10.21037/atm-20-976.
Abstract
Artificial intelligence (AI) based on machine learning (ML) and deep learning (DL) techniques has gained tremendous global interest. Recent studies have demonstrated the potential of AI systems to improve performance on various tasks, especially in the field of image recognition. As an image-centric subspecialty, ophthalmology has become one of the frontiers of AI research. Trained on optical coherence tomography, slit-lamp images, and even ordinary eye images, AI can achieve robust performance in the detection of glaucoma, corneal arcus, and cataracts. Moreover, AI models based on other forms of data have also performed satisfactorily. Nevertheless, several challenges to applying AI in ophthalmology have arisen, including the standardization of data sets, the validation and applicability of AI models, and ethical issues. In this review, we provide a summary of state-of-the-art AI applications in anterior segment ophthalmic diseases, potential challenges to clinical implementation, and our prospects.
Affiliation(s)
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Ting Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiaonan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Peichen Xie
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Yizhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China
5.
Idri A, Benhar H, Fernández-Alemán JL, Kadi I. A systematic map of medical data preprocessing in knowledge discovery. Comput Methods Programs Biomed 2018; 162:69-85. PMID: 29903496; DOI: 10.1016/j.cmpb.2018.05.007.
Abstract
BACKGROUND AND OBJECTIVE Data mining (DM) has, over the last decade, received increased attention in the medical domain and has been widely used to analyze medical datasets in order to extract useful knowledge and previously unknown patterns. However, historical medical data can often comprise inconsistent, noisy, imbalanced, missing, and high-dimensional data. These challenges introduce serious bias into predictive modeling and reduce the performance of DM techniques. Data preprocessing is therefore an essential step in knowledge discovery, improving the quality of data and making it suitable for DM techniques. The objective of this paper is to review the use of preprocessing techniques on clinical datasets. METHODS We performed a systematic map of studies on the application of data preprocessing to healthcare published between January 2000 and December 2017. A search string was determined on the basis of the mapping questions and the PICO categories, and was then applied in digital databases covering the fields of computer science and medical informatics to identify relevant studies. Studies were initially selected by reading their titles, abstracts, and keywords; those selected at that stage were then reviewed against a set of inclusion and exclusion criteria to eliminate any that were not relevant. This process resulted in 126 primary studies. RESULTS The selected studies were analyzed and classified according to their publication years and channels, research type, empirical type, and contribution type. The findings of this mapping study reveal that researchers have paid considerable attention to preprocessing in medical DM over the last decade. A significant number of the selected studies used data-reduction and data-cleaning preprocessing tasks. Moreover, the disciplines in which preprocessing has received the most attention are cardiology, endocrinology, and oncology.
CONCLUSIONS Researchers should develop and implement standards for the effective integration of multiple medical data types. We also identified the need for further literature reviews.
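Two of the most commonly surveyed preprocessing tasks, cleaning (here, missing-value imputation) and transformation (z-score standardization), can be sketched as follows. This is a generic illustration, not code drawn from any of the mapped studies:

```python
import numpy as np

def preprocess(X):
    """Mean-impute missing entries (cleaning), then standardize each
    column to zero mean and unit variance (transformation)."""
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)       # per-column mean, ignoring NaNs
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]         # fill NaNs with column means
    mu, sd = X.mean(axis=0), X.std(axis=0)
    sd[sd == 0] = 1.0                       # leave constant columns unscaled
    return (X - mu) / sd
```

Real clinical pipelines layer further tasks on top (outlier handling, class balancing, dimensionality reduction), which is exactly the variety the mapping study catalogues.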
Affiliation(s)
- A Idri
- Software Project Management Research Team, ENSIAS, University Mohammed V of Rabat, Morocco.
- H Benhar
- Software Project Management Research Team, ENSIAS, University Mohammed V of Rabat, Morocco.
- J L Fernández-Alemán
- Department of Informatics and Systems, Faculty of Computer Science, University of Murcia, Spain.
- I Kadi
- Software Project Management Research Team, ENSIAS, University Mohammed V of Rabat, Morocco.
6.
Pang S, Orgun MA, Yu Z. A novel biomedical image indexing and retrieval system via deep preference learning. Comput Methods Programs Biomed 2018; 158:53-69. PMID: 29544790; DOI: 10.1016/j.cmpb.2018.02.003.
Abstract
BACKGROUND AND OBJECTIVES Traditional biomedical image retrieval methods, as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images, either rely only on pixel-level and low-level features to describe an image or use deep features but still leave considerable room for improvement in both accuracy and efficiency. In this work, we propose a new approach that exploits deep learning to extract high-level, compact features from biomedical images. The deep feature-extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to improved performance in indexing and retrieving biomedical images. METHODS We exploit currently popular multi-layered deep neural networks, namely stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN), to represent the discriminative features of biomedical images by transferring the feature representations and parameters of deep neural networks pre-trained on another domain. Moreover, to index all the images for finding similarly referenced ones, we introduce preference learning to train a preference model for the query image, which outputs a similarity ranking of images from a biomedical image database. To the best of our knowledge, this paper is the first to introduce preference learning into biomedical image retrieval. RESULTS We evaluate two algorithms based on the proposed system and compare them with popular biomedical image indexing approaches and existing general-purpose image retrieval methods in detailed experiments over several well-known public biomedical image databases.
Across different criteria for evaluating retrieval performance, the experimental results demonstrate that the proposed algorithms outperform state-of-the-art techniques in indexing biomedical images. CONCLUSIONS We propose a novel, automated indexing system based on deep preference learning that characterizes biomedical images for developing computer-aided diagnosis (CAD) systems in healthcare. The proposed system shows outstanding indexing ability and high efficiency for biomedical image retrieval applications, and it can be used to collect and annotate high-resolution images in a biomedical database for further biomedical image research and applications.
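The retrieval side of such a system ultimately reduces to ranking database images against a query in deep-feature space. The sketch below covers only that ranking step with plain cosine similarity; feature extraction by a pre-trained SDAE/CNN and the learned preference model are out of scope, and the function names are illustrative:

```python
import numpy as np

def rank_by_similarity(query_feat, db_feats):
    """Rank database images by cosine similarity between their deep
    feature vectors and the query's; most similar first."""
    q = query_feat / np.linalg.norm(query_feat)
    D = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = D @ q
    order = np.argsort(-sims)  # indices of database images, best match first
    return order, sims[order]
```

A preference model, as the paper describes, would replace this fixed similarity with a ranking function learned from pairwise image preferences for the given query.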
Affiliation(s)
- Shuchao Pang
- College of Computer Science and Technology, Jilin University, Qianjin Street: 2699, Jilin Province, China; Department of Computing, Macquarie University, Sydney, NSW 2109, Australia.
- Mehmet A Orgun
- Department of Computing, Macquarie University, Sydney, NSW 2109, Australia.
- Zhezhou Yu
- College of Computer Science and Technology, Jilin University, Qianjin Street: 2699, Jilin Province, China.
7.
Sánchez Brea L, Barreira Rodríguez N, Mosquera González A, Pena-Verdeal H, Yebra-Pimentel Vilar E. Precise segmentation of the bulbar conjunctiva for hyperaemia images. Pattern Anal Appl 2017. DOI: 10.1007/s10044-017-0658-z.