1. Zaharieva MS, Salvadori EA, Messinger DS, Visser I, Colonnesi C. Automated facial expression measurement in a longitudinal sample of 4- and 8-month-olds: Baby FaceReader 9 and manual coding of affective expressions. Behav Res Methods 2024;56:5709-5731. [PMID: 38273072] [DOI: 10.3758/s13428-023-02301-3]
Abstract
Facial expressions are among the earliest behaviors infants use to express emotional states, and are crucial to preverbal social interaction. Manual coding of infant facial expressions, however, is laborious and poses limitations to replicability. Recent developments in computer vision have advanced automated facial expression analyses in adults, providing reproducible results at lower time investment. Baby FaceReader 9 is commercially available software for automated measurement of infant facial expressions, but has received little validation. We compared Baby FaceReader 9 output to manual micro-coding of positive, negative, or neutral facial expressions in a longitudinal dataset of 58 infants at 4 and 8 months of age during naturalistic face-to-face interactions with the mother, father, and an unfamiliar adult. Baby FaceReader 9's global emotional valence formula yielded reasonable classification accuracy (AUC = .81) for discriminating manually coded positive from negative/neutral facial expressions; however, the discrimination of negative from neutral facial expressions was not reliable (AUC = .58). Automatically detected a priori action unit (AU) configurations for distinguishing positive from negative facial expressions based on existing literature were also not reliable. A parsimonious approach using only automatically detected smiling (AU12) yielded good performance for discriminating positive from negative/neutral facial expressions (AUC = .86). Likewise, automatically detected brow lowering (AU3+AU4) reliably distinguished neutral from negative facial expressions (AUC = .79). These results provide initial support for the use of selected automatically detected individual facial actions to index positive and negative affect in young infants, but shed doubt on the accuracy of complex a priori formulas.
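To make the single-feature result concrete, here is a minimal sketch (not the authors' code) of how such a discrimination is scored: frame-level smiling intensities are compared against binary manual codes via the area under the ROC curve. The arrays are synthetic stand-ins for automatically detected AU12 intensities and manual coding.

```python
# A minimal sketch: frame-level AU12 intensities scored against manually
# coded positive vs. negative/neutral labels; data are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical stand-ins: au12 = automatically detected smiling intensity
# per video frame; y = 1 for manually coded positive expressions, else 0.
au12 = np.concatenate([rng.normal(0.7, 0.2, 500), rng.normal(0.2, 0.2, 1500)])
y = np.concatenate([np.ones(500), np.zeros(1500)])

auc = roc_auc_score(y, au12)  # the paper reports AUC = .86 for this contrast
print(f"AUC for AU12 vs. manual positive coding: {auc:.2f}")
```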
Affiliation(s)
- Martina S Zaharieva
- Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands.
- Developmental Psychopathology Unit, Research Institute of Child Development and Education, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands.
- Yield, Research Priority Area, University of Amsterdam, Amsterdam, The Netherlands.
- Eliala A Salvadori
- Developmental Psychopathology Unit, Research Institute of Child Development and Education, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Yield, Research Priority Area, University of Amsterdam, Amsterdam, The Netherlands
- Daniel S Messinger
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Department of Pediatrics, University of Miami, Coral Gables, FL, USA
- Department of Music Engineering, University of Miami, Coral Gables, FL, USA
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Ingmar Visser
- Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Yield, Research Priority Area, University of Amsterdam, Amsterdam, The Netherlands
- Cristina Colonnesi
- Developmental Psychopathology Unit, Research Institute of Child Development and Education, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Yield, Research Priority Area, University of Amsterdam, Amsterdam, The Netherlands
2. Hajianfar G, Hosseini SA, Bagherieh S, Oveisi M, Shiri I, Zaidi H. Impact of harmonization on the reproducibility of MRI radiomic features when using different scanners, acquisition parameters, and image pre-processing techniques: a phantom study. Med Biol Eng Comput 2024;62:2319-2332. [PMID: 38536580] [DOI: 10.1007/s11517-024-03071-6]
Abstract
This study investigated the impact of ComBat harmonization on the reproducibility of radiomic features extracted from magnetic resonance images (MRI) acquired on different scanners, using various data acquisition parameters and multiple image pre-processing techniques, with a dedicated MRI phantom. Four scanners were used to acquire an MRI of a nonanatomic phantom as part of the TCIA RIDER database. In fast spin-echo inversion recovery (IR) sequences, several inversion times were employed: 50, 100, 250, 500, 750, 1000, 1500, 2000, 2500, and 3000 ms. In addition, a 3D fast spoiled gradient recalled echo (FSPGR) sequence was used to investigate several flip angles (FA): 2, 5, 10, 15, 20, 25, and 30 degrees. Nineteen phantom compartments were manually segmented. Different approaches were used to pre-process each image: bin discretization, wavelet filter, Laplacian of Gaussian, logarithm, square, square root, and gradient. Overall, 92 first-, second-, and higher-order statistical radiomic features were extracted, and ComBat harmonization was applied to the extracted features. Finally, the intraclass correlation coefficient (ICC) and Kruskal-Wallis (KW) tests were used to assess the robustness of the radiomic features. Before and after ComBat harmonization, across the different image pre-processing techniques, the number of non-significant features in the KW test ranged between 0-5 and 29-74 for the various scanners, between 31-91 and 37-92 for the three repeated acquisitions, between 0-33 and 34-90 for the FAs, and between 3-68 and 65-89 for the IRs. The number of features with ICC over 90% ranged between 0-8 and 6-60 for the various scanners, between 11-75 and 17-80 for the three repeated acquisitions, between 3-83 and 9-84 for the FAs, and between 3-49 and 3-63 for the IRs. The use of various scanners, IRs, and FAs has a great impact on radiomic features; however, the majority of scanner-robust features are also robust to IR and FA. Among the parameters affecting MR images, repeated tests on one scanner have a negligible impact on radiomic features. Different scanners and acquisition parameters, combined with various image pre-processing techniques, might affect radiomic features to a large extent, and ComBat harmonization can significantly improve their reproducibility.
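As a rough illustration of the harmonization-then-test workflow, the sketch below applies a simplified per-scanner location/scale standardization (ComBat without its empirical Bayes shrinkage, so not the actual ComBat algorithm) and then runs a Kruskal-Wallis test per feature across scanners; all data are synthetic stand-ins.

```python
# Simplified sketch of the robustness check, not ComBat itself: per-scanner
# standardization followed by a Kruskal-Wallis test per radiomic feature.
import numpy as np
from scipy.stats import kruskal

def harmonize(features, batch):
    """features: (n_samples, n_features); batch: scanner label per sample."""
    out = features.astype(float).copy()
    for b in np.unique(batch):
        idx = batch == b
        mu, sd = out[idx].mean(axis=0), out[idx].std(axis=0) + 1e-12
        out[idx] = (out[idx] - mu) / sd   # remove scanner-specific shift/scale
    return out

rng = np.random.default_rng(1)
batch = np.repeat([0, 1, 2, 3], 19)                 # 4 scanners x 19 compartments
raw = rng.normal(0, 1, (76, 92)) + batch[:, None]   # scanner effect on 92 features

harmonized = harmonize(raw, batch)
pvals = [kruskal(*[harmonized[batch == b, j] for b in range(4)]).pvalue
         for j in range(92)]
print("non-significant (scanner-robust) features:", sum(p > 0.05 for p in pvals))
```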
Affiliation(s)
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Seyyed Ali Hosseini
- Translational Neuroimaging Laboratory, McGill University Research Centre for Studies in Aging, Douglas Hospital, McGill University, Montréal, Québec, Canada
- Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montréal, Québec, Canada
- Sara Bagherieh
- School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Mehrdad Oveisi
- Department of Computer Science, University of British Columbia, Vancouver, BC, Canada
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.
- University Research and Innovation Center, Óbuda University, Budapest, Hungary.
3. Fan Y, Mao S, Li M, Kang J, Li B. LMFD: lightweight multi-feature descriptors for image stitching. Sci Rep 2023;13:21162. [PMID: 38036564] [PMCID: PMC10689729] [DOI: 10.1038/s41598-023-48432-7]
Abstract
Image stitching is a fundamental pillar of computer vision, and its effectiveness hinges significantly on the quality of the feature descriptors. However, existing feature descriptors face several challenges, including inadequate robustness to noise or rotational transformations and limited adaptability during hardware deployment. To address these limitations, this paper proposes a set of feature descriptors for image stitching named Lightweight Multi-Feature Descriptors (LMFD). Based on the extensive extraction of gradients, means, and global information surrounding the feature points, feature descriptors are generated through various combinations to enhance the image stitching process. This endows the algorithm with strong rotational invariance and noise resistance, improving its accuracy and reliability. Furthermore, the feature descriptors take the form of binary matrices, which not only facilitates more efficient hardware deployment but also enhances computational efficiency. The use of binary matrices significantly reduces the computational complexity of the algorithm while preserving its efficacy. To validate the effectiveness of LMFD, rigorous experimentation was conducted on the HPatches and 2D-HeLa datasets. The results demonstrate that LMFD outperforms state-of-the-art image matching algorithms in terms of accuracy, substantiating its potential for practical applications in various domains.
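LMFD itself is not released as code in the abstract; as a stand-in, this sketch shows the generic binary-descriptor pipeline it builds on, using OpenCV's ORB: binary bit-string descriptors are matched with cheap Hamming distance, which is what makes such descriptors attractive for hardware deployment. The images are synthetic.

```python
# Generic binary-descriptor matching sketch (ORB as a stand-in for LMFD).
import numpy as np
import cv2

rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, (240, 320), dtype=np.uint8)   # synthetic texture
img2 = np.roll(img1, 40, axis=1)                          # simulated overlap shift

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)              # des*: uint8 bit-strings
kp2, des2 = orb.detectAndCompute(img2, None)

# Binary descriptors allow Hamming-distance matching (XOR + popcount),
# which keeps matching cheap on constrained hardware.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative correspondences for homography estimation")
```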
Affiliation(s)
- Yingbo Fan
- Institute of Remote Sensing and Geographic Information Systems, Peking University, No.5 Summer Palace Road, Beijing, 100000, China
- Shanjun Mao
- Institute of Remote Sensing and Geographic Information Systems, Peking University, No.5 Summer Palace Road, Beijing, 100000, China.
- Mei Li
- Institute of Remote Sensing and Geographic Information Systems, Peking University, No.5 Summer Palace Road, Beijing, 100000, China
- Jitong Kang
- Institute of Remote Sensing and Geographic Information Systems, Peking University, No.5 Summer Palace Road, Beijing, 100000, China
- Ben Li
- Institute of Remote Sensing and Geographic Information Systems, Peking University, No.5 Summer Palace Road, Beijing, 100000, China
4. Gudadhe SS, Thakare AD, Oliva D. Classification of intracranial hemorrhage CT images based on texture analysis using ensemble-based machine learning algorithms: A comparative study. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104832]
5. Chattopadhyay S, Singh PK, Ijaz MF, Kim S, Sarkar R. SnapEnsemFS: a snapshot ensembling-based deep feature selection model for colorectal cancer histological analysis. Sci Rep 2023;13:9937. [PMID: 37336964] [DOI: 10.1038/s41598-023-36921-8]
Abstract
Colorectal cancer is the third most common type of cancer diagnosed annually and the second leading cause of death due to cancer. Early diagnosis of this ailment is vital for preventing tumours from spreading and for planning treatment to possibly eradicate the disease. However, population-wide screening is hindered by the need for medical professionals to analyse histological slides manually. Thus, an automated computer-aided detection (CAD) framework based on deep learning is proposed in this research that uses histological slide images for predictions. Ensemble learning is a popular strategy for fusing the salient properties of several models to make the final predictions. However, such frameworks are computationally costly since they require the training of multiple base learners. Instead, in this study, we adopt a snapshot ensemble method: rather than fusing decision scores from the snapshots of a Convolutional Neural Network (CNN) model in the traditional way, we extract deep features from the penultimate layer of the CNN model. Since the deep features are extracted from the same CNN model but under different learning environments, there may be redundancy in the feature set. To alleviate this, the features are fed into Particle Swarm Optimization, a popular meta-heuristic, for dimensionality reduction of the feature space and better classification. Upon evaluation on a publicly available colorectal cancer histology dataset using a five-fold cross-validation scheme, the proposed method obtains a highest accuracy of 97.60% and an F1-score of 97.61%, outperforming existing state-of-the-art methods on the same dataset. Further, qualitative investigation of class activation maps provides visual explainability to medical practitioners and justifies the use of the CAD framework in colorectal histology screening. Our source codes are publicly accessible at: https://github.com/soumitri2001/SnapEnsemFS.
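A minimal sketch of the snapshot-ensembling idea under a cyclic learning-rate schedule, with a toy model and random tensors standing in for the CNN and histology patches; the penultimate-layer feature extraction mirrors the description above, while the PSO step is omitted.

```python
# Snapshot ensembling sketch: save model snapshots at cosine-cycle boundaries,
# then concatenate penultimate-layer features across snapshots.
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 8))
x = torch.randn(128, 64)                    # toy stand-in for histology patches
y = torch.randint(0, 8, (128,))
criterion = nn.CrossEntropyLoss()

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=10)

snapshots = []
for epoch in range(30):
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()
    scheduler.step()
    if (epoch + 1) % 10 == 0:               # end of a cosine cycle -> snapshot
        snapshots.append(copy.deepcopy(model))

# Deep features: concatenate penultimate-layer activations from each snapshot
# (the paper then reduces this feature space with particle swarm optimization).
with torch.no_grad():
    feats = torch.cat([snap[:-1](x) for snap in snapshots], dim=1)
print(feats.shape)  # (128, 32 * n_snapshots)
```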
Affiliation(s)
- Soumitri Chattopadhyay
- Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata, 700106, West Bengal, India
- Pawan Kumar Singh
- Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata, 700106, West Bengal, India
- Muhammad Fazal Ijaz
- Department of Mechanical Engineering, Faculty of Engineering and Information Technology, The University of Melbourne, Grattam Street, Parkville, VIC, 3010, Australia.
- SeongKi Kim
- National Centre of Excellence in Software, Sangmyung University, Seoul, 03016, Korea.
- Ram Sarkar
- Department of Computer Science & Engineering, Jadavpur University, Kolkata, 700032, India
6. Ullah M, Hadi F, Song J, Yu DJ. PScL-2LSAESM: bioimage-based prediction of protein subcellular localization by integrating heterogeneous features with the two-level SAE-SM and mean ensemble method. Bioinformatics 2023;39:btac727. [PMID: 36413068] [PMCID: PMC9947927] [DOI: 10.1093/bioinformatics/btac727]
Abstract
MOTIVATION Over the past decades, a variety of in silico methods have been developed to predict protein subcellular localization within cells. A common and major challenge in the design and development of such methods, however, is how to effectively utilize the heterogeneous feature sets extracted from bioimages, and in this regard limited efforts have been undertaken. RESULTS We propose a new two-level stacked autoencoder network (termed 2L-SAE-SM) to improve predictive performance by integrating the heterogeneous feature sets. In the first level of 2L-SAE-SM, each optimal heterogeneous feature set is fed to train our designed stacked autoencoder network (SAE-SM). All the trained SAE-SMs in the first level then output decision sets based on their respective optimal heterogeneous feature sets, known as 'intermediate decision' sets. These intermediate decision sets are ensembled using the mean ensemble method to generate the 'intermediate feature' set for the second-level SAE-SM. Using the proposed framework, we further develop a novel predictor, referred to as PScL-2LSAESM, to characterize image-based protein subcellular localization. Extensive benchmarking experiments on the latest benchmark training and independent test datasets collected from the Human Protein Atlas databank demonstrate the effectiveness of the proposed 2L-SAE-SM framework for the integration of heterogeneous feature sets. Moreover, performance comparison with current state-of-the-art methods further illustrates that PScL-2LSAESM clearly outperforms them on the task of protein subcellular localization. AVAILABILITY AND IMPLEMENTATION https://github.com/csbio-njust-edu/PScL-2LSAESM. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
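The mean-ensemble step is easy to state concretely: each first-level SAE-SM emits a decision set (a class-probability matrix), and their element-wise mean becomes the 'intermediate feature' set for the second level. A minimal sketch with assumed array shapes:

```python
# Mean ensemble of intermediate decision sets; shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_feature_sets, n_samples, n_classes = 3, 100, 7

# One (n_samples, n_classes) decision matrix per heterogeneous feature set,
# as produced by the first-level SAE-SMs.
intermediate_decisions = rng.dirichlet(np.ones(n_classes),
                                       (n_feature_sets, n_samples))

intermediate_features = intermediate_decisions.mean(axis=0)  # mean ensemble
print(intermediate_features.shape)  # (100, 7) -> input to second-level SAE-SM
```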
Affiliation(s)
- Matee Ullah
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Fazal Hadi
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Dong-Jun Yu
- To whom correspondence should be addressed.
7. Felipe GZ, Teixeira LO, Pereira RM, Zanoni JN, Souza SRG, Nanni L, Cavalcanti GDC, Costa YMG. Cancer Identification in Enteric Nervous System Preclinical Images Using Handcrafted and Automatic Learned Features. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-11114-y]
8. Texture and material classification with multi-scale ternary and septenary patterns. J King Saud Univ Comput Inf Sci 2022. [DOI: 10.1016/j.jksuci.2022.12.009]
9. Francis A, Pandian IA, Anitha J. A boon to aged society: Early diagnosis of Alzheimer's disease-An opinion. Front Public Health 2022;10:1076472. [PMID: 36530651] [PMCID: PMC9751990] [DOI: 10.3389/fpubh.2022.1076472]
Affiliation(s)
- Ambily Francis
- Department of Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
- Department of Electronics and Communication Engineering, Sahrdaya College of Engineering and Technology, Kodakara, India
- Immanuel Alex Pandian
- Department of Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
- J. Anitha
- Department of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
- Correspondence: J. Anitha
10. Fekri-Ershad S, Al-Imari MJ, Hamad MH, Alsaffar MF, Hassan FG, Hadi ME, Mahdi KS. Cell Phenotype Classification Based on Joint of Texture Information and Multilayer Feature Extraction in DenseNet. Comput Intell Neurosci 2022;2022:6895833. [PMID: 36479023] [PMCID: PMC9722294] [DOI: 10.1155/2022/6895833]
Abstract
Cell phenotype classification is a critical task in many medical applications, such as protein localization, gene effect identification, and the diagnosis of some cancer types. Fluorescence imaging is the most efficient tool for analyzing the biological characteristics of cells, so cell phenotype classification in fluorescence microscopy images has received increasing attention from scientists over the last decade. The visible structures of cells usually differ in shape, texture, the relationship between intensities, and so on. Most existing approaches use either a single type of feature or a joint combination of low-level and high-level features. In this paper, a new approach is proposed based on a combination of low-level and high-level features. An improved version of local quinary patterns is used to extract low-level texture features, and an innovative multilayer deep feature extraction method is applied to extract high-level features from DenseNet. In this method, the output feature map of each dense block is fed separately into pooling and flatten layers, and the resulting feature vectors are concatenated. The performance of the proposed approach is evaluated on the benchmark dataset 2D-HeLa in terms of accuracy and compared with state-of-the-art methods. The comparison demonstrates the higher performance of the proposed approach relative to several efficient methods.
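For reference, the quinary coding underlying local quinary patterns maps each neighbour-centre difference to one of five levels using two thresholds, and the resulting pattern is split into binary maps for histogramming. The sketch below shows the standard scheme, not the paper's improved variant; the thresholds are illustrative.

```python
# Standard local quinary pattern coding on a single 3x3 patch.
import numpy as np

def quinary_code(diff, t1=2, t2=5):
    """Map neighbour-minus-centre differences to five levels {-2,-1,0,1,2}."""
    code = np.zeros_like(diff, dtype=int)
    code[diff >= t2] = 2
    code[(diff >= t1) & (diff < t2)] = 1
    code[(diff <= -t1) & (diff > -t2)] = -1
    code[diff <= -t2] = -2
    return code

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, (3, 3))
centre = int(patch[1, 1])
neighbours = np.delete(patch.flatten(), 4)          # the 8-neighbourhood

levels = quinary_code(neighbours.astype(int) - centre)
# Usual LQP practice: one binary pattern per non-zero level (four split maps).
binary_maps = {lvl: (levels == lvl).astype(int) for lvl in (-2, -1, 1, 2)}
print(levels, binary_maps[2])
```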
Affiliation(s)
- Shervan Fekri-Ershad
- Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran
- Big Data Research Center, Najafabad Branch, Islamic Azad University, Najafabad, Iran
- Mustafa Jawad Al-Imari
- Department of Medical Laboratory Techniques, Al-Mustaqbal University College, Hillah 51001, Babylon, Iraq
- Mohammed Hayder Hamad
- Department of Medical Laboratory Techniques, Al-Mustaqbal University College, Hillah 51001, Babylon, Iraq
- Marwa Fadhil Alsaffar
- Department of Medical Laboratory Techniques, Al-Mustaqbal University College, Hillah 51001, Babylon, Iraq
- Fuad Ghazi Hassan
- Department of Medical Laboratory Techniques, Al-Mustaqbal University College, Hillah 51001, Babylon, Iraq
- Mazin Eidan Hadi
- Department of Medical Laboratory Techniques, Al-Mustaqbal University College, Hillah 51001, Babylon, Iraq
- Karrar Salih Mahdi
- Department of Medical Laboratory Techniques, Al-Mustaqbal University College, Hillah 51001, Babylon, Iraq
11. Yang Y, Hu Y, Zhang X, Wang S. Two-Stage Selective Ensemble of CNN via Deep Tree Training for Medical Image Classification. IEEE Trans Cybern 2022;52:9194-9207. [PMID: 33705343] [DOI: 10.1109/tcyb.2021.3061147]
Abstract
Medical image classification is an important task in computer-aided diagnosis systems. Its performance is critically determined by the descriptiveness and discriminative power of the features extracted from images. With the rapid development of deep learning, deep convolutional neural networks (CNNs) have been widely used to learn optimal high-level features from the raw pixels of images for a given classification task. However, due to the limited amount of labeled medical images, which may also suffer from quality distortions, such techniques crucially suffer from training difficulties, including overfitting, local optima, and vanishing gradients. To solve these problems, in this article, we propose a two-stage selective ensemble of CNN branches via a novel training strategy called deep tree training (DTT). In our approach, DTT jointly trains a series of networks constructed from the hidden layers of a CNN in a hierarchical manner. This has two advantages: vanishing gradients are mitigated by supplementing gradients to the hidden layers of the CNN, and base classifiers over the middle-level features are obtained intrinsically, with minimal computational burden, for an ensemble solution. Moreover, the CNN branches serving as base learners are combined into the optimal classifier via the proposed two-stage selective ensemble approach based on both accuracy and diversity criteria. Extensive experiments on the CIFAR-10 benchmark and two specific medical image datasets illustrate that our approach achieves better performance in terms of accuracy, sensitivity, specificity, and F1 score.
12. Multi-class nucleus detection and classification using deep convolutional neural network with enhanced high dimensional dissimilarity translation model on cervical cells. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.06.003]
13. Chen W, Shen W, Gao L, Li X. Hybrid Loss-Constrained Lightweight Convolutional Neural Networks for Cervical Cell Classification. Sensors 2022;22:3272. [PMID: 35590961] [PMCID: PMC9101629] [DOI: 10.3390/s22093272]
Abstract
Artificial intelligence (AI) technologies have resulted in remarkable achievements and conferred massive benefits to computer-aided systems in medical imaging. However, the worldwide adoption of AI-based automation-assisted cervical cancer screening systems is hindered by computational cost and resource limitations. Thus, a highly economical and efficient model with enhanced classification ability is much more desirable. This paper proposes a hybrid loss function with label smoothing to improve the distinguishing power of lightweight convolutional neural networks (CNNs) for cervical cell classification. The results strengthen our confidence in hybrid loss-constrained lightweight CNNs, which can achieve satisfactory accuracy at much lower computational cost on the SIPaKMeD dataset. In particular, ShuffleNetV2 obtained a comparable classification result (96.18% accuracy, 96.30% precision, 96.23% recall, and 99.08% specificity) with only one-seventh of the memory usage, one-sixth of the number of parameters, and one-fiftieth of the total FLOPs of DenseNet-121 (96.79% accuracy). GhostNet achieved an improved classification result (96.39% accuracy, 96.42% precision, 96.39% recall, and 99.09% specificity) with one-half of the memory usage, one-quarter of the number of parameters, and one-fiftieth of the total FLOPs of DenseNet-121. The proposed lightweight CNNs are likely to lead to an easily applicable and cost-efficient automation-assisted system for cervical cancer diagnosis and prevention.
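The paper's exact hybrid loss is specific to it; the sketch below shows only the label-smoothing ingredient, which recent PyTorch versions expose directly on the standard cross-entropy criterion.

```python
# Label-smoothing cross-entropy, one ingredient of a hybrid loss; the rest of
# the paper's loss formulation is not reproduced here.
import torch
import torch.nn as nn

logits = torch.randn(8, 5)            # toy batch: 8 cell images, 5 classes
targets = torch.randint(0, 5, (8,))

plain = nn.CrossEntropyLoss()
smoothed = nn.CrossEntropyLoss(label_smoothing=0.1)  # soften one-hot targets

print(plain(logits, targets).item(), smoothed(logits, targets).item())
# Smoothing penalizes over-confident predictions, which plausibly helps small
# lightweight backbones (ShuffleNetV2, GhostNet) generalize on SIPaKMeD.
```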
14. Deep G, Kaur J, Singh SP, Nayak SR, Kumar M, Kautish S. MeQryEP: A Texture Based Descriptor for Biomedical Image Retrieval. J Healthc Eng 2022;2022:9505229. [PMID: 35449840] [PMCID: PMC9017451] [DOI: 10.1155/2022/9505229]
Abstract
Image texture analysis is a dynamic area of research in computer vision and image processing, with applications ranging from medical image analysis to image segmentation to content-based image retrieval and beyond. This work implements 'quinary encoding on mesh patterns' (MeQryEP), a new approach to extracting texture features for the indexing and retrieval of biomedical images. As an extension of previous work, this research investigates the use of local quinary patterns (LQP) on mesh patterns in three different orientations. Binary and non-binary codings, such as local binary patterns (LBP), local ternary patterns (LTP), and LQP, encode the gray-scale relationship between a central pixel and its surrounding neighbors in a two-dimensional (2D) local region of an image, whereas the proposed strategy uses three selected directions of mesh patterns to encode the gray-scale relationship among the surrounding neighbors of a given center pixel. An innovative aspect of the proposed method is that it uses mesh image structure quinary pattern features to encode additional spatial structure information, resulting in better retrieval. Analyses on three kinds of benchmark biomedical datasets assess the viability of MeQryEP: LIDC-IDRI-CT and VIA/I-ELCAP-CT, lung image databases based on computed tomography (CT), and OASIS-MRI, a brain database based on magnetic resonance imaging (MRI). In terms of average retrieval precision (ARP) and average retrieval rate (ARR), this method outperforms state-of-the-art texture extraction methods such as LBP, LQEP, LTP, LMeP, LMeTerP, DLTerQEP, and LQEQryP.
Affiliation(s)
- G. Deep
- Chandigarh Engineering College Landran, Mohali, India
- J. Kaur
- Chandigarh Engineering College Landran, Mohali, India
- Soumya Ranjan Nayak
- Amity School of Engineering and Technology, Amity University Uttar Pradesh, Noida, India
- Manoj Kumar
- School of Computer Science, University of Petroleum and Energy Studies, Dehradun, India
15. An Image Processing Protocol to Extract Variables Predictive of Human Embryo Fitness for Assisted Reproduction. Appl Sci (Basel) 2022;12:3531. [DOI: 10.3390/app12073531]
Abstract
Despite new embryo selection techniques and commercially available equipment such as EmbryoScope® and Geri®, which help in the evaluation of embryo quality, embryologists' classifications remain subjective and subject to inter- and intra-observer variability, compromising successful embryo implantation. Nonetheless, with images acquired through time-lapse systems, it is possible to process the images digitally, providing a better analysis of the embryo and enabling the automatic analysis of a large volume of information. An image processing protocol was developed using well-established techniques to segment images of blastocysts and extract variables of interest. A total of 33 variables were generated automatically by digital image processing, each representing a different aspect of the embryo and describing a different characteristic of the blastocyst. These variables can be categorized into texture, gray-level average, gray-level standard deviation, modal value, relations, and light level. The automated and directed steps of the proposed processing protocol exclude spurious results, except when image quality (e.g., focus) prevents correct segmentation. The image processing protocol can segment human blastocyst images and automatically extract 33 variables that describe quantitative aspects of the blastocyst's regions, with potential utility in embryo selection for assisted reproductive technology (ART).
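A minimal sketch of extracting a few of the variable families named above (gray-level average and standard deviation, modal value, and texture) from a segmented region, using scikit-image's GLCM utilities (graycomatrix/graycoprops, as named in skimage ≥ 0.19); the ROI is a synthetic stand-in for a segmented blastocyst.

```python
# Gray-level statistics and GLCM texture properties for a segmented ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in blastocyst crop

gray_mean = roi.mean()                                  # gray-level average
gray_std = roi.std()                                    # gray-level std. dev.
modal_value = np.bincount(roi.ravel(), minlength=256).argmax()

glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256, normed=True)
texture = {prop: graycoprops(glcm, prop)[0, 0]
           for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(gray_mean, gray_std, modal_value, texture)
```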
16. Li H, Mukundan R, Boyd S. Spatial Distribution Analysis of Novel Texture Feature Descriptors for Accurate Breast Density Classification. Sensors 2022;22:2672. [PMID: 35408286] [PMCID: PMC9002800] [DOI: 10.3390/s22072672]
Abstract
Breast density has been recognised as an important biomarker that indicates the risk of developing breast cancer. Accurate classification of breast density plays a crucial role in developing a computer-aided detection (CADe) system for mammogram interpretation. This paper proposes a novel texture descriptor, namely, rotation invariant uniform local quinary patterns (RIU4-LQP), to describe texture patterns in mammograms and to improve the robustness of image features. In conventional processing schemes, image features are obtained by computing histograms from texture patterns. However, such processes ignore very important spatial information related to the texture features. This study designs a new feature vector, namely, K-spectrum, by using Baddeley's K-inhom function to characterise the spatial distribution information of feature point sets. Texture features extracted by RIU4-LQP and K-spectrum are utilised to classify mammograms into BI-RADS density categories. Three feature selection methods are employed to optimise the feature set. In our experiment, two mammogram datasets, INbreast and MIAS, are used to test the proposed methods, and comparative analyses and statistical tests between different schemes are conducted. Experimental results show that our proposed method outperforms other approaches described in the literature, with the best classification accuracy of 92.76% (INbreast) and 86.96% (MIAS).
Affiliation(s)
- Haipeng Li
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8140, New Zealand
- Ramakrishnan Mukundan
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8140, New Zealand
- Shelley Boyd
- Canterbury Breastcare, St. George's Medical Centre, Christchurch 8014, New Zealand
17. Deep localization of subcellular protein structures from fluorescence microscopy images. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06715-y]
18. Lightweight convolutional neural network with knowledge distillation for cervical cells classification. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103177]
19. Tian R, Lei X, Ouyang M. A local binary patterns/variance operator based on guided filtering for seismic fault detection. SN Appl Sci 2021. [DOI: 10.1007/s42452-021-04866-0]
Abstract
To suppress noise interference, improve the fault detection capability of seismic data, fully exploit the effective information in seismic data, and further improve the accuracy of fault detection, this study proposes a seismic fault detection method that combines the local binary pattern/variance (LBP/VAR) operator with guided filtering. The proposed method combines the advantages of LBP/VAR and guided filtering to remove noise from seismic data, and can simultaneously smooth the data and preserve linear features. When compared with several existing methods (the coherence operator, the LBP/VAR operator, the LBP/VAR operator based on median filtering, and the Canny operator based on guided filtering), the proposed method exhibits a better signal-to-noise ratio, a better ability to identify small faults, and robustness to noise. The algorithm can control the balance between noise attenuation and effective signal preservation, and effectively detects faults in seismic data. The proposed method therefore improves fault identification accuracy, facilitates gas-bearing analysis of the structure, provides guidance for actual well location deployment, and has practical significance for oil and gas exploration and development.
20. Bianconi F, Fernández A, Smeraldi F, Pascoletti G. Colour and Texture Descriptors for Visual Recognition: A Historical Overview. J Imaging 2021;7:245. [PMID: 34821876] [PMCID: PMC8622414] [DOI: 10.3390/jimaging7110245]
Abstract
Colour and texture are two perceptual stimuli that determine, to a great extent, the appearance of objects, materials and scenes. The ability to process texture and colour is a fundamental skill in humans as well as in animals; therefore, reproducing such capacity in artificial (‘intelligent’) systems has attracted considerable research attention since the early 70s. Whereas the main approach to the problem was essentially theory-driven (‘hand-crafted’) up to not long ago, in recent years the focus has moved towards data-driven solutions (deep learning). In this overview we retrace the key ideas and methods that have accompanied the evolution of colour and texture analysis over the last five decades, from the ‘early years’ to convolutional networks. Specifically, we review geometric, differential, statistical and rank-based approaches. Advantages and disadvantages of traditional methods vs. deep learning are also critically discussed, including a perspective on which traditional methods have already been subsumed by deep learning or would be feasible to integrate in a data-driven approach.
Affiliation(s)
- Francesco Bianconi
- Department of Engineering, Università degli Studi di Perugia, Via Goffredo Duranti 93, 06135 Perugia, Italy
- Correspondence: Tel.: +39-075-5853706
- Antonio Fernández
- School of Industrial Engineering, Universidade de Vigo, Rúa Maxwell s/n, 36310 Vigo, Spain
- Fabrizio Smeraldi
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London E1 4NS, UK
- Giulia Pascoletti
- Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
21. Ketola JHJ, Inkinen SI, Karppinen J, Niinimäki J, Tervonen O, Nieminen MT. T2-weighted magnetic resonance imaging texture as predictor of low back pain: A texture analysis-based classification pipeline to symptomatic and asymptomatic cases. J Orthop Res 2021;39:2428-2438. [PMID: 33368707] [DOI: 10.1002/jor.24973]
Abstract
Low back pain is a very common symptom and the leading cause of disability throughout the world. Several degenerative imaging findings seen on magnetic resonance imaging are associated with low back pain, but none of them is specific for the presence of low back pain, as abnormal findings are prevalent among asymptomatic subjects as well. The purpose of this population-based study was to investigate whether more specific magnetic resonance imaging predictors of low back pain could be found via texture analysis and machine learning. We used this methodology to classify T2-weighted magnetic resonance images from the Northern Finland Birth Cohort 1966 data into symptomatic and asymptomatic groups. Lumbar spine magnetic resonance imaging was performed using a fast spin-echo sequence at 1.5 T. A texture analysis pipeline consisting of textural feature extraction, principal component analysis, and a logistic regression classifier was applied to classify the data into symptomatic (clinically relevant pain with frequency ≥30 days and intensity ≥6/10) and asymptomatic (frequency ≤7 days, intensity ≤3/10, and no previous pain episodes in the follow-up period) groups. The best classification results were observed when applying texture analysis to the two lowest intervertebral discs (L4-L5 and L5-S1), with accuracy of 83%, specificity of 83%, sensitivity of 82%, negative predictive value of 94%, precision of 56%, and receiver operating characteristic area under the curve of 0.91. To conclude, textural features from T2-weighted magnetic resonance images can be applied in low back pain classification.
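The classification pipeline described above maps directly onto a standard scikit-learn construction; this sketch uses synthetic stand-ins for the disc texture features and reports a cross-validated AUC.

```python
# Texture features -> PCA -> logistic regression, evaluated by ROC AUC.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))             # stand-in: 300 texture features/subject
y = rng.integers(0, 2, 200)                 # 1 = symptomatic, 0 = asymptomatic

clf = make_pipeline(StandardScaler(), PCA(n_components=20),
                    LogisticRegression(max_iter=1000))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated ROC AUC: {auc:.2f}")   # paper reports 0.91 on real data
```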
Affiliation(s)
- Juuso H J Ketola
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Satu I Inkinen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Jaro Karppinen
- Medical Research Center Oulu, Oulu University Hospital and University of Oulu, Oulu, Finland
- Department of Physical and Rehabilitation Medicine, Rehabilitation Services of South Karelia Social and Health Care District, Lappeenranta, Finland
- Department of Occupational Health, Finnish Institute of Occupational Health, Oulu, Finland
- Jaakko Niinimäki
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Medical Research Center Oulu, Oulu University Hospital and University of Oulu, Oulu, Finland
- Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- Osmo Tervonen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Medical Research Center Oulu, Oulu University Hospital and University of Oulu, Oulu, Finland
- Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- Miika T Nieminen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Medical Research Center Oulu, Oulu University Hospital and University of Oulu, Oulu, Finland
- Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
22. Li H, Mukundan R, Boyd S. Novel Texture Feature Descriptors Based on Multi-Fractal Analysis and LBP for Classifying Breast Density in Mammograms. J Imaging 2021;7:205. [PMID: 34677291] [PMCID: PMC8540831] [DOI: 10.3390/jimaging7100205]
Abstract
This paper investigates the usefulness of multi-fractal analysis and local binary patterns (LBP) as texture descriptors for classifying mammogram images into different breast density categories. Multi-fractal analysis is also used in the pre-processing step to segment the region of interest (ROI). We use four multi-fractal measures and the LBP method to extract texture features, and compare their classification performance in experiments. In addition, a feature descriptor combining multi-fractal features and multi-resolution LBP (MLBP) features is proposed and evaluated to improve classification accuracy. An autoencoder network and principal component analysis (PCA) are used to reduce feature redundancy in the classification model. A full-field digital mammogram (FFDM) dataset, INbreast, which contains 409 mammogram images, is used in our experiment. BI-RADS density labels given by radiologists are used as the ground truth to evaluate the classification results of the proposed methods. Experimental results show that the proposed feature descriptor based on multi-fractal features and LBP results in higher classification accuracy than individual texture feature sets.
Affiliation(s)
- Haipeng Li
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8140, New Zealand
- Ramakrishnan Mukundan
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8140, New Zealand
- Shelley Boyd
- Canterbury Breastcare, St. George's Medical Centre, Christchurch 8014, New Zealand
23. Image retrieval based on texture using latent space representation of discrete Fourier transformed maps. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05955-2]
24. Ghalati MK, Nunes A, Ferreira H, Serranho P, Bernardes R. Texture Analysis and its Applications in Biomedical Imaging: A Survey. IEEE Rev Biomed Eng 2021;15:222-246. [PMID: 34570709] [DOI: 10.1109/rbme.2021.3115703]
Abstract
Texture analysis describes a variety of image analysis techniques that quantify variation in intensity and pattern. This paper provides an overview of several texture analysis approaches, addressing the rationale supporting them, their advantages, their drawbacks, and their applications. The survey's emphasis is on collecting and categorising over five decades of active research on texture analysis. Brief descriptions of different approaches are presented along with application examples. From a broad range of texture analysis applications, the survey's final focus is on biomedical image analysis. An up-to-date list is provided of biological tissues and organs in which disorders produce texture changes that may be used to spot disease onset and progression. Finally, the role of texture analysis methods as biomarkers of disease is summarised.
25. Chan YM, Ng E, Jahmunah V, Koh JEW, Oh SL, Han WS, Yip LWL, Acharya UR. Automated detection of glaucoma using elongated quinary patterns technique with optical coherence tomography angiogram images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102895]
26. Combining Spectral and Texture Features of UAV Images for the Remote Estimation of Rice LAI throughout the Entire Growing Season. Remote Sens 2021;13:3001. [DOI: 10.3390/rs13153001]
Abstract
Leaf area index (LAI) estimation is very important for canopy structure analysis, yield prediction, and more. The unmanned aerial vehicle (UAV) is a promising platform for LAI estimation due to its great applicability and flexibility. At present, the vegetation index (VI) is still the most widely used method in LAI estimation because of its speed and simplicity. However, VI reflects only the spectral information and ignores the texture information of images, so it is difficult to adapt to the unique and complex morphological changes of rice in different growth stages. In this study, we put forward a novel method that combines texture information derived from the local binary pattern and variance features (LBP and VAR) with spectral information based on VI to improve the accuracy of rice LAI estimation throughout the entire growing season. Multitemporal images of two study areas located in Hainan and Hubei were acquired by a 12-band camera, and the main bands typically used to construct VIs, such as green, red, red edge, and near-infrared, were selected to analyze their changes in spectrum and texture during the entire growing season. After mathematically combining plot-level spectrum and texture values, new indices were constructed to estimate rice LAI. Compared with the corresponding VIs, the new indices were all less sensitive to the appearance of panicles and slightly weakened the saturation issue, and the coefficient of determination (R2) improved for all tested VIs throughout the entire growing season. The results showed that the combination of spectral and texture features exhibits better predictive ability than VI alone for estimating rice LAI. This method uses only the texture and spectral information of the UAV image itself; it is fast, easy to operate, requires no manual intervention, and can be a low-cost method for monitoring crop growth.
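A minimal sketch of the spirit of this combination: a spectral index (NDVI) computed from band reflectances, a simple local-variance statistic standing in for the LBP/VAR texture measure, and a plot-level index mixing the two. The band arrays and the combination rule are illustrative assumptions, not the paper's exact formulas.

```python
# Spectral index + texture statistic combination on synthetic band data.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
red = rng.random((100, 100)).astype(np.float32)    # stand-in UAV reflectance bands
nir = rng.random((100, 100)).astype(np.float32)

ndvi = (nir - red) / (nir + red + 1e-6)            # spectral information

local_mean = uniform_filter(nir, size=5)           # 5x5 moving window
local_var = uniform_filter(nir**2, size=5) - local_mean**2  # VAR-like texture

# Plot-level combination: one illustrative spectrum/texture index per plot.
plot_index = ndvi.mean() / (local_var.mean() + 1e-6)
print(plot_index)
```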
27. Susam B, Riek N, Akcakaya M, Xu X, de Sa V, Nezamfar H, Diaz D, Craig K, Goodwin M, Huang J. Automated Pain Assessment in Children using Electrodermal Activity and Video Data Fusion via Machine Learning. IEEE Trans Biomed Eng 2021;69:422-431. [PMID: 34242161] [DOI: 10.1109/tbme.2021.3096137]
Abstract
Pain assessment in children continues to challenge clinicians and researchers, as subjective experiences of pain must be inferred from observable behaviors, both involuntary and deliberate. The presented approach supplements the subjective self-report-based method by fusing electrodermal activity (EDA) recordings with video facial expressions to develop an objective pain assessment metric. Such an approach is especially important for assessing pain in children who are not capable of providing accurate self-reports of pain, requiring nonverbal pain assessment. We demonstrate the performance of our approach using data recorded from children in post-operative recovery following laparoscopic appendectomy. We examined, separately and in combination, the usefulness of EDA and video facial expression data as predictors of children's self-reports of pain following surgery through recovery. Findings indicate that EDA and facial expression data independently provide above-chance sensitivities and specificities, but their fusion for classifying clinically significant versus clinically nonsignificant pain achieved substantial improvement, yielding 90.91% accuracy, with 100% sensitivity and 81.82% specificity. The multimodal measures capitalize upon different features of the complex pain response. Thus, this paper presents both evidence for the utility of a weighted maximum likelihood algorithm as a novel feature selection method for EDA and video facial expression data, and an accurate and objective automated classification algorithm capable of discriminating clinically significant pain from clinically nonsignificant pain in children.
28. Yin Y, He C, Xu B, Li Z. Coronary Plaque Characterization From Optical Coherence Tomography Imaging With a Two-Pathway Cascade Convolutional Neural Network Architecture. Front Cardiovasc Med 2021;8:670502. [PMID: 34222368] [PMCID: PMC8241907] [DOI: 10.3389/fcvm.2021.670502]
Abstract
Background: The morphological structure and tissue composition of a coronary atherosclerotic plaque determine its stability, which can be assessed by intravascular optical coherence tomography (OCT) imaging. However, plaque characterization relies on the interpretation of large datasets by well-trained observers. This study aims to develop a convolutional neural network (CNN) method to automatically extract tissue features from OCT images to characterize the main components of a coronary atherosclerotic plaque (fibrous, lipid, and calcification). The method is based on a novel CNN architecture called TwopathCNN, which is utilized in a cascaded structure. According to the evaluation, the proposed method is effective and robust in characterizing coronary plaque composition from in vivo OCT imaging, achieving on average an F1-score of 0.86 and an accuracy of 0.88. The TwopathCNN architecture and the cascaded structure show significant improvements in performance (p < 0.05), and the cascaded CNN greatly outperforms conventional CNN and machine learning methods in characterization. With its high efficiency, this method may prove to be a promising diagnostic tool for the detection of coronary plaques.
Affiliation(s)
- Yifan Yin
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Chunliu He
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Biao Xu
- Department of Cardiology, Nanjing Drum Tower Hospital, Nanjing, China
- Zhiyong Li
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia
29. Govindarajan S, Swaminathan R. Extreme Learning Machine based Differentiation of Pulmonary Tuberculosis in Chest Radiographs using Integrated Local Feature Descriptors. Comput Methods Programs Biomed 2021;204:106058. [PMID: 33789212] [DOI: 10.1016/j.cmpb.2021.106058]
Abstract
BACKGROUND AND OBJECTIVE Computer-aided diagnosis of Pulmonary Tuberculosis in chest radiographs relies on the differentiation of subtle and non-specific alterations in the images. In this study, an attempt has been made to identify and classify Tuberculosis conditions against healthy subjects in chest radiographs using integrated local feature descriptors and variants of the extreme learning machine. METHODS Lung fields in the chest images are segmented using the Reaction Diffusion Level Set method. Local feature descriptors, namely Median Robust Extended Local Binary Patterns and Gradient Local Ternary Patterns, are extracted. Extreme Learning Machine (ELM) and Online Sequential ELM (OSELM) classifiers are employed to identify Tuberculosis conditions, and their performance is analysed using standard metrics. RESULTS Results show that the adopted segmentation method is able to delineate lung fields in both healthy and Tuberculosis images. The extracted features are statistically significant even in images with inter- and intra-subject variability. The sigmoid activation function yields accuracy and sensitivity values greater than 98% for both classifiers. The highest sensitivity in detecting Tuberculosis images is observed with OSELM using a minimal set of significant features. CONCLUSION As the ELM-based method is able to differentiate subtle changes across inter- and intra-subject variations of chest X-ray images, the proposed methodology seems useful for computer-based detection of Pulmonary Tuberculosis.
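For readers unfamiliar with ELM, a minimal single-hidden-layer version is easy to write down: the input weights are random and never trained, and the output weights are solved in closed form with a pseudo-inverse. The feature matrix below is a synthetic stand-in for the integrated local descriptors.

```python
# Minimal extreme learning machine: random hidden layer + analytic output weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))              # stand-in MRELBP/GLTP feature vectors
y = rng.integers(0, 2, 200)                 # 1 = Tuberculosis, 0 = healthy
T = np.eye(2)[y]                            # one-hot targets

n_hidden = 100
W = rng.normal(size=(50, n_hidden))         # random input weights, never trained
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))      # sigmoid hidden activations

beta = np.linalg.pinv(H) @ T                # single-shot least-squares solution
pred = (H @ beta).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```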
Affiliation(s)
- Satyavratan Govindarajan
- Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, India.
- Ramakrishnan Swaminathan
- Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, India
30. Rajpal S, Lakhyani N, Singh AK, Kohli R, Kumar N. Using handpicked features in conjunction with ResNet-50 for improved detection of COVID-19 from chest X-ray images. Chaos Solitons Fractals 2021;145:110749. [PMID: 33589854] [PMCID: PMC7874964] [DOI: 10.1016/j.chaos.2021.110749]
Abstract
Coronaviruses are a family of viruses that mainly cause respiratory disorders in humans. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a new strain of coronavirus that causes the coronavirus disease 2019 (COVID-19). WHO has identified COVID-19 as a pandemic, as it has spread across the globe due to its highly contagious nature. For early diagnosis of COVID-19, the reverse transcription-polymerase chain reaction (RT-PCR) test is commonly done. However, it suffers from a high false-negative rate of up to 67% if the test is done during the first five days of exposure. As an alternative, research on the efficacy of deep learning techniques for identifying COVID-19 from chest X-ray images is being intensely pursued. As pneumonia and COVID-19 exhibit similar/overlapping symptoms and both affect the human lungs, distinguishing the chest X-ray images of pneumonia patients from those of COVID-19 patients becomes challenging. In this work, we have modeled COVID-19 classification as a multiclass classification problem involving three classes: COVID-19, pneumonia, and normal. We propose a novel classification framework that combines a set of handpicked features with those obtained from a deep convolutional neural network. The proposed framework comprises three modules. In the first module, we exploit the strength of transfer learning using ResNet-50 to train the network on a set of preprocessed images and obtain a vector of 2048 features. In the second module, we construct a pool of 252 handpicked frequency- and texture-based features, which are reduced to a set of 64 features using PCA and subsequently passed to a feed-forward neural network to obtain a set of 16 features. The third module concatenates the features obtained from the first and second modules and passes them to a dense layer followed by a softmax layer to yield the desired classification model. We used chest X-ray images of COVID-19 patients from four independent publicly available repositories, in addition to images from the Mendeley and Kaggle chest X-ray datasets for the pneumonia and normal cases. To establish the efficacy of the proposed model, 10-fold cross-validation was carried out. The model generated an overall classification accuracy of 0.974 ± 0.02 and sensitivities of 0.987 ± 0.05, 0.963 ± 0.05, and 0.973 ± 0.04 at the 95% confidence interval for the COVID-19, normal, and pneumonia classes, respectively. To further verify its effectiveness, the model was validated on an independent chest X-ray cohort, achieving an overall classification accuracy of 0.979. Comparison with state-of-the-art methods reveals that the proposed framework outperforms others in terms of accuracy and sensitivity. Since interpretability of results is crucial in the medical domain, gradient-based localizations are captured using Gradient-weighted Class Activation Mapping (Grad-CAM). In summary, the results obtained are stable over independent cohorts and interpretable using Grad-CAM localizations, which serve as clinical evidence.
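A minimal sketch of the first and third modules under stated assumptions (pretrained ImageNet weights would be loaded in practice; the input batch and the 16-D handpicked vector are synthetic stand-ins): ResNet-50 penultimate features are exposed by replacing the classification head with an identity, then concatenated with the handpicked features.

```python
# Deep-feature extraction and fusion sketch, not the authors' implementation.
import torch
from torchvision import models

backbone = models.resnet50(weights=None)    # in practice: load ImageNet weights
backbone.fc = torch.nn.Identity()           # expose the 2048-D penultimate layer
backbone.eval()

x = torch.randn(4, 3, 224, 224)             # stand-in preprocessed X-ray batch
with torch.no_grad():
    deep_feats = backbone(x)                # (4, 2048)

handpicked = torch.randn(4, 16)             # stand-in PCA + FFNN-reduced features
fused = torch.cat([deep_feats, handpicked], dim=1)  # (4, 2064) -> dense + softmax
print(fused.shape)
```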
Collapse
Affiliation(s)
- Sheetal Rajpal
- Department of Computer Science, University of Delhi, Delhi, India
| | - Navin Lakhyani
- Department of Radiology, Saral Diagnostics, New Delhi, India
| | | | - Rishav Kohli
- Department of Computer Science, University of Delhi, Delhi, India
| | - Naveen Kumar
- Department of Computer Science, University of Delhi, Delhi, India
| |
Collapse
|
31
|
Trial by trial EEG based BCI for distress versus non distress classification in individuals with ASD. Sci Rep 2021; 11:6000. [PMID: 33727625 PMCID: PMC7971030 DOI: 10.1038/s41598-021-85362-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2020] [Accepted: 03/01/2021] [Indexed: 01/31/2023] Open
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder that is often accompanied by impaired emotion regulation (ER). There has been increasing emphasis on developing evidence-based approaches to improve ER in ASD. Electroencephalography (EEG) has shown success in reducing ASD symptoms when used in neurofeedback-based interventions, and certain EEG components are associated with ER. Our overarching goal is to develop a technology that will use EEG to monitor real-time changes in ER and intervene based on these changes. As a first step, an EEG-based brain computer interface based on an Affective Posner task was developed to identify patterns associated with ER on a single-trial basis, and EEG data were collected from 21 individuals with ASD. Accordingly, our aim in this study is to investigate EEG features that can differentiate between distress and non-distress conditions. Specifically, we investigate whether the EEG time-locked to the visual feedback presentation can be used to classify between WIN (non-distress) and LOSE (distress) conditions in a game with deception. Results showed that the extracted EEG features could differentiate between WIN and LOSE conditions (average accuracy of 81%), LOSE and rest-EEG conditions (average accuracy 94.8%), and WIN and rest-EEG conditions (average accuracy 94.9%).
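A hedged sketch of trial-wise WIN/LOSE classification from feedback-locked epochs. The paper's exact feature set is not specified here, so standard band-power features and an SVM stand in; the sampling rate, channel count, and band edges are placeholders.

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def band_power_features(epochs, fs=250):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_features)."""
    bands = [(4, 8), (8, 13), (13, 30)]        # theta, alpha, beta
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in bands]              # mean power per band/channel
    return np.concatenate(feats, axis=1)

# X: feedback-locked EEG epochs; y: 0 = WIN (non-distress), 1 = LOSE (distress)
X = np.random.randn(120, 32, 250)              # placeholder data
y = np.random.randint(0, 2, 120)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, band_power_features(X), y, cv=5).mean())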
Collapse
|
32
|
Esmaeili N, Boese A, Davaris N, Arens C, Navab N, Friebe M, Illanes A. Cyclist Effort Features: A Novel Technique for Image Texture Characterization Applied to Larynx Cancer Classification in Contact Endoscopy-Narrow Band Imaging. Diagnostics (Basel) 2021; 11:432. [PMID: 33802625 PMCID: PMC8001098 DOI: 10.3390/diagnostics11030432] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Revised: 02/24/2021] [Accepted: 02/26/2021] [Indexed: 11/20/2022] Open
Abstract
BACKGROUND Feature extraction is an essential part of a Computer-Aided Diagnosis (CAD) system. It is usually preceded by a pre-processing step and followed by image classification, and a large number of features is usually needed to reach the desired classification results. In this work, we propose a novel approach to texture feature extraction. The method was tested on the classification of larynx Contact Endoscopy (CE)-Narrow Band Imaging (NBI) images, to provide otolaryngologists with more objective information about the stage of laryngeal cancer. METHODS The main idea of the proposed method is to represent an image as a hilly surface in which different paths can be identified between a starting and an ending point. Each of these paths can be thought of as a Tour de France stage profile, where a cyclist must perform a specific effort to reach the finish line. Several paths can be generated in an image, and the different cyclists yield an average cyclist effort that captures important textural characteristics of the image. Using this concept, energy and power were extracted as two Cyclist Effort Features (CyEfF). The performance of the proposed features was evaluated on the classification of 2701 CE-NBI images into benign and malignant lesions using four supervised classifiers, and subsequently compared with the performance of 24 Geometrical Features (GF) and 13 Entropy Features (EF). RESULTS The CyEfF features showed a maximum classification accuracy of 0.882 and improved the GF classification accuracy by 3 to 12 percent. Moreover, CyEfF features were ranked among the top 10 features, along with some features from the GF set, by two feature-ranking methods. CONCLUSION The results show that CyEfF, with only two features, can describe the textural characteristics of CE-NBI images and can be part of a CAD system, in combination with GF, for laryngeal cancer diagnosis.
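A schematic reading of the cyclist-effort idea: treat grey levels as elevation, walk straight paths across the image, and summarise each elevation profile by an energy- and power-like quantity. The exact CyEfF definitions are in the paper; everything below (horizontal paths, the uphill-work and mean-slope summaries) is an illustration only.

```python
import numpy as np

def cyclist_effort(image, n_paths=50, seed=None):
    rng = np.random.default_rng(seed)
    h, w = image.shape
    energies, powers = [], []
    for _ in range(n_paths):
        row = rng.integers(0, h)               # one horizontal path per cyclist
        profile = image[row, :].astype(float)  # "stage profile" of elevations
        climbs = np.diff(profile)              # signed slope along the path
        energies.append(np.sum(climbs[climbs > 0]))  # total uphill work
        powers.append(np.mean(np.abs(climbs)))       # mean effort per step
    return np.mean(energies), np.mean(powers)  # two texture descriptors

img = np.random.randint(0, 256, (128, 128))
print(cyclist_effort(img, seed=0))
```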
Collapse
Affiliation(s)
- Nazila Esmaeili
- INKA—Innovation Laboratory for Image Guided Therapy, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany; (A.B.); (M.F.); (A.I.)
- Chair for Computer Aided Medical Procedures and Augmented Reality, Technical University Munich, 85748 Munich, Germany;
| | - Axel Boese
- INKA—Innovation Laboratory for Image Guided Therapy, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany; (A.B.); (M.F.); (A.I.)
| | - Nikolaos Davaris
- Department of Otorhinolaryngology, Head and Neck Surgery, Magdeburg University Hospital, 39120 Magdeburg, Germany; (N.D.); (C.A.)
| | - Christoph Arens
- Department of Otorhinolaryngology, Head and Neck Surgery, Magdeburg University Hospital, 39120 Magdeburg, Germany; (N.D.); (C.A.)
| | - Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality, Technical University Munich, 85748 Munich, Germany;
| | - Michael Friebe
- INKA—Innovation Laboratory for Image Guided Therapy, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany; (A.B.); (M.F.); (A.I.)
- IDTM GmbH, 45657 Recklinghausen, Germany
| | - Alfredo Illanes
- INKA—Innovation Laboratory for Image Guided Therapy, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany; (A.B.); (M.F.); (A.I.)
| |
Collapse
|
33
|
Valencia-Hernandez I, Peregrina-Barreto H, Reyes-Garcia CA, Lopez-Armas GC. Density map and fuzzy classification for breast density by using BI-RADS. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105825. [PMID: 33190944 DOI: 10.1016/j.cmpb.2020.105825] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Accepted: 10/29/2020] [Indexed: 06/11/2023]
Abstract
Mammographic density (MD) comprises varying percentages of stromal, epithelial, and adipose tissue within the breast. High breast tissue density is one of the most critical findings in mammographic patterns for establishing a diagnosis of breast cancer. A wide variety of works has focused on the study and automatic calculation of overall breast density; however, they do not provide more detailed information about the changes that may occur within the breast tissue. This work proposes to generate a breast density map based on texture analysis that identifies the internal composition and distribution of breast tissue, through a fuzzy partitioning of the different densities inside the breast. The resulting density map makes it possible to distinguish and quantify the different types of breast density and their distribution according to the Breast Imaging Reporting and Data System (BI-RADS) breast density categories. The proposed methodology was tested on mammograms from the BCDR and INbreast databases, demonstrating consistent results and reaching accuracies of 84.2% and 81.3%, respectively. Finally, the information obtained from the density map and its analysis could serve as a support tool for the specialist physician in monitoring changes in breast density over time, since the fuzzy classification quantifies the degree of membership in the BI-RADS breast density classes.
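A minimal fuzzy c-means sketch for a pixel-wise density map: each pixel receives a degree of membership in c density classes (as in the four BI-RADS categories). Raw intensities stand in here for the paper's texture-based features, and the clustering itself is a plain textbook implementation, not the authors' method.

```python
import numpy as np

def fuzzy_cmeans(x, c=4, m=2.0, n_iter=100, seed=0):
    """x: (N,) feature values -> memberships u (N, c) and class centres (c,)."""
    rng = np.random.default_rng(seed)
    centres = rng.choice(x, size=c, replace=False).astype(float)
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centres[None, :]) + 1e-9        # (N, c)
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                         axis=2)
        centres = (u ** m).T @ x / (u ** m).sum(axis=0)          # update centres
    return u, centres

breast = np.random.rand(64 * 64)                    # placeholder intensities
u, centres = fuzzy_cmeans(breast)
density_map = u.argmax(axis=1).reshape(64, 64)      # hard map for display
print(centres, u.sum(axis=1)[:3])                   # memberships sum to 1
```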
Collapse
Affiliation(s)
- I Valencia-Hernandez
- Instituto Nacional de Astrofísica, Óptica y Electrónica, Luis Enrique Erro 1, Santa Maria Tonantzintla, Puebla 72840, México
| | - H Peregrina-Barreto
- Instituto Nacional de Astrofísica, Óptica y Electrónica, Luis Enrique Erro 1, Santa Maria Tonantzintla, Puebla 72840, México.
| | - C A Reyes-Garcia
- Instituto Nacional de Astrofísica, Óptica y Electrónica, Luis Enrique Erro 1, Santa Maria Tonantzintla, Puebla 72840, México
| | - G C Lopez-Armas
- Centro de Enseñanza Técnica Industrial, Nueva Escocia 1885, Guadalajara, Jalisco, 44638, México
| |
Collapse
|
34
|
Medical Image Retrieval Using Empirical Mode Decomposition with Deep Convolutional Neural Network. BIOMED RESEARCH INTERNATIONAL 2021; 2020:6687733. [PMID: 33426062 PMCID: PMC7781707 DOI: 10.1155/2020/6687733] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/08/2020] [Revised: 12/07/2020] [Accepted: 12/14/2020] [Indexed: 11/17/2022]
Abstract
Content-based medical image retrieval (CBMIR) systems search medical image databases while attempting to narrow the semantic gap in medical image analysis. Representing high-level medical information efficiently with features is a major challenge in CBMIR systems, as features play a vital role in both the accuracy and the speed of the search process. In this paper, we propose a deep convolutional neural network (CNN)-based framework to learn a concise feature vector for medical image retrieval. The medical images are decomposed into five components using empirical mode decomposition (EMD). The deep CNN is trained in a supervised way with this multicomponent input, and the learned features are used to retrieve medical images. The IRMA dataset, containing 11,000 X-ray images in 116 classes, is used to validate the proposed method. We achieve a total IRMA error of 43.21 and a mean average precision of 0.86 on the retrieval task, and an IRMA error of 68.48 and an F1 measure of 0.66 on the classification task, the best results reported for this dataset.
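A hedged sketch of the retrieval stage: the trained CNN maps each multicomponent image to a feature vector, and retrieval is nearest-neighbour search over those vectors. The 2-D EMD step that produces the five input components is abstracted away as a placeholder `decompose`, and the 256-d feature size is an assumption.

```python
import numpy as np

def decompose(img, n_components=5):
    # Placeholder for 2-D empirical mode decomposition: the real step would
    # return the image's intrinsic mode functions; here the image is simply
    # repeated to show the expected (components, H, W) shape.
    return np.repeat(img[None, ...], n_components, axis=0)

def retrieve(query_feat, db_feats, k=5):
    """Cosine-similarity retrieval over precomputed CNN features."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q
    return np.argsort(-sims)[:k]                 # indices of top-k matches

db_feats = np.random.randn(1000, 256)            # features of 1000 X-ray images
print(retrieve(np.random.randn(256), db_feats))
```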
Collapse
|
35
|
|
36
|
Liu GH, Zhang BW, Qian G, Wang B, Mao B, Bichindaritz I. Bioimage-Based Prediction of Protein Subcellular Location in Human Tissue with Ensemble Features and Deep Networks. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2020; 17:1966-1980. [PMID: 31107658 DOI: 10.1109/tcbb.2019.2917429] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Prediction of protein subcellular location has become a hot topic because it has proven useful for understanding disease mechanisms and for novel drug design. With the rapid development of automated microscopic imaging technology in recent years, classification methods for bioimage-based protein subcellular location have attracted considerable attention, since images describe the protein distribution intuitively and in detail. In the current study, a prediction method for protein subcellular location is proposed based on multi-view image features extracted from three different views: four texture features of the original image, global and local features of the protein extracted from the protein-channel images after color segmentation, and global features of DNA extracted from the DNA-channel image. The extracted features are then combined to improve the performance of subcellular localization prediction, and comparing different feature combinations under the same classifier identifies the best ensemble features. A classifier based on stacked auto-encoders and a random forest is also put forward, combining a deep network with traditional statistical classification methods to improve the prediction results. Stringent cross-validation and independent validation tests on the benchmark dataset demonstrate the efficacy of the proposed method.
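A minimal sketch of the classifier family described above: a stacked auto-encoder compresses the ensemble feature vector, and a random forest classifies the learned encoding. All layer sizes, the 1024-d input, and the seven-class output are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, Model
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(500, 1024).astype("float32")   # ensemble image features
y = np.random.randint(0, 7, 500)                  # subcellular locations

# Stacked auto-encoder: 1024 -> 256 -> 64 -> 256 -> 1024.
inp = layers.Input(shape=(1024,))
h = layers.Dense(256, activation="relu")(inp)
code = layers.Dense(64, activation="relu")(h)     # learned 64-d encoding
h = layers.Dense(256, activation="relu")(code)
out = layers.Dense(1024, activation="sigmoid")(h)
ae = Model(inp, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(X, X, epochs=10, batch_size=32, verbose=0) # reconstruct the input

# Random forest on the 64-d encodings.
encoder = Model(inp, code)
rf = RandomForestClassifier(n_estimators=200).fit(encoder.predict(X), y)
```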
Collapse
|
37
|
Improving Computer-Aided Cervical Cells Classification Using Transfer Learning Based Snapshot Ensemble. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10207292] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Cervical cell classification is a crucial component of computer-aided cervical cancer detection. Fine-grained classification is of great clinical importance for guiding clinical decisions on diagnosis and treatment, yet it remains very challenging. Recently, convolutional neural networks (CNN) have provided a novel way to classify cervical cells using automatically learned features. Although an ensemble of CNN models can increase model diversity and potentially boost classification accuracy, it is a multi-step process in which several CNN models must be trained separately and then selected for the ensemble. Moreover, with small training samples, the advantages of powerful CNN models may not be effectively leveraged. To address these challenges, this paper proposes a transfer learning based snapshot ensemble (TLSE) method that integrates snapshot ensemble learning with transfer learning in a unified and coordinated way: snapshot ensembling provides ensemble benefits within a single model training procedure, while transfer learning addresses the small-sample problem in cervical cell classification. Furthermore, a new training strategy is proposed to guarantee the combination. The TLSE method is evaluated on a pap-smear dataset, the Herlev dataset, and is shown to outperform existing methods, demonstrating that TLSE can improve accuracy in an ensemble manner with only a single training process for small-sample, fine-grained cervical cell classification.
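A hedged sketch of the snapshot-ensemble ingredient: one training run under a cyclic cosine-annealed learning rate, with a model "snapshot" saved at each cycle minimum; snapshot predictions are averaged at test time. The cycle count and learning-rate ceiling are assumptions, and the paper's transfer-learning coupling is not shown.

```python
import numpy as np
import tensorflow as tf

N_EPOCHS, N_CYCLES, LR_MAX = 60, 6, 0.01
EPOCHS_PER_CYCLE = N_EPOCHS // N_CYCLES

def cyclic_cosine_lr(epoch, lr=None):
    # Cosine annealing that restarts at the start of every cycle.
    t = (epoch % EPOCHS_PER_CYCLE) / EPOCHS_PER_CYCLE
    return LR_MAX / 2 * (np.cos(np.pi * t) + 1)

class SnapshotSaver(tf.keras.callbacks.Callback):
    """Stores model weights at the end of each learning-rate cycle."""
    def __init__(self):
        super().__init__()
        self.snapshots = []
    def on_epoch_end(self, epoch, logs=None):
        if (epoch + 1) % EPOCHS_PER_CYCLE == 0:   # cycle minimum reached
            self.snapshots.append(self.model.get_weights())

# Usage with any compiled Keras model:
# model.fit(X, y, epochs=N_EPOCHS, callbacks=[
#     tf.keras.callbacks.LearningRateScheduler(cyclic_cosine_lr),
#     SnapshotSaver()])
# At test time, load each snapshot and average the softmax outputs.
```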
Collapse
|
38
|
The Influence of CLBP Window Size on Urban Vegetation Type Classification Using High Spatial Resolution Satellite Images. REMOTE SENSING 2020. [DOI: 10.3390/rs12203393] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Urban vegetation can regulate ecological balance, reduce the influence of urban heat islands, and improve people's mental state. Accordingly, classification of urban vegetation types plays a significant role in urban vegetation research. This paper evaluates various window sizes of completed local binary pattern (CLBP) texture features for classifying urban vegetation in high spatial-resolution WorldView-2 images of areas in Shanghai (China) and Lianyungang (Jiangsu province, China); two study areas were selected to demonstrate the stability and universality of the different CLBP window textures. Imagery is classified by vegetation type with the random forest (RF) method, using spectral information alone and spectral information combined with texture information. Combining spectral information with CLBP window textures achieves 7.28% greater accuracy than spectral information alone for urban vegetation type classification, with higher accuracy for single vegetation types than for mixed ones. The optimal CLBP window sizes for grass, shrub, arbor, shrub-grass, arbor-grass, and arbor-shrub-grass are 3 × 3, 3 × 3, 11 × 11, 9 × 9, 9 × 9, and 7 × 7, respectively. Furthermore, the optimal CLBP window size is determined by the roughness of the vegetation texture.
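A sketch of the window-size experiment using plain uniform LBP from scikit-image as a stand-in for CLBP (which additionally encodes magnitude and centre-pixel components): a texture histogram is computed in a window around each labelled pixel and classified with a random forest, repeating over candidate window sizes.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def window_texture(band, centres, win=9, P=8, R=1):
    img = (band * 255).astype(np.uint8)
    lbp = local_binary_pattern(img, P, R, method="uniform")
    half, feats = win // 2, []
    for r, c in centres:
        patch = lbp[r - half:r + half + 1, c - half:c + half + 1]
        hist, _ = np.histogram(patch, bins=P + 2, range=(0, P + 2))
        feats.append(hist / hist.sum())          # normalised LBP histogram
    return np.array(feats)

band = np.random.rand(200, 200)                  # one WorldView-2 band
centres = [(r, c) for r in range(20, 180, 10) for c in range(20, 180, 10)]
y = np.random.randint(0, 6, len(centres))        # six vegetation types
for win in (3, 7, 9, 11):                        # window sizes under test
    X = window_texture(band, centres, win=win)
    rf = RandomForestClassifier(n_estimators=100).fit(X, y)
```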
Collapse
|
39
|
Pereira RM, Bertolini D, Teixeira LO, Silla CN, Costa YMG. COVID-19 identification in chest X-ray images on flat and hierarchical classification scenarios. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 194:105532. [PMID: 32446037 PMCID: PMC7207172 DOI: 10.1016/j.cmpb.2020.105532] [Citation(s) in RCA: 205] [Impact Index Per Article: 51.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Revised: 05/05/2020] [Accepted: 05/06/2020] [Indexed: 05/02/2023]
Abstract
BACKGROUND AND OBJECTIVE COVID-19 can cause severe pneumonia and is estimated to have a high impact on the healthcare system. Early diagnosis is crucial for correct treatment and may reduce the stress on the healthcare system. The standard image diagnosis tests for pneumonia are chest X-ray (CXR) and computed tomography (CT) scan. Although the CT scan is the gold standard, CXR is still useful because it is cheaper, faster, and more widespread. This study aims to identify pneumonia caused by COVID-19 from other types of pneumonia and from healthy lungs using only CXR images. METHODS To achieve this objective, we propose a classification schema considering two perspectives: i) multi-class classification; ii) hierarchical classification, since pneumonia can be structured as a hierarchy. Given the natural data imbalance in this domain, we also propose the use of resampling algorithms in the schema to re-balance the class distribution. Since texture is one of the main visual attributes of CXR images, our classification schema extracts features using well-known texture descriptors as well as a pre-trained CNN model. We also explore early and late fusion techniques in the schema to leverage the strengths of multiple texture descriptors and base classifiers at once. To evaluate the approach, we composed a database, named RYDLS-20, containing CXR images of pneumonia caused by different pathogens as well as CXR images of healthy lungs. The class distribution follows a real-world scenario in which some pathogens are more common than others. RESULTS The proposed approach, tested on RYDLS-20, achieved a macro-averaged F1-score of 0.65 with the multi-class approach and an F1-score of 0.89 for COVID-19 identification in the hierarchical classification scenario. CONCLUSIONS As far as we know, the identification rate obtained here is the best nominal rate reported for COVID-19 identification in an unbalanced setting with more than three classes. We also highlight the novel hierarchical classification approach proposed for this task, which considers the types of pneumonia caused by the different pathogens and led us to the best COVID-19 recognition rate obtained here.
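A hedged sketch of the hierarchical idea: a small label tree with one local classifier per parent node, so COVID-19 is predicted only after an image is routed down the pneumonia branch. The two-level tree, label names, and logistic-regression base learners are illustrative; the paper's hierarchy and feature extractors are richer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.rand(300, 128)                 # texture / CNN features
labels = np.random.choice(
    ["normal", "viral/covid", "viral/other", "bacterial"], 300)

# Level 1: healthy vs. pneumonia; level 2: pneumonia subtype.
y1 = np.where(labels == "normal", "normal", "pneumonia")
top = LogisticRegression(max_iter=1000).fit(X, y1)
mask = labels != "normal"
sub = LogisticRegression(max_iter=1000).fit(X[mask], labels[mask])

def predict(x):
    x = x.reshape(1, -1)
    if top.predict(x)[0] == "normal":        # stop at the first level
        return "normal"
    return sub.predict(x)[0]                 # otherwise refine the subtype

print(predict(X[0]))
```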
Collapse
Affiliation(s)
- Rodolfo M Pereira
- Instituto Federal de Educação, Ciência e Tecnologia do Paraná (IFPR), Pinhais, PR, Brazil; Pontifícia Universidade Católica do Paraná (PUCPR), Curitiba, PR, Brazil.
| | - Diego Bertolini
- Universidade Tecnológica Federal do Paraná (UTFPR), Campo Mourão, PR, Brazil; Universidade Estadual de Maringá (UEM), Maringá, PR, Brazil
| | | | - Carlos N Silla
- Pontifícia Universidade Católica do Paraná (PUCPR), Curitiba, PR, Brazil
| | | |
Collapse
|
40
|
A Class-Independent Texture-Separation Method Based on a Pixel-Wise Binary Classification. SENSORS 2020; 20:s20185432. [PMID: 32971871 PMCID: PMC7571054 DOI: 10.3390/s20185432] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/24/2020] [Revised: 09/16/2020] [Accepted: 09/18/2020] [Indexed: 12/25/2022]
Abstract
Texture segmentation is a challenging problem in computer vision due to the subjective nature of textures, the variability with which they occur in images, their dependence on scale and illumination variation, and the lack of a precise definition in the literature. This paper proposes a method to segment textures through binary pixel-wise classification, without the need for a predefined number of texture classes. Using a convolutional neural network with an encoder-decoder architecture, each pixel is classified as being inside an internal texture region or on a border between two different textures. The network is trained using the Prague Texture Segmentation Datagenerator and Benchmark and tested on the same dataset, as well as on the Brodatz textures dataset and the Describable Textures Dataset. The method is also evaluated on the separation of regions in images from different applications, namely remote sensing images and H&E-stained tissue images. It is shown that the method performs well on different test sets, can precisely identify borders between texture regions, and does not suffer from over-segmentation.
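A minimal encoder-decoder sketch for the binary pixel-wise task: every pixel is labelled interior (0) or texture border (1), so no fixed number of texture classes is needed. Depth, filter counts, and input size here are assumptions, not the paper's architecture.

```python
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(128, 128, 1))
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
x = layers.MaxPooling2D()(x)                          # encoder
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.UpSampling2D()(x)                          # decoder
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = layers.UpSampling2D()(x)
out = layers.Conv2D(1, 1, activation="sigmoid")(x)    # border probability map

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```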
Collapse
|
41
|
Babukarthik RG, Adiga VAK, Sambasivam G, Chandramohan D, Amudhavel J. Prediction of COVID-19 Using Genetic Deep Learning Convolutional Neural Network (GDCNN). IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:177647-177666. [PMID: 34786292 PMCID: PMC8545287 DOI: 10.1109/access.2020.3025164] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Accepted: 09/07/2020] [Indexed: 05/14/2023]
Abstract
The rapid spread of coronavirus disease 2019 (COVID-19) leads to severe pneumonia and is estimated to create a high impact on the healthcare system. Early diagnosis is urgently needed for precise treatment, which in turn reduces the pressure on the healthcare system. The standard image diagnosis tests available are the Computed Tomography (CT) scan and the Chest X-Ray (CXR). Even though the CT scan is considered the gold standard, CXR is the most widely used modality because of its wide availability, speed, and low cost. This study aims to provide a solution for distinguishing pneumonia due to COVID-19 from healthy lungs (normal persons) using CXR images. Deep learning is a remarkable method for extracting high-dimensional features from medical images; the state-of-the-art technique used in this research is a Genetic Deep Learning Convolutional Neural Network (GDCNN), trained from scratch to extract features for classifying images as COVID-19 or normal. A dataset of more than 5000 CXR image samples is used for classifying COVID-19 pneumonia, normal lungs, and other pneumonia diseases. Training the GDCNN from scratch shows that the proposed method performs better than other transfer learning techniques, achieving a classification accuracy of 98.84%, a precision of 93%, a sensitivity of 100%, and a specificity of 97.0% for COVID-19 prediction. The top classification accuracy obtained in this research is the best nominal rate reported for COVID-19 identification in an unbalanced environment. The proposed classification model proves to be better than existing models such as ResNet18, ResNet50, SqueezeNet, DenseNet-121, and the Visual Geometry Group network (VGG16).
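A schematic sketch of genetic search over CNN configurations: genomes encode a few hyperparameters, fitness is validation accuracy, and the best genomes are mutated each generation. The paper's actual encoding, operators, and fitness evaluation are richer; everything below, including the placeholder `fitness`, is an assumption.

```python
import random

def random_genome():
    return {"n_conv": random.choice([2, 3, 4]),
            "filters": random.choice([16, 32, 64]),
            "dense": random.choice([64, 128, 256]),
            "lr": random.choice([1e-2, 1e-3, 1e-4])}

def fitness(genome):
    # Placeholder: build a CNN from `genome`, train briefly on CXR images,
    # and return validation accuracy.
    return random.random()

population = [random_genome() for _ in range(10)]
for generation in range(5):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:4]                        # selection: keep the fittest
    children = []
    for p in parents:
        child = dict(p)
        key = random.choice(list(child))        # point mutation of one gene
        child[key] = random_genome()[key]
        children.append(child)
    population = parents + children + [random_genome() for _ in range(2)]
print(scored[0])                                # best genome found
```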
Collapse
Affiliation(s)
- R. G. Babukarthik
- Department of Computer Science and Engineering, Dayananda Sagar University, Bengaluru 560078, India
| | - V. Ananth Krishna Adiga
- Department of Computer Science and Engineering, Dayananda Sagar University, Bengaluru 560078, India
| | - G. Sambasivam
- Faculty of Information and Communication Technology, ISBAT University, Kampala, Uganda
| | - D. Chandramohan
- Department of Computer Science and Engineering, Madanapalle Institute of Technology and Science, Madanapalle 517325, India
| | - J. Amudhavel
- School of Computer Science and Engineering, VIT Bhopal University, Bhopal 466114, India
| |
Collapse
|
42
|
Mazo C, Alegre E, Trujillo M. Using an ontology of the human cardiovascular system to improve the classification of histological images. Sci Rep 2020; 10:12276. [PMID: 32703995 PMCID: PMC7378259 DOI: 10.1038/s41598-020-69037-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2019] [Accepted: 06/30/2020] [Indexed: 11/09/2022] Open
Abstract
The advantages of automatic recognition of fundamental tissues using computer vision techniques are well known, but one of the main limitations is that sometimes an image cannot be classified correctly because the visual information is insufficient or the extracted descriptors are not discriminative enough. An ontology can partly solve this problem, because it gathers and makes usable the domain-specific knowledge that allows clear mistakes in the classification to be detected, occasionally simply by pointing out impossible configurations in that domain. One of the main contributions of this work is the use of a histological ontology to correct, and therefore improve, the classification of histological images. First, we described small regions of images, denoted as blocks, using Local Binary Pattern (LBP) based descriptors, and determined which tissue of the cardiovascular system was present using a cascade Support Vector Machine (SVM). Then, we built Resource Description Framework (RDF) triples for the occurrences of each discriminant class. Based on these, we used the histological ontology to correct, among others, "not possible" situations, improving the global accuracy first at the block level and then in tissue classification. For the experimental validation, we used a set of 6000 blocks of 100×100 pixels, obtaining F-scores between 0.769 and 0.886, an improvement of between 0.003 and 0.769% compared with the approach without the histological ontology. The methodology thus improves the automatic classification of histological images using a histological ontology. Another advantage of our proposal is that, using the ontology, we were able to recognise epithelial tissue, which was previously not detected by any of the computer vision methods used, including a CNN called HistoResNet evaluated in the experiments. Finally, we have also created and made publicly available a dataset consisting of 6000 blocks of labelled histological tissues.
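A toy sketch of the ontology-correction step using rdflib: classifier outputs become RDF triples, and a domain rule flags configurations the ontology deems impossible. The namespace, tissue terms, and adjacency rule are illustrative inventions, not the terms of the authors' histological ontology.

```python
from rdflib import Graph, Namespace

HIST = Namespace("http://example.org/histology#")
g = Graph()

# Classifier output for two adjacent blocks of the same image.
g.add((HIST.block1, HIST.predictedTissue, HIST.CardiacMuscle))
g.add((HIST.block2, HIST.predictedTissue, HIST.LightRegion))
g.add((HIST.block1, HIST.adjacentTo, HIST.block2))

# Hypothetical domain rule: muscle directly adjacent to a light (lumen)
# region without intervening endothelium is an impossible configuration,
# so such block predictions are flagged for correction.
impossible = g.query("""
    PREFIX hist: <http://example.org/histology#>
    SELECT ?a ?b WHERE {
        ?a hist:predictedTissue hist:CardiacMuscle .
        ?a hist:adjacentTo ?b .
        ?b hist:predictedTissue hist:LightRegion .
    }""")
for a, b in impossible:
    print(f"revisit classification of {a} / {b}")
```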
Collapse
Affiliation(s)
- Claudia Mazo
- UCD School of Computer Science, University College Dublin, Dublin, Ireland; CeADAR: Centre for Applied Data Analytics Research, Dublin, Ireland.
| | - Enrique Alegre
- Industrial and Informatics Engineering School, Universidad de León, León, Spain
| | - Maria Trujillo
- Computer and Systems Engineering School, Universidad del Valle, Cali, Colombia
| |
Collapse
|
43
|
Rampun A, Morrow PJ, Scotney BW, Wang H. Breast density classification in mammograms: An investigation of encoding techniques in binary-based local patterns. Comput Biol Med 2020; 122:103842. [DOI: 10.1016/j.compbiomed.2020.103842] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2020] [Revised: 06/02/2020] [Accepted: 06/02/2020] [Indexed: 12/01/2022]
|
44
|
Navdeep, Singh V, Rani A, Goyal S. An improved hyper smoothing function based edge detection algorithm for noisy images. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2020. [DOI: 10.3233/jifs-179713] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Navdeep
- Department of Instrumentation and Control Engineering, Netaji Subhas Institute of Technology, University of Delhi, New Delhi, India
| | - Vijander Singh
- Department of Instrumentation and Control Engineering, Netaji Subhas Institute of Technology, University of Delhi, New Delhi, India
| | - Asha Rani
- Department of Instrumentation and Control Engineering, Netaji Subhas Institute of Technology, University of Delhi, New Delhi, India
| | - Sonal Goyal
- Department of Instrumentation and Control Engineering, Netaji Subhas Institute of Technology, University of Delhi, New Delhi, India
| |
Collapse
|
45
|
Gupta S, Roy PP, Dogra DP, Kim BG. Retrieval of colour and texture images using local directional peak valley binary pattern. Pattern Anal Appl 2020. [DOI: 10.1007/s10044-020-00879-4] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
46
|
Srivastava V, Purwar RK. Classification of CT Scan Images of Lungs Using Deep Convolutional Neural Network with External Shape-Based Features. J Digit Imaging 2020; 33:252-261. [PMID: 31243590 PMCID: PMC7064668 DOI: 10.1007/s10278-019-00245-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
In this paper, a simplified yet efficient deep convolutional neural network architecture is presented for lung image classification. The images used for classification are computed tomography (CT) scans obtained from two publicly available databases widely used in research. Six external shape-based features, viz. solidity, circularity, the discrete Fourier transform of the radial length (RL) function, the histogram of oriented gradients (HOG), moments, and the histogram of the active contour image, have been identified and embedded into the proposed convolutional neural network. Performance is measured in terms of average recall and average precision and compared with six similar methods for biomedical image classification. Averaged over the two databases, the proposed system achieves an average precision of 95.26% and an average recall of 69.56%.
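A sketch of a few of the listed shape descriptors, computed with scikit-image on a binarised lung mask; the remaining features (radial-length DFT, moments, active-contour histogram) follow the same pattern. The mask and HOG cell sizes are placeholders.

```python
import numpy as np
from skimage.measure import label, regionprops, perimeter
from skimage.feature import hog

mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 12:52] = 1                        # placeholder lung region

props = regionprops(label(mask))[0]
solidity = props.solidity                     # region area / convex-hull area
circularity = 4 * np.pi * props.area / perimeter(mask) ** 2
hog_vec = hog(mask.astype(float), pixels_per_cell=(16, 16),
              cells_per_block=(1, 1))         # histogram of oriented gradients

features = np.concatenate([[solidity, circularity], hog_vec])
print(features.shape)                         # vector to embed in the CNN
```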
Collapse
Affiliation(s)
- Varun Srivastava
- University School of Information and Communication Technology, Guru Gobind Singh Indraprastha University, Dwarka Sector 16C, New Delhi, 110075 India
| | - Ravindra Kr. Purwar
- University School of Information and Communication Technology, Guru Gobind Singh Indraprastha University, Dwarka Sector 16C, New Delhi, 110075 India
| |
Collapse
|
47
|
Wang Y, Chu X, Zhang K, Bao C, Li X, Zhang J, Fu CW, Hurter C, Deussen O, Lee B. ShapeWordle: Tailoring Wordles using Shape-aware Archimedean Spirals. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:991-1000. [PMID: 31443014 DOI: 10.1109/tvcg.2019.2934783] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
We present a new technique, which we call ShapeWordle, that enables the creation of shape-bounded Wordles by fitting words to a given shape. To guide word placement within the shape, we extend traditional Archimedean spirals to be shape-aware by formulating them in differential form using the distance field of the shape. To handle non-convex shapes, we introduce a multi-centric Wordle layout method that segments the shape into parts, allowing our shape-aware spirals to adaptively fill the space and generate word placements. In addition, we offer a set of editing interactions to facilitate the creation of semantically meaningful Wordles. Lastly, we present three evaluations: a comprehensive comparison of our results against the state-of-the-art technique (WordArt), case studies with 14 users, and a gallery showcasing the coverage of our technique.
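A rough sketch of the placement idea: a standard Archimedean spiral (r = bθ) provides candidate word positions, and the shape's distance field rejects candidates too close to the boundary. The paper instead reformulates the spiral itself in differential, shape-aware form; the circular toy shape, seed point, and margin below are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

mask = np.zeros((200, 200), dtype=bool)
yy, xx = np.mgrid[:200, :200]
mask[(yy - 100) ** 2 + (xx - 100) ** 2 < 80 ** 2] = True   # toy shape
dist = distance_transform_edt(mask)           # distance to shape boundary

cy, cx = 100, 100                             # spiral seed (shape centre)
candidates = []
for theta in np.arange(0, 40 * np.pi, 0.1):
    r = 0.5 * theta                           # Archimedean: radius grows linearly
    y, x = int(cy + r * np.sin(theta)), int(cx + r * np.cos(theta))
    if 0 <= y < 200 and 0 <= x < 200 and dist[y, x] > 5:
        candidates.append((y, x))             # keep points well inside the shape
print(len(candidates), "candidate positions inside the shape")
```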
Collapse
|
48
|
Rasti P, Wolf C, Dorez H, Sablong R, Moussata D, Samiei S, Rousseau D. Machine Learning-Based Classification of the Health State of Mice Colon in Cancer Study from Confocal Laser Endomicroscopy. Sci Rep 2019; 9:20010. [PMID: 31882817 PMCID: PMC6934609 DOI: 10.1038/s41598-019-56583-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2019] [Accepted: 12/09/2019] [Indexed: 01/26/2023] Open
Abstract
In this article, we address with machine learning approaches the problem of classifying the health state of the colon wall of mice, possibly injured by cancer. This problem is essential for translational research on cancer and is a priori challenging, since the amount of data is usually limited in preclinical studies for practical and ethical reasons. Three tissue states were considered: cancerous, healthy, and inflammatory. Fully automated machine learning-based methods are proposed, including deep learning, transfer learning, and shallow learning with an SVM. These methods address different training strategies corresponding to clinical questions, such as automatic prediction of the clinical state of unseen data using a pre-trained model or, in an alternative setting, real-time estimation of the clinical state of individual tissue samples during examination. Experimental results show a best correct recognition rate of 99.93% for the second strategy and of 98.49% for the more difficult first case.
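A hedged sketch of the transfer-learning strategy: a pre-trained backbone with a new three-class head (cancerous / healthy / inflammatory), a common recipe when preclinical data are scarce. The VGG16 backbone, input size, and head layers are assumptions, not necessarily the networks used in the paper.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))
backbone.trainable = False                    # freeze pre-trained filters

inp = layers.Input(shape=(224, 224, 3))
x = backbone(inp)
x = layers.Dense(128, activation="relu")(x)
out = layers.Dense(3, activation="softmax")(x)  # three tissue states
model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```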
Collapse
Affiliation(s)
- Pejman Rasti
- Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), UMR INRA IRHS, Université d'Angers, Angers, 49000, France
| | - Christian Wolf
- INSA-Lyon, INRIA, LIRIS, CITI, CNRS, Villeurbanne, France
| | - Hugo Dorez
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon, 69621, France
| | - Raphael Sablong
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon, 69621, France
| | - Driffa Moussata
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon, 69621, France
| | - Salma Samiei
- Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), UMR INRA IRHS, Université d'Angers, Angers, 49000, France
| | - David Rousseau
- Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), UMR INRA IRHS, Université d'Angers, Angers, 49000, France.
| |
Collapse
|
49
|
Ma J, Song Y, Tian X, Hua Y, Zhang R, Wu J. Survey on deep learning for pulmonary medical imaging. Front Med 2019; 14:450-469. [PMID: 31840200 DOI: 10.1007/s11684-019-0726-4] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2019] [Accepted: 10/12/2019] [Indexed: 12/27/2022]
Abstract
As a promising method in artificial intelligence, deep learning has proven successful in several domains ranging from acoustics and images to natural language processing. With medical imaging becoming an important part of disease screening and diagnosis, deep learning-based approaches have emerged as powerful techniques in medical imaging, where feature representations are learned directly and automatically from data, leading to remarkable breakthroughs in the medical field. This paper reviews the major deep learning techniques in this time of rapid evolution and summarizes some of their key contributions and state-of-the-art outcomes. The topics covered include classification, detection, and segmentation tasks in the analysis of pulmonary medical images, together with the relevant datasets and benchmarks. A comprehensive overview of these methods as applied to various lung diseases, including pulmonary nodule diseases, pulmonary embolism, pneumonia, and interstitial lung disease, is also provided. Lastly, the application of deep learning techniques to medical images is discussed, along with an analysis of future challenges and potential directions.
Collapse
Affiliation(s)
| | - Yang Song
- Dalian Municipal Central Hospital Affiliated to Dalian Medical University, Dalian, 116033, China
| | - Xi Tian
- InferVision, Beijing, 100020, China
| | | | | | - Jianlin Wu
- Affiliated Zhongshan Hospital of Dalian University, Dalian, 116001, China.
| |
Collapse
|
50
|
Bazaga A, Roldán M, Badosa C, Jiménez-Mallebrera C, Porta JM. A Convolutional Neural Network for the automatic diagnosis of collagen VI-related muscular dystrophies. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2019.105772] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|