1
Sharma D, Selwal A. A survey on face presentation attack detection mechanisms: hitherto and future perspectives. Multimed Syst 2023; 29:1527-1577. [PMID: 37261261] [PMCID: PMC10025066] [DOI: 10.1007/s00530-023-01070-5] [Received: 08/06/2022] [Accepted: 02/20/2023]
Abstract
Advances in human face recognition (FR) systems have achieved sublime success in automatic and secure authentication across diverse domains. As computer vision has gained rapid traction and modern methods have tackled problems of real-world complexity, traditional approaches have been overshadowed by their FR counterparts. However, security threats to FR-based systems are a growing concern and open a new track for the research community. In particular, the recent past has witnessed ample instances of spoofing attacks in which an impostor breaches the security of the system with an artifact of a human face to circumvent the sensor module. Presentation attack detection (PAD) capabilities are therefore instilled in such systems to discriminate genuine from fake traits and to anticipate their impact on the overall behavior of FR-based systems. To scrutinize current state-of-the-art efforts exhaustively, provide insights, and identify potential research directions for face PAD mechanisms, this systematic study reviews computational face anti-spoofing techniques. The study covers advances in face PAD mechanisms ranging from traditional hardware-based solutions to up-to-date handcrafted-feature and deep learning-based approaches. We also present an analytical overview of face artifacts, performance protocols, and benchmark face anti-spoofing datasets. In addition, we analyze twelve recent state-of-the-art (SOTA) face PAD techniques on a common platform, using an identical dataset (REPLAY-ATTACK) and performance protocols (HTER and ACA). Our overall analysis shows that although prevalent face PAD mechanisms demonstrate strong performance, some crucial issues remain that require future attention. Our analysis puts forward a number of open issues, such as limited generalization to unknown attacks, inadequacy of face datasets for DL models, training models with new fakes, efficient DL-enabled face PAD with smaller datasets, and limited discrimination of handcrafted features. Furthermore, the COVID-19 pandemic poses an additional challenge to existing face-based recognition systems, and hence to PAD methods. Our motive is to present a complete reference of studies in this field and to orient researchers toward promising directions.
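The survey's common evaluation protocol includes HTER, the half total error rate. As a concrete reference, a minimal sketch of the metric, assuming scores where higher means more likely genuine (label 1 = genuine presentation, 0 = attack):

```python
import numpy as np

def hter(scores, labels, threshold):
    """Half Total Error Rate: mean of the false acceptance rate
    (attacks wrongly accepted) and the false rejection rate
    (genuine presentations wrongly rejected) at a given threshold."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)            # 1 = genuine, 0 = attack
    attacks = scores[labels == 0]
    genuine = scores[labels == 1]
    far = np.mean(attacks >= threshold)    # attacks accepted as genuine
    frr = np.mean(genuine < threshold)     # genuine rejected as attacks
    return (far + frr) / 2.0

# Toy example: two attacks, two genuine presentations, all classified correctly.
print(hter([0.9, 0.2, 0.8, 0.1], [1, 0, 1, 0], threshold=0.5))  # 0.0
```

In practice the threshold is chosen on a development set (often at the equal error rate) and HTER is then reported on the test set.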
Affiliation(s)
- Deepika Sharma
- Department of Computer Science and Information Technology, Central University of Jammu, Samba, 181143 India
- Arvind Selwal
- Department of Computer Science and Information Technology, Central University of Jammu, Samba, 181143 India
2
Zhang F, Xu Y, Zhou Z, Zhang H, Yang K. Critical element prediction of tracheal intubation difficulty: Automatic Mallampati classification by jointly using handcrafted and attention-based deep features. Comput Biol Med 2022; 150:106182. [PMID: 36242810] [DOI: 10.1016/j.compbiomed.2022.106182] [Received: 06/11/2022] [Revised: 09/14/2022] [Accepted: 10/01/2022]
Abstract
Preoperative assessment of the difficulty of tracheal intubation is of great importance in anesthesia practice because failed intubation can lead to severe complications and even death. The Mallampati score is widely used as a critical assessment criterion, in combination with other measures, to assess the difficulty of tracheal intubation. The performance of existing artificial intelligence (AI) methods for Mallampati classification is unreliable, to the extent that current clinical judgment of the Mallampati score relies entirely on doctors' experience. In this paper, we propose a new method for automatic Mallampati classification. Our method extracts deep features that are more favorable for the Mallampati classification task by introducing an attention mechanism into a basic deep convolutional neural network (DCNN), and then further improves classification performance by jointly using the attention-based deep features with handcrafted features. We conducted experiments on a dataset of 321 oral images collected online. With five-fold cross-validation, the proposed method achieves a classification accuracy of 97.50%, a sensitivity of 96.52%, a specificity of 98.05%, and an F1 score of 96.52%. The experimental results show that our proposed method is superior to other methods, can assist doctors in determining the Mallampati class objectively and accurately, and provides an essential reference element for assessing the difficulty of tracheal intubation.
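The abstract does not give the network's exact configuration, but the general recipe — channel attention inside a CNN, with the pooled deep features concatenated to a handcrafted feature vector before classification — can be sketched in PyTorch. The SE-style attention, layer widths, `n_handcrafted=32`, and the 4-class output are all assumptions, not the paper's stated design:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (one common choice of
    attention mechanism; the paper's exact variant may differ)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pool per channel
        return x * w[:, :, None, None]    # excitation: reweight feature maps

class FusionNet(nn.Module):
    """Attention-refined deep features concatenated with handcrafted features."""
    def __init__(self, n_handcrafted=32, n_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), SEBlock(16),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16 + n_handcrafted, n_classes)

    def forward(self, image, handcrafted):
        deep = self.backbone(image)
        return self.head(torch.cat([deep, handcrafted], dim=1))

model = FusionNet()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 32))
print(logits.shape)  # torch.Size([2, 4])
```

Training such a model would optimize a cross-entropy loss over the four Mallampati classes, with the handcrafted vector computed per image by a separate pipeline.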
Affiliation(s)
- Fan Zhang
- Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, China.
- Yuelei Xu
- Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, China.
- Zhaoyun Zhou
- Department of Anesthesiology, Tai'an Central Hospital, Tai'an, 271000, Shandong, China.
- Han Zhang
- Department of Anesthesiology, Honghui Hospital, Xi'an Jiaotong University, Xi'an, 710054, Shaanxi, China.
- Ke Yang
- Department of Anesthesiology, Fuwai Yunnan Cardiovascular Hospital, Kunming, 650102, Yunnan, China.
3
Habib M, Ramzan M, Khan SA. A Deep Learning and Handcrafted Based Computationally Intelligent Technique for Effective COVID-19 Detection from X-ray/CT-scan Imaging. J Grid Comput 2022; 20:23. [PMID: 35874855] [PMCID: PMC9294765] [DOI: 10.1007/s10723-022-09615-0] [Received: 10/28/2021] [Accepted: 06/27/2022]
Abstract
The world has witnessed dramatic changes since the advent of COVID-19 in the last few days of 2019. Over more than two years, COVID-19 has badly affected the world in diverse ways, harming not only human health and mortality rates but also the economic condition on a global scale, and there is an urgent need to cope with this pandemic and its diverse effects. Medical imaging has revolutionized the treatment of various diseases over the last four decades, and automated detection and classification systems have proven to be of great assistance to doctors and the scientific community. In this paper, a novel framework for an efficient COVID-19 classification system is proposed that uses a hybrid feature extraction approach. After preprocessing the image data, two types of features, deep learning and handcrafted, are extracted. For the deep learning features, two pre-trained models, ResNet101 and DenseNet201, are used. The handcrafted features are extracted using the Weber Local Descriptor (WLD): the excitation component of WLD is utilized, and the features are reduced using the discrete cosine transform (DCT). The deep features from both models and the handcrafted features are fused, and significant features are selected using entropy. A comprehensive set of experiments demonstrates the effectiveness of the proposed model; compared with existing well-known methods, the proposed technique performs better in terms of accuracy and time.
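The handcrafted branch of such a pipeline can be sketched as follows: WLD differential excitation, DCT-based reduction, and entropy-ranked feature selection. The 3x3 neighbourhood, the number of retained coefficients, the histogram binning, and the entropy estimator are all assumptions; the abstract does not give the paper's exact parameters:

```python
import numpy as np
from scipy.fft import dct

def wld_excitation(img, eps=1e-6):
    """Differential excitation of the Weber Local Descriptor: arctan of the
    summed neighbour-minus-centre differences over the centre intensity."""
    img = img.astype(float)
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    centre = pad[1:-1, 1:-1]
    diff = sum(pad[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx] - centre
               for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return np.arctan(diff / (centre + eps))

def dct_reduce(feature_vec, k=64):
    """Keep the first k DCT coefficients (energy compaction)."""
    return dct(feature_vec, norm='ortho')[:k]

def entropy_select(features, k):
    """Rank feature columns by histogram entropy and keep the top k
    (entropy-based selection as reported; the estimator is one choice)."""
    ents = []
    for col in features.T:
        hist, _ = np.histogram(col, bins=16)
        p = hist / hist.sum()
        p = p[p > 0]
        ents.append(-(p * np.log2(p)).sum())
    idx = np.argsort(ents)[::-1][:k]
    return features[:, idx]

img = np.random.rand(32, 32)
vec = dct_reduce(wld_excitation(img).ravel(), k=64)
print(vec.shape)  # (64,)
```

In the full pipeline, this reduced handcrafted vector would be concatenated with the ResNet101 and DenseNet201 activations before the entropy-based selection step.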
Affiliation(s)
- Mohammed Habib
- Department of Computer Science, College of Computing and Informatics, Saudi Electronic University, 11673 Riyadh, Saudi Arabia
- Department of Electrical Engineering, Faculty of Engineering, PortSaid University, Port Said, 42526 Egypt
- Muhammad Ramzan
- Department of Computer Science, College of Computing and Informatics, Saudi Electronic University, 11673 Riyadh, Saudi Arabia
- Sajid Ali Khan
- Department of Software Engineering, Foundation University Islamabad, 44000 Islamabad, Pakistan
4
Yousif M, van Diest PJ, Laurinavicius A, Rimm D, van der Laak J, Madabhushi A, Schnitt S, Pantanowitz L. Artificial intelligence applied to breast pathology. Virchows Arch 2021; 480:191-209. [PMID: 34791536] [DOI: 10.1007/s00428-021-03213-3] [Received: 05/04/2021] [Revised: 09/12/2021] [Accepted: 09/27/2021]
Abstract
The convergence of digital pathology and computer vision is increasingly enabling computers to perform tasks once performed only by humans. As a result, artificial intelligence (AI) is having an astoundingly positive effect on the field of pathology, including breast pathology. Research using machine learning and the development of algorithms that learn patterns from labeled digital data, based on "deep learning" neural networks and feature-engineered approaches to analyzing histology images, has recently provided promising results. Thus far, image analysis and more complex AI-based tools have performed with excellent success on tasks such as the quantification of breast biomarkers including Ki67, mitosis detection, lymph node metastasis recognition, tissue segmentation for diagnosing breast carcinoma, prognostication, computational assessment of tumor-infiltrating lymphocytes, and prediction of molecular expression as well as treatment response and benefit of therapy from routine H&E images. This review critically examines the literature regarding these applications of AI in the area of breast pathology.
Affiliation(s)
- Mustafa Yousif
- Department of Pathology, University of Michigan, Ann Arbor, MI, USA.
- Department of Pathology, Vanderbilt University Medical Center, Nashville, TN, USA.
- Paul J van Diest
- Department of Pathology, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Arvydas Laurinavicius
- Department of Pathology, Pharmacology and Forensic Medicine, Faculty of Medicine, Vilnius University, and National Center of Pathology, Affiliate of Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania
- David Rimm
- Department of Pathology, Yale University School of Medicine, New Haven, CT, USA
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
- Stuart Schnitt
- Department of Pathology, Brigham and Women's Hospital and Harvard Medical School, Breast Oncology Program, Dana-Farber/Brigham and Women's Cancer Center, Boston, MA, USA
5
Bal A, Banerjee M, Chaki R, Sharma P. An efficient brain tumor image classifier by combining multi-pathway cascaded deep neural network and handcrafted features in MR images. Med Biol Eng Comput 2021; 59:1495-1527. [PMID: 34184181] [DOI: 10.1007/s11517-021-02370-6] [Received: 07/04/2020] [Accepted: 04/27/2021]
Abstract
Accurate segmentation and delineation of the sub-tumor regions are very challenging tasks due to the nature of the tumor. Convolutional neural networks (CNNs) have achieved the most promising performance for brain tumor segmentation; however, handcrafted features remain very important for accurately identifying a tumor's boundary regions. The present work proposes a robust deep learning-based model with three different CNN architectures along with pre-defined handcrafted features for brain tumor segmentation, mainly to find more prominent boundaries of the core and enhanced tumor regions. A CNN ordinarily does not use pre-defined handcrafted features, since it extracts features automatically. Here, several pre-defined handcrafted features are computed from the four MRI modalities (T2, FLAIR, T1c, and T1) with the help of additional handcrafted masks chosen according to user interest, and are fed alongside the convolutional (automatic) features to improve the overall segmentation performance of the proposed CNN model. A multi-pathway CNN is explored alongside a single-pathway CNN; it simultaneously extracts both local and global features to identify the accurate sub-regions of the tumor with the help of the handcrafted features. The architecture is cascaded: the outcome of one CNN is fed as additional input information to the subsequent CNNs. To extract the handcrafted features, a convolutional operation is applied to the four MRI modalities with several pre-defined masks, producing a pre-defined set of handcrafted features. The work also investigates the usefulness of intensity normalization and data augmentation in the pre-processing stage to handle the difficulties related to the imbalance of tumor labels. The proposed method is evaluated on the BraTS 2018 datasets and achieves more promising results than existing (currently published) methods with respect to metrics such as specificity, sensitivity, and the Dice similarity coefficient (DSC) for the complete, core, and enhanced tumor regions. Quantitatively, a notable gain is achieved around the boundaries of the sub-tumor regions using the proposed two-pathway CNN along with the handcrafted features.
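The idea of producing handcrafted features by convolving the modalities with pre-defined masks can be illustrated as below. The Sobel and Laplacian kernels are stand-ins for the paper's user-chosen masks, which the abstract does not specify:

```python
import numpy as np
from scipy.ndimage import convolve

# Illustrative pre-defined masks; edge and Laplacian kernels emphasize
# the boundary structure the paper targets. The actual masks are user-chosen.
MASKS = {
    "sobel_x": np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "sobel_y": np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
    "laplacian": np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float),
}

def handcrafted_channels(modalities):
    """modalities: dict of 2-D slices, e.g. {'T2': ..., 'FLAIR': ...}.
    Returns the raw slice plus each mask response per modality, stacked
    as channels ready to feed alongside the CNN's learned features."""
    channels = []
    for img in modalities.values():
        img = img.astype(float)
        channels.append(img)
        channels.extend(convolve(img, m, mode="nearest") for m in MASKS.values())
    return np.stack(channels)

slices = {m: np.random.rand(64, 64) for m in ("T2", "FLAIR", "T1c", "T1")}
x = handcrafted_channels(slices)
print(x.shape)  # (16, 64, 64) -> 4 modalities x (1 raw + 3 mask responses)
```

In the paper's setting these extra channels would accompany the multi-pathway CNN's inputs; here they simply demonstrate the fixed-mask convolution step.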
6
Kumari P, Seeja KR. A novel periocular biometrics solution for authentication during Covid-19 pandemic situation. J Ambient Intell Humaniz Comput 2021; 12:10321-10337. [PMID: 33425055] [PMCID: PMC7778849] [DOI: 10.1007/s12652-020-02814-1] [Received: 07/16/2020] [Accepted: 12/09/2020]
Abstract
The outbreak of the novel coronavirus in 2019 shook the whole world and quickly evolved into a global pandemic, placing everyone in a panic situation. Considering its long-term effects on day-to-day life, the necessity of wearing face masks and social distancing brings into the picture the requirement of a contactless biometric system for future authentication. One solution is periocular biometrics: it needs no physical contact, unlike fingerprint biometrics, and can identify even people wearing face masks. Since the periocular region is small compared to the face, extracting a sufficient number of features from that small region is the major concern in making the system highly robust. This research proposes a feature fusion approach that combines handcrafted HOG features, non-handcrafted features extracted using pretrained CNN models, and gender-related features extracted using a five-layer CNN model. The proposed approach is evaluated using a multiclass SVM classifier on three benchmark databases (UBIPr, Color FERET, and Ethnic Ocular) and under three non-ideal scenarios: the effects of eyeglasses, eye occlusion, and pose variations. It shows remarkable improvement in performance over pre-existing approaches.
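The fusion step can be sketched as follows: HOG features concatenated with a deep-feature vector, then fed to a multiclass SVM. The HOG parameters are assumptions, and `deep_features` is a random stand-in for the pretrained-CNN and gender-related features described above:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def fused_features(image, deep_features):
    """Concatenate handcrafted HOG features with a deep-feature vector
    (in the paper: pretrained-CNN activations plus gender features from
    a small CNN; here `deep_features` is a placeholder)."""
    h = hog(image, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2))
    return np.concatenate([h, deep_features])

rng = np.random.default_rng(0)
# Toy periocular "images" and placeholder deep features for two classes.
X = np.stack([fused_features(rng.random((32, 64)), rng.random(128))
              for _ in range(20)])
y = np.arange(20) % 2
clf = SVC(kernel='linear').fit(X, y)   # multiclass SVM classifier
print(clf.predict(X[:1]).shape)  # (1,)
```

With real data, each subject identity would be one SVM class, and the deep features would come from fixed pretrained backbones rather than random vectors.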
Affiliation(s)
- Punam Kumari
- Department of Computer Science and Engineering, Indira Gandhi Delhi Technical University for Women, Delhi, India
- K. R. Seeja
- Department of Computer Science and Engineering, Indira Gandhi Delhi Technical University for Women, Delhi, India
7
Sharma S, Mehra R. Conventional Machine Learning and Deep Learning Approach for Multi-Classification of Breast Cancer Histopathology Images-a Comparative Insight. J Digit Imaging 2020; 33:632-654. [PMID: 31900812] [PMCID: PMC7256154] [DOI: 10.1007/s10278-019-00307-y]
Abstract
Automatic multi-classification of breast cancer histopathological images has remained one of the top-priority research areas in biomedical informatics, owing to the great clinical significance of multi-classification in providing the diagnosis and prognosis of breast cancer. In this work, two machine learning approaches are thoroughly explored and compared for the task of automatic magnification-dependent multi-classification on a balanced BreakHis dataset for the detection of breast cancer. The first approach is based on handcrafted features extracted using Hu moments, color histograms, and Haralick textures, which are then used to train conventional classifiers. The second approach is based on transfer learning, where pre-existing networks (VGG16, VGG19, and ResNet50) are utilized both as feature extractors and as baseline models. The results reveal that using pre-trained networks as feature extractors yields superior performance over the baseline and handcrafted approaches for all magnifications. Moreover, augmentation plays a pivotal role in further enhancing classification accuracy. The VGG16 network with a linear SVM provides the highest accuracy, computed in two forms: (a) patch-based accuracies (93.97% for 40×, 92.92% for 100×, 91.23% for 200×, and 91.79% for 400×); and (b) patient-based accuracies (93.25% for 40×, 91.87% for 100×, 91.5% for 200×, and 92.31% for 400×). Additionally, the "Fibroadenoma" (benign) and "Mucinous Carcinoma" (malignant) classes were found to be the most complex across all magnification factors.
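Patient-based accuracy, as commonly computed on BreakHis, averages each patient's patch-level accuracy over all patients. A minimal sketch, under one plausible reading of that score (the paper's exact formula is not given in the abstract):

```python
import numpy as np

def patient_accuracy(patch_preds, patch_labels, patient_ids):
    """Per-patient patch accuracy, averaged over patients."""
    patch_preds = np.asarray(patch_preds)
    patch_labels = np.asarray(patch_labels)
    patient_ids = np.asarray(patient_ids)
    scores = []
    for pid in np.unique(patient_ids):
        m = patient_ids == pid                     # this patient's patches
        scores.append(np.mean(patch_preds[m] == patch_labels[m]))
    return float(np.mean(scores))

# Patient A: 2/2 patches correct; patient B: 1/2 -> (1.0 + 0.5) / 2
print(patient_accuracy([0, 1, 1, 0], [0, 1, 1, 1], ['A', 'A', 'B', 'B']))  # 0.75
```

Patch-based accuracy, by contrast, is simply the mean over all patches, which is why the two figures in the abstract differ slightly.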
Affiliation(s)
- Rajesh Mehra
- ECE Department, NITTTR, Chandigarh, 160019, India
8
Abstract
BACKGROUND Writing composition is a significant factor in measuring test-takers' ability in any language exam. However, scoring these writing compositions or essays is very challenging in terms of reliability and time. The need for objective and quick scores has raised the need for computer systems that can automatically grade essay questions targeting specific prompts. Automated Essay Scoring (AES) systems overcome the challenges of scoring writing tasks by using Natural Language Processing (NLP) and machine learning techniques. The purpose of this paper is to review the literature on AES systems used for grading essay questions. METHODOLOGY We reviewed the existing literature using Google Scholar, EBSCO, and ERIC, searching for the terms "AES", "Automated Essay Scoring", "Automated Essay Grading", or "Automatic Essay" for essays written in the English language. Two categories were identified: handcrafted-feature and automatically featured AES systems. Systems of the former category are closely bound to the quality of the designed features, while systems of the latter category learn the features and the relations between an essay and its score automatically, without any handcrafted features. We reviewed the systems of the two categories in terms of primary focus, technique(s) used, the need for training data, instructional application (feedback system), and the correlation between e-scores and human scores. The paper comprises three main sections: a structured literature review of the available handcrafted-feature AES systems, a structured literature review of the available automatic-featuring AES systems, and a set of discussions and conclusions. RESULTS AES models have been found to utilize a broad range of manually tuned shallow and deep linguistic features. AES systems have many strengths: reducing labor-intensive marking activities, ensuring a consistent application of scoring criteria, and ensuring the objectivity of scoring. Although many techniques have been implemented to improve AES systems, three primary challenges remain: the lack of the human rater's sense, the potential for the systems to be deceived into giving an essay a lower or higher score than it deserves, and the limited ability to assess the creativity of ideas and propositions and to evaluate their practicality. Existing techniques address only the first two of these challenges.
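A few of the shallow handcrafted features such systems rely on can be sketched in a few lines. The feature set below is illustrative only; real handcrafted-feature AES systems add syntactic, cohesion, and prompt-specific features:

```python
import re

def handcrafted_essay_features(essay):
    """A handful of shallow surface features of the kind handcrafted
    AES systems feed to a regression or ranking model."""
    words = re.findall(r"[A-Za-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    n_words = len(words)
    return {
        "n_words": n_words,
        "n_sentences": len(sentences),
        "avg_sentence_len": n_words / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(n_words, 1),  # lexical diversity
        "avg_word_len": sum(map(len, words)) / max(n_words, 1),
    }

feats = handcrafted_essay_features("Short essay. It has two sentences.")
print(feats["n_sentences"])  # 2
```

A handcrafted-feature AES system would compute such a vector per essay and fit it against human scores, which is precisely why its quality is bound to the quality of the designed features.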
Affiliation(s)
- Hesham Hassan
- Faculty of Computers and Information, Computer Science Department, Cairo University, Cairo, Egypt
- Mohammad Nassef
- Faculty of Computers and Information, Computer Science Department, Cairo University, Cairo, Egypt
9
Saha M, Chakraborty C, Racoceanu D. Efficient deep learning model for mitosis detection using breast histopathology images. Comput Med Imaging Graph 2017; 64:29-40. [PMID: 29409716] [DOI: 10.1016/j.compmedimag.2017.12.001] [Received: 03/03/2017] [Revised: 06/28/2017] [Accepted: 12/07/2017]
Abstract
Mitosis detection is one of the critical factors of cancer prognosis, carrying significant diagnostic information required for breast cancer grading. It provides vital clues for estimating the aggressiveness and the proliferation rate of the tumour. Manual mitosis quantification from whole slide images (WSI) is a very labor-intensive and challenging task. The aim of this study is to propose a supervised model to detect the mitosis signature in breast histopathology WSIs. The model has been designed using a deep learning architecture together with handcrafted features, issued from previous medical challenges (MITOS @ ICPR 2012, AMIDA-13) and project expertise (MICO ANR TecSan). The deep learning architecture mainly consists of five convolution layers, four max-pooling layers, four rectified linear units (ReLU), and two fully connected layers. ReLU has been used after each convolution layer as an activation function, and a dropout layer has been included after the first fully connected layer to avoid overfitting. The handcrafted features mainly consist of morphological, textural, and intensity features. The proposed architecture achieves an improved 92% precision, 88% recall, and 90% F-score. Prospectively, the proposed model will be very beneficial in routine exams, providing pathologists with an efficient and effective second opinion for breast cancer grading from whole slide images. Last but not least, this model could lead junior and senior pathologists, as well as medical researchers, to a superior understanding and evaluation of breast cancer stage and genesis.
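The layer counts above are explicit enough to sketch in PyTorch. Channel widths, kernel sizes, the 64x64 input, and the dropout rate are assumptions not stated in the abstract:

```python
import torch
import torch.nn as nn

# Layer counts follow the abstract: five conv layers with ReLU, four
# max-pooling layers, two fully connected layers, dropout after the
# first FC. Everything else (widths, kernels, input size) is assumed.
class MitosisNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # conv1 + pool1
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv2 + pool2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv3 + pool3
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv4 + pool4
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())                   # conv5 (no pool)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),  # first FC on 4x4 maps after 4 pools
            nn.Dropout(0.5),                        # dropout after the first FC
            nn.Linear(128, n_classes))              # second FC: mitosis / non-mitosis

    def forward(self, x):
        return self.classifier(self.features(x))

out = MitosisNet()(torch.randn(2, 3, 64, 64))
print(out.shape)  # torch.Size([2, 2])
```

The paper's handcrafted morphological, textural, and intensity features would be combined with this CNN's output in the full model; the sketch covers only the stated convolutional backbone.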
Affiliation(s)
- Monjoy Saha
- School of Medical Science and Technology, Indian Institute of Technology, Kharagpur, West Bengal, India
- Chandan Chakraborty
- School of Medical Science and Technology, Indian Institute of Technology, Kharagpur, West Bengal, India
- Daniel Racoceanu
- Sorbonne University, Paris, France; Pontifical Catholic University of Peru, Lima, Peru.