1
Althenayan AS, AlSalamah SA, Aly S, Nouh T, Mahboub B, Salameh L, Alkubeyyer M, Mirza A. COVID-19 Hierarchical Classification Using a Deep Learning Multi-Modal. Sensors (Basel) 2024; 24:2641. [PMID: 38676257] [PMCID: PMC11053684] [DOI: 10.3390/s24082641]
Abstract
Coronavirus disease 2019 (COVID-19), originating in China, rapidly spread worldwide. Physicians must examine infected patients and make timely decisions to isolate them. However, completing these processes is difficult due to the limited time and availability of expert radiologists, as well as the limitations of the reverse-transcription polymerase chain reaction (RT-PCR) method. Deep learning, a sophisticated machine learning technique, leverages radiological imaging modalities for disease diagnosis and image classification tasks. Previous research on COVID-19 classification has encountered several limitations, including binary classification methods, single-feature modalities, small public datasets, and reliance on CT diagnostic processes. Additionally, studies have often utilized a flat structure, disregarding the hierarchical structure of pneumonia classification. This study aims to overcome these limitations by identifying pneumonia caused by COVID-19, distinguishing it from other types of pneumonia and healthy lungs using chest X-ray (CXR) images and related tabular medical data, and to demonstrate the value of incorporating tabular medical data in achieving more accurate diagnoses. ResNet-based and VGG-based pre-trained convolutional neural network (CNN) models were employed to extract features, which were then combined using early fusion for the classification of eight distinct classes. We leveraged the hierarchical structure of pneumonia classification within our approach to achieve improved classification outcomes. Since imbalanced datasets are common in this field, several variants of generative adversarial networks (GANs) were used to generate synthetic data. The proposed approach, tested on our private dataset of 4523 patients, achieved a macro-average F1-score of 95.9% and an F1-score of 87.5% for COVID-19 identification using a ResNet-based structure.
In conclusion, in this study we built an accurate multi-modal deep learning model that diagnoses COVID-19 and differentiates it from other kinds of pneumonia and from normal lungs, which will enhance the radiological diagnostic process.
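The early-fusion step described in this abstract can be sketched roughly as follows. This is an illustrative toy under assumptions, not the authors' implementation: the 2048-d image embedding (typical of a ResNet backbone), the 8 tabular features, the batch size, and the random linear head are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patient features: a CNN backbone embedding per CXR
# (2048-d, as a ResNet-style network would produce) plus tabular data (8-d).
img_feats = rng.normal(size=(4, 2048))  # batch of 4 CXR embeddings
tab_feats = rng.normal(size=(4, 8))     # matching tabular medical records

# Early fusion: concatenate the modalities into one feature vector
# before a shared classification head.
fused = np.concatenate([img_feats, tab_feats], axis=1)

# A stand-in linear head scoring the 8 classes, followed by softmax.
W = rng.normal(size=(fused.shape[1], 8)) * 0.01
logits = fused @ W
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
print(fused.shape, probs.shape)  # (4, 2056) (4, 8)
```

In the real system the head would be trained and the hierarchy of pneumonia classes exploited; the point here is only that early fusion concatenates modality features before classification.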
Affiliation(s)
- Albatoul S. Althenayan
- Information Systems Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammed Bin Saud Islamic University, Riyadh 11432, Saudi Arabia
- Shada A. AlSalamah
- Information Systems Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- National Health Information Center, Saudi Health Council, Riyadh 13315, Saudi Arabia
- Digital Health and Innovation Department, Science Division, World Health Organization, 1211 Geneva, Switzerland
- Sherin Aly
- Institute of Graduate Studies and Research, Alexandria University, Alexandria 21526, Egypt
- Thamer Nouh
- Trauma and Acute Care Surgery Unit, College of Medicine, King Saud University, Riyadh 12271, Saudi Arabia
- Bassam Mahboub
- Clinical Sciences Department, College of Medicine, University of Sharjah, Sharjah P.O. Box 27272, United Arab Emirates
- Laila Salameh
- Sharjah Institute for Medical Research, University of Sharjah, Sharjah P.O. Box 27272, United Arab Emirates
- Metab Alkubeyyer
- Department of Radiology and Medical Imaging, King Khalid University Hospital, King Saud University, Riyadh 12372, Saudi Arabia
- Abdulrahman Mirza
- Information Systems Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
2
Monaco S, Bussola N, Buttò S, Sona D, Giobergia F, Jurman G, Xinaris C, Apiletti D. AI models for automated segmentation of engineered polycystic kidney tubules. Sci Rep 2024; 14:2847. [PMID: 38310171] [DOI: 10.1038/s41598-024-52677-1]
Abstract
Autosomal dominant polycystic kidney disease (ADPKD) is a monogenic, rare disease, characterized by the formation of multiple cysts that grow out of the renal tubules. Despite intensive attempts to develop new drugs or repurpose existing ones, there is currently no definitive cure for ADPKD. This is primarily due to the complex and variable pathogenesis of the disease and the lack of models that can faithfully reproduce the human phenotype. Therefore, the development of models that allow automated detection of cyst growth directly on human kidney tissue is a crucial step in the search for efficient therapeutic solutions. Artificial intelligence methods, and deep learning algorithms in particular, can provide powerful and effective solutions to such tasks, and various architectures have been proposed in the literature in recent years. Here, we comparatively review state-of-the-art deep learning segmentation models, using as a testbed a set of sequential RGB immunofluorescence images from 4 in vitro experiments with 32 engineered polycystic kidney tubules. To gain a deeper understanding of the detection process, we implemented both pixel-wise and cyst-wise performance metrics to evaluate the algorithms. Overall, two models stand out as the best performing, namely UNet++ and UACANet: the latter uses a self-attention mechanism that introduces some explainability aspects which can be further exploited in future developments, making it the most promising algorithm to build upon towards a more refined cyst-detection platform. The UACANet model achieves a cyst-wise Intersection over Union of 0.83, a recall of 0.91, and a precision of 0.92 when detecting large cysts. On cysts of all sizes, UACANet averages a pixel-wise Intersection over Union of 0.624. The code to reproduce all results is freely available in a public GitHub repository.
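The distinction between pixel-wise and cyst-wise evaluation mentioned above can be illustrated with a toy pixel-wise Intersection over Union. The masks and the 0.5 detection threshold below are made-up examples for illustration, not the study's data or exact criteria.

```python
import numpy as np

def pixel_iou(pred, target):
    """Pixel-wise Intersection over Union between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

# Toy 4x4 masks standing in for a predicted and a reference cyst.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(pixel_iou(pred, gt))  # 3 overlapping pixels / 4 in the union = 0.75

# A cyst-wise metric instead counts whole objects: a reference cyst is a
# true positive when some predicted cyst overlaps it above a threshold,
# from which object-level recall and precision follow.
assert pixel_iou(pred, gt) >= 0.5  # this cyst would count as detected
```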
Affiliation(s)
- Nicole Bussola
- Fondazione Bruno Kessler, 38123, Trento, Italy
- CIBIO, Università degli Studi di Trento, 38123, Trento, Italy
- Sara Buttò
- Istituto di Ricerche Farmacologiche Mario Negri - IRCCS, 24126, Bergamo, Italy
- Diego Sona
- Fondazione Bruno Kessler, 38123, Trento, Italy
3
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023; 13:e1510. [PMID: 38249785] [PMCID: PMC10796150] [DOI: 10.1002/widm.1510]
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated to improve our understanding of disease etiologies, early diagnosis, and assessment of disease progression and clinical outcomes. DL has been broadly applied to solve various pulmonary image processing challenges, including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and the applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha
- Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242
4
Kim B, Lee GY, Park SH. Attention fusion network with self-supervised learning for staging of osteonecrosis of the femoral head (ONFH) using multiple MR protocols. Med Phys 2023; 50:5528-5540. [PMID: 36945733] [DOI: 10.1002/mp.16380]
Abstract
BACKGROUND Osteonecrosis of the femoral head (ONFH) is characterized by bone cell death in the hip joint, typically involving severe groin pain. The staging of ONFH is commonly based on magnetic resonance (MR) imaging and computed tomography (CT), which are important for establishing effective treatment plans. There have been some attempts to automate ONFH staging using deep learning, but few of them used only MR images. PURPOSE To propose a deep learning model for MR-only ONFH staging, which can reduce the additional cost and radiation exposure from the acquisition of CT images. METHODS We integrated information from the MR images of five different imaging protocols with a newly proposed attention fusion method composed of intra-modality attention and inter-modality attention. In addition, self-supervised learning was used to learn deep representations from a large paired MR-CT dataset. The encoder part of the MR-CT translation network was used as a pretraining network for staging, to overcome the lack of annotated staging data. Ablation studies were performed to investigate the contribution of each proposed method. The area under the receiver operating characteristic curve (AUROC) was used to evaluate the performance of the networks. RESULTS Our model improved the performance of the four-way classification of the Association Research Circulation Osseous (ARCO) stage using MR images of the multiple protocols by 6.8 percentage points in AUROC over a plain VGG network. The ablation experiments showed that self-supervised learning and attention fusion increased AUROC by 4.7 and 2.6 percentage points, respectively. CONCLUSIONS We have shown the feasibility of MR-only ONFH staging using self-supervised learning and attention fusion.
The large amounts of paired MR-CT data available in hospitals can be used to further improve staging performance, and the proposed method has the potential to be used in the diagnosis of various diseases that require staging from multiple MR protocols.
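The inter-modality attention fusion described above can be sketched in a heavily simplified form: score each protocol's embedding, normalize the scores with softmax, and fuse the protocols as an attention-weighted sum. The embedding size, the scoring vector, and all values are hypothetical; the actual network also uses intra-modality attention and learned projections.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)

# Hypothetical 64-d embeddings extracted from five MR protocols
# for a single subject (sizes and weights are made up).
protocol_feats = rng.normal(size=(5, 64))

# Inter-modality attention, simplified: one scalar score per protocol,
# softmax-normalized, then an attention-weighted sum over protocols.
score_vec = rng.normal(size=64)
weights = softmax(protocol_feats @ score_vec)  # one weight per protocol
fused = weights @ protocol_feats               # fused 64-d representation
print(weights.round(3), fused.shape)
```

The fused vector would then feed the staging classifier; protocols the attention deems informative dominate the sum.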
Affiliation(s)
- Bomin Kim
- Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
- Geun Young Lee
- Department of Radiology, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Republic of Korea
- Sung-Hong Park
- Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
5
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. [PMID: 37509272] [PMCID: PMC10377683] [DOI: 10.3390/cancers15143608]
Abstract
(1) Background: Applying deep learning to cancer diagnosis from medical images is one of the research hotspots in artificial intelligence and computer vision. Cancer diagnosis requires very high accuracy and timeliness, and medical imaging has an inherent particularity and complexity, so a comprehensive review of relevant studies is necessary to help readers understand the current research status and ideas. (2) Methods: Five radiological imaging modalities (X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET)), as well as histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced approaches emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning technology in medical image-based cancer analysis is then sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: More public standard databases for cancer are needed. Pre-trained models based on deep neural networks have the potential to be improved, and special attention should be paid to research on multimodal data fusion and the supervised paradigm.
Technologies such as ViT, ensemble learning, and few-shot learning promise further advances in cancer diagnosis based on medical images.
Grants
- RM32G0178B8 BBSRC, UK
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
6
Hassan A, Elhoseny M, Kayed M. A novel and accurate deep learning-based Covid-19 diagnostic model for heart patients. Signal Image Video Process 2023; 17:1-8. [PMID: 37362230] [PMCID: PMC10197036] [DOI: 10.1007/s11760-023-02561-8]
Abstract
Using the radiographic changes of COVID-19 visible in medical images, artificial intelligence techniques such as deep learning can extract graphical features of COVID-19 and provide a diagnostic tool. Unlike previous works that focus on applying deep learning to CT scans or X-ray images, this paper uses deep learning on electrocardiogram (ECG) images to diagnose COVID-19. COVID-19 patients with heart disease are among those most exposed to severe symptoms and death, which suggests a special relationship between COVID-19 and heart disease whose parameters are still unclear. As we show in the experimental section, a general diagnostic model that detects COVID-19 in all patients using the same rules, as in previous works, is not accurate, because the model faces dispersion in the data during training. This paper therefore proposes a novel model that diagnoses COVID-19 in heart patients only, to increase accuracy and to reduce the time a heart patient must wait for a COVID-19 diagnosis. We also take the only existing dataset that contains ECGs of COVID-19 patients and, with the help of a heart disease expert, produce a new version consisting of two classes: ECGs of heart patients with positive COVID-19 and ECGs of heart patients with negative COVID-19. This dataset will help medical experts and data scientists study the relationship between COVID-19 and heart patients. We achieve an overall accuracy, sensitivity, and specificity of 99.1%, 99%, and 100%, respectively. Supplementary Information The online version contains supplementary material available at 10.1007/s11760-023-02561-8.
Affiliation(s)
- Ahmed Hassan
- Faculty of Science, Beni-Suef University, Beni-Suef, 62511, Egypt
- Mohamed Elhoseny
- Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Mohammed Kayed
- Faculty of Computers and Artificial Intelligence, Beni-Suef University, Beni-Suef, 62511, Egypt
7
Annamalai B, Saravanan P, Varadharajan I. ABOA-CNN: auction-based optimization algorithm with convolutional neural network for pulmonary disease prediction. Neural Comput Appl 2023; 35:7463-7474. [PMID: 36788792] [PMCID: PMC9910772] [DOI: 10.1007/s00521-022-08033-3]
Abstract
Nowadays, deep learning plays a vital role behind many emerging technologies; applications include speech recognition, virtual assistants, healthcare, and entertainment. In healthcare applications, deep learning can be used to predict diseases effectively. It is a type of computer model that learns to perform classification tasks directly from text, sound, or images; it also provides better accuracy and sometimes exceeds human performance. We present a novel approach that makes use of deep learning in our proposed work. Pulmonary disease can be predicted with the aid of a convolutional neural network (CNN) incorporating an auction-based optimization algorithm (ABOA) and a DSC process. A traditional CNN ignores the dominant features of the X-ray images while performing feature extraction; this is circumvented by the adoption of ABOA, and the DSC is used to classify the pulmonary disease types, namely fibrosis, pneumonia, cardiomegaly, and normal, from the X-ray images. We used two datasets, the NIH Chest X-ray dataset and ChestX-ray8. The performance of the proposed approach is compared with deep learning-based state-of-the-art works, namely BPD, DL, CSS-DL, and Grad-CAM. The performance analyses confirm that the proposed approach effectively extracts features from the X-ray images, and thus predicts pulmonary diseases more accurately than the state-of-the-art approaches.
Affiliation(s)
- Balaji Annamalai
- School of Computing Science and Engineering (SCSE), VIT Bhopal University, Bhopal, MP, India
- Prabakeran Saravanan
- Department of Networking and Communications, School of Computing, SRM Institute of Science & Technology (SRMIST), Kattankulathur, Tamil Nadu, India
- Indumathi Varadharajan
- Department of Computational Intelligence, School of Computing, SRM Institute of Science and Technology (SRMIST), Kattankulathur, Tamil Nadu, India
8
Nafisah SI, Muhammad G. Tuberculosis detection in chest radiograph using convolutional neural network architecture and explainable artificial intelligence. Neural Comput Appl 2022; 36:1-21. [PMID: 35462630] [PMCID: PMC9016694] [DOI: 10.1007/s00521-022-07258-6]
Abstract
In most regions of the world, tuberculosis (TB) is classified as a severe infectious disease that can be fatal. Using advanced tools and technology, automatic analysis and classification of chest X-rays (CXRs) into TB and non-TB can be a reliable alternative to the subjective assessment performed by healthcare professionals. Thus, in this study, we propose an automatic TB detection system using advanced deep learning (DL) models. A significant portion of a CXR image is dark, providing no information for diagnosis and potentially confusing DL models; therefore, in the proposed system, we use sophisticated segmentation networks to extract the region of interest from the CXRs, and the segmented images are then fed into the DL models. For subjective assessment, we use explainable artificial intelligence to visualize TB-infected parts of the lung. We use different convolutional neural network (CNN) models in our experiments and compare their classification performance on three publicly available CXR datasets. EfficientNetB3, one of the CNN models, achieves the highest accuracy of 99.1%, with an area under the receiver operating characteristic curve of 99.9% and an average accuracy of 98.7%. The experimental results confirm that using segmented lung CXR images produces better performance than using raw lung CXR images.
Affiliation(s)
- Saad I. Nafisah
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, 11543, Saudi Arabia
- Ghulam Muhammad
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, 11543, Saudi Arabia
9
The Accuracy and Radiomics Feature Effects of Multiple U-net-Based Automatic Segmentation Models for Transvaginal Ultrasound Images of Cervical Cancer. J Digit Imaging 2022; 35:983-992. [PMID: 35355160] [PMCID: PMC9485324] [DOI: 10.1007/s10278-022-00620-z]
Abstract
Ultrasound (US) imaging is recognized and widely used worldwide as a screening and diagnostic imaging modality for cervical cancer. However, few studies have investigated U-net-based automatic segmentation models for cervical cancer on US images or the effects of automatic segmentation on radiomics features. In this study, a total of 1102 transvaginal US images from 796 cervical cancer patients were collected and randomly divided into training (800), validation (100), and test (202) sets. Four U-net models (U-net, U-net with ResNet, context encoder network (CE-net), and Attention U-net) were adapted to automatically segment the cervical cancer target on these US images. Radiomics features were extracted and evaluated from both manually and automatically segmented areas. The mean Dice similarity coefficients (DSC) of U-net, Attention U-net, CE-net, and U-net with ResNet were 0.88, 0.89, 0.88, and 0.90, respectively. The average Pearson coefficients for the reliability of US image-based radiomics, compared against manual segmentation, were 0.94, 0.96, 0.94, and 0.95 for U-net, U-net with ResNet, Attention U-net, and CE-net, respectively. The reproducibility of the radiomics parameters, evaluated by intraclass correlation coefficients (ICC), showed the robustness of automatic segmentation, with an average ICC of 0.99. In conclusion, the U-net-based models delineated the target area of cervical cancer US images with high accuracy, and features extracted from automatically segmented target areas are feasible and reliable for further radiomics studies.
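The Dice similarity coefficient used above to compare automatic and manual segmentations has a simple closed form, 2|A∩B|/(|A|+|B|). A minimal sketch with toy masks (the masks are illustrative only, not the study's data):

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    return 2.0 * np.logical_and(pred, target).sum() / denom if denom else 1.0

# Toy 4x4 masks standing in for an automatic and a manual delineation.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(dice(pred, gt))  # 2*3 / (4 + 3) ≈ 0.857
```

A DSC of 1.0 means perfect overlap; the study's models averaged 0.88-0.90 on this scale.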
10
Shi W, Xu T, Yang H, Xi Y, Du Y, Li J, Li J. Attention Gate based dual-pathway Network for Vertebra Segmentation of X-ray Spine images. IEEE J Biomed Health Inform 2022; 26:3976-3987. [PMID: 35290194] [DOI: 10.1109/jbhi.2022.3158968]
Abstract
Automatic spine and vertebra segmentation from X-ray spine images is a critical and challenging problem in many computer-aided spinal image analysis and disease diagnosis applications. In this paper, a two-stage automatic segmentation framework for spine X-ray images is proposed, which first locates the spine regions (backbone, sacrum, and ilium) in a coarse stage and then identifies eighteen vertebrae (cervical vertebra 1, thoracic vertebrae 1-12, and lumbar vertebrae 1-5) with isolated and clear boundaries in a fine stage. A novel Attention Gate based dual-pathway Network (AGNet), composed of context and edge pathways, is designed to extract semantic and boundary information for segmentation of both spine and vertebra regions. A multi-scale supervision mechanism is applied to explore comprehensive features, and an Edge aware Fusion Mechanism (EFM) is proposed to fuse features extracted from the two pathways. Additional image processing techniques, such as centralized backbone clipping, patch cropping, and convex hull detection, further refine the vertebra segmentation results. Experimental validation on a spine X-ray image dataset and a vertebrae dataset suggests that the proposed AGNet achieves superior performance compared with state-of-the-art segmentation methods, and that the coarse-to-fine framework can be implemented in real spinal diagnosis systems.
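The attention-gate idea can be sketched generically: a gating signal modulates skip-connection features so that irrelevant regions are suppressed. This is a simplified additive gate in the spirit of attention-gated segmentation networks, not the authors' exact AGNet module; the channel count, grid size, and all weights are invented.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)

# Hypothetical feature maps: skip-connection features x and a gating
# signal g from a deeper layer, both with 8 channels on a 4x4 grid.
x = rng.normal(size=(8, 4, 4))
g = rng.normal(size=(8, 4, 4))

# Additive attention gate, simplified to per-channel linear maps
# (standing in for 1x1 convolutions): alpha = sigmoid(psi(relu(Wx*x + Wg*g))).
Wx = rng.normal(size=(8, 8))
Wg = rng.normal(size=(8, 8))
psi = rng.normal(size=8)
q = np.maximum(np.einsum('oc,chw->ohw', Wx, x) +
               np.einsum('oc,chw->ohw', Wg, g), 0.0)
alpha = sigmoid(np.einsum('c,chw->hw', psi, q))  # spatial attention in (0, 1)
gated = x * alpha  # attended skip features: low-alpha regions are suppressed

print(alpha.shape, gated.shape)
```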
11
Malhotra P, Gupta S, Koundal D, Zaguia A, Enbeyle W. Deep Neural Networks for Medical Image Segmentation. J Healthc Eng 2022; 2022:9580991. [PMID: 35310182] [PMCID: PMC8930223] [DOI: 10.1155/2022/9580991]
Abstract
Image segmentation is a branch of digital image processing with numerous applications in image analysis, augmented reality, machine vision, and many other fields. The field of medical image analysis is growing, and the segmentation of organs, diseases, or abnormalities in medical images is in increasing demand. Medical image segmentation helps in monitoring the growth of diseases such as tumours and in controlling the dosage of medication and of radiation exposure. It is a challenging task due to the various artefacts present in the images. Recently, deep neural models have been applied to various image segmentation tasks; this significant growth is due to the achievements and high performance of deep learning strategies. This work reviews the literature on medical image segmentation using deep convolutional neural networks. The paper examines the widely used medical image datasets, the metrics used to evaluate segmentation tasks, and the performance of different CNN-based networks. In contrast to existing review and survey papers, the present work also discusses the various challenges in medical image segmentation and the state-of-the-art solutions available in the literature.
Affiliation(s)
- Priyanka Malhotra
- Chitkara University Institute of Engineering and Technology, Chitkara University, Chandigarh, Punjab, India
- Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Chandigarh, Punjab, India
- Deepika Koundal
- Department of Systemics, School of Computer Science, University of Petroleum and Energy Studies, Dehradun, India
- Atef Zaguia
- Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
12
Data-driven machine learning: A new approach to process and utilize biomedical data. In: Predictive Modeling in Biomedical Data Mining and Analysis. 2022. [PMCID: PMC9464259] [DOI: 10.1016/b978-0-323-99864-2.00017-2]
13
Seah JCY, Tang CHM, Buchlak QD, Holt XG, Wardman JB, Aimoldin A, Esmaili N, Ahmad H, Pham H, Lambert JF, Hachey B, Hogg SJF, Johnston BP, Bennett C, Oakden-Rayner L, Brotchie P, Jones CM. Effect of a comprehensive deep-learning model on the accuracy of chest x-ray interpretation by radiologists: a retrospective, multireader multicase study. Lancet Digit Health 2021; 3:e496-e506. [PMID: 34219054] [DOI: 10.1016/s2589-7500(21)00106-0]
Abstract
BACKGROUND Chest x-rays are widely used in clinical practice; however, interpretation can be hindered by human error and a lack of experienced thoracic radiologists. Deep learning has the potential to improve the accuracy of chest x-ray interpretation. We therefore aimed to assess the accuracy of radiologists with and without the assistance of a deep-learning model. METHODS In this retrospective study, a deep-learning model was trained on 821 681 images (284 649 patients) from five data sets from Australia, Europe, and the USA. 2568 enriched chest x-ray cases from adult patients (≥16 years) who had at least one frontal chest x-ray were included in the test dataset; cases were representative of inpatient, outpatient, and emergency settings. 20 radiologists reviewed cases with and without the assistance of the deep-learning model with a 3-month washout period. We assessed the change in accuracy of chest x-ray interpretation across 127 clinical findings when the deep-learning model was used as a decision support by calculating area under the receiver operating characteristic curve (AUC) for each radiologist with and without the deep-learning model. We also compared AUCs for the model alone with those of unassisted radiologists. If the lower bound of the adjusted 95% CI of the difference in AUC between the model and the unassisted radiologists was more than -0·05, the model was considered to be non-inferior for that finding. If the lower bound exceeded 0, the model was considered to be superior. FINDINGS Unassisted radiologists had a macroaveraged AUC of 0·713 (95% CI 0·645-0·785) across the 127 clinical findings, compared with 0·808 (0·763-0·839) when assisted by the model. The deep-learning model statistically significantly improved the classification accuracy of radiologists for 102 (80%) of 127 clinical findings, was statistically non-inferior for 19 (15%) findings, and no findings showed a decrease in accuracy when radiologists used the deep-learning model. 
Unassisted radiologists had a macroaveraged mean AUC of 0·713 (0·645-0·785) across all findings, compared with 0·957 (0·954-0·959) for the model alone. Model classification alone was significantly more accurate than unassisted radiologists for 117 (94%) of 124 clinical findings predicted by the model and was non-inferior to unassisted radiologists for all other clinical findings. INTERPRETATION This study shows the potential of a comprehensive deep-learning model to improve chest x-ray interpretation across a large breadth of clinical practice. FUNDING Annalise.ai.
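The superiority/non-inferiority rule described in this abstract (lower bound of the adjusted 95% CI of the AUC difference compared against a -0.05 margin) can be sketched as follows. This is an illustrative reconstruction, not the study's code; the function name and the example CI bounds are hypothetical.

```python
# Illustrative sketch of the study's decision rule: a finding is "superior" if the
# lower bound of the adjusted 95% CI of the AUC difference (model minus unassisted
# radiologists) exceeds 0, and "non-inferior" if it exceeds the -0.05 margin.

def classify_finding(ci_lower: float, margin: float = -0.05) -> str:
    """Classify one clinical finding from the CI lower bound of the AUC difference."""
    if ci_lower > 0:
        return "superior"
    if ci_lower > margin:
        return "non-inferior"
    return "inconclusive"

# Hypothetical CI lower bounds for three findings:
print(classify_finding(0.02))   # superior
print(classify_finding(-0.03))  # non-inferior
print(classify_finding(-0.08))  # inconclusive
```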
Affiliation(s)
- Jarrel C Y Seah
- Annalise.ai, Sydney, NSW, Australia; Department of Radiology, Alfred Health, Melbourne, VIC, Australia
- Nazanin Esmaili
- School of Medicine, University of Notre Dame Australia, Sydney, NSW, Australia; Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW, Australia
- Christine Bennett
- School of Medicine, University of Notre Dame Australia, Sydney, NSW, Australia
- Luke Oakden-Rayner
- Australian Institute for Machine Learning, The University of Adelaide, Adelaide, SA, Australia
- Peter Brotchie
- Annalise.ai, Sydney, NSW, Australia; Department of Radiology, St Vincent's Health Australia, Melbourne, VIC, Australia
14
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021; 11:1373. [PMID: 34441307] [PMCID: PMC8393354] [DOI: 10.3390/diagnostics11081373] [Received: 07/07/2021] [Revised: 07/25/2021] [Accepted: 07/27/2021] [Indexed: 12/13/2022]
Abstract
The growing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes, together with the time and attention physicians owe their patients, has encouraged the development of deep learning models as constructive and effective support. Deep learning (DL) has developed exponentially in recent years, with a major impact on the interpretation of medical images. This has driven the development, diversification and improved quality of scientific data, of knowledge-construction methods, and of the DL models used in medical applications. Most research papers describe, highlight or classify a single constituent element of the DL models used in the interpretation of medical images, and do not provide a unified picture of the importance and impact of each constituent on model performance. The novelty of our paper lies primarily in its unified treatment of the constituent elements of DL models, namely the data, the tools used by DL architectures, and purpose-built combinations of DL architectures, highlighting their "key" features for completing tasks in current medical image interpretation applications. Using the "key" characteristics specific to each constituent of a DL model, and correctly determining their correlations, may be the subject of future research aimed at increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Stefan Iancu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Maria Hlusneac
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Cristina Preda
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
15
Afzali A, Babapour Mofrad F, Pouladian M. 2D Statistical Lung Shape Analysis Using Chest Radiographs: Modelling and Segmentation. J Digit Imaging 2021; 34:523-540. [PMID: 33754214] [PMCID: PMC8329117] [DOI: 10.1007/s10278-021-00440-7] [Received: 04/20/2020] [Revised: 11/30/2020] [Accepted: 02/24/2021] [Indexed: 11/26/2022]
Abstract
Accurate analysis of lung shape and its anatomical variations is important in medical imaging. Normal variations of the lung shape can be interpreted as a normal lung, whereas abnormal variations may result from pulmonary disease. The goal of this study is twofold: (1) to present two lung shape models that differ in the reference points used in the registration process, in order to show the impact of this choice on estimating inter-patient 2D lung shape variations, and (2) to use the obtained models for lung field segmentation with the active shape model (ASM) technique. The models, which represent inter-patient 2D lung shape variations in two different forms, are fully compared and evaluated. The results show that, with standard principal component analysis (PCA), the models can explain more than 95% of the total variation in all cases using only the first 7 principal component (PC) modes for both lungs. Both models are used in ASM-based lung field segmentation, and the segmentation results are evaluated using leave-one-out cross-validation. According to the experimental results, the proposed method has an average dice similarity coefficient of 97.1% for the right lung and 96.1% for the left lung, and is more stable and accurate than other model-based techniques for inter-patient lung field segmentation.
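The dice similarity coefficient used in this abstract to compare a segmented lung field with its ground truth can be sketched in a few lines. This is a generic illustration of the metric, not the authors' implementation; the masks are made up.

```python
# Minimal sketch of the dice similarity coefficient for binary masks:
# DSC = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(seg: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks of equal shape."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(seg, gt).sum() / denom

# Toy 2x3 masks with 2 overlapping pixels out of 3 positives each:
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(a, b), 3))  # 2*2/(3+3) -> 0.667
```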
Affiliation(s)
- Ali Afzali
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Farshid Babapour Mofrad
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Majid Pouladian
- Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
16
Joint image and feature adaptative attention-aware networks for cross-modality semantic segmentation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06064-w] [Indexed: 10/21/2022]
17
Mohammed KK, Hassanien AE, Afify HM. A 3D image segmentation for lung cancer using V.Net architecture based deep convolutional networks. J Med Eng Technol 2021; 45:337-343. [PMID: 33843414] [DOI: 10.1080/03091902.2021.1905895] [Indexed: 01/11/2023]
Abstract
Lung segmentation of chest CT scans is used to identify lung cancer, and this step is also critical in other diagnostic pathways. Powerful algorithms for this segmentation task are therefore highly needed in the medical imaging domain, where tumours must be segmented together with the lung parenchyma, and the parenchyma must then be separated from tumour regions that are often confused with lung tissue. Recently, fully convolutional networks (FCNs) have made semantic segmentation well suited to assigning each pixel in an image to a predefined class. In this paper, CT cancer scans from the Task06_Lung database were applied to an FCN inspired by the V-Net architecture to efficiently select a region of interest (ROI) using 3D segmentation. The database is split into 64 training images and 32 testing images. The proposed system comprises three steps: data preprocessing, data augmentation, and a neural network based on the V-Net model. It was evaluated with the dice similarity coefficient (DSC), which measures the overlap between the segmented image and the ground-truth image. The proposed system outperformed previous 3D lung segmentation schemes, with an average DSC of 80% for the ROI and 98% for the surrounding lung tissue, and demonstrated that 3D views of lung tumours in CT images support precise tumour estimation and robust lung segmentation.
Affiliation(s)
- Kamel K Mohammed
- Center for Virus Research and Studies, Al-Azhar University, Cairo, Egypt
- Scientific Research Group in Egypt (SRGE), Cairo, Egypt
- Aboul Ella Hassanien
- Scientific Research Group in Egypt (SRGE), Cairo, Egypt
- Faculty of Computers and Information, Cairo University, Giza, Egypt
- Heba M Afify
- Scientific Research Group in Egypt (SRGE), Cairo, Egypt
- Systems and Biomedical Engineering Department, Higher Institute of Engineering in El-Shorouk City, Cairo, Egypt
18
Sarvamangala DR, Kulkarni RV. Convolutional neural networks in medical image understanding: a survey. Evolutionary Intelligence 2021; 15:1-22. [PMID: 33425040] [PMCID: PMC7778711] [DOI: 10.1007/s12065-020-00540-3] [Received: 03/25/2020] [Revised: 10/05/2020] [Accepted: 11/22/2020] [Indexed: 12/23/2022]
Abstract
Imaging techniques are used to capture anomalies of the human body, and the captured images must be understood for diagnosis, prognosis and treatment planning of those anomalies. Medical image understanding is generally performed by skilled medical professionals; however, the scarcity of human experts, together with fatigue and the rough estimation procedures involved, limits the effectiveness of their image understanding. Convolutional neural networks (CNNs) are effective tools for image understanding and have outperformed human experts in many image understanding tasks. This article aims to provide a comprehensive survey of applications of CNNs in medical image understanding, with the underlying objective of motivating researchers to apply CNNs extensively in their research and diagnosis. A brief introduction to CNNs and a discussion of their various award-winning frameworks are presented. The major medical image understanding tasks, namely image classification, segmentation, localization and detection, are introduced. Applications of CNNs in medical image understanding of ailments of the brain, breast, lung and other organs are surveyed critically and comprehensively, and a critical discussion of some of the challenges is also presented.
19
Toraman S, Alakus TB, Turkoglu I. Convolutional CapsNet: a novel artificial neural network approach to detect COVID-19 disease from X-ray images using capsule networks. Chaos, Solitons and Fractals 2020; 140:110122. [PMID: 32834634] [PMCID: PMC7357532] [DOI: 10.1016/j.chaos.2020.110122] [Received: 06/23/2020] [Accepted: 07/10/2020] [Indexed: 05/17/2023]
Abstract
Coronavirus causes an epidemic that spreads very quickly and has had devastating effects in many areas worldwide. It is vital to detect COVID-19 as quickly as possible to restrain the spread of the disease; however, the similarity of COVID-19 to other lung infections makes diagnosis difficult, and the high spreading rate has increased the need for a fast diagnostic system. For this purpose, interest in computer-aided deep learning models (such as CNNs and DNNs) has increased. In these models, radiology images are mostly applied to determine positive cases, and recent studies show that radiological images contain important information for the detection of coronavirus. In this study, a novel artificial neural network, Convolutional CapsNet, is proposed for the detection of COVID-19 from chest X-ray images using capsule networks. The proposed approach is designed to provide fast and accurate diagnosis of COVID-19 with binary classification (COVID-19 and No-Findings) and multi-class classification (COVID-19, No-Findings, and Pneumonia), achieving accuracies of 97.24% and 84.22%, respectively. The proposed method may help physicians diagnose COVID-19 and increase diagnostic performance, and we believe it may serve as an alternative fast screening method for COVID-19.
Affiliation(s)
- Suat Toraman
- Department of Informatics, Firat University, 23119, Elazig, Turkey
- Talha Burak Alakus
- Department of Software Engineering, Kirklareli University, 39000, Kirklareli, Turkey
- Ibrahim Turkoglu
- Department of Software Engineering, Firat University, 23119, Elazig, Turkey
20
Negassi M, Suarez-Ibarrola R, Hein S, Miernik A, Reiterer A. Application of artificial neural networks for automated analysis of cystoscopic images: a review of the current status and future prospects. World J Urol 2020; 38:2349-2358. [PMID: 31925551] [PMCID: PMC7508959] [DOI: 10.1007/s00345-019-03059-0] [Received: 07/25/2019] [Accepted: 12/13/2019] [Indexed: 12/12/2022]
Abstract
BACKGROUND Optimal detection and surveillance of bladder cancer (BCa) rely primarily on the cystoscopic visualization of bladder lesions. AI-assisted cystoscopy may improve image recognition and accelerate data acquisition. OBJECTIVE To provide a comprehensive review of machine learning (ML), deep learning (DL) and convolutional neural network (CNN) applications in cystoscopic image recognition. EVIDENCE ACQUISITION A detailed search of original articles was performed using the PubMed-MEDLINE database to identify recent English literature relevant to ML, DL and CNN applications in cystoscopic image recognition. EVIDENCE SYNTHESIS In total, two articles and one conference abstract were identified addressing the application of AI methods in cystoscopic image recognition. These investigations showed accuracies exceeding 90% for tumor detection; however, future work is necessary to incorporate these methods into AI-aided cystoscopy and to compare them with other tumor visualization tools. Furthermore, we present results from the RaVeNNA-4pi consortium initiative, which has extracted 4200 frames from 62 videos, analyzed them with the U-Net network, and achieved an average dice score of 0.67. Precision can be improved by augmenting the video/frame database. CONCLUSION AI-aided cystoscopy has the potential to outperform urologists at recognizing and classifying bladder lesions. To ensure real-life implementation, however, these algorithms require external validation to generalize their results across other data sets.
Affiliation(s)
- Misgana Negassi
- Department of Sustainable Systems Engineering INATECH, University of Freiburg, Emmy-Noether-Straße 2, Freiburg, Germany
- Department Object and Shape Detection, Fraunhofer Institute for Physical Measurement Techniques IPM, Heidenhofstraße 8, Freiburg, Germany
- Rodrigo Suarez-Ibarrola
- Department of Urology, Faculty of Medicine, University of Freiburg-Medical Centre, Hugstetter Str. 55, Freiburg, Germany
- Simon Hein
- Department of Urology, Faculty of Medicine, University of Freiburg-Medical Centre, Hugstetter Str. 55, Freiburg, Germany
- Arkadiusz Miernik
- Department of Urology, Faculty of Medicine, University of Freiburg-Medical Centre, Hugstetter Str. 55, Freiburg, Germany
- Alexander Reiterer
- Department of Sustainable Systems Engineering INATECH, University of Freiburg, Emmy-Noether-Straße 2, Freiburg, Germany
- Department Object and Shape Detection, Fraunhofer Institute for Physical Measurement Techniques IPM, Heidenhofstraße 8, Freiburg, Germany
21
COVID-19 Deep Learning Prediction Model Using Publicly Available Radiologist-Adjudicated Chest X-Ray Images as Training Data: Preliminary Findings. Int J Biomed Imaging 2020; 2020:8828855. [PMID: 32849861] [PMCID: PMC7439162] [DOI: 10.1155/2020/8828855] [Received: 04/11/2020] [Revised: 07/27/2020] [Accepted: 08/07/2020] [Indexed: 12/23/2022]
Abstract
The key component in deep learning research is the availability of training data sets. With a limited number of publicly available COVID-19 chest X-ray images, the generalization and robustness of deep learning models developed to detect COVID-19 cases from these images are questionable. We aimed to use thousands of readily available chest radiograph images with clinical findings associated with COVID-19 as a training data set, mutually exclusive from the images with confirmed COVID-19 cases, which were used as the testing data set. We used a deep learning model based on the ResNet-101 convolutional neural network architecture, pretrained to recognize objects from a million images and then retrained to detect abnormality in chest X-ray images. The performance of the model in terms of area under the receiver operating characteristic curve, sensitivity, specificity, and accuracy was 0.82, 77.3%, 71.8%, and 71.9%, respectively. The strength of this study lies in the use of labels that have a strong clinical association with COVID-19 cases and the use of mutually exclusive publicly available data for training, validation, and testing.
22
Farhat H, Sakr GE, Kilany R. Deep learning applications in pulmonary medical imaging: recent updates and insights on COVID-19. Machine Vision and Applications 2020; 31:53. [PMID: 32834523] [PMCID: PMC7386599] [DOI: 10.1007/s00138-020-01101-5] [Received: 02/18/2020] [Revised: 06/21/2020] [Accepted: 07/07/2020] [Indexed: 05/07/2023]
Abstract
Shortly after deep learning algorithms were applied to image analysis, and more importantly to medical imaging, their applications increased significantly to become a trend. Likewise, deep learning (DL) applications on pulmonary medical images have achieved remarkable advances leading to promising clinical trials, and coronavirus may be the real trigger that opens the route for fast integration of DL in hospitals and medical centers. This paper reviews the development of deep learning applications in medical image analysis targeting pulmonary imaging and gives insights into contributions to COVID-19. It covers more than 160 contributions and surveys in this field, all issued between February 2017 and May 2020 inclusive, highlighting various deep learning tasks such as classification, segmentation, and detection, as well as different pulmonary pathologies such as airway diseases, lung cancer, COVID-19 and other infections. It summarizes and discusses the current state-of-the-art approaches in this research domain, highlighting the challenges, especially given the current COVID-19 pandemic.
Affiliation(s)
- Hanan Farhat
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- George E. Sakr
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- Rima Kilany
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
23
Yudistira N, Kavitha M, Itabashi T, Iwane AH, Kurita T. Prediction of Sequential Organelles Localization under Imbalance using A Balanced Deep U-Net. Sci Rep 2020; 10:2626. [PMID: 32060319] [PMCID: PMC7021757] [DOI: 10.1038/s41598-020-59285-9] [Received: 10/24/2019] [Accepted: 01/27/2020] [Indexed: 01/17/2023]
Abstract
Assessing the structure and function of organelles in the primitive unicellular red alga Cyanidioschyzon merolae from three-dimensional sequential images demands a reliable automated technique that copes with the class imbalance among cellular structures during mitosis. Existing classification networks with commonly used loss functions focus on the more numerous cellular structures, which makes such systems unreliable. Hence, we propose a balanced deep regularized weighted compound dice loss (RWCDL) network for better localization of cell organelles. Specifically, as the main contribution of this study, we introduce two new loss functions, compound dice (CD) and RWCD, by implementing a multi-class variant of the dice loss and a weighting mechanism, respectively, to maximize the weights of the peroxisome and nucleus among the five classes. We extended a U-Net-like convolutional neural network (CNN) architecture to evaluate the ability of the proposed loss functions to improve segmentation. The feasibility of the approach is confirmed on three large-scale mitotic-cycle data sets with different numbers of occurrences of cell organelles, and the training behaviour of the designed architectures is compared with the ground-truth segmentation using various performance measures. The balanced RWCDL network generated the highest area under the curve (AUC) in elevating the small and obscure peroxisome and nucleus, 30% higher than networks with the commonly used mean square error (MSE) and dice loss (DL) functions. The experimental results indicate that the proposed approach can efficiently identify cellular structures even when the contours between cells are obscure, suggesting that the balanced deep RWCDL approach is reliable and can help biologists accurately identify the relationship between cell behaviour and the structure of cell organelles during mitosis.
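A class-weighted multi-class dice loss of the kind this abstract describes, where under-represented classes (e.g. peroxisome and nucleus) receive larger weights, can be sketched as below. This is our own hedged illustration of the general idea, not the paper's RWCDL implementation; the function name, shapes, and weights are assumptions.

```python
# Illustrative class-weighted multi-class dice loss: per-class dice scores are
# combined with a weight vector so that rare classes dominate the loss.
import numpy as np

def weighted_dice_loss(probs, onehot, weights, eps=1e-7):
    """probs, onehot: arrays of shape (N, C) with class probabilities and
    one-hot targets; weights: length-C array. Returns 1 - weighted mean dice."""
    inter = (probs * onehot).sum(axis=0)            # per-class intersection
    denom = probs.sum(axis=0) + onehot.sum(axis=0)  # per-class cardinality
    dice_per_class = (2.0 * inter + eps) / (denom + eps)
    w = np.asarray(weights, dtype=float)
    return 1.0 - float((w * dice_per_class).sum() / w.sum())

# Two classes, perfect prediction: loss is 0 regardless of the weighting.
p = np.array([[1.0, 0.0], [0.0, 1.0]])
t = np.array([[1.0, 0.0], [0.0, 1.0]])
print(round(weighted_dice_loss(p, t, [1.0, 3.0]), 6))  # 0.0
```

Up-weighting the rare class (here, weight 3.0) makes errors on it cost proportionally more, which is the mechanism the abstract credits for elevating the small peroxisome and nucleus classes.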
Affiliation(s)
- Novanto Yudistira
- Hiroshima University, Department of Information Engineering, Higashi Hiroshima, 739-8521, Japan.
- Universitas Brawijaya, Fakultas Ilmu Komputer, Malang, 65145, Indonesia.
- Muthusubash Kavitha
- Hiroshima University, Department of Information Engineering, Higashi Hiroshima, 739-8521, Japan
- Takeshi Itabashi
- Riken, Center for Biosystems Dynamics Research, Laboratory for Cell Field Structure, Higashi Hiroshima, 739-0046, Japan
- Hiroshima University, Graduate School of Integrated Sciences for Life, Higashi Hiroshima, 739-0046, Japan
- Osaka University, Graduate School of Frontier Biosciences, Osaka, 565-0871, Japan
- Atsuko H Iwane
- Riken, Center for Biosystems Dynamics Research, Laboratory for Cell Field Structure, Higashi Hiroshima, 739-0046, Japan
- Hiroshima University, Graduate School of Integrated Sciences for Life, Higashi Hiroshima, 739-0046, Japan
- Osaka University, Graduate School of Frontier Biosciences, Osaka, 565-0871, Japan
- Takio Kurita
- Hiroshima University, Department of Information Engineering, Higashi Hiroshima, 739-8521, Japan
24
Hesamian MH, Jia W, He X, Kennedy P. Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges. J Digit Imaging 2019; 32:582-596. [PMID: 31144149] [PMCID: PMC6646484] [DOI: 10.1007/s10278-019-00227-x] [Indexed: 02/07/2023]
Abstract
Deep learning is by now firmly established as a robust tool for image segmentation. It has been widely used to separate homogeneous areas as the first and critical component of diagnosis and treatment pipelines. In this article, we present a critical appraisal of popular methods that have employed deep learning techniques for medical image segmentation, summarize the most common challenges incurred, and suggest possible solutions.
Affiliation(s)
- Mohammad Hesam Hesamian
- School of Electrical and Data Engineering (SEDE), University of Technology Sydney, 2007, Sydney, Australia.
- CB11.09, University of Technology Sydney, 81 Broadway, Ultimo NSW, 2007, Sydney, Australia.
- Wenjing Jia
- School of Electrical and Data Engineering (SEDE), University of Technology Sydney, 2007, Sydney, Australia
- Xiangjian He
- School of Electrical and Data Engineering (SEDE), University of Technology Sydney, 2007, Sydney, Australia
- Paul Kennedy
- School of Software, University of Technology Sydney, 2007, Sydney, Australia
25
Tureckova A, Rodríguez-Sánchez AJ. ISLES Challenge: U-Shaped Convolution Neural Network with Dilated Convolution for 3D Stroke Lesion Segmentation. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries 2019. [DOI: 10.1007/978-3-030-11723-8_32] [Indexed: 01/11/2023]
26
Qin C, Yao D, Shi Y, Song Z. Computer-aided detection in chest radiography based on artificial intelligence: a survey. Biomed Eng Online 2018; 17:113. [PMID: 30134902] [PMCID: PMC6103992] [DOI: 10.1186/s12938-018-0544-y] [Received: 05/02/2018] [Accepted: 08/13/2018] [Indexed: 11/10/2022]
Abstract
As the most common examination tool in medical practice, chest radiography has important clinical value in the diagnosis of disease, and the automatic detection of chest disease from chest radiographs has become one of the hot topics in medical imaging research. Focusing on clinical applications, this study conducts a comprehensive survey of computer-aided detection (CAD) systems, and especially of the artificial intelligence technology applied to chest radiography. The paper presents several common chest X-ray datasets and briefly introduces general image preprocessing procedures, such as contrast enhancement and segmentation, and bone suppression techniques that are applied to chest radiography. Then, CAD systems for the detection of specific diseases (pulmonary nodules, tuberculosis, and interstitial lung diseases) and of multiple diseases are described, focusing on the basic principles of the algorithms, the data used in the studies, the evaluation measures, and the results. Finally, the paper summarizes CAD systems in chest radiography based on artificial intelligence and discusses existing problems and trends.
Affiliation(s)
- Chunli Qin
- School of Basic Medical Sciences, Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Demin Yao
- School of Basic Medical Sciences, Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Yonghong Shi
- School of Basic Medical Sciences, Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Zhijian Song
- School of Basic Medical Sciences, Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China