1. Buongiorno R, Del Corso G, Germanese D, Colligiani L, Python L, Romei C, Colantonio S. Enhancing COVID-19 CT Image Segmentation: A Comparative Study of Attention and Recurrence in UNet Models. J Imaging 2023; 9:283. PMID: 38132701; PMCID: PMC10744014; DOI: 10.3390/jimaging9120283.
Abstract
Imaging plays a key role in the clinical management of Coronavirus disease 2019 (COVID-19), as the imaging findings reflect the pathological process in the lungs. Visual analysis of High-Resolution Computed Tomography (HRCT) of the chest allows for the differentiation of the parenchymal abnormalities of COVID-19, which must be detected and quantified to obtain accurate disease stratification and prognosis. However, visual assessment and quantification are time-consuming tasks for radiologists. In this regard, tools for semi-automatic segmentation, such as those based on Convolutional Neural Networks, can facilitate the detection of pathological lesions by delineating their contours. In this work, we compared four state-of-the-art Convolutional Neural Networks based on the encoder-decoder paradigm for the binary segmentation of COVID-19 infections, training and testing them on 90 HRCT volumetric scans of patients diagnosed with COVID-19, collected from the database of the Pisa University Hospital. More precisely, we started from a basic model, the well-known UNet; we then added an attention mechanism to obtain an Attention-UNet, and finally employed a recurrence paradigm to create a Recurrent-Residual UNet (R2-UNet). In the latter case, we also added attention gates to the decoding path of the R2-UNet, thus designing an R2-Attention UNet, to make feature representation and accumulation more effective. We compared the models to understand both which mechanism leads a neural model to the best performance on this task and which offers a good compromise among the amount of data, time, and computational resources required. We set up a five-fold cross-validation and assessed the strengths and limitations of these models by evaluating their performance in terms of Dice score, Precision, and Recall, defined both on 2D images and on the entire 3D volume.
From the results of the analysis, it can be concluded that Attention-UNet outperforms the other models, achieving the best 2D Dice score of 81.93% on the test set. Additionally, we conducted a statistical analysis to assess the performance differences among the models. Our findings suggest that integrating the recurrence mechanism within the UNet architecture leads to a decline in the model's effectiveness for our particular application.
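The per-image metrics used in this study can be stated compactly. The following is an illustrative sketch of our own (not the paper's code) of Dice score, Precision, and Recall for binary masks represented as flat 0/1 lists; the 3D variants apply the same counts over the whole stacked volume instead of slice by slice.

```python
# Illustrative per-image Dice, Precision, and Recall for binary masks.
# Masks are flat lists of 0/1 labels; all names here are our own.

def confusion_counts(pred, truth):
    """Count true positives, false positives, and false negatives."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return tp, fp, fn

def dice(pred, truth):
    tp, fp, fn = confusion_counts(pred, truth)
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

def precision(pred, truth):
    tp, fp, _ = confusion_counts(pred, truth)
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(pred, truth):
    tp, _, fn = confusion_counts(pred, truth)
    return tp / (tp + fn) if (tp + fn) else 0.0

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
print(round(dice(pred, truth), 3))  # 0.667 (tp=2, fp=1, fn=1)
```

Reporting a 2D score means averaging these per-slice values; a 3D score pools the voxel counts over the whole scan first, so the two can differ when lesion sizes vary across slices.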
Affiliation(s)
- Rossana Buongiorno
- Institute of Information Science and Technologies, National Research Council of Italy (ISTI-CNR), 56124 Pisa, PI, Italy
- Giulio Del Corso
- Institute of Information Science and Technologies, National Research Council of Italy (ISTI-CNR), 56124 Pisa, PI, Italy
- Danila Germanese
- Institute of Information Science and Technologies, National Research Council of Italy (ISTI-CNR), 56124 Pisa, PI, Italy
- Leonardo Colligiani
- Department of Translational Research, Academic Radiology, University of Pisa, 56124 Pisa, PI, Italy
- Lorenzo Python
- 2nd Radiology Unit, Pisa University Hospital, 56124 Pisa, PI, Italy
- Chiara Romei
- 2nd Radiology Unit, Pisa University Hospital, 56124 Pisa, PI, Italy
- Sara Colantonio
- Institute of Information Science and Technologies, National Research Council of Italy (ISTI-CNR), 56124 Pisa, PI, Italy
2. Chen M, Yi S, Yang M, Yang Z, Zhang X. UNet segmentation network of COVID-19 CT images with multi-scale attention. Math Biosci Eng 2023; 20:16762-16785. PMID: 37920033; DOI: 10.3934/mbe.2023747.
Abstract
In recent years, the global outbreak of COVID-19 has posed an extremely serious risk to human life; to maximize the diagnostic efficiency of physicians, it is therefore extremely valuable to investigate methods for lesion segmentation in COVID-19 images. Aiming at the problems of existing deep learning models, such as low segmentation accuracy, poor generalization performance, large parameter counts, and difficult deployment, we propose a UNet segmentation network integrating multi-scale attention for COVID-19 CT images. Specifically, the UNet model is used as the base network, and a multi-scale convolutional attention structure is proposed in the encoder stage to enhance the network's ability to capture multi-scale information. Second, a local channel attention module is proposed to extract spatial information by modeling local relationships to generate channel-domain weights, supplementing detailed information about the target region, reducing information redundancy, and enhancing important information. Moreover, the encoder uses the Meta-ACON activation function to avoid overfitting and to improve the model's representational ability. Extensive experimental results on publicly available mixed datasets show that, compared with current mainstream image segmentation algorithms, the proposed method more effectively improves the accuracy and generalization performance of COVID-19 lesion segmentation and provides help for medical diagnosis and analysis.
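To illustrate the general flavour of channel attention, here is a hypothetical, heavily simplified sketch of our own (not the paper's module, which learns its weighting with convolutions): each channel is summarized by a global statistic, squashed into a (0, 1) weight, and used to rescale that channel, so informative channels are emphasized and redundant ones suppressed.

```python
import math

# Hypothetical squeeze-and-excitation-style channel attention sketch.
# feature_maps: list of channels, each a 2D list of activations.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    weights = []
    for ch in feature_maps:
        # summarize each channel by its global average activation
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        # a real module would pass this through learned layers; we just squash it
        weights.append(sigmoid(mean))
    # rescale every value in a channel by that channel's weight
    return [[[v * w for v in row] for row in ch]
            for ch, w in zip(feature_maps, weights)]
```

A strongly activated channel keeps most of its signal (weight near 1), while a flat channel is damped toward 0.5 or below.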
Affiliation(s)
- Mingju Chen
- School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644002, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Yibin 644002, China
- Sihang Yi
- School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644002, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Yibin 644002, China
- Mei Yang
- Zigong Third People's Hospital, Zigong 643000, China
- Zhiwen Yang
- School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644002, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Yibin 644002, China
- Xingyue Zhang
- School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644002, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Yibin 644002, China
3. Saha S, Dutta S, Goswami B, Nandi D. ADU-Net: An Attention Dense U-Net based deep supervised DNN for automated lesion segmentation of COVID-19 from chest CT images. Biomed Signal Process Control 2023; 85:104974. PMID: 37122956; PMCID: PMC10121143; DOI: 10.1016/j.bspc.2023.104974.
Abstract
An automatic method for the qualitative and quantitative evaluation of chest Computed Tomography (CT) images is essential for diagnosing COVID-19 patients. We aim to develop an automated COVID-19 prediction framework using deep learning. We put forth a novel Deep Neural Network (DNN) composed of an attention-based dense U-Net with deep supervision for COVID-19 lung lesion segmentation from chest CT images. We incorporate a dense U-Net in which 5×5 convolution kernels are used instead of 3×3. Dense and transition blocks are introduced to implement a densely connected network at each encoder level, and an attention mechanism is applied between the encoder, the skip connections, and the decoder; together, these retain both high- and low-level features efficiently. The deep supervision mechanism creates secondary segmentation maps from the features at various resolution levels and combines them to produce a better final segmentation map. The trained DNN model takes the test data as input and generates a prediction output for COVID-19 lesion segmentation. The proposed model has been applied to the MedSeg COVID-19 chest CT segmentation dataset, with data pre-processing methods aiding the training process and improving performance. We compare the performance of the proposed DNN model with state-of-the-art models using well-known metrics: Dice coefficient, Jaccard coefficient, accuracy, specificity, sensitivity, and precision. The proposed model outperforms the state-of-the-art models. This new model may be considered an efficient automated screening system for COVID-19 diagnosis and can potentially improve patient health care and management.
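Deep supervision as described here typically reduces, at training time, to a weighted sum of segmentation losses over the side outputs. A minimal sketch of our own (the weights and values below are illustrative, not the paper's):

```python
# Illustrative deep-supervision objective: the total loss is a weighted sum
# of per-level segmentation losses from side outputs at several resolutions,
# so gradients reach intermediate decoder layers directly.

def deep_supervision_loss(level_losses, level_weights):
    """Combine per-level losses; weights typically favour the final output."""
    assert len(level_losses) == len(level_weights)
    return sum(w * l for w, l in zip(level_weights, level_losses))

# e.g. three side outputs plus the final map, final map weighted highest
total = deep_supervision_loss([0.40, 0.32, 0.25, 0.20], [0.1, 0.2, 0.3, 1.0])
```

During inference the side outputs are usually discarded and only the final map is kept.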
Affiliation(s)
- Sanjib Saha
- Department of Computer Science and Engineering, National Institute of Technology, Durgapur, 713209, West Bengal, India
- Department of Computer Science and Engineering, Dr. B. C. Roy Engineering College, Durgapur, 713206, West Bengal, India
- Subhadeep Dutta
- Department of Computer Science and Engineering, Dr. B. C. Roy Engineering College, Durgapur, 713206, West Bengal, India
- Biswarup Goswami
- Department of Respiratory Medicine, Health and Family Welfare, Government of West Bengal, Kolkata, 700091, West Bengal, India
- Debashis Nandi
- Department of Computer Science and Engineering, National Institute of Technology, Durgapur, 713209, West Bengal, India
4. Alshomrani S, Arif M, Al Ghamdi MA. SAA-UNet: Spatial Attention and Attention Gate UNet for COVID-19 Pneumonia Segmentation from Computed Tomography. Diagnostics (Basel) 2023; 13:1658. PMID: 37175049; PMCID: PMC10178408; DOI: 10.3390/diagnostics13091658.
Abstract
The COVID-19 pandemic has claimed numerous lives and wreaked havoc on the entire world due to the disease's transmissible nature. One of the complications of COVID-19 is pneumonia. Different radiography methods, particularly computed tomography (CT), have shown outstanding performance in effectively diagnosing pneumonia. In this paper, we propose a spatial attention and attention gate UNet model (SAA-UNet), inspired by spatial attention UNet (SA-UNet) and attention UNet (Att-UNet), to address the problem of infection segmentation in the lungs. The proposed method was applied to the MedSeg, Radiopaedia 9P, combined MedSeg and Radiopaedia 9P, and Zenodo 20P datasets. It showed good infection segmentation results (two classes: infection and background), with average Dice similarity coefficients of 0.85, 0.94, 0.91, and 0.93 and mean intersection over union (IoU) scores of 0.78, 0.90, 0.86, and 0.87, respectively, on the four datasets listed above. It also performed well in multi-class segmentation, with average Dice similarity coefficients of 0.693, 0.89, 0.87, and 0.93 and IoU scores of 0.68, 0.87, 0.78, and 0.89 on the four datasets, respectively. Binary classification accuracies of more than 97% were achieved for all four datasets, with F1-scores of 0.865, 0.943, 0.917, and 0.926, respectively; for multi-class classification, accuracies of more than 96% were achieved on all four datasets. The experimental results showed that the proposed framework can effectively and efficiently segment COVID-19 infection on CT images with different contrast, and can thereby aid in diagnosing and treating pneumonia caused by COVID-19.
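The Dice coefficient and IoU reported together here are, per image, deterministically related: Dice = 2·IoU / (1 + IoU) and IoU = Dice / (2 − Dice). A quick check of that identity (the numeric values are illustrative; note that dataset averages of the two metrics need not satisfy the identity exactly, since the relation is nonlinear):

```python
# Per-mask relation between the Dice coefficient and intersection-over-union.

def dice_from_iou(iou):
    return 2 * iou / (1 + iou)

def iou_from_dice(dice):
    return dice / (2 - dice)

# an IoU of 0.78 corresponds to a per-mask Dice of about 0.876
print(round(dice_from_iou(0.78), 3))  # 0.876
```

Because of this monotone relation, the two metrics always rank models identically on a single mask; they diverge only when averaged over many masks of different sizes.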
Affiliation(s)
- Shroog Alshomrani
- Department of Computer Science, Umm Al-Qura University, Makkah 24382, Saudi Arabia
- Muhammad Arif
- Department of Computer Science, Umm Al-Qura University, Makkah 24382, Saudi Arabia
- Mohammed A Al Ghamdi
- Department of Computer Science, Umm Al-Qura University, Makkah 24382, Saudi Arabia
5. Khattab R, Abdelmaksoud IR, Abdelrazek S. Deep Convolutional Neural Networks for Detecting COVID-19 Using Medical Images: A Survey. New Gener Comput 2023; 41:343-400. PMID: 37229176; PMCID: PMC10071474; DOI: 10.1007/s00354-023-00213-6.
Abstract
Coronavirus Disease 2019 (COVID-19), which is caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), surprised the world in December 2019 and has threatened the lives of millions of people. Countries all over the world closed worship places and shops, prevented gatherings, and implemented curfews to curb the spread of COVID-19. Deep Learning (DL) and Artificial Intelligence (AI) can play a great role in detecting and fighting this disease. Deep learning can be used to detect COVID-19 symptoms and signs in different imaging modalities, such as X-ray, Computed Tomography (CT), and Ultrasound (US) images, which could help in identifying COVID-19 cases as a first step toward treating them. In this paper, we reviewed the research studies conducted from January 2020 to September 2022 on deep learning models used in COVID-19 detection. The paper describes the three most common imaging modalities (X-ray, CT, and US) in addition to the DL approaches used in this detection task, compares these approaches, and outlines future directions for the field in the fight against COVID-19.
Affiliation(s)
- Rana Khattab
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Islam R. Abdelmaksoud
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Samir Abdelrazek
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
6. PDAtt-Unet: Pyramid Dual-Decoder Attention Unet for Covid-19 infection segmentation from CT-scans. Med Image Anal 2023; 86:102797. PMID: 36966605; PMCID: PMC10027962; DOI: 10.1016/j.media.2023.102797.
Abstract
Since the emergence of the Covid-19 pandemic in late 2019, medical imaging has been widely used to analyse this disease. Indeed, CT-scans of the lungs can help diagnose, detect, and quantify Covid-19 infection. In this paper, we address the segmentation of Covid-19 infection from CT-scans. To improve the performance of the Att-Unet architecture and maximize the use of the Attention Gate, we propose the PAtt-Unet and DAtt-Unet architectures. PAtt-Unet exploits input pyramids to preserve spatial awareness in all of the encoder layers, while DAtt-Unet is designed to guide the segmentation of Covid-19 infection inside the lung lobes. We also combine these two architectures into a single one, which we refer to as PDAtt-Unet, and propose a hybrid loss function to overcome the blurry segmentation of boundary pixels of Covid-19 infection. The proposed architectures were tested on four datasets under two evaluation scenarios (intra- and cross-dataset). Experimental results showed that both PAtt-Unet and DAtt-Unet improve on the performance of Att-Unet in segmenting Covid-19 infections. Moreover, the combined architecture, PDAtt-Unet, led to further improvement. For comparison with other methods, three baseline segmentation architectures (Unet, Unet++, and Att-Unet) and three state-of-the-art architectures (InfNet, SCOATNet, and nCoVSegNet) were tested. The comparison showed the superiority of the proposed PDAtt-Unet trained with the proposed hybrid loss (PDEAtt-Unet) over all other methods, and its ability to overcome various challenges in segmenting Covid-19 infections across the four datasets and both evaluation scenarios.
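The Attention Gate these architectures build on combines a skip-connection map with a coarser gating signal to compute per-pixel attention coefficients. Below is a hypothetical, heavily simplified sketch of our own (single-channel 2D maps, scalar weights `w_x`, `w_g`, `psi` standing in for learned convolutions; real gates also resample the gating signal to the skip connection's resolution):

```python
import math

# Simplified additive attention gate in the spirit of Att-Unet.
# x: skip-connection map, g: gating signal from the coarser decoder level.

def attention_gate(x, g, w_x=1.0, w_g=1.0, psi=1.0):
    """Return x scaled element-wise by attention coefficients alpha in (0, 1)."""
    rows, cols = len(x), len(x[0])
    gated = []
    for i in range(rows):
        row = []
        for j in range(cols):
            q = max(0.0, w_x * x[i][j] + w_g * g[i][j])   # additive attention + ReLU
            alpha = 1.0 / (1.0 + math.exp(-psi * q))      # sigmoid -> coefficient
            row.append(alpha * x[i][j])                   # suppress irrelevant regions
        gated.append(row)
    return gated
```

Pixels where both the skip features and the gating signal respond get coefficients near 1 and pass through; pixels where they disagree are damped, which is how the gate focuses the decoder on the infection region.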
7. Carmo D, Ribeiro J, Dertkigil S, Appenzeller S, Lotufo R, Rittner L. A Systematic Review of Automated Segmentation Methods and Public Datasets for the Lung and its Lobes and Findings on Computed Tomography Images. Yearb Med Inform 2022; 31:277-295. PMID: 36463886; PMCID: PMC9719778; DOI: 10.1055/s-0042-1742517.
Abstract
OBJECTIVES Automated computational segmentation of the lung, its lobes, and its findings in X-ray-based computed tomography (CT) images is a challenging problem with important applications, including medical research, surgical planning, and diagnostic decision support. With the increase in large imaging cohorts and the need for fast and robust evaluation of normal and abnormal lungs and their lobes, several authors have proposed automated methods for lung assessment on CT images. In this paper we provide a comprehensive summary of these methods. METHODS We used a systematic approach to perform an extensive review of automated lung segmentation methods. We used Scopus and PubMed to conduct our review and included methods that segment the lung parenchyma, lobes, or internal disease-related findings. The review was not limited by date; rather, we included only methods providing quantitative evaluation. RESULTS We organized and classified all 234 included articles into categories according to their methodological similarities. We provide summaries of quantitative evaluations, public datasets, and evaluation metrics, along with overall statistics indicating recent research directions in the field. CONCLUSIONS We noted the rise of data-driven models in the last decade, especially due to the deep learning trend, which has increased the demand for high-quality data annotation. This has instigated an increase in semi-supervised and uncertainty-guided works that try to be less dependent on human annotation. In addition, the question of how to evaluate the robustness of data-driven methods remains open, given that evaluations derived from specific datasets are not general.
Affiliation(s)
- Diedre Carmo
- School of Electrical and Computer Engineering, University of Campinas, Brazil
- Jean Ribeiro
- School of Electrical and Computer Engineering, University of Campinas, Brazil
- Roberto Lotufo
- School of Electrical and Computer Engineering, University of Campinas, Brazil
- Leticia Rittner
- School of Electrical and Computer Engineering, University of Campinas, Brazil
- Correspondence to: Leticia Rittner, Av. Albert Einstein, 400, Cidade Universitária Zeferino Vaz, Barão Geraldo, Campinas, SP 13083-852, Brazil
8. Hussain MA, Mirikharaji Z, Momeny M, Marhamati M, Neshat AA, Garbi R, Hamarneh G. Active deep learning from a noisy teacher for semi-supervised 3D image segmentation: Application to COVID-19 pneumonia infection in CT. Comput Med Imaging Graph 2022; 102:102127. PMID: 36257092; PMCID: PMC9540707; DOI: 10.1016/j.compmedimag.2022.102127.
Abstract
Supervised deep learning has become a standard approach to solving medical image segmentation tasks. However, serious difficulties in attaining pixel-level annotations for sufficiently large volumetric datasets in real-life applications have highlighted the critical need for alternative approaches, such as semi-supervised learning, where model training can leverage small expert-annotated datasets to enable learning from much larger datasets without laborious annotation. Most of the semi-supervised approaches combine expert annotations and machine-generated annotations with equal weights within deep model training, despite the latter annotations being relatively unreliable and likely to affect model optimization negatively. To overcome this, we propose an active learning approach that uses an example re-weighting strategy, where machine-annotated samples are weighted (i) based on the similarity of their gradient directions of descent to those of expert-annotated data, and (ii) based on the gradient magnitude of the last layer of the deep model. Specifically, we present an active learning strategy with a query function that enables the selection of reliable and more informative samples from machine-annotated batch data generated by a noisy teacher. When validated on clinical COVID-19 CT benchmark data, our method improved the performance of pneumonia infection segmentation compared to the state of the art.
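Criterion (i) above, weighting a machine-annotated sample by how well its gradient direction agrees with the gradient computed on expert-annotated data, can be sketched as follows. This is our own illustration, not the paper's implementation; gradients are plain lists here, and the clipping rule is one simple choice:

```python
import math

# Weight a machine-annotated sample by the cosine similarity between its
# loss gradient and the gradient computed on expert-annotated data.

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def sample_weight(sample_grad, expert_grad):
    """Clip negative similarities: gradients that conflict with the
    expert-data descent direction contribute nothing to the update."""
    return max(0.0, cosine_similarity(sample_grad, expert_grad))
```

A sample whose gradient points the same way as the expert-data gradient gets weight 1, an orthogonal one around 0, and a conflicting one exactly 0, so unreliable teacher labels cannot drag the optimization off course.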
Affiliation(s)
- Zahra Mirikharaji
- Medical Image Analysis Lab, Simon Fraser University, Burnaby, BC V5A 1S6, Canada
- Rafeef Garbi
- BiSICL, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Ghassan Hamarneh
- Medical Image Analysis Lab, Simon Fraser University, Burnaby, BC V5A 1S6, Canada
9. Lu X, Xu Y, Yuan W. DBF-Net: a semi-supervised dual-task balanced fusion network for segmenting infected regions from lung CT images. Evolving Systems 2022; 14:519-532. PMID: 37193370; PMCID: PMC9483907; DOI: 10.1007/s12530-022-09466-w.
Abstract
Accurate segmentation of infected regions in lung computed tomography (CT) images is essential to improve the timeliness and effectiveness of treatment for coronavirus disease 2019 (COVID-19). However, the main difficulties in developing lung lesion segmentation for COVID-19 remain the fuzzy boundary of the infected region, the low contrast between the infected region and normal tissue, and the difficulty of obtaining labeled data. To this end, we propose a novel dual-task consistent network framework that uses multiple inputs to continuously learn and extract lung infection region features, which are used to generate reliable label images (pseudo-labels) and expand the dataset. Specifically, we periodically feed multiple sets of raw and data-augmented images into the two trunk branches of the network; the characteristics of the lung infection region are extracted in the backbone by a lightweight double convolution (LDC) module and fusiform equilibrium fusion pyramid (FEFP) convolution. Based on the learned features, the infected regions are segmented and pseudo-labels are created following a semi-supervised learning strategy, which effectively alleviates the problem of unlabeled data. Our proposed semi-supervised dual-task balanced fusion network (DBF-Net) creates pseudo-labels on the COVID-SemiSeg dataset and the COVID-19 CT segmentation dataset. In lung infection segmentation, the DBF-Net model achieves a segmentation sensitivity of 70.6% and a specificity of 92.8%. These results indicate that the proposed network greatly enhances the segmentation of COVID-19 infection.
Affiliation(s)
- Xiaoyan Lu
- College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou, People's Republic of China
- Yang Xu
- College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou, People's Republic of China
- Guiyang Aluminum Magnesium Design and Research Institute Co., Ltd, Guiyang, Guizhou, People's Republic of China
- Wenhao Yuan
- College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou, People's Republic of China
10. Qayyum A, Lalande A, Meriaudeau F. Effective multiscale deep learning model for COVID19 segmentation tasks: A further step towards helping radiologist. Neurocomputing 2022; 499:63-80. PMID: 35578654; PMCID: PMC9095500; DOI: 10.1016/j.neucom.2022.05.009.
Abstract
Infection by SARS-CoV-2 leading to COVID-19 disease is still rising, and techniques to diagnose or evaluate the disease are still thoroughly investigated. The use of CT as a complementary tool to other biological tests is still under scrutiny, as CT scans are prone to many false positives: other lung diseases display similar characteristics on CT scans. However, fully investigating CT images is of tremendous interest for better understanding disease progression, and thousands of scans therefore need to be segmented by radiologists to study infected areas. Over the last year, many deep learning models for segmenting lung CT were developed. Unfortunately, the lack of large, shared, annotated multicentric datasets led to models that were either under-tested (small datasets) or not properly compared (custom metrics, no shared dataset), often resulting in poor generalization performance. To address these issues, we developed a model that uses a multiscale and multilevel feature extraction strategy for COVID-19 segmentation and extensively validated it on several datasets to assess its generalization capability for other segmentation tasks on similar organs. The proposed model uses a novel encoder and decoder with a kernel-based atrous spatial pyramid pooling module at the bottom of the model to extract small features, together with a multistage skip-connection concatenation approach. The results proved that the proposed model can be applied to a small-scale dataset and still produce generalizable performance on other segmentation tasks. It achieved a Dice score of 90% on a 100-case dataset, 95% on the NSCLC dataset, 88.49% on the COVID-19 dataset, and 97.33% on the StructSeg 2019 dataset, compared to existing state-of-the-art models. The proposed solution could be used for COVID-19 segmentation in clinical applications.
The source code is publicly available at https://github.com/RespectKnowledge/Mutiscale-based-Covid-_segmentation-usingDeep-Learning-models.
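The atrous (dilated) convolution underlying the pyramid-pooling module mentioned above widens the receptive field without adding parameters. A minimal 1D sketch of our own (not taken from the cited repository):

```python
# Illustrative 1D dilated ("atrous") convolution: the same 3-tap kernel
# covers a wider context as the dilation rate grows, at no parameter cost.
# No padding is applied, so the output shrinks by (len(kernel)-1)*dilation.

def dilated_conv1d(signal, kernel, dilation):
    span = (len(kernel) - 1) * dilation
    return [sum(k * signal[i + d * dilation] for d, k in enumerate(kernel))
            for i in range(len(signal) - span)]

# dilation 1 reads adjacent samples; dilation 2 skips every other sample
print(dilated_conv1d([1.0, 2.0, 3.0, 4.0, 5.0], [1.0, 1.0, 1.0], 1))  # [6.0, 9.0, 12.0]
```

An ASPP-style module runs several such convolutions with different dilation rates in parallel and concatenates their outputs, capturing both fine and coarse context at once.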
Affiliation(s)
- Abdul Qayyum
- ImViA Laboratory, University of Bourgogne Franche-Comté, Dijon, France
- Alain Lalande
- ImViA Laboratory, University of Bourgogne Franche-Comté, Dijon, France
- Medical Imaging Department, University Hospital of Dijon, Dijon, France
11. Suri JS, Agarwal S, Chabert GL, Carriero A, Paschè A, Danna PSC, Saba L, Mehmedović A, Faa G, Singh IM, Turk M, Chadha PS, Johri AM, Khanna NN, Mavrogeni S, Laird JR, Pareek G, Miner M, Sobel DW, Balestrieri A, Sfikakis PP, Tsoulfas G, Protogerou AD, Misra DP, Agarwal V, Kitas GD, Teji JS, Al-Maini M, Dhanjil SK, Nicolaides A, Sharma A, Rathore V, Fatemi M, Alizad A, Krishnan PR, Nagy F, Ruzsa Z, Fouda MM, Naidu S, Viskovic K, Kalra MK. COVLIAS 1.0 Lesion vs. MedSeg: An Artificial Intelligence Framework for Automated Lesion Segmentation in COVID-19 Lung Computed Tomography Scans. Diagnostics (Basel) 2022; 12:1283. PMID: 35626438; PMCID: PMC9141749; DOI: 10.3390/diagnostics12051283.
Abstract
Background: COVID-19 is a disease with multiple variants that is quickly spreading throughout the world. It is crucial to identify patients suspected of having COVID-19 early, because the vaccine is not readily available in certain parts of the world. Methodology: Lung computed tomography (CT) imaging can be used to diagnose COVID-19 as an alternative to the RT-PCR test in some cases. The occurrence of ground-glass opacities in the lung region is a characteristic of COVID-19 in chest CT scans, and these are daunting to locate and segment manually. The proposed study combines solo deep learning (DL) and hybrid DL (HDL) models to tackle lesion localization and segmentation more quickly. One DL and four HDL models (PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet) were trained by an expert radiologist. The training scheme adopted a fivefold cross-validation strategy on a cohort of 3000 images selected from a set of 40 COVID-19-positive individuals. Results: The proposed variability study uses tracings from two trained radiologists as part of the validation. Five artificial intelligence (AI) models were benchmarked against MedSeg. The best AI model, ResNet-UNet, was superior to MedSeg by 9% and 15% for Dice and Jaccard, respectively, when compared against MD 1, and by 4% and 8%, respectively, when compared against MD 2. Statistical tests (the Mann-Whitney test, paired t-test, and Wilcoxon test) demonstrated its stability and reliability, with p < 0.0001. The online processing time for each slice was <1 s. Conclusions: The AI models reliably located and segmented COVID-19 lesions in CT scans, and the COVLIAS 1.0 Lesion locator passed the inter-variability test.
Collapse
Affiliation(s)
- Jasjit S. Suri
- Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA
- Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA
- Sushant Agarwal
- Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA
- Department of Computer Science Engineering, PSIT, Kanpur 209305, India
- Gian Luca Chabert
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
- Alessandro Carriero
- Department of Radiology, “Maggiore della Carità” Hospital, University of Piemonte Orientale (UPO), Via Solaroli 17, 28100 Novara, Italy
- Alessio Paschè
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
- Pietro S. C. Danna
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
- Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
- Armin Mehmedović
- University Hospital for Infectious Diseases, 10000 Zagreb, Croatia
- Gavino Faa
- Department of Pathology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
- Inder M. Singh
- Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA
- Monika Turk
- The Hanse-Wissenschaftskolleg Institute for Advanced Study, 27753 Delmenhorst, Germany
- Paramjit S. Chadha
- Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA
- Amer M. Johri
- Department of Medicine, Division of Cardiology, Queen’s University, Kingston, ON K7L 3N6, Canada
- Narendra N. Khanna
- Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110076, India
- Sophie Mavrogeni
- Cardiology Clinic, Onassis Cardiac Surgery Center, 17674 Athens, Greece
- John R. Laird
- Heart and Vascular Institute, Adventist Health St. Helena, St Helena, CA 94574, USA
- Gyan Pareek
- Minimally Invasive Urology Institute, Brown University, Providence, RI 02912, USA
- Martin Miner
- Men’s Health Center, Miriam Hospital, Providence, RI 02906, USA
- David W. Sobel
- Minimally Invasive Urology Institute, Brown University, Providence, RI 02912, USA
- Antonella Balestrieri
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
- Petros P. Sfikakis
- Rheumatology Unit, National Kapodistrian University of Athens, 15772 Athens, Greece
- George Tsoulfas
- Department of Surgery, Aristoteleion University of Thessaloniki, 54124 Thessaloniki, Greece
- Athanasios D. Protogerou
- Cardiovascular Prevention and Research Unit, Department of Pathophysiology, National & Kapodistrian University of Athens, 15772 Athens, Greece
- Durga Prasanna Misra
- Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
- Vikas Agarwal
- Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
- George D. Kitas
- Academic Affairs, Dudley Group NHS Foundation Trust, Dudley DY1 2HQ, UK
- Arthritis Research UK Epidemiology Unit, Manchester University, Manchester M13 9PL, UK
- Jagjit S. Teji
- Ann and Robert H. Lurie Children’s Hospital of Chicago, Chicago, IL 60611, USA
- Mustafa Al-Maini
- Allergy, Clinical Immunology and Rheumatology Institute, Toronto, ON L4Z 4C4, Canada
- Andrew Nicolaides
- Vascular Screening and Diagnostic Centre, University of Nicosia Medical School, Nicosia 2408, Cyprus
- Aditya Sharma
- Division of Cardiovascular Medicine, University of Virginia, Charlottesville, VA 22908, USA
- Vijay Rathore
- AtheroPoint LLC, Roseville, CA 95661, USA
- Mostafa Fatemi
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Azra Alizad
- Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Ferenc Nagy
- Internal Medicine Department, University of Szeged, 6725 Szeged, Hungary
- Zoltan Ruzsa
- Invasive Cardiology Division, University of Szeged, 6725 Szeged, Hungary
- Mostafa M. Fouda
- Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
- Subbaram Naidu
- Electrical Engineering Department, University of Minnesota, Duluth, MN 55812, USA
- Klaudija Viskovic
- University Hospital for Infectious Diseases, 10000 Zagreb, Croatia
- Manudeep K. Kalra
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA
12
Attallah O. A computer-aided diagnostic framework for coronavirus diagnosis using texture-based radiomics images. Digit Health 2022; 8:20552076221092543. [PMID: 35433024 PMCID: PMC9005822 DOI: 10.1177/20552076221092543] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2022] [Accepted: 03/21/2022] [Indexed: 12/14/2022] Open
Abstract
The accurate and rapid detection of the novel coronavirus (COVID-19) infection is very important to prevent the fast spread of the disease, thereby reducing the negative effects that have influenced many industrial sectors, especially healthcare. Artificial intelligence techniques, in particular deep learning, could help in the fast and precise diagnosis of coronavirus from computed tomography images. Most artificial intelligence-based studies used the original computed tomography images to build their models; however, the integration of texture-based radiomics images and deep learning techniques could improve diagnostic accuracy. This study proposes a computer-assisted diagnostic framework based on multiple deep learning and texture-based radiomics approaches. It first trains three Residual Network (ResNet) models with two types of texture-based radiomics images, generated with the discrete wavelet transform and the gray-level covariance matrix, instead of the original computed tomography images. Then, it fuses the sets of texture-based radiomics deep features extracted from each network using the discrete cosine transform. Thereafter, it further combines the fused texture-based radiomics deep features obtained from the three convolutional neural networks. Finally, three support vector machine classifiers are utilized for the classification procedure. The proposed method is validated experimentally on the benchmark severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) computed tomography image dataset. The accuracies attained indicate that using texture-based radiomics (gray-level covariance matrix, discrete wavelet transform) images for training the ResNet-18 (83.22%, 74.9%), ResNet-50 (80.94%, 78.39%), and ResNet-101 (80.54%, 77.99%) is better than using the original computed tomography images (70.34%, 76.51%, and 73.42%) for ResNet-18, ResNet-50, and ResNet-101, respectively.
Furthermore, the sensitivity, specificity, accuracy, precision, and F1-score achieved using the proposed computer-assisted diagnostic framework after the two fusion steps are 99.47%, 99.72%, 99.60%, 99.72%, and 99.60%, which shows that combining texture-based radiomics deep features obtained from the three ResNets has boosted its performance. Thus, fusing multiple texture-based radiomics deep features mined from several convolutional neural networks is better than using only one type of radiomics approach and a single convolutional neural network. The performance of the proposed computer-assisted diagnostic framework allows it to be used by radiologists in attaining a fast and accurate diagnosis.
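Gray-level texture matrices of the kind used above are typically built from counts of pixel-value pairs at a fixed offset, normalized to joint probabilities. A minimal NumPy sketch of such a matrix for one horizontal offset (illustrative image values; not the study's exact formulation) is:

```python
import numpy as np

def glcm(image: np.ndarray, levels: int, dx: int = 1, dy: int = 0) -> np.ndarray:
    """Gray-level co-occurrence matrix for a single pixel offset (dx, dy),
    normalized so entries are joint probabilities of gray-level pairs."""
    h, w = image.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

# Tiny 3-level image; horizontal neighbors give 6 pixel pairs in total.
img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 1]])
p = glcm(img, levels=3)
# The (2,2) pair occurs twice out of 6 -> p[2,2] = 1/3
```

Statistics of `p` (contrast, homogeneity, energy) are the usual scalar texture features; feeding the matrix itself to a CNN, as the framework above does, keeps the full pairwise distribution.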
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
13
Verma AK, Vamsi I, Saurabh P, Sudha R, G R S, S R. Wavelet and deep learning-based detection of SARS-nCoV from thoracic X-ray images for rapid and efficient testing. EXPERT SYSTEMS WITH APPLICATIONS 2021; 185:115650. [PMID: 34366576 PMCID: PMC8327617 DOI: 10.1016/j.eswa.2021.115650] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/21/2021] [Revised: 06/02/2021] [Accepted: 07/20/2021] [Indexed: 05/07/2023]
Abstract
This paper proposes a wavelet and artificial intelligence-enabled rapid and efficient testing procedure for patients with Severe Acute Respiratory Coronavirus Syndrome (SARS-nCoV) through a deep learning approach applied to thoracic X-ray images. Presently, the virus infection is diagnosed primarily by a process called real-time Reverse Transcriptase-Polymerase Chain Reaction (rRT-PCR), based on its genetic prints. This whole procedure takes a substantial amount of time to identify and diagnose the patients infected by the virus. The proposed research uses wavelet-based convolutional neural network (CNN) architectures to detect SARS-nCoV. The CNN is pre-trained on ImageNet and trained end-to-end using thoracic X-ray images. To execute the Discrete Wavelet Transform (DWT), mother wavelet functions from different families, namely Haar, Daubechies, Symlet, Biorthogonal, Coiflet, and Discrete Meyer, were considered. Two-level decomposition via the DWT is adopted to extract prominent features (peripheral and subpleural ground-glass opacities, often in the lower lobes) explicitly from the thoracic X-ray images, suppressing noise and further enhancing the signal-to-noise ratio. The proposed wavelet-based deep learning models, for both two-class instances (COVID vs. Normal) and four-class instances (COVID-19 vs. PNA bacterial vs. PNA viral vs. Normal), were validated on publicly available databases using the k-Fold Cross-Validation (k-Fold CV) technique. In addition to these X-ray images, images of recent COVID-19 patients were further used to examine the model's practicality and real-time feasibility in combating the current pandemic. It was observed that the two-level Symlet 7 approximation component manifested the highest test accuracy of 98.87%, followed by Biorthogonal 2.6 with an accuracy of 98.73%.
While the test accuracy for Symlet 7 and Biorthogonal 2.6 is high, Haar and Daubechies with two levels demonstrated excellent validation accuracy on unseen data. It was also observed that the precision, the recall rate, and the dice similarity coefficient for four-class instances were 98%, 98%, and 99%, respectively, using the proposed algorithm.
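The two-level approximation component described above is easiest to see with the Haar wavelet, whose orthonormal low-pass filter reduces the 2D approximation to scaled 2x2 block sums. This NumPy toy (Haar only; not the paper's Symlet/Biorthogonal pipeline) sketches the idea:

```python
import numpy as np

def haar2d_approx(x: np.ndarray) -> np.ndarray:
    """One-level 2D Haar approximation (LL subband): with the orthonormal
    low-pass filter (1/sqrt(2))(1, 1) applied along rows and columns, each
    LL coefficient equals the 2x2 block sum divided by 2."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2  # trim odd edges
    x = x[:h, :w]
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 2.0

img = np.arange(16, dtype=float).reshape(4, 4)
a1 = haar2d_approx(img)        # level-1 approximation, shape (2, 2)
a2 = haar2d_approx(a1)         # level-2 approximation, shape (1, 1)
# a2[0, 0] = (0 + 1 + ... + 15) / 4 = 30.0
```

Each level halves the spatial resolution while concentrating the low-frequency content, which is why the approximation component acts as a denoised version of the radiograph.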
Affiliation(s)
- Amar Kumar Verma
- Department of Electrical and Electronics, Birla Institute of Technology and Science-Pilani, Hyderabad Campus, 500078, India
- Inturi Vamsi
- Department of Mechanical Engineering, Birla Institute of Technology and Science-Pilani, Hyderabad Campus, 500078, India
- Prerna Saurabh
- Department of Computer Science and Engineering, Vellore Institute of Technology-Vellore Campus, Tamil Nadu, 632014, India
- Radhika Sudha
- Department of Electrical and Electronics, Birla Institute of Technology and Science-Pilani, Hyderabad Campus, 500078, India
- Sabareesh G R
- Department of Mechanical Engineering, Birla Institute of Technology and Science-Pilani, Hyderabad Campus, 500078, India
- Rajkumar S
- Department of Computer Science and Engineering, Vellore Institute of Technology-Vellore Campus, Tamil Nadu, 632014, India
14
Gudigar A, Raghavendra U, Nayak S, Ooi CP, Chan WY, Gangavarapu MR, Dharmik C, Samanth J, Kadri NA, Hasikin K, Barua PD, Chakraborty S, Ciaccio EJ, Acharya UR. Role of Artificial Intelligence in COVID-19 Detection. SENSORS (BASEL, SWITZERLAND) 2021; 21:8045. [PMID: 34884045 PMCID: PMC8659534 DOI: 10.3390/s21238045] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 11/26/2021] [Accepted: 11/26/2021] [Indexed: 12/15/2022]
Abstract
The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihood of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is also crucial in stopping the spread of the SARS-CoV-2 virus. Prior substantiation of artificial intelligence (AI) in various fields of science has encouraged researchers to further address this problem. Various medical imaging modalities including X-ray, computed tomography (CT) and ultrasound (US) using AI techniques have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review on state-of-the-art AI techniques applied with X-ray, CT, and US images to detect COVID-19. In this paper, we discuss approaches used by various authors and the significance of these research efforts, the potential challenges, and future trends related to the implementation of an AI system for disease detection during the COVID-19 pandemic.
Affiliation(s)
- Anjan Gudigar
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Raghavendra
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Sneha Nayak
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore
- Wai Yee Chan
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur 50603, Malaysia
- Mokshagna Rohit Gangavarapu
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Chinmay Dharmik
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Jyothi Samanth
- Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal 576104, India
- Nahrizul Adib Kadri
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
- Khairunnisa Hasikin
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
- Prabal Datta Barua
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
- School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Subrata Chakraborty
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Edward J. Ciaccio
- Department of Medicine, Columbia University Medical Center, New York, NY 10032, USA
- U. Rajendra Acharya
- School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
- International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto 860-8555, Japan
15
AWEU-Net: An Attention-Aware Weight Excitation U-Net for Lung Nodule Segmentation. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app112110132] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Lung cancer is a deadly cancer that causes millions of deaths every year around the world. Accurate lung nodule detection and segmentation in computed tomography (CT) images is a vital step for diagnosing lung cancer early. Most existing systems face several challenges, such as the heterogeneity in CT images and variation in nodule size, shape, and location, which limit their accuracy. In an attempt to handle these challenges, this article proposes a fully automated deep learning framework that consists of lung nodule detection and segmentation models. Our proposed system comprises two cascaded stages: (1) nodule detection based on fine-tuned Faster R-CNN to localize the nodules in CT images, and (2) nodule segmentation based on the U-Net architecture with two effective blocks, namely position attention-aware weight excitation (PAWE) and channel attention-aware weight excitation (CAWE), to enhance the ability to discriminate between nodule and non-nodule feature representations. The experimental results demonstrate that the proposed system yields a Dice score of 89.79% and 90.35%, and an intersection over union (IoU) of 82.34% and 83.21% on the publicly available LUNA16 and LIDC-IDRI datasets, respectively.
16
Herrmann P, Busana M, Cressoni M, Lotz J, Moerer O, Saager L, Meissner K, Quintel M, Gattinoni L. Using Artificial Intelligence for Automatic Segmentation of CT Lung Images in Acute Respiratory Distress Syndrome. Front Physiol 2021; 12:676118. [PMID: 34594233 PMCID: PMC8476971 DOI: 10.3389/fphys.2021.676118] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Accepted: 08/17/2021] [Indexed: 01/17/2023] Open
Abstract
Knowledge of gas volume, tissue mass and recruitability measured by quantitative CT scan analysis (CT-qa) is important when setting mechanical ventilation in acute respiratory distress syndrome (ARDS). Yet, manual segmentation of the lung requires a considerable workload. Our goal was to provide an automatic, clinically applicable and reliable lung segmentation procedure. Therefore, a convolutional neural network (CNN) was used to train an artificial intelligence (AI) algorithm on 15 healthy subjects (1,302 slices), 100 ARDS patients (12,279 slices), and 20 COVID-19 patients (1,817 slices). Eighty percent of this population was used for training, 20% for testing. The AI and manual segmentations were compared at slice level by intersection over union (IoU). The CT-qa variables were compared by regression and Bland-Altman analysis. The AI segmentation of a single patient required 5–10 s vs. 1–2 h for manual segmentation. At slice level, the algorithm showed on the test set an IoU across all CT slices of 91.3 ± 10.0, 85.2 ± 13.9, and 84.7 ± 14.0%, and across all lung volumes of 96.3 ± 0.6, 88.9 ± 3.1, and 86.3 ± 6.5% for normal lungs, ARDS and COVID-19, respectively, with a U-shape in the performance: better in the middle region of the lung, worse at the apex and base. At patient level, on the test set, the total lung volume measured by AI and manual segmentation had an R2 of 0.99 and a bias of −9.8 ml [CI: +56.0/−75.7 ml]. The recruitability, measured as the change in non-aerated tissue fraction, had a bias of +0.3% [CI: +6.2/−5.5%] between manual and AI segmentation, and a bias of −0.5% [CI: +2.3/−3.3%] when expressed as the change in well-aerated tissue fraction. The AI-powered lung segmentation provided fast and clinically reliable results. It is able to segment the lungs of seriously ill ARDS patients fully automatically.
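The Bland-Altman comparison reported above reduces to the mean of the paired differences (the bias) and its 95% limits of agreement (bias ± 1.96 SD). A minimal sketch with hypothetical lung-volume pairs (illustrative numbers, not the study's measurements) is:

```python
import numpy as np

def bland_altman(x: np.ndarray, y: np.ndarray):
    """Bland-Altman bias and 95% limits of agreement between two methods
    measuring the same quantity on the same subjects."""
    diff = x - y
    bias = diff.mean()
    sd = diff.std(ddof=1)                  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical total lung volumes (ml): AI vs. manual segmentation.
manual = np.array([3200.0, 2800.0, 4100.0, 3600.0])
ai = np.array([3210.0, 2790.0, 4120.0, 3605.0])
bias, (lo, hi) = bland_altman(ai, manual)
# diff = [10, -10, 20, 5] -> bias = 6.25 ml, limits = (-18.25, 30.75) ml
```

A bias near zero with narrow limits, as in the study's −9.8 ml result, indicates the two methods are interchangeable for clinical purposes.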
Affiliation(s)
- Peter Herrmann
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Mattia Busana
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Joachim Lotz
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
- Onnen Moerer
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Leif Saager
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Konrad Meissner
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Michael Quintel
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Department of Anesthesiology, DONAUISAR Klinikum Deggendorf, Deggendorf, Germany
- Luciano Gattinoni
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
17
Deep Learning-Based Prediction of Paresthesia after Third Molar Extraction: A Preliminary Study. Diagnostics (Basel) 2021; 11:diagnostics11091572. [PMID: 34573914 PMCID: PMC8469771 DOI: 10.3390/diagnostics11091572] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 08/25/2021] [Accepted: 08/28/2021] [Indexed: 01/04/2023] Open
Abstract
The purpose of this study was to determine whether convolutional neural networks (CNNs) can predict paresthesia of the inferior alveolar nerve from panoramic radiographic images taken before extraction of the mandibular third molar. The dataset consisted of a total of 300 preoperative panoramic radiographic images of patients with planned mandibular third molar extraction. A total of 100 images from patients who had paresthesia after tooth extraction were classified as Group 1, and 200 images from patients without paresthesia were classified as Group 2. The dataset was randomly divided into a training and validation set (n = 150 [50%]) and a test set (n = 150 [50%]). The SSD300 and ResNet-18 CNNs were used for deep learning. The average accuracy, sensitivity, specificity, and area under the curve were 0.827, 0.84, 0.82, and 0.917, respectively. This study revealed that CNNs can assist in the prediction of paresthesia of the inferior alveolar nerve after third molar extraction using panoramic radiographic images.
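Accuracy, sensitivity, and specificity as reported above follow directly from confusion-matrix counts. The sketch below uses illustrative counts chosen only to demonstrate the arithmetic on a 150-image test set (these are not the study's actual confusion matrix):

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts.
    Positive class here is "paresthesia" (Group 1)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts: 50 positives, 100 negatives in a 150-image test set.
acc, sens, spec = binary_metrics(tp=42, fp=18, tn=82, fn=8)
# acc = 124/150 ≈ 0.827, sens = 42/50 = 0.84, spec = 82/100 = 0.82
```

With a 1:2 class imbalance like this dataset's, accuracy alone can mislead, which is why the study reports sensitivity and specificity alongside the AUC.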
18
Sharafeldeen A, Elsharkawy M, Alghamdi NS, Soliman A, El-Baz A. Precise Segmentation of COVID-19 Infected Lung from CT Images Based on Adaptive First-Order Appearance Model with Morphological/Anatomical Constraints. SENSORS (BASEL, SWITZERLAND) 2021; 21:5482. [PMID: 34450923 PMCID: PMC8399192 DOI: 10.3390/s21165482] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 08/08/2021] [Accepted: 08/10/2021] [Indexed: 12/16/2022]
Abstract
A new segmentation technique is introduced for delineating the lung region in 3D computed tomography (CT) images. To accurately model the distribution of Hounsfield scale values within both the chest and lung regions, a new probabilistic model is developed that depends on a linear combination of Gaussians (LCG). Moreover, we modified the conventional expectation-maximization (EM) algorithm to run sequentially, estimating both the dominant Gaussian components (one for the lung region and one for the chest region) and the subdominant Gaussian components, which are used to refine the final estimated joint density. To estimate the marginal densities from the mixed density, a modified k-means clustering approach is employed to classify the subdominant Gaussian components, determining which components properly belong to the lung and which belong to the chest. The initial segmentation based on the LCG model is then refined by imposing 3D morphological constraints based on a 3D Markov-Gibbs random field (MGRF) with analytically estimated potentials. The proposed approach was tested on CT data from 32 coronavirus disease 2019 (COVID-19) patients. Segmentation quality was quantitatively evaluated using four metrics: Dice similarity coefficient (DSC), overlap coefficient, 95th-percentile bidirectional Hausdorff distance (BHD), and absolute lung volume difference (ALVD), achieving 95.67±1.83%, 91.76±3.29%, 4.86±5.01, and 2.93±2.39, respectively. The reported results showed the capability of the proposed approach to accurately segment healthy lung tissues in addition to pathological lung tissues caused by COVID-19, outperforming four current state-of-the-art deep learning-based lung segmentation approaches.
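The sequential EM estimation described above builds on the classic EM updates for a Gaussian mixture of intensity values. A minimal two-component 1D sketch on synthetic Hounsfield-like data (standard EM only; not the paper's modified sequential variant, subdominant components, or MGRF refinement) is:

```python
import numpy as np

def em_two_gaussians(x: np.ndarray, iters: int = 50):
    """Standard EM for a two-component 1D Gaussian mixture, e.g. modeling
    lung (air-like) vs. chest (soft-tissue) intensity distributions."""
    mu = np.array([x.min(), x.max()], dtype=float)      # crude initialization
    sigma = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample.
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
              / (sigma * np.sqrt(2.0 * np.pi))
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        n = r.sum(axis=0)
        w = n / n.sum()
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return w, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-800, 40, 500),   # air-like (lung) HU values
                    rng.normal(40, 30, 500)])    # soft-tissue (chest) HU values
w, mu, sigma = em_two_gaussians(x)
```

Once the dominant components are fitted, classifying a voxel by the larger posterior yields the initial intensity-based segmentation that the morphological constraints then refine.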
Affiliation(s)
- Ahmed Sharafeldeen
- BioImaging Laboratory, Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
- Mohamed Elsharkawy
- BioImaging Laboratory, Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
- Norah Saleh Alghamdi
- College of Computer and Information Science, Princess Nourah Bint Abdulrahman University, Riyadh 11564, Saudi Arabia
- Ahmed Soliman
- BioImaging Laboratory, Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- BioImaging Laboratory, Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
19
Özcan ANŞ, Aslan K. Diagnostic accuracy of sagittal TSE-T2W, variable flip angle 3D TSE-T2W and high-resolution 3D heavily T2W sequences for the stenosis of two localizations: the cerebral aqueduct and the superior medullary velum. Curr Med Imaging 2021; 17:1432-1438. [PMID: 34365953 DOI: 10.2174/1573405617666210806123720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 04/07/2021] [Accepted: 05/03/2021] [Indexed: 11/22/2022]
Abstract
OBJECTIVES This study aimed to investigate the accuracy of conventional sagittal turbo spin echo T2-weighted (Sag TSE-T2W), variable flip angle 3D TSE (VFA-3D-TSE) and high-resolution 3D heavily T2W (HR-3D-HT2W) sequences in the diagnosis of primary aqueductal stenosis (PAS) and superior medullary velum stenosis (SMV-S), and the effect of stenosis localization on diagnosis. METHODS Seventy-seven patients were included in the study. The diagnostic accuracy of the HR-3D-HT2W, Sag TSE-T2W and VFA-3D-TSE sequences was classified into three grades by two experienced neuroradiologists: grade 0 (the sequence has no diagnostic ability), grade 1 (the sequence diagnoses stenosis but does not show the focal stenosis itself or membrane formation), and grade 2 (the sequence makes a definitive diagnosis of stenosis and shows the focal stenosis itself or membrane formation). Stenosis localizations were divided into three categories: cerebral aqueduct (CA), superior medullary velum (SMV) and SMV+CA. In the statistical analysis, the grades of the sequences were first compared without differentiating by localization. Then, the effect of localization on diagnosis was determined by comparing the grades for individual localizations. RESULTS In the sequence comparison, grade 0 was not detected in the VFA-3D-TSE and HR-3D-HT2W sequences, and these sequences diagnosed all cases. On the other hand, grade 0 was detected with the Sag TSE-T2W sequence in 25.4% of cases (P<0.05). Grade 1 was detected by VFA-3D-TSE in 23% of the cases, while grade 1 (12.5%) was detected by HR-3D-HT2W in only one case, and the difference was statistically significant (P<0.05). When the sequences were examined according to localization, the rate of grade 0 in the Sag TSE-T2W sequence differed significantly across the SMV (33.3%), CA (66.7%) and SMV+CA (0%) localizations (P<0.05). Localization had no effect on diagnosis using the other sequences.
CONCLUSION In our study, we found that the VFA-3D-TSE and HR-3D-HT2W sequences were successful in the diagnosis of PAS and SMV-S, contrary to the Sag TSE-T2W sequence.
Affiliation(s)
- Kerim Aslan
- Samsun Ondokuz Mayıs University, Department of Radiology, Samsun, Turkey
20
Mondal MRH, Bharati S, Podder P. Diagnosis of COVID-19 Using Machine Learning and Deep Learning: A Review. Curr Med Imaging 2021; 17:1403-1418. [PMID: 34259149 DOI: 10.2174/1573405617666210713113439] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Revised: 03/29/2021] [Accepted: 04/08/2021] [Indexed: 02/08/2023]
Abstract
BACKGROUND This paper provides a systematic review of the application of artificial intelligence (AI), in the form of machine learning (ML) and deep learning (DL) techniques, in fighting the effects of novel coronavirus disease (COVID-19). OBJECTIVE & METHOD The objective is to perform a scoping review on AI for COVID-19 using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A literature search was performed for relevant studies published from 1 January 2020 to 27 March 2021. Out of 4050 research papers available from reputed publishers, a full-text review of 440 articles was done based on the keywords of AI, COVID-19, ML, forecasting, DL, X-ray, and computed tomography (CT). Finally, 52 articles were included in the result synthesis of this paper. As part of the review, different ML regression methods were reviewed first for predicting the number of confirmed and death cases. Secondly, a comprehensive survey was carried out on the use of ML in classifying COVID-19 patients. Thirdly, different datasets on medical imaging were compared in terms of the number of images, the number of positive samples and the number of classes. The different stages of diagnosis, including preprocessing, segmentation and feature extraction, were also reviewed. Fourthly, the performance results of different research papers were compared to evaluate the effectiveness of DL methods on different datasets. RESULTS Results show that the residual neural network (ResNet-18) and the densely connected convolutional network (DenseNet-169) exhibit excellent classification accuracy for X-ray images, while DenseNet-201 has the maximum accuracy in classifying CT scan images. This indicates that ML and DL are useful tools in assisting researchers and medical professionals in predicting, screening and detecting COVID-19.
CONCLUSION Finally, this review highlights the existing challenges, including regulations, noisy data, data privacy, and the lack of reliable large datasets, and then provides future research directions for applying AI in managing COVID-19.
Affiliation(s)
- Subrato Bharati
- Institute of ICT, Bangladesh University of Engineering and Technology, Dhaka-1205, Bangladesh
- Prajoy Podder
- Institute of ICT, Bangladesh University of Engineering and Technology, Dhaka-1205, Bangladesh
21
El-Rashidy N, Abdelrazik S, Abuhmed T, Amer E, Ali F, Hu JW, El-Sappagh S. Comprehensive Survey of Using Machine Learning in the COVID-19 Pandemic. Diagnostics (Basel) 2021; 11:1155. [PMID: 34202587 PMCID: PMC8303306 DOI: 10.3390/diagnostics11071155] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Revised: 05/29/2021] [Accepted: 05/31/2021] [Indexed: 12/11/2022] Open
Abstract
Since December 2019, the global population has faced the rapid spread of coronavirus disease (COVID-19). With the accelerating number of infected cases, the World Health Organization (WHO) has reported COVID-19 as an epidemic that puts a heavy burden on healthcare sectors in almost every country. The potential of artificial intelligence (AI) in this context is difficult to ignore. AI companies have been racing to develop innovative tools that help arm the world against this pandemic and minimize the disruption that it may cause. The main objective of this study is to survey the decisive role of AI as a technology used to fight against the COVID-19 pandemic. Five significant applications of AI for COVID-19 were found, including (1) COVID-19 diagnosis using various data types (e.g., images, sound, and text); (2) estimation of the possible future spread of the disease based on the current confirmed cases; (3) association between COVID-19 infection and patient characteristics; (4) vaccine development and drug interaction; and (5) development of supporting applications. This study also introduces a comparison of current COVID-19 datasets. Based on the limitations of the current literature, this review highlights the open research challenges that could inspire the future application of AI in COVID-19.
Affiliation(s)
- Nora El-Rashidy
- Machine Learning and Information Retrieval Department, Faculty of Artificial Intelligence, Kafrelsheiksh University, Kafrelsheiksh 13518, Egypt
- Samir Abdelrazik
- Information System Department, Faculty of Computer Science and Information Systems, Mansoura University, Mansoura 13518, Egypt;
- Tamer Abuhmed
- College of Computing and Informatics, Sungkyunkwan University, Seoul 03063, Korea
- Eslam Amer
- Faculty of Computer Science, Misr International University, Cairo 11828, Egypt;
- Farman Ali
- Department of Software, Sejong University, Seoul 05006, Korea;
- Jong-Wan Hu
- Department of Civil and Environmental Engineering, Incheon National University, Incheon 22012, Korea
- Shaker El-Sappagh
- Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), Universidade de Santiago de Compostela, 15782 Santiago de Compostela, Spain
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha 13518, Egypt
22
He X, Yin Y. Non-Local and Multi-Scale Mechanisms for Image Inpainting. Sensors (Basel) 2021; 21:3281. [PMID: 34068573 PMCID: PMC8126100 DOI: 10.3390/s21093281] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Revised: 04/30/2021] [Accepted: 05/06/2021] [Indexed: 11/21/2022]
Abstract
Recently, deep learning-based techniques have shown great power in image inpainting, especially when dealing with squared holes. However, they fail to generate plausible results inside the missing regions for irregular and large holes, as there is a lack of understanding between missing regions and their existing counterparts. To overcome this limitation, we combine two non-local mechanisms, a contextual attention module (CAM) and an implicit diversified Markov random fields (ID-MRF) loss, with a multi-scale architecture that uses several dense fusion blocks (DFB) based on the dense combination of dilated convolutions to guide the generative network to restore discontinuous and continuous large masked areas. To prevent color discrepancies and grid-like artifacts, we apply the ID-MRF loss to improve the visual appearance by comparing similarities of long-distance feature patches. To further capture the long-term relationships among different regions within large missing areas, we introduce the CAM. Although the CAM has the ability to create plausible results by reconstructing refined features, it depends on the initial predicted results. Hence, we employ the DFB to obtain larger and more effective receptive fields, which helps predict more precise and fine-grained information for the CAM. Extensive experiments on two widely used datasets demonstrate that our proposed framework significantly outperforms the state-of-the-art approaches both quantitatively and qualitatively.
Affiliation(s)
- Yong Yin
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China;
23
Boundary Loss-Based 2.5D Fully Convolutional Neural Networks Approach for Segmentation: A Case Study of the Liver and Tumor on Computed Tomography. Algorithms 2021. [DOI: 10.3390/a14050144] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Image segmentation plays an important role in the field of image processing, helping to understand images and recognize objects. However, most existing methods are often unable to effectively exploit the spatial information in 3D image segmentation, and they neglect the information from the contours and boundaries of the observed objects. In addition, shape boundaries can help to locate the positions of the observed objects, but most existing loss functions neglect boundary information. To overcome these shortcomings, this paper presents a new cascaded 2.5D fully convolutional networks (FCNs) learning framework to segment 3D medical images. A new boundary loss that incorporates distance, area, and boundary information is also proposed for the cascaded FCNs to learn more boundary and contour features from the 3D medical images. Moreover, an effective post-processing method is developed to further improve the segmentation accuracy. We verified the proposed method on the LITS and 3DIRCADb datasets, which include the liver and tumors. The experimental results show that the performance of the proposed method is better than that of existing methods, with a Dice Per Case score of 74.5% for tumor segmentation, indicating the effectiveness of the proposed method.
24
Jeong SH, Yun JP, Yeom HG, Kim HK, Kim BC. Deep-Learning-Based Detection of Cranio-Spinal Differences between Skeletal Classification Using Cephalometric Radiography. Diagnostics (Basel) 2021; 11:591. [PMID: 33806132 PMCID: PMC8064489 DOI: 10.3390/diagnostics11040591] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 03/10/2021] [Accepted: 03/22/2021] [Indexed: 12/22/2022] Open
Abstract
The aim of this study was to reveal cranio-spinal differences between skeletal classifications using convolutional neural networks (CNNs). Transverse and longitudinal cephalometric images of 832 patients (365 males and 467 females) were used for training and testing of the CNNs. Labeling was performed such that the jawbone was sufficiently masked, while the parts other than the jawbone were minimally masked. DenseNet was used as the feature extractor. Five random sampling cross-validations were performed for two datasets. The average and maximum accuracies of the five cross-validations were 90.43% and 92.54% for test 1 (evaluation of the entire posterior-anterior (PA) and lateral cephalometric images) and 88.17% and 88.70% for test 2 (evaluation of the PA and lateral cephalometric images obscuring the mandible). In this study, we found that even when jawbones of class I (normal mandible), class II (retrognathism), and class III (prognathism) are masked, their identification is possible through deep learning applied only to the cranio-spinal area. This suggests that cranio-spinal differences between each class exist.
Affiliation(s)
- Seung Hyun Jeong
- Safety System Research Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea; (S.H.J.); (J.P.Y.)
- Jong Pil Yun
- Safety System Research Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea; (S.H.J.); (J.P.Y.)
- Han-Gyeol Yeom
- Department of Oral and Maxillofacial Radiology, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon 35233, Korea;
- Hwi Kang Kim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon 35233, Korea;
- Bong Chul Kim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon 35233, Korea;