1
Xing W, He C, Ma Y, Liu Y, Zhu Z, Li Q, Li W, Chen J, Ta D. Combining quantitative and qualitative analysis for scoring pleural line in lung ultrasound. Phys Med Biol 2024; 69:095008. PMID: 38537298. DOI: 10.1088/1361-6560/ad3888.
Abstract
Objective. Accurate assessment of the pleural line is crucial for the application of lung ultrasound (LUS) in monitoring lung diseases, so the aim of this study was to develop a quantitative and qualitative analysis method for the pleural line. Approach. A novel cascaded deep learning model based on convolution and multilayer perceptron was proposed to locate and segment the pleural line in LUS images, and the results were used for quantitative analysis of textural and morphological features, respectively. Using the gray-level co-occurrence matrix and self-designed statistical methods, eight textural and three morphological features were generated to characterize the pleural line. Furthermore, machine learning-based classifiers were employed to qualitatively evaluate the lesion degree of the pleural line in LUS images. Main results. We prospectively evaluated 3770 LUS images acquired from 31 pneumonia patients. Experimental results demonstrated that the proposed pleural line extraction and evaluation methods perform well, with a Dice coefficient of 0.87 and an accuracy of 94.47%, respectively, and comparison with previous methods showed statistically significant improvement (P < 0.001 for all). Meanwhile, generalization experiments verified the feasibility of the proposed method in multiple data scenarios. Significance. The proposed method has great application potential for assessment of the pleural line in LUS images and for aiding lung disease diagnosis and treatment.
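The textural half of this pipeline rests on the gray-level co-occurrence matrix (GLCM). As a rough illustration of that step only (not the authors' implementation: the offset, the quantization to 8 levels, and the particular Haralick-style statistics below are all assumptions), the matrix and a few texture features can be computed in plain Python:

```python
def glcm(img, dx=1, dy=0, levels=8):
    """Normalized co-occurrence matrix: how often gray level i sits next to
    level j at offset (dx, dy) in a quantized image (list of lists of ints)."""
    h, w = len(img), len(img[0])
    counts = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[img[y][x]][img[ny][nx]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def glcm_features(p):
    """Three common Haralick-style texture statistics over a normalized GLCM."""
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(v * v for row in p for v in row)
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))
    return {"contrast": contrast, "energy": energy, "homogeneity": homogeneity}

# Toy patch: a bright horizontal band over a dark background, 8 gray levels.
patch = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [7, 7, 6, 6],
    [7, 7, 6, 6],
]
feats = glcm_features(glcm(patch))
```

A downstream classifier would consume such textural features alongside the morphological ones derived from the segmentation mask.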
Affiliation(s)
- Wenyu Xing
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Chao He
- Department of Emergency and Critical Care, Changzheng Hospital, Naval Medical University, Shanghai 200003, People's Republic of China
- Yebo Ma
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai 200241, People's Republic of China
- Yiman Liu
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai 200241, People's Republic of China
- Zhibin Zhu
- School of Information Science and Technology, Fudan University, Shanghai 200438, People's Republic of China
- Qingli Li
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai 200241, People's Republic of China
- Wenfang Li
- Department of Emergency and Critical Care, Changzheng Hospital, Naval Medical University, Shanghai 200003, People's Republic of China
- Jiangang Chen
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai 200241, People's Republic of China
- Dean Ta
- Department of Rehabilitation Medicine, Huashan Hospital, Fudan University, Shanghai 200040, People's Republic of China
2
Gupta U, Paluru N, Nankani D, Kulkarni K, Awasthi N. A comprehensive review on efficient artificial intelligence models for classification of abnormal cardiac rhythms using electrocardiograms. Heliyon 2024; 10:e26787. PMID: 38562492. PMCID: PMC10982903. DOI: 10.1016/j.heliyon.2024.e26787.
Abstract
Deep learning has made many advances in data classification using electrocardiogram (ECG) waveforms. Over the past decade, data science research has focused on developing artificial intelligence (AI) based models that can analyze ECG waveforms to identify and classify abnormal cardiac rhythms accurately. However, the primary drawback of the current AI models is that most of these models are heavy, computationally intensive, and inefficient in terms of cost for real-time implementation. In this review, we first discuss the current state-of-the-art AI models utilized for ECG-based cardiac rhythm classification. Next, we present some of the upcoming modeling methodologies which have the potential to perform real-time implementation of AI-based heart rhythm diagnosis. These models hold significant promise in being lightweight and computationally efficient without compromising the accuracy. Contemporary models predominantly utilize 12-lead ECG for cardiac rhythm classification and cardiovascular status prediction, increasing the computational burden and making real-time implementation challenging. We also summarize research studies evaluating the potential of efficient data setups to reduce the number of ECG leads without affecting classification accuracy. Lastly, we present future perspectives on AI's utility in precision medicine by providing opportunities for accurate prediction and diagnostics of cardiovascular status in patients.
Affiliation(s)
- Utkarsh Gupta
- Department of Computational and Data Sciences, Indian Institute of Science, Bengaluru, 560012, India
- Naveen Paluru
- Department of Computational and Data Sciences, Indian Institute of Science, Bengaluru, 560012, India
- Deepankar Nankani
- Department of Computer Science and Engineering, Indian Institute of Technology, Guwahati, Assam, 781039, India
- Kanchan Kulkarni
- IHU-LIRYC, Heart Rhythm Disease Institute, Fondation Bordeaux Université, Pessac, Bordeaux, F-33000, France
- University of Bordeaux, INSERM, Centre de recherche Cardio-Thoracique de Bordeaux, U1045, Bordeaux, F-33000, France
- Navchetan Awasthi
- Faculty of Science, Mathematics and Computer Science, Informatics Institute, University of Amsterdam, Amsterdam, 1090 GH, the Netherlands
- Department of Biomedical Engineering and Physics, Amsterdam UMC, Amsterdam, 1081 HV, the Netherlands
3
Wang R, Liu X, Tan G. Coupling speckle noise suppression with image classification for deep-learning-aided ultrasound diagnosis. Phys Med Biol 2024; 69:065001. PMID: 38359452. DOI: 10.1088/1361-6560/ad29bb.
Abstract
Objective. During deep-learning-aided (DL-aided) ultrasound (US) diagnosis, US image classification is a foundational task. Due to the existence of serious speckle noise in US images, the performance of DL models may be degraded. Pre-denoising US images before their use in DL models is usually a logical choice. However, our investigation suggests that pre-speckle-denoising is not consistently advantageous. Furthermore, because speckle denoising is decoupled from the subsequent DL classification, intensive parameter tuning is inevitable to attain the optimal denoising parameters for various datasets and DL models. Pre-denoising also adds extra complexity to the classification task and makes it no longer end-to-end. Approach. In this work, we propose a multi-scale high-frequency-based feature augmentation (MSHFFA) module that couples feature augmentation and speckle noise suppression with specific DL models, preserving an end-to-end fashion. In MSHFFA, the input US image is first decomposed into multi-scale low-frequency and high-frequency components (LFC and HFC) with the discrete wavelet transform. Then, multi-scale augmentation maps are obtained by computing the correlation between LFC and HFC. Finally, the original DL model features are augmented with the multi-scale augmentation maps. Main results. On two public US datasets, all six renowned DL models exhibited enhanced F1-scores compared with their original versions (by 1.31%-8.17% on the POCUS dataset and 0.46%-3.89% on the BLU dataset) after using the MSHFFA module, with only an approximately 1% increase in model parameter count. Significance. The proposed MSHFFA has broad applicability and commendable efficiency and thus can be used to enhance the performance of DL-aided US diagnosis. The code is available at https://github.com/ResonWang/MSHFFA.
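The decomposition at the heart of MSHFFA can be illustrated on a 1-D scan line with a single-level Haar wavelet. Everything below is a simplified sketch of the idea only, not the module itself (which operates on 2-D feature maps at multiple scales inside the network); the elementwise-product "correlation" and the rescaling are assumptions for illustration:

```python
def haar_step(signal):
    """One level of the Haar DWT: pairwise averages (low-frequency component)
    and pairwise half-differences (high-frequency component)."""
    lfc = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    hfc = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return lfc, hfc

def augmentation_map(signal):
    """Toy LFC/HFC interaction map, rescaled to [0, 1]: it peaks where a region
    is both bright (large approximation) and speckled (large detail)."""
    lfc, hfc = haar_step(signal)
    corr = [l * h for l, h in zip(lfc, hfc)]
    lo, hi = min(corr), max(corr)
    span = (hi - lo) or 1.0
    return [(c - lo) / span for c in corr]

# Dark background with a bright, high-variation band in the middle.
line = [0, 0, 10, 2, 9, 1, 0, 0]
amap = augmentation_map(line)  # largest over the speckled band
```

In the actual module such maps would be multiplied into the DL model's intermediate features rather than applied to raw pixels.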
Affiliation(s)
- Ruixin Wang
- College of Computer Science and Software Engineering, Hohai University, Nanjing 210098, People's Republic of China
- Xiaohui Liu
- The First People's Hospital of Kunshan, Affiliated Kunshan Hospital of Jiangsu University, Kunshan 215300, People's Republic of China
- Guoping Tan
- College of Computer Science and Software Engineering, Hohai University, Nanjing 210098, People's Republic of China
4
Gheisari M, Ghaderzadeh M, Li H, Taami T, Fernández-Campusano C, Sadeghsalehi H, Afzaal Abbasi A. Mobile Apps for COVID-19 Detection and Diagnosis for Future Pandemic Control: Multidimensional Systematic Review. JMIR Mhealth Uhealth 2024; 12:e44406. PMID: 38231538. PMCID: PMC10896318. DOI: 10.2196/44406.
Abstract
BACKGROUND In the modern world, mobile apps are essential for human advancement, and pandemic control is no exception. The use of mobile apps and technology for the detection and diagnosis of COVID-19 has been the subject of numerous investigations, although no thorough analysis of COVID-19 pandemic prevention via mobile apps has been conducted, leaving a gap in the literature. OBJECTIVE With the intention of helping software companies and clinical researchers, this study provides comprehensive information regarding the different fields in which mobile apps were used to diagnose COVID-19 during the pandemic. METHODS In this systematic review, 535 studies were found after searching 5 major research databases (ScienceDirect, Scopus, PubMed, Web of Science, and IEEE). Of these, only 42 (7.9%) studies concerned with diagnosing and detecting COVID-19 were chosen after applying inclusion and exclusion criteria using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) protocol. RESULTS Mobile apps were categorized into 6 areas based on the content of these 42 studies: contact tracing, data gathering, data visualization, artificial intelligence (AI)-based diagnosis, rule- and guideline-based diagnosis, and data transformation. Patients with COVID-19 were identified via mobile apps using a variety of clinical, geographic, demographic, radiological, serological, and laboratory data. Most studies concentrated on using AI methods to identify people who might have COVID-19. Additionally, symptoms, cough sounds, and radiological images were used more frequently than other data types. Deep learning techniques, such as convolutional neural networks, performed comparatively better in the processing of health care data than other types of AI techniques, which improved the diagnosis of COVID-19.
CONCLUSIONS Mobile apps could soon play a significant role as a powerful tool for data collection, epidemic health data analysis, and the early identification of suspected cases. These technologies can work with the internet of things, cloud storage, 5th-generation technology, and cloud computing. Processing pipelines can be moved to mobile device processing cores using new deep learning methods, such as lightweight neural networks. In the event of future pandemics, mobile apps will play a critical role in rapid diagnosis using various image data and clinical symptoms. Consequently, the rapid diagnosis of these diseases can improve the management of their effects and obtain excellent results in treating patients.
Affiliation(s)
- Mehdi Gheisari
- Institute of Artificial Intelligence, Shaoxing University, Shaoxing, China
- Department of Computer Science and Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
- Mustafa Ghaderzadeh
- School of Nursing and Health Sciences of Boukan, Urmia University of Medical Sciences, Urmia, Iran
- Huxiong Li
- Institute of Artificial Intelligence, Shaoxing University, Shaoxing, China
- Tania Taami
- Florida State University, Tallahassee, FL, United States
- Aaqif Afzaal Abbasi
- Department of Earth and Marine Sciences, University of Palermo, Palermo, Italy
5
Bassiouny R, Mohamed A, Umapathy K, Khan N. An Interpretable Neonatal Lung Ultrasound Feature Extraction and Lung Sliding Detection System Using Object Detectors. IEEE J Transl Eng Health Med 2023; 12:119-128. PMID: 38088993. PMCID: PMC10712663. DOI: 10.1109/jtehm.2023.3327424.
Abstract
The objective of this study was to develop an interpretable system that could detect specific lung features in neonates. A challenging aspect of this work was that normal lungs show the same visual features as those of pneumothorax (PTX). M-mode is typically necessary to differentiate between the two cases, but its generation in clinics is time-consuming and requires expertise for interpretation, which remains limited. Therefore, our system automates M-mode generation by extracting Regions of Interest (ROIs) without a human in the loop. Object detection models such as the faster Region-Based Convolutional Neural Network (fRCNN) and RetinaNet were employed to detect seven common Lung Ultrasound (LUS) features. fRCNN predictions were then stored and further used to generate M-modes. Beyond static feature extraction, we used a Hough-transform-based statistical method to detect "lung sliding" in these M-modes. Results showed that fRCNN achieved a greater mean Average Precision (mAP) of 86.57% (Intersection-over-Union (IoU) = 0.2) than RetinaNet, which reached only 61.15%. The calculated accuracy for the generated ROIs was 97.59% for normal videos and 96.37% for PTX videos. Using this system, we successfully classified 5 PTX and 6 normal video cases with 100% accuracy. Automating the detection of seven prominent LUS features addresses the time-consuming manual evaluation of lung ultrasound in a fast-paced environment. Clinical impact: Our work provides a more accurate and efficient method for diagnosing lung diseases in neonates.
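The paper's lung-sliding detector applies a Hough-transform statistic to M-mode images. As a much-simplified stand-in for that idea, the sketch below builds an M-mode image by stacking one pixel column over time and flags the "barcode" pattern (straight horizontal lines, as seen when sliding is absent) with a plain row-variance test; the function names and the threshold are illustrative, not from the paper:

```python
def m_mode(frames, column):
    """Stack one pixel column from each frame: rows = depth, cols = time."""
    depth = len(frames[0])
    return [[frame[d][column] for frame in frames] for d in range(depth)]

def horizontal_line_fraction(mmode, tol=1.0):
    """Fraction of depth rows whose intensity barely changes over time.
    A high fraction means straight horizontal lines (no lung sliding)."""
    flat = 0
    for row in mmode:
        mean = sum(row) / len(row)
        var = sum((v - mean) ** 2 for v in row) / len(row)
        if var <= tol:
            flat += 1
    return flat / len(mmode)

# Synthetic clips indexed frames[t][depth][col]: in the 'sliding' clip every
# depth row shimmers over time; in the 'ptx' clip each row stays constant.
sliding = [[[(t + d) * 3 % 5 for _ in range(4)] for d in range(6)] for t in range(8)]
ptx = [[[d for _ in range(4)] for d in range(6)] for t in range(8)]
```

A real detector would fit line parameters (e.g. via a Hough accumulator) rather than threshold raw variance, but the discriminating signal is the same.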
Affiliation(s)
- Rodina Bassiouny
- Department of Electrical, Computer, and Biomedical Engineering, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
- Adel Mohamed
- Mount Sinai Hospital, University of Toronto, Toronto, ON M5S 1A1, Canada
- Karthi Umapathy
- Department of Electrical, Computer, and Biomedical Engineering, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
- Naimul Khan
- Department of Electrical, Computer, and Biomedical Engineering, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
6
Malík M, Dzian A, Števík M, Vetešková Š, Al Hakim A, Hliboký M, Magyar J, Kolárik M, Bundzel M, Babič F. Lung Ultrasound Reduces Chest X-rays in Postoperative Care after Thoracic Surgery: Is There a Role for Artificial Intelligence? A Systematic Review. Diagnostics (Basel) 2023; 13:2995. PMID: 37761362. PMCID: PMC10527627. DOI: 10.3390/diagnostics13182995.
Abstract
BACKGROUND Chest X-ray (CXR) remains the standard imaging modality in postoperative care after non-cardiac thoracic surgery. Lung ultrasound (LUS) has shown promising results in CXR reduction. The aim of this review was to identify areas where the evaluation of LUS videos by artificial intelligence could improve the implementation of LUS in thoracic surgery. METHODS A literature review of the replacement of CXR by LUS after thoracic surgery, and of the evaluation of LUS videos by artificial intelligence after thoracic surgery, was conducted in Medline. RESULTS Eight of the 10 reviewed studies evaluating LUS for CXR reduction showed that LUS can reduce CXR use without a negative impact on patient outcome after thoracic surgery. No studies on the evaluation of LUS signs by artificial intelligence after thoracic surgery were found. CONCLUSION LUS can reduce CXR use after thoracic surgery. We presume that artificial intelligence could help increase LUS accuracy, objectify LUS findings, shorten the learning curve, and decrease the number of inconclusive results. Clinical trials are necessary to confirm this assumption. This research is funded by the Slovak Research and Development Agency, grant number APVV 20-0232.
Affiliation(s)
- Marek Malík
- Department of Thoracic Surgery, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava and University Hospital in Martin, Kollárova 4248/2, 036 59 Martin, Slovakia
- Anton Dzian
- Department of Thoracic Surgery, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava and University Hospital in Martin, Kollárova 4248/2, 036 59 Martin, Slovakia
- Martin Števík
- Radiology Department, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava and University Hospital in Martin, Kollárova 4248/2, 036 59 Martin, Slovakia
- Štefánia Vetešková
- Radiology Department, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava and University Hospital in Martin, Kollárova 4248/2, 036 59 Martin, Slovakia
- Abdulla Al Hakim
- Department of Thoracic Surgery, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava and University Hospital in Martin, Kollárova 4248/2, 036 59 Martin, Slovakia
- Maroš Hliboký
- Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Letná 9, 040 01 Košice, Slovakia
- Ján Magyar
- Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Letná 9, 040 01 Košice, Slovakia
- Michal Kolárik
- Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Letná 9, 040 01 Košice, Slovakia
- Marek Bundzel
- Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Letná 9, 040 01 Košice, Slovakia
- František Babič
- Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Letná 9, 040 01 Košice, Slovakia
7
Gregory Dal Toé S, Neal M, Hold N, Heney C, Turner R, McCoy E, Iftikhar M, Tiddeman B. Automated Video-Based Capture of Crustacean Fisheries Data Using Low-Power Hardware. Sensors (Basel) 2023; 23:7897. PMID: 37765954. PMCID: PMC10535158. DOI: 10.3390/s23187897.
Abstract
This work investigates the application of computer vision to the automated counting and measuring of crabs and lobsters onboard fishing boats. The aim is to provide catch count and measurement data for these key commercial crustacean species, which can serve as vital input for stock assessment models and enable their sustainable management. The hardware system is required to be low-cost, have low power usage, be waterproof, be available (given current chip shortages), and avoid overheating. The selected hardware is based on a Raspberry Pi 3A+ contained in a custom waterproof housing. This hardware places challenging limitations on the options for processing the incoming video, with many popular deep learning frameworks (even lightweight versions) unable to load or run given the limited computational resources. The problem can be broken into several steps: (1) identifying the portions of the video that contain each individual animal; (2) selecting a set of representative frames for each animal (e.g., lobsters must be viewed from the top and underside); (3) detecting the animal within the frame so that the image can be cropped to the region of interest; (4) detecting keypoints on each animal; and (5) inferring measurements from the keypoint data. In this work, we develop a pipeline that addresses these steps, including a key novel solution to frame selection in video streams that uses classification, temporal segmentation, smoothing techniques, and frame quality estimation. The developed pipeline is able to operate on the target low-power hardware, and the experiments show that, given sufficient training data, reasonable performance is achieved.
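The frame-selection idea in step (2) — per-frame classification smoothed over time, segmented into runs, with one best-quality frame kept per run — can be sketched minimally as follows; the function names, window size, and quality measure are assumptions for illustration, not the authors' code:

```python
def moving_average(scores, k=3):
    """Temporal smoothing: centered moving average over a window of width k."""
    half = k // 2
    out = []
    for i in range(len(scores)):
        window = scores[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def segments(labels):
    """Temporal segmentation: runs of identical labels as (label, start, end)."""
    runs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            runs.append((labels[start], start, i))
            start = i
    return runs

def best_frames(labels, quality):
    """Keep the highest-quality frame index from each temporal segment."""
    return [max(range(s, e), key=lambda i: quality[i]) for _, s, e in segments(labels)]

# A clip where a lobster is seen from the top, then flipped to the underside.
views = ["top", "top", "top", "underside", "underside"]
quality = [0.2, 0.9, 0.4, 0.5, 0.8]  # e.g. a per-frame sharpness score
keep = best_frames(views, quality)   # one representative frame per view
```

On constrained hardware like a Pi 3A+, this post-processing is essentially free compared with the per-frame classifier itself.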
Affiliation(s)
- Sebastian Gregory Dal Toé
- Department of Computer Science, Aberystwyth University, Aberystwyth SY23 3DB, Ceredigion, UK
- Marie Neal
- Ystumtec Ltd., Pant-Y-Chwarel, Ystumtuen, Aberystwyth SY23 3AF, Ceredigion, UK
- Natalie Hold
- School of Ocean Sciences, Bangor University, Bangor LL57 2DG, Gwynedd, UK
- Charlotte Heney
- School of Ocean Sciences, Bangor University, Bangor LL57 2DG, Gwynedd, UK
- Rebecca Turner
- School of Ocean Sciences, Bangor University, Bangor LL57 2DG, Gwynedd, UK
- Emer McCoy
- School of Ocean Sciences, Bangor University, Bangor LL57 2DG, Gwynedd, UK
- Muhammad Iftikhar
- Department of Computer Science, Aberystwyth University, Aberystwyth SY23 3DB, Ceredigion, UK
- Bernard Tiddeman
- Department of Computer Science, Aberystwyth University, Aberystwyth SY23 3DB, Ceredigion, UK
8
Lucassen RT, Jafari MH, Duggan NM, Jowkar N, Mehrtash A, Fischetti C, Bernier D, Prentice K, Duhaime EP, Jin M, Abolmaesumi P, Heslinga FG, Veta M, Duran-Mendicuti MA, Frisken S, Shyn PB, Golby AJ, Boyer E, Wells WM, Goldsmith AJ, Kapur T. Deep Learning for Detection and Localization of B-Lines in Lung Ultrasound. IEEE J Biomed Health Inform 2023; 27:4352-4361. PMID: 37276107. PMCID: PMC10540221. DOI: 10.1109/jbhi.2023.3282596.
Abstract
Lung ultrasound (LUS) is an important imaging modality used by emergency physicians to assess pulmonary congestion at the patient bedside. B-line artifacts in LUS videos are key findings associated with pulmonary congestion. Not only can the interpretation of LUS be challenging for novice operators, but visual quantification of B-lines remains subject to observer variability. In this work, we investigate the strengths and weaknesses of multiple deep learning approaches for automated B-line detection and localization in LUS videos. We curate and publish BEDLUS, a new ultrasound dataset comprising 1,419 videos from 113 patients with a total of 15,755 expert-annotated B-lines. Based on this dataset, we present a benchmark of established deep learning methods applied to the task of B-line detection. To pave the way for interpretable quantification of B-lines, we propose a novel "single-point" approach to B-line localization using only the point of origin. Our results show that (a) the area under the receiver operating characteristic curve ranges from 0.864 to 0.955 for the benchmarked detection methods, (b) within this range, the best performance is achieved by models that leverage multiple successive frames as input, and (c) the proposed single-point approach to B-line localization reaches an F1-score of 0.65, performing on par with the inter-observer agreement. The dataset and developed methods can facilitate further biomedical research on automated interpretation of lung ultrasound, with the potential to expand its clinical utility.
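Scoring the single-point localization in (c) requires matching predicted origin points against expert annotations. One plausible way to compute such an F1 — a sketch under assumed rules (greedy nearest-neighbor matching within a pixel tolerance), not necessarily the paper's exact protocol — is:

```python
def match_count(pred, truth, tol=10.0):
    """Greedily match each predicted B-line origin to the nearest unused
    annotation within `tol` pixels; return the number of true positives."""
    unused = list(truth)
    tp = 0
    for px, py in pred:
        best, best_d = None, tol
        for tx, ty in unused:
            d = ((px - tx) ** 2 + (py - ty) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = (tx, ty), d
        if best is not None:
            unused.remove(best)
            tp += 1
    return tp

def f1_score(pred, truth, tol=10.0):
    """Harmonic mean of precision and recall over matched origin points."""
    if not pred or not truth:
        return 0.0
    tp = match_count(pred, truth, tol)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(truth)
    return 2 * precision * recall / (precision + recall)

# One prediction lands near an annotation, the other misses.
score = f1_score(pred=[(100, 52), (200, 50)], truth=[(98, 50), (300, 55)])
```

Greedy matching is the simplest choice; an optimal (Hungarian) assignment would only change results when predictions cluster around one annotation.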
9
Panjeta M, Reddy A, Shah R, Shah J. Artificial intelligence enabled COVID-19 detection: techniques, challenges and use cases. Multimed Tools Appl 2023:1-28. PMID: 37362659. PMCID: PMC10224655. DOI: 10.1007/s11042-023-15247-7.
Abstract
Deep learning and machine learning are becoming more and more popular as their algorithms get progressively better, and their use is expected to have a large effect on improving the health care system. The pandemic was also a chance to show how adding AI to healthcare infrastructure could help, since infrastructures around the world are overworked and under strain. These new technologies can be used to fight COVID-19 because they are flexible and adaptable. Based on these facts, we examined how ML- and DL-based models can be used to deal with the COVID-19 pandemic problem and what the pros and cons of each are. This paper gives a full look at the different ways to detect COVID-19. We reviewed the COVID-19 issues systematically and then rated the methods and techniques for detection based on their availability, ease of use, accuracy, and cost. We have also illustrated how well each of the detection techniques works. We compared the different detection models on the above factors, which helps researchers understand the different methods and the pros and cons of using them as the basis for their research. In the last part, we discuss the open challenges and research questions that come with combining these techniques with other detection methods.
Affiliation(s)
- Manisha Panjeta
- Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology, Punjab, 147004, India
- Aryan Reddy
- Computer Science Department, NMIMS University, Mumbai, India
- Rushabh Shah
- Computer Science Department, NMIMS University, Mumbai, India
- Jash Shah
- Computer Science Department, NMIMS University, Mumbai, India
10
Tenali N, Babu GRM. HQDCNet: Hybrid Quantum Dilated Convolution Neural Network for detecting COVID-19 in the context of Big Data Analytics. Multimed Tools Appl 2023:1-27. PMID: 37362720. PMCID: PMC10176300. DOI: 10.1007/s11042-023-15515-6.
Abstract
Medical care services are changing to address problems that arise as big data frameworks develop and big data analytics sees widespread use. COVID-19 has recently been one of the leading causes of death, and diagnostic tools based on chest X-ray images of the illness have since been enhanced. Big data technological breakthroughs provide a promising option for curbing the contagious COVID-19 disease. To increase a model's confidence, it is necessary to integrate a large number of training samples; however, handling that much data can be difficult. With the development of big data technology, this research presents a unique method to identify and categorize COVID-19 illness. To manage the incoming big data, a massive volume of chest X-ray images is gathered and analyzed using a distributed computing server built on the Hadoop framework. A fuzzy-empowered weighted k-means algorithm is then employed to group similar regions in the input X-ray images, which in turn segments the dominant portions of an image. A hybrid quantum dilated convolution neural network is proposed to classify various kinds of COVID-19 cases, and a Black Widow-based Moth Flame optimizer is also shown to improve the performance of the classifier. The performance analysis of COVID-19 detection uses the COVID-19 radiography dataset. The suggested HQDCNet approach achieves an accuracy of 99.01%. The experimental results are evaluated in Python using performance metrics such as accuracy, precision, recall, F-measure, and the loss function.
Affiliation(s)
- Nagamani Tenali
- Department of CSE, Y.S. Rajasekhar Reddy University College of Engineering & Technology, Acharya Nagarjuna University, Guntur, Nagarjuna Nagar, India
- Gatram Rama Mohan Babu
- Computer Science and Engineering (AI&ML), RVR & JC College of Engineering, Guntur, Chowdavaram, India
11
Bruno A, Ignesti G, Salvetti O, Moroni D, Martinelli M. Efficient Lung Ultrasound Classification. Bioengineering (Basel) 2023; 10:555. PMID: 37237625. DOI: 10.3390/bioengineering10050555.
Abstract
A machine learning method for classifying lung ultrasound is proposed here to provide a point-of-care tool supporting a safe, fast, and accurate diagnosis, which can also be useful during a pandemic such as SARS-CoV-2. Given the advantages (e.g., safety, speed, portability, cost-effectiveness) of ultrasound over other examinations (e.g., X-ray, computed tomography, magnetic resonance imaging), our method was validated on the largest public lung ultrasound dataset. Focusing on both accuracy and efficiency, our solution is based on an efficient adaptive ensembling of two EfficientNet-b0 models reaching 100% accuracy, which, to our knowledge, outperforms the previous state-of-the-art models by at least 5%. The complexity is kept low through specific design choices: ensembling with an adaptive combination layer, ensembling performed on the deep features, and a minimal ensemble of only two weak models. In this way, the number of parameters has the same order of magnitude as a single EfficientNet-b0, and the computational cost (FLOPs) is reduced by at least 20%, a saving that doubles with parallelization. Moreover, a visual analysis of the saliency maps on sample images of all the classes of the dataset reveals where an inaccurate weak model focuses its attention versus an accurate one.
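The adaptive combination layer is described only at a high level here. As a hedged sketch of the idea — blending the two models' deep-feature vectors with learned weights before the classifier head — one scalar-weight version looks like the following; the actual layer may well be richer (e.g. per-channel weights trained end-to-end), so treat this purely as an illustration:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scalars."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def adaptive_combine(feat_a, feat_b, logits):
    """Weighted blend of two deep-feature vectors; the two logits would be
    learned jointly with the classifier during training."""
    wa, wb = softmax(logits)
    return [wa * a + wb * b for a, b in zip(feat_a, feat_b)]

# Pooled features from two weak EfficientNet-b0 models for one image,
# with (hypothetical) learned weights favoring model A three to one.
fused = adaptive_combine([1.0, 0.0, 2.0], [0.0, 4.0, 2.0], logits=[math.log(3.0), 0.0])
```

Because the blend happens on pooled deep features rather than on logits or images, only one extra tiny layer is added, which is consistent with the paper's claim that parameter count stays near that of a single backbone.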
Affiliation(s)
- Antonio Bruno
- Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
- Giacomo Ignesti
- Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
- Ovidio Salvetti
- Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
- Davide Moroni
- Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
- Massimo Martinelli
- Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
12
|
Gürsoy E, Kaya Y. An overview of deep learning techniques for COVID-19 detection: methods, challenges, and future works. Multimedia Systems 2023; 29:1603-1627. [PMID: 37261262] [PMCID: PMC10039775] [DOI: 10.1007/s00530-023-01083-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/16/2022] [Accepted: 03/20/2023] [Indexed: 06/02/2023]
Abstract
The World Health Organization (WHO) declared a pandemic in response to the coronavirus COVID-19 in 2020, which resulted in numerous deaths worldwide. Although the disease appears to have lost its impact, millions of people have been affected by this virus, and new infections still occur. Identifying COVID-19 requires a reverse transcription-polymerase chain reaction test (RT-PCR) or analysis of medical data. Due to the high cost and time required to scan and analyze medical data, researchers are focusing on using automated computer-aided methods. This review examines the applications of deep learning (DL) and machine learning (ML) in detecting COVID-19 using medical data such as CT scans, X-rays, cough sounds, MRIs, ultrasound, and clinical markers. First, the data preprocessing, the features used, and the current COVID-19 detection methods are divided into two subsections, and the studies are discussed. Second, the reported publicly available datasets, their characteristics, and the potential comparison materials mentioned in the literature are presented. Third, a comprehensive comparison is made by contrasting the similar and different aspects of the studies. Finally, the results, gaps, and limitations are summarized to stimulate the improvement of COVID-19 detection methods, and the study concludes by listing some future research directions for COVID-19 classification.
Affiliation(s)
- Ercan Gürsoy
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey
- Yasin Kaya
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey

13
Tenali N, Babu GRM. A Systematic Literature Review and Future Perspectives for Handling Big Data Analytics in COVID-19 Diagnosis. New Generation Computing 2023; 41:243-280. [PMID: 37229177] [PMCID: PMC10019802] [DOI: 10.1007/s00354-023-00211-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Received: 10/05/2022] [Accepted: 02/23/2023] [Indexed: 05/27/2023]
Abstract
In today's digital world, information is growing along with the expansion of Internet usage worldwide. As a consequence, vast amounts of data, known as "Big Data", are generated constantly. Big Data analytics is one of the most rapidly evolving technologies of the twenty-first century: a promising field for extracting knowledge from very large datasets while enhancing benefits and lowering costs. Owing to the enormous success of big data analytics, the healthcare sector is increasingly adopting these approaches to diagnose diseases. With the recent boom in medical big data and the development of computational methods, researchers and practitioners can now mine and visualize medical big data on a larger scale. Through the integration of big data analytics into healthcare, precise medical data analysis has become feasible, enabling early sickness detection, health status monitoring, patient treatment, and community services. Against this background, this comprehensive review considers the deadly disease COVID-19 with the intention of offering remedies that utilize big data analytics. Big data applications are vital to managing pandemic conditions, for example in predicting outbreaks of COVID-19 and identifying cases and patterns of its spread. Research on leveraging big data analytics to forecast COVID-19 is ongoing, but precise and early identification of the disease is still hampered by the volume and heterogeneity of medical records, such as dissimilar medical imaging modalities. Meanwhile, digital imaging has become essential to COVID-19 diagnosis, where the main challenge is the storage of massive volumes of data. Taking these limitations into account, this systematic literature review (SLR) presents a comprehensive analysis to provide a deeper understanding of big data in the field of COVID-19.
Affiliation(s)
- Nagamani Tenali
- Department of CSE, Dr.Y.S. Rajasekhar Reddy University College of Engineering & Technology, Acharya Nagarjuna University, Nagarjuna Nagar, Guntur, India
- Gatram Rama Mohan Babu
- Computer Science and Engineering (AI&ML), RVR & JC College of Engineering, Chowdavaram, Guntur, India

14
Kanumuri C, Chodavarapu RM. GUI Enabled Optimized Approach of CNN for Automatic Diagnosis of COVID-19 Using Radiograph Images. New Generation Computing 2023; 41:213-224. [PMID: 37229178] [PMCID: PMC10010635] [DOI: 10.1007/s00354-023-00212-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/10/2022] [Accepted: 02/23/2023] [Indexed: 05/27/2023]
Abstract
The World Health Organization (WHO) proclaimed the coronavirus disease (COVID-19) a pandemic after it infected billions of individuals and killed hundreds of thousands. The spread and severity of the disease make early detection and classification key to reducing transmission as the variants change. COVID-19 can be categorized as a pneumonia infection. Pneumonia takes several forms, such as bacterial, fungal, and viral pneumonia, subcategorized into more than 20 types, with COVID-19 falling under viral pneumonia. A wrong prediction of any of these can mislead clinicians into improper treatment, with potentially life-threatening consequences. All of these forms can be diagnosed from radiographs, i.e., X-ray images. The proposed method employs a deep learning (DL) technique to detect these disease classes. Early detection of COVID-19 is possible with this model; hence, the spread of the disease can be minimized by isolating patients. For execution, a graphical user interface (GUI) provides more flexibility. The proposed GUI-based model is trained on 21 types of pneumonia radiographs using convolutional neural networks (CNNs) pre-trained on ImageNet and adapted to act as feature extractors for the radiograph images. Next, the CNNs are combined with united AI strategies. Several approaches have been proposed for COVID-19 detection, but they are concerned with COVID-19, pneumonia, and healthy patients only. In classifying more than 20 types of pneumonia infection, the proposed model attained an accuracy of 92%. Likewise, COVID-19 images are effectively distinguished from the other pneumonia radiographs.
Affiliation(s)
- Chalapathiraju Kanumuri
- Electronics and Communication Engineering, S.R.K.R Engineering College, Bhimavaram, Andhra Pradesh, India
- Renu Madhavi Chodavarapu
- Electronics and Instrumentation Engineering, RV College of Engineering, Bangalore, Karnataka, India

15
Bandwidth Improvement in Ultrasound Image Reconstruction Using Deep Learning Techniques. Healthcare (Basel) 2022; 11:123. [PMID: 36611583] [PMCID: PMC9819580] [DOI: 10.3390/healthcare11010123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/06/2022] [Revised: 12/24/2022] [Accepted: 12/29/2022] [Indexed: 01/03/2023]
Abstract
Ultrasound (US) imaging is a medical imaging modality that uses the reflection of sound in the range of 2-18 MHz to image internal body structures. In US, the frequency bandwidth (BW) is directly associated with image resolution. BW is a property of the transducer, and more bandwidth comes at a higher cost. Thus, methods that can transform strongly band-limited ultrasound data into broadband data are essential. In this work, we propose a deep learning (DL) technique to improve image quality for a given bandwidth by learning features provided by broadband data of the same field of view. The performance of several DL architectures and conventional state-of-the-art techniques for image quality improvement and artifact removal has therefore been compared on in vitro US datasets. Two training losses were utilized on three different architectures: a super-resolution convolutional neural network (SRCNN), U-Net, and a residual encoder-decoder network (REDNet). The models were trained to transform low-bandwidth image reconstructions into high-bandwidth image reconstructions, to reduce artifacts, and to make the reconstructions visually more appealing. Experiments were performed for 20%, 40%, and 60% fractional bandwidth on the original images and showed improvements as high as 45.5% in RMSE and 3.85 dB in PSNR on datasets with a 20% bandwidth limitation.
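Editor's note: the RMSE and PSNR figures quoted above are standard reconstruction-quality metrics. A minimal, self-contained computation (toy values; images assumed normalized to a peak of 1.0, not the paper's evaluation code) looks like:

```python
import math

def rmse(ref, est):
    """Root-mean-square error between two equally sized images (flat lists)."""
    assert len(ref) == len(est)
    return math.sqrt(sum((r - e) ** 2 for r, e in zip(ref, est)) / len(ref))

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = rmse(ref, est)
    if err == 0:
        return float("inf")
    return 20.0 * math.log10(peak / err)

# Toy example: a broadband reference and a band-limited estimate.
ref = [0.0, 0.5, 1.0, 0.5, 0.0]
est = [0.1, 0.4, 0.9, 0.6, 0.1]
error = rmse(ref, est)    # ~0.1
quality = psnr(ref, est)  # ~20.0 dB
```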
16
Mento F, Khan U, Faita F, Smargiassi A, Inchingolo R, Perrone T, Demi L. State of the Art in Lung Ultrasound, Shifting from Qualitative to Quantitative Analyses. Ultrasound in Medicine & Biology 2022; 48:2398-2416. [PMID: 36155147] [PMCID: PMC9499741] [DOI: 10.1016/j.ultrasmedbio.2022.07.007] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Received: 05/12/2022] [Revised: 07/12/2022] [Accepted: 07/15/2022] [Indexed: 05/27/2023]
Abstract
Lung ultrasound (LUS) has been increasingly expanding since the 1990s, when the clinical relevance of vertical artifacts was first reported. However, the massive spread of LUS is only recent and is associated with the coronavirus disease 2019 (COVID-19) pandemic, during which semi-quantitative computer-aided techniques were proposed to automatically classify LUS data. In this review, we discuss the state of the art in LUS, from semi-quantitative image analysis approaches to quantitative techniques involving the analysis of radiofrequency data. We also discuss recent in vitro and in silico studies, as well as research on LUS safety. Finally, conclusions are drawn highlighting the potential future of LUS.
Affiliation(s)
- Federico Mento
- Department of Information Engineering and Computer Science, University of Trento, Trento, Italy
- Umair Khan
- Department of Information Engineering and Computer Science, University of Trento, Trento, Italy
- Francesco Faita
- Institute of Clinical Physiology, National Research Council, Pisa, Italy
- Andrea Smargiassi
- Department of Cardiovascular and Thoracic Sciences, Pulmonary Medicine Unit, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
- Riccardo Inchingolo
- Department of Cardiovascular and Thoracic Sciences, Pulmonary Medicine Unit, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
- Libertario Demi
- Department of Information Engineering and Computer Science, University of Trento, Trento, Italy.

17
Contrasting EfficientNet, ViT, and gMLP for COVID-19 Detection in Ultrasound Imagery. J Pers Med 2022; 12:1707. [PMID: 36294846] [PMCID: PMC9605641] [DOI: 10.3390/jpm12101707] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/20/2022] [Revised: 09/19/2022] [Accepted: 10/10/2022] [Indexed: 11/06/2022]
Abstract
A timely diagnosis of coronavirus is critical in order to control the spread of the virus. To aid in this, we propose in this paper a deep learning-based approach for detecting coronavirus patients using ultrasound imagery. We propose to exploit transfer learning of an EfficientNet model pre-trained on the ImageNet dataset for the classification of ultrasound images of suspected patients. In particular, we contrast the results of EfficientNet-B2 with the results of ViT and gMLP. Then, we show the results of the three models learned from scratch, i.e., without transfer learning. We view the detection problem from a multiclass classification perspective by classifying images as COVID-19, pneumonia, or normal. In the experiments, we evaluated the models on a publicly available ultrasound dataset. This dataset consists of 261 recordings (202 videos + 59 images) belonging to 216 distinct patients. The best results were obtained using EfficientNet-B2 with transfer learning. In particular, we obtained precision, recall, and F1 scores of 95.84%, 99.88%, and 97.41%, respectively, for detecting the COVID-19 class. EfficientNet-B2 with transfer learning presented an overall accuracy of 96.79%, outperforming gMLP and ViT, which achieved accuracies of 93.03% and 92.82%, respectively.
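Editor's note: the per-class precision, recall, and F1 values reported above follow the usual one-vs-rest definitions for multiclass problems. A minimal sketch with toy labels (illustrative only, not the authors' evaluation code):

```python
def precision_recall_f1(y_true, y_pred, positive):
    """One-vs-rest precision, recall, and F1 for a single class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy three-class example: COVID-19 (C), pneumonia (P), normal (N).
y_true = ["C", "C", "C", "P", "P", "N", "N", "N"]
y_pred = ["C", "C", "P", "P", "P", "N", "C", "N"]
p, r, f = precision_recall_f1(y_true, y_pred, positive="C")
```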
18
Maximino J, Coimbra M, Pedrosa J. Detection of COVID-19 in Point of Care Lung Ultrasound. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1527-1530. [PMID: 36086665] [DOI: 10.1109/embc48229.2022.9871235] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/15/2023]
Abstract
The coronavirus disease 2019 (COVID-19) evolved into a global pandemic, responsible for a significant number of infections and deaths. In this scenario, point-of-care ultrasound (POCUS) has emerged as a viable and safe imaging modality. Computer vision (CV) solutions have been proposed to aid clinicians in POCUS image interpretation, namely detection/segmentation of structures and image/patient classification, but relevant challenges still remain. As such, the aim of this study is to develop CV algorithms, using deep learning techniques, to create tools that can aid doctors in the diagnosis of viral and bacterial pneumonia (VP and BP) through POCUS exams. To do so, convolutional neural networks were designed to perform classification tasks. The architectures chosen to build these models were VGG16, ResNet50, DenseNet169, and MobileNetV2. Patient images were divided into three classes: healthy (HE), BP, and VP (which includes COVID-19). Through a comparative study based on several performance metrics, the model built on the DenseNet169 architecture was designated the best performing, achieving an average accuracy of 78% over the five iterations of 5-fold cross-validation. Given that currently available POCUS datasets for COVID-19 are still limited, the training of the models was negatively affected, and the models were not tested on an independent dataset. Furthermore, it was also not possible to perform lesion detection tasks. Nonetheless, in order to provide explainability and understanding of the models, Gradient-weighted Class Activation Mapping (Grad-CAM) was used as a tool to highlight the most relevant classification regions. Clinical relevance - This work reveals the potential of POCUS to support COVID-19 screening. The results are very promising, although the dataset is limited.
19
Awasthi N, Vermeer L, Fixsen LS, Lopata RGP, Pluim JPW. LVNet: Lightweight Model for Left Ventricle Segmentation for Short Axis Views in Echocardiographic Imaging. IEEE Trans Ultrason Ferroelectr Freq Control 2022; 69:2115-2128. [PMID: 35452387] [DOI: 10.1109/tuffc.2022.3169684] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Indexed: 06/14/2023]
Abstract
Lightweight segmentation models are becoming more popular for fast diagnosis on small and low cost medical imaging devices. This study focuses on the segmentation of the left ventricle (LV) in cardiac ultrasound (US) images. A new lightweight model [LV network (LVNet)] is proposed for segmentation, which gives the benefits of requiring fewer parameters but with improved segmentation performance in terms of Dice score (DS). The proposed model is compared with state-of-the-art methods, such as UNet, MiniNetV2, and fully convolutional dense dilated network (FCdDN). The model proposed comes with a post-processing pipeline that further enhances the segmentation results. In general, the training is done directly using the segmentation mask as the output and the US image as the input of the model. A new strategy for segmentation is also introduced in addition to the direct training method used. Compared with the UNet model, an improvement in DS performance as high as 5% for segmentation with papillary (WP) muscles was found, while showcasing an improvement of 18.5% when the papillary muscles are excluded. The model proposed requires only 5% of the memory required by a UNet model. LVNet achieves a better trade-off between the number of parameters and its segmentation performance as compared with other conventional models. The developed codes are available at https://github.com/navchetanawasthi/Left_Ventricle_Segmentation.
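Editor's note: the Dice score (DS) used above to compare segmentation quality is straightforward to compute over binary masks. A minimal sketch (illustrative, not LVNet code):

```python
def dice_score(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists).

    Two empty masks are treated as perfect agreement (score 1.0).
    """
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

# Toy example: predicted mask vs. ground truth over six pixels.
pred = [1, 1, 1, 0, 0, 0]
gt = [0, 1, 1, 1, 0, 0]
score = dice_score(pred, gt)  # 2*2 / (3+3) = 0.666...
```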
20
De Rosa L, L'Abbate S, Kusmic C, Faita F. Applications of artificial intelligence in lung ultrasound: Review of deep learning methods for COVID-19 fighting. Artif Intell Med Imaging 2022; 3:42-54. [DOI: 10.35711/aimi.v3.i2.42] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/19/2021] [Revised: 02/22/2022] [Accepted: 04/26/2022] [Indexed: 02/06/2023]
Abstract
BACKGROUND The pandemic outbreak of the novel coronavirus disease (COVID-19) has highlighted the need to combine rapid, non-invasive and widely accessible techniques, with the least risk of patient cross-infection, to achieve successful early detection and surveillance of the disease. In this regard, the lung ultrasound (LUS) technique has proved invaluable in both the differential diagnosis and the follow-up of COVID-19 patients, and its potential is likely to keep evolving. Indeed, LUS has recently been empowered through the development of automated image processing techniques.
AIM To provide a systematic review of the application of artificial intelligence (AI) technology in medical LUS analysis of COVID-19 patients using the preferred reporting items of systematic reviews and meta-analysis (PRISMA) guidelines.
METHODS A literature search was performed for relevant studies published from March 2020 - outbreak of the pandemic - to 30 September 2021. Seventeen articles were included in the result synthesis of this paper.
RESULTS As part of the review, we presented the main characteristics related to AI techniques, in particular deep learning (DL), adopted in the selected articles. A survey was carried out on the type of architectures used, availability of the source code, network weights and open access datasets, use of data augmentation, use of the transfer learning strategy, type of input data and training/test datasets, and explainability.
CONCLUSION Finally, this review highlighted the existing challenges, including the lack of large datasets of reliable COVID-19-based LUS images to test the effectiveness of DL methods and the ethical/regulatory issues associated with the adoption of automated systems in real clinical scenarios.
Affiliation(s)
- Laura De Rosa
- Institute of Clinical Physiology, Consiglio Nazionale delle Ricerche, Pisa 56124, Italy
- Serena L'Abbate
- Institute of Clinical Physiology, Consiglio Nazionale delle Ricerche, Pisa 56124, Italy
- Institute of Life Sciences, Scuola Superiore Sant’Anna, Pisa 56124, Italy
- Claudia Kusmic
- Institute of Clinical Physiology, Consiglio Nazionale delle Ricerche, Pisa 56124, Italy
- Francesco Faita
- Institute of Clinical Physiology, Consiglio Nazionale delle Ricerche, Pisa 56124, Italy

21
Review of Machine Learning in Lung Ultrasound in COVID-19 Pandemic. J Imaging 2022; 8:65. [PMID: 35324620] [PMCID: PMC8952297] [DOI: 10.3390/jimaging8030065] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Received: 02/12/2022] [Revised: 03/01/2022] [Accepted: 03/02/2022] [Indexed: 12/25/2022]
Abstract
Ultrasound imaging of the lung has played an important role in managing patients with COVID-19–associated pneumonia and acute respiratory distress syndrome (ARDS). During the COVID-19 pandemic, lung ultrasound (LUS) or point-of-care ultrasound (POCUS) has been a popular diagnostic tool due to its unique imaging capability and logistical advantages over chest X-ray and CT. Pneumonia/ARDS is associated with the sonographic appearances of pleural line irregularities and B-line artefacts, which are caused by interstitial thickening and inflammation, and increase in number with severity. Artificial intelligence (AI), particularly machine learning, is increasingly used as a critical tool that assists clinicians in LUS image reading and COVID-19 decision making. We conducted a systematic review from academic databases (PubMed and Google Scholar) and preprints on arXiv or TechRxiv of the state-of-the-art machine learning technologies for LUS images in COVID-19 diagnosis. Openly accessible LUS datasets are listed. Various machine learning architectures have been employed to evaluate LUS and showed high performance. This paper will summarize the current development of AI for COVID-19 management and the outlook for emerging trends of combining AI-based LUS with robotics, telehealth, and other techniques.
22
Zhao L, Lediju Bell MA. A Review of Deep Learning Applications in Lung Ultrasound Imaging of COVID-19 Patients. BME Frontiers 2022; 2022:9780173. [PMID: 36714302] [PMCID: PMC9880989] [DOI: 10.34133/2022/9780173] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Indexed: 02/02/2023]
Abstract
The massive and continuous spread of COVID-19 has motivated researchers around the world to intensely explore, understand, and develop new techniques for diagnosis and treatment. Although lung ultrasound imaging is a less established approach when compared to other medical imaging modalities such as X-ray and CT, multiple studies have demonstrated its promise to diagnose COVID-19 patients. At the same time, many deep learning models have been built to improve the diagnostic efficiency of medical imaging. The integration of these initially parallel efforts has led multiple researchers to report deep learning applications in medical imaging of COVID-19 patients, most of which demonstrate the outstanding potential of deep learning to aid in the diagnosis of COVID-19. This invited review is focused on deep learning applications in lung ultrasound imaging of COVID-19 and provides a comprehensive overview of ultrasound systems utilized for data acquisition, associated datasets, deep learning models, and comparative performance.
Affiliation(s)
- Lingyi Zhao
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA

23
Wang Y, Zhang Y, He Q, Liao H, Luo J. Quantitative Analysis of Pleural Line and B-Lines in Lung Ultrasound Images for Severity Assessment of COVID-19 Pneumonia. IEEE Trans Ultrason Ferroelectr Freq Control 2022; 69:73-83. [PMID: 34428140] [PMCID: PMC8905613] [DOI: 10.1109/tuffc.2021.3107598] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Received: 07/14/2021] [Accepted: 08/21/2021] [Indexed: 06/12/2023]
Abstract
Specific patterns of lung ultrasound (LUS) images are used to assess the severity of coronavirus disease 2019 (COVID-19) pneumonia, but such assessment is mainly based on clinicians' qualitative and subjective observations. In this study, we quantitatively analyze LUS images to assess the severity of COVID-19 pneumonia by characterizing the patterns related to the pleural line (PL) and B-lines (BLs). Twenty-seven patients with COVID-19 pneumonia, including 13 moderate cases, seven severe cases, and seven critical cases, are enrolled. Features related to the PL, including the thickness (TPL) and roughness of the PL (RPL), and the mean (MPLI) and standard deviation (SDPLI) of the PL intensities, are extracted from the LUS images. Features related to the BLs, including the number (NBL), accumulated width (AWBL), attenuation coefficient (ACBL), and accumulated intensity (AIBL) of BLs, are also extracted. The correlations of these features with disease severity are evaluated. The performance of binary severe/non-severe classification is assessed for each feature and for support vector machine (SVM) classifiers with various combinations of features as input. Several features, including the RPL, NBL, AWBL, and AIBL, show statistically significant correlations with disease severity. The classification performance is optimal with the SVM classifier using all features as input (area under the receiver operating characteristic (ROC) curve = 0.96, sensitivity = 0.93, and specificity = 1). These findings demonstrate that the proposed method may be a promising tool for automatic grading diagnosis and follow-up of patients with COVID-19 pneumonia.
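Editor's note: the AUC reported above for the severe/non-severe classifier can be computed directly from scores via the Mann-Whitney statistic, without building the ROC curve explicitly. A generic sketch with hypothetical scores (not the authors' pipeline):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case scores above a
    randomly chosen negative case, counting ties as one half."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

severe = [0.9, 0.8, 0.7]      # hypothetical classifier scores, severe cases
non_severe = [0.6, 0.7, 0.2]  # hypothetical scores, non-severe cases
area = auc(severe, non_severe)  # 8.5 / 9
```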
Affiliation(s)
- Yuanyuan Wang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Yao Zhang
- Department of Ultrasound, Beijing Ditan Hospital, Capital Medical University, Beijing 100015, China
- Qiong He
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Jianwen Luo
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China

24
Gillman AG, Lunardo F, Prinable J, Belous G, Nicolson A, Min H, Terhorst A, Dowling JA. Automated COVID-19 diagnosis and prognosis with medical imaging and who is publishing: a systematic review. Phys Eng Sci Med 2021; 45:13-29. [PMID: 34919204] [PMCID: PMC8678975] [DOI: 10.1007/s13246-021-01093-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 12/08/2021] [Accepted: 12/13/2021] [Indexed: 12/31/2022]
Abstract
Objectives: To conduct a systematic survey of published techniques for automated diagnosis and prognosis of COVID-19 diseases using medical imaging, assessing the validity of reported performance and investigating the proposed clinical use-case. To conduct a scoping review into the authors publishing such work. Methods: The Scopus database was queried and studies were screened for article type, and minimum source normalized impact per paper and citations, before manual relevance assessment and a bias assessment derived from a subset of the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). The number of failures of the full CLAIM was adopted as a surrogate for risk-of-bias. Methodological and performance measurements were collected from each technique. Each study was assessed by one author. Comparisons were evaluated for significance with a two-sided independent t-test. Findings: Of 1002 studies identified, 390 remained after screening and 81 after relevance and bias exclusion. The ratio of exclusion for bias was 71%, indicative of a high level of bias in the field. The mean number of CLAIM failures per study was 8.3 ± 3.9 [1,17] (mean ± standard deviation [min,max]). 58% of methods performed diagnosis versus 31% prognosis. Of the diagnostic methods, 38% differentiated COVID-19 from healthy controls. For diagnostic techniques, area under the receiver operating curve (AUC) = 0.924 ± 0.074 [0.810,0.991] and accuracy = 91.7% ± 6.4 [79.0,99.0]. For prognostic techniques, AUC = 0.836 ± 0.126 [0.605,0.980] and accuracy = 78.4% ± 9.4 [62.5,98.0]. CLAIM failures did not correlate with performance, providing confidence that the highest results were not driven by biased papers. Deep learning techniques reported higher AUC (p < 0.05) and accuracy (p < 0.05), but no difference in CLAIM failures was identified. 
Interpretation: A majority of papers focus on the less clinically impactful diagnosis task, contrasted with prognosis, with a significant portion performing a clinically unnecessary task of differentiating COVID-19 from healthy. Authors should consider the clinical scenario in which their work would be deployed when developing techniques. Nevertheless, studies report superb performance in a potentially impactful application. Future work is warranted in translating techniques into clinical tools.
Affiliation(s)
- Ashley G Gillman
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia.
| | - Febrio Lunardo
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia.,College of Science and Engineering, James Cook University, Australian Tropical Science Innovation Precinct, Townsville, QLD, 4814, Australia
| | - Joseph Prinable
- ACRF Image X Institute, University of Sydney, Level 2, Biomedical Building (C81), 1 Central Ave, Australian Technology Park, Eveleigh, Sydney, NSW, 2015, Australia
| | - Gregg Belous
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
| | - Aaron Nicolson
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
| | - Hang Min
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
| | - Andrew Terhorst
- Data61, Commonwealth Scientific and Industrial Research Organisation, College Road, Sandy Bay, Hobart, TAS, 7005, Australia
| | - Jason A Dowling
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
25
|
Owen JP, Blazes M, Manivannan N, Lee GC, Yu S, Durbin MK, Nair A, Singh RP, Talcott KE, Melo AG, Greenlee T, Chen ER, Conti TF, Lee CS, Lee AY. Student becomes teacher: training faster deep learning lightweight networks for automated identification of optical coherence tomography B-scans of interest using a student-teacher framework. BIOMEDICAL OPTICS EXPRESS 2021; 12:5387-5399. [PMID: 34692189 PMCID: PMC8515993 DOI: 10.1364/boe.433432] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 07/10/2021] [Accepted: 07/19/2021] [Indexed: 06/13/2023]
Abstract
This work explores a student-teacher framework that leverages unlabeled images to train lightweight deep learning models with fewer parameters to perform fast automated detection of optical coherence tomography B-scans of interest. Twenty-seven lightweight models (LWMs) from four families of models were trained on expert-labeled B-scans (∼70 K) as either "abnormal" or "normal", which established a baseline performance for the models. Then the LWMs were trained from random initialization using a student-teacher framework to incorporate a large number of unlabeled B-scans (∼500 K). A pre-trained ResNet50 model served as the teacher network. The ResNet50 teacher model achieved 96.0% validation accuracy and the validation accuracy achieved by the LWMs ranged from 89.6% to 95.1%. The best performing LWMs were 2.53 to 4.13 times faster than ResNet50 (0.109s to 0.178s vs. 0.452s). All LWMs benefitted from increasing the training set by including unlabeled B-scans in the student-teacher framework, with several models achieving validation accuracy of 96.0% or higher. The three best-performing models achieved comparable sensitivity and specificity in two hold-out test sets to the teacher network. We demonstrated the effectiveness of a student-teacher framework for training fast LWMs for automated B-scan of interest detection leveraging unlabeled, routinely-available data.
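The core of the student-teacher setup described above is training the student on the teacher's soft predictions for unlabeled scans rather than on hard labels. The following is a minimal pure-Python sketch of that temperature-scaled soft-target loss; it is illustrative only, not the authors' code, and all function names and values are hypothetical.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature yields a softer
    distribution, exposing more of the teacher's relative confidences."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between the student's and teacher's softened outputs.
    Minimizing this on unlabeled B-scans pulls the lightweight student
    toward the large teacher's predictions."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))
```

The loss is smallest when the student reproduces the teacher's distribution, which is what lets the ~500 K unlabeled scans contribute a training signal without expert labels.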
Affiliation(s)
- Julia P. Owen
- Department of Ophthalmology, University of Washington, Seattle, WA 98195, USA
- Marian Blazes
- Department of Ophthalmology, University of Washington, Seattle, WA 98195, USA
- Gary C. Lee
- Carl Zeiss Meditec, Inc., Dublin, CA 94568, USA
- Sophia Yu
- Carl Zeiss Meditec, Inc., Dublin, CA 94568, USA
- Aditya Nair
- Carl Zeiss Meditec, Inc., Dublin, CA 94568, USA
- Rishi P. Singh
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic, Cleveland, OH 44195, USA
- Katherine E. Talcott
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic, Cleveland, OH 44195, USA
- Alline G. Melo
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic, Cleveland, OH 44195, USA
- Tyler Greenlee
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic, Cleveland, OH 44195, USA
- Eric R. Chen
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic, Cleveland, OH 44195, USA
- Thais F. Conti
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic, Cleveland, OH 44195, USA
- Cecilia S. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA 98195, USA
- Aaron Y. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA 98195, USA
26
|
Barros B, Lacerda P, Albuquerque C, Conci A. Pulmonary COVID-19: Learning Spatiotemporal Features Combining CNN and LSTM Networks for Lung Ultrasound Video Classification. SENSORS (BASEL, SWITZERLAND) 2021; 21:5486. [PMID: 34450928 PMCID: PMC8401701 DOI: 10.3390/s21165486] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 08/04/2021] [Accepted: 08/05/2021] [Indexed: 12/18/2022]
Abstract
Deep Learning is a very active and important area for building Computer-Aided Diagnosis (CAD) applications. This work aims to present a hybrid model to classify lung ultrasound (LUS) videos captured by convex transducers to diagnose COVID-19. A Convolutional Neural Network (CNN) performed the extraction of spatial features, and the temporal dependence was learned using a Long Short-Term Memory (LSTM). Different types of convolutional architectures were used for feature extraction. The hybrid model (CNN-LSTM) hyperparameters were optimized using the Optuna framework. The best hybrid model was composed of an Xception pre-trained on ImageNet and an LSTM containing 512 units, configured with a dropout rate of 0.4, two fully connected layers containing 1024 neurons each, and a sequence of 20 frames in the input layer (20×2018). The model presented an average accuracy of 93% and sensitivity of 97% for COVID-19, outperforming models based purely on spatial approaches. Furthermore, feature extraction using transfer learning with models pre-trained on ImageNet provided comparable results to models pre-trained on LUS images. The results corroborate with other studies showing that this model for LUS classification can be an important tool in the fight against COVID-19 and other lung diseases.
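The hybrid design above hands the CNN's per-frame features to an LSTM, whose final hidden state summarizes the whole clip. The toy scalar LSTM below sketches that temporal fold; it is a sketch under stated assumptions (scalar features, hand-rolled gates, hypothetical weight layout), not the authors' implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step over a scalar frame feature x. W maps each gate
    ('i' input, 'f' forget, 'o' output, 'g' candidate) to a (w_x, w_h, b)
    triple; real models use weight matrices over feature vectors."""
    gates = {}
    for g in ("i", "f", "o", "g"):
        w_x, w_h, b = W[g]
        pre = w_x * x + w_h * h_prev + b
        gates[g] = math.tanh(pre) if g == "g" else sigmoid(pre)
    c = gates["f"] * c_prev + gates["i"] * gates["g"]  # update the cell state
    h = gates["o"] * math.tanh(c)                      # emit the hidden state
    return h, c

def summarize_video(frame_features, W):
    """Fold CNN-extracted per-frame features through the LSTM; the final
    hidden state is what a classification head would score."""
    h, c = 0.0, 0.0
    for x in frame_features:
        h, c = lstm_step(x, h, c, W)
    return h
```

In the paper's configuration this fold runs over a 20-frame window of Xception features, with the LSTM's 512-unit output passed to fully connected layers for the COVID-19 decision.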
Affiliation(s)
- Bruno Barros
- Institute of Computing, Campus Praia Vermelha, Fluminense Federal University, Niterói 24.210-346, Brazil; (P.L.); (C.A.); (A.C.)