51
|
Rich CNN Features for Water-Body Segmentation from Very High Resolution Aerial and Satellite Imagery. REMOTE SENSING 2021. [DOI: 10.3390/rs13101912] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Accurately extracting water-bodies from very high resolution (VHR) remote sensing imagery is a great challenge. The boundaries of a water body are commonly hard to identify due to complex spectral mixtures caused by aquatic vegetation, distinct lake/river colors, silt near the bank, shadows from surrounding tall plants, and so on. The diversity and semantic information of features need to be increased for better extraction of water-bodies from VHR remote sensing images. In this paper, we address these problems by designing a novel multi-feature extraction and combination module. This module consists of three feature extraction sub-modules based on spatial and channel correlations in feature maps at each scale, which extract complete target information from the local space, the larger space, and the between-channel relationship to achieve a rich feature representation. Simultaneously, to better predict the fine contours of water-bodies, we adopt a multi-scale prediction fusion module. In addition, to resolve the semantic inconsistency of feature fusion between the encoding and decoding stages, we apply an encoder-decoder semantic feature fusion module to promote fusion effects. We carry out extensive experiments on VHR aerial and satellite imagery, respectively. The results show that our method achieves state-of-the-art segmentation performance, surpassing classic and recent methods. Moreover, the proposed method is robust in challenging water-body extraction scenarios.
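As a rough illustration of the kind of channel- and spatial-correlation modules this abstract describes, the following PyTorch sketch re-weights a feature map along its channels and spatial positions; the class name and layer sizes are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a generic channel- plus spatial-attention block of the kind
# described above (not the authors' actual module; all names and sizes are hypothetical).
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel branch: squeeze spatial dims, model between-channel relationships.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: a larger receptive field over the local neighbourhood.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)   # re-weight channels
        x = x * self.spatial_conv(x)  # re-weight spatial positions
        return x

feats = torch.randn(2, 64, 128, 128)      # a batch of feature maps at one scale
out = ChannelSpatialAttention(64)(feats)  # same shape, attention-refined
```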
|
52
|
Hricak H, Abdel-Wahab M, Atun R, Lette MM, Paez D, Brink JA, Donoso-Bach L, Frija G, Hierath M, Holmberg O, Khong PL, Lewis JS, McGinty G, Oyen WJG, Shulman LN, Ward ZJ, Scott AM. Medical imaging and nuclear medicine: a Lancet Oncology Commission. Lancet Oncol 2021; 22:e136-e172. [PMID: 33676609 PMCID: PMC8444235 DOI: 10.1016/s1470-2045(20)30751-8] [Citation(s) in RCA: 136] [Impact Index Per Article: 45.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2020] [Revised: 12/04/2020] [Accepted: 12/07/2020] [Indexed: 12/13/2022]
Abstract
The diagnosis and treatment of patients with cancer requires access to imaging to ensure accurate management decisions and optimal outcomes. Our global assessment of imaging and nuclear medicine resources identified substantial shortages in equipment and workforce, particularly in low-income and middle-income countries (LMICs). A microsimulation model of 11 cancers showed that the scale-up of imaging would avert 3·2% (2·46 million) of all 76·0 million deaths caused by the modelled cancers worldwide between 2020 and 2030, saving 54·92 million life-years. A comprehensive scale-up of imaging, treatment, and care quality would avert 9·55 million (12·5%) of all cancer deaths caused by the modelled cancers worldwide, saving 232·30 million life-years. Scale-up of imaging would cost US$6·84 billion in 2020-30 but yield lifetime productivity gains of $1·23 trillion worldwide, a net return of $179·19 per $1 invested. Combining the scale-up of imaging, treatment, and quality of care would provide a net benefit of $2·66 trillion and a net return of $12·43 per $1 invested. With the use of a conservative approach regarding human capital, the scale-up of imaging alone would provide a net benefit of $209·46 billion and net return of $31·61 per $1 invested. With comprehensive scale-up, the worldwide net benefit using the human capital approach is $340·42 billion and the return per dollar invested is $2·46. These improved health and economic outcomes hold true across all geographical regions. We propose actions and investments that would enhance access to imaging equipment, workforce capacity, digital technology, radiopharmaceuticals, and research and training programmes in LMICs, to produce massive health and economic benefits and reduce the burden of cancer globally.
Affiliation(s)
- Hedvig Hricak
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Department of Radiology, Weill Cornell Medical College, New York, NY, USA.
| | - May Abdel-Wahab
- International Atomic Energy Agency, Division of Human Health, Vienna, Austria; Radiation Oncology, National Cancer Institute, Cairo University, Cairo, Egypt; Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
| | - Rifat Atun
- Department of Global Health and Population, Harvard TH Chan School of Public Health, Boston, MA, USA; Department of Global Health and Social Medicine, Harvard Medical School, Harvard University, Boston, MA, USA
| | | | - Diana Paez
- International Atomic Energy Agency, Division of Human Health, Vienna, Austria
| | - James A Brink
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Harvard University, Boston, MA, USA
| | - Lluís Donoso-Bach
- Department of Medical Imaging, Hospital Clínic of Barcelona, University of Barcelona, Barcelona, Spain
| | | | | | - Ola Holmberg
- Radiation Protection of Patients Unit, International Atomic Energy Agency, Vienna, Austria
| | - Pek-Lan Khong
- Department of Diagnostic Radiology, University of Hong Kong, Hong Kong Special Administrative Region, China
| | - Jason S Lewis
- Department of Radiology and Molecular Pharmacology Programme, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Departments of Pharmacology and Radiology, Weill Cornell Medical College, New York, NY, USA
| | - Geraldine McGinty
- Departments of Radiology and Population Science, Weill Cornell Medical College, New York, NY, USA; American College of Radiology, Reston, VA, USA
| | - Wim J G Oyen
- Department of Biomedical Sciences and Humanitas Clinical and Research Centre, Department of Nuclear Medicine, Humanitas University, Milan, Italy; Department of Radiology and Nuclear Medicine, Rijnstate Hospital, Arnhem, Netherlands; Department of Radiology and Nuclear Medicine, Radboud University Medical Centre, Nijmegen, Netherlands
| | - Lawrence N Shulman
- Department of Medicine, Abramson Cancer Center, University of Pennsylvania, Philadelphia, PA, USA
| | - Zachary J Ward
- Center for Health Decision Science, Harvard TH Chan School of Public Health, Boston, MA, USA
| | - Andrew M Scott
- Tumour Targeting Laboratory, Olivia Newton-John Cancer Research Institute, Melbourne, VIC, Australia; Department of Molecular Imaging and Therapy, Austin Health, Melbourne, VIC, Australia; School of Cancer Medicine, La Trobe University, Melbourne, VIC, Australia; Department of Medicine, University of Melbourne, Melbourne, VIC, Australia
|
53
|
Salient detection network for lung nodule detection in 3D Thoracic MRI Images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102404] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
|
54
|
Sun L, Wang Z, Pu H, Yuan G, Guo L, Pu T, Peng Z. Attention-embedded complementary-stream CNN for false positive reduction in pulmonary nodule detection. Comput Biol Med 2021; 133:104357. [PMID: 33836449 DOI: 10.1016/j.compbiomed.2021.104357] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Revised: 03/22/2021] [Accepted: 03/22/2021] [Indexed: 01/18/2023]
Abstract
False positive reduction plays a key role in computer-aided detection systems for pulmonary nodule detection in computed tomography (CT) scans. However, it remains a challenge owing to the heterogeneity and similarity of anisotropic pulmonary nodules. In this study, a novel attention-embedded complementary-stream convolutional neural network (AECS-CNN) is proposed to obtain more representative features of nodules for false positive reduction. The proposed network comprises three functional blocks: 1) attention-guided multi-scale feature extraction, 2) a complementary-stream block with an attention module for feature integration, and 3) a classification block. The inputs of the network are multi-scale 3D CT volumes, owing to variations in nodule sizes. A gradual multi-scale feature extraction block with an attention module is first applied to acquire more contextual information about the nodules. A subsequent complementary-stream integration block with an attention module is then used to learn highly complementary features. Finally, the candidates are classified using a fully connected layer block. An exhaustive experiment on the LUNA16 challenge dataset was conducted to verify the effectiveness and performance of the proposed network. The AECS-CNN achieved a sensitivity of 0.92 at 4 false positives per scan. The results indicate that the attention mechanism can improve network performance in false positive reduction, that the proposed AECS-CNN can learn more representative features, and that the attention module can guide the network to learn discriminative feature channels and the crucial information embedded in the data, thereby effectively enhancing the performance of the detection system.
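A minimal sketch of the two ideas highlighted above, multi-scale 3D inputs and attention-gated streams, is given below; it is an illustration only, with placeholder layer sizes, and is not the published AECS-CNN.

```python
# Rough sketch of multi-scale 3D inputs plus channel attention (not the published
# AECS-CNN; layer sizes and names are placeholders).
import torch
import torch.nn as nn

class SE3D(nn.Module):
    """Squeeze-and-excitation style channel attention for 3D feature maps."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return x * self.fc(x)

class TwoStreamNoduleClassifier(nn.Module):
    """Two input scales, attention-gated streams, fused for a nodule / non-nodule output."""
    def __init__(self):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
                SE3D(16), nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
        self.small, self.large = stream(), stream()
        self.head = nn.Linear(32, 2)
    def forward(self, x_small, x_large):
        return self.head(torch.cat([self.small(x_small), self.large(x_large)], dim=1))

net = TwoStreamNoduleClassifier()
logits = net(torch.randn(4, 1, 20, 20, 20), torch.randn(4, 1, 36, 36, 36))
```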
Affiliation(s)
- Lingma Sun
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Zhuoran Wang
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Hong Pu
- Sichuan Provincial People's Hospital, Chengdu, Sichuan, 610072, China; School of Medicine, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China
| | - Guohui Yuan
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Lu Guo
- Sichuan Provincial People's Hospital, Chengdu, Sichuan, 610072, China; School of Medicine, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China
| | - Tian Pu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Zhenming Peng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China.
|
55
|
On the performance of lung nodule detection, segmentation and classification. Comput Med Imaging Graph 2021; 89:101886. [PMID: 33706112 DOI: 10.1016/j.compmedimag.2021.101886] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 01/11/2021] [Accepted: 02/02/2021] [Indexed: 01/10/2023]
Abstract
Computed tomography (CT) screening is an effective way for early detection of lung cancer in order to improve the survival rate of such a deadly disease. For more than two decades, image processing techniques such as nodule detection, segmentation, and classification have been extensively studied to assist physicians in identifying nodules from hundreds of CT slices to measure shapes and HU distributions of nodules automatically and to distinguish their malignancy. Thanks to new parallel computation, multi-layer convolution, nonlinear pooling operation, and the big data learning strategy, recent development of deep-learning algorithms has shown great progress in lung nodule screening and computer-assisted diagnosis (CADx) applications due to their high sensitivity and low false positive rates. This paper presents a survey of state-of-the-art deep-learning-based lung nodule screening and analysis techniques focusing on their performance and clinical applications, aiming to help better understand the current performance, the limitation, and the future trends of lung nodule analysis.
|
56
|
Sun S, Ren H, Dan T, Wei W. 3D segmentation of lungs with juxta-pleural tumor using the improved active shape model approach. Technol Health Care 2021; 29:385-398. [PMID: 33682776 PMCID: PMC8150541 DOI: 10.3233/thc-218037] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND AND OBJECTIVE: At present, there are many methods for pathological lung segmentation. However, two problems remain unresolved. (1) The search step in traditional ASM is a least-squares optimization, which is sensitive to outlier marker points and drives the profile toward the transition area between normal lung tissue and the tumor rather than the true lung contour. (2) If noisy images exist in the training dataset, a correct shape model cannot be constructed. METHODS: To solve the first problem, we propose a new ASM algorithm. First, outlier marker points are detected with a distance-based method, and different search functions are then applied to the abnormal and normal marker points. To solve the second problem, we exploit the fact that robust principal component analysis (RPCA), based on low-rank theory, can remove noise, and therefore combine RPCA with ASM in place of PCA. Before segmentation with ASM, a low-rank decomposition is applied to the marker-point matrix of the training dataset and to the covariance matrix used by PCA. RESULTS: Using the proposed method to segment 122 lung images with juxta-pleural tumors from the EMPIRE10 database, the overlap rate with the gold standard was 94.5%, whereas the accuracy of ASM based on PCA was only 69.5%. CONCLUSIONS: The results show that, even when noisy samples are contained in the training set, good segmentation of lungs with juxta-pleural tumors can be obtained by ASM based on RPCA.
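The RPCA step referred to above is the standard low-rank plus sparse decomposition; a minimal principal-component-pursuit sketch (an inexact augmented-Lagrangian iteration with common default parameters, not the authors' settings) is shown below.

```python
# Minimal RPCA sketch (principal component pursuit), illustrating the low-rank + sparse
# decomposition the method above builds on. Parameters follow common defaults and are
# not taken from the paper.
import numpy as np

def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose M into a low-rank part L and a sparse part S (M ~= L + S)."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(M).sum()
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    shrink = lambda X, tau: np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)
    for _ in range(max_iter):
        # Singular-value thresholding step for the low-rank component.
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(s, 1.0 / mu)) @ Vt
        # Soft-thresholding step for the sparse (outlier/noise) component.
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy usage: a noisy marker-point matrix (rows = training shapes, columns = coordinates).
X = np.random.randn(40, 5) @ np.random.randn(5, 60)   # low-rank shape variation
X[np.random.rand(*X.shape) < 0.05] += 10.0            # gross outliers
L_clean, S_noise = rpca(X)
```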
Affiliation(s)
- Shenshen Sun
- College of Information and Engineering, Shenyang University, Shenyang, Liaoning, China
| | - Huizhi Ren
- College of Mechanical Engineering, Shenyang University of Technology, Shenyang, Liaoning, China
| | - Tian Dan
- College of Information and Engineering, Shenyang University, Shenyang, Liaoning, China
| | - Wu Wei
- College of Information and Engineering, Shenyang University, Shenyang, Liaoning, China
|
57
|
Pawar SP, Talbar SN. LungSeg-Net: Lung field segmentation using generative adversarial network. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102296] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
|
58
|
Zhou J, Wang W, Lei B, Ge W, Huang Y, Zhang L, Yan Y, Zhou D, Ding Y, Wu J, Wang W. Automatic Detection and Classification of Focal Liver Lesions Based on Deep Convolutional Neural Networks: A Preliminary Study. Front Oncol 2021; 10:581210. [PMID: 33585197 PMCID: PMC7878526 DOI: 10.3389/fonc.2020.581210] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Accepted: 12/14/2020] [Indexed: 12/12/2022] Open
Abstract
With the increasing daily workload of physicians, computer-aided diagnosis (CAD) systems based on deep learning play an increasingly important role in the pattern recognition of diagnostic medical images. In this paper, we propose a framework based on hierarchical convolutional neural networks (CNNs) for the automatic detection and classification of focal liver lesions (FLLs) in multi-phasic computed tomography (CT). A total of 616 nodules, composed of three types of malignant lesions (hepatocellular carcinoma, intrahepatic cholangiocarcinoma, and metastasis) and benign lesions (hemangioma, focal nodular hyperplasia, and cyst), were randomly divided into training and test sets at an approximate ratio of 3:1. To evaluate the performance of our model, other commonly adopted CNN models and two physicians were included for comparison. Our model achieved the best results in detecting FLLs, with an average test precision of 82.8%, recall of 93.4%, and F1-score of 87.8%. Our model first classified FLLs as malignant or benign and then classified them into more detailed classes. For the binary and six-class classification, our model achieved average accuracies of 82.5% and 73.4%, respectively, which were better than those of the other three classification neural networks. Interestingly, the classification performance of the model fell between that of a junior physician and that of a senior physician. Overall, this preliminary study demonstrates that the proposed multi-modality and multi-scale CNN structure can locate and classify FLLs accurately in a limited dataset and would help inexperienced physicians reach a diagnosis in clinical practice.
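The coarse-to-fine classification described above can be sketched as a shared backbone with two heads, one for benign versus malignant and one for the six detailed classes; the backbone and layer sizes below are placeholders, not the authors' architecture (a recent torchvision is assumed).

```python
# Sketch of a hierarchical (coarse-to-fine) classification head: one branch predicts
# benign vs. malignant, a second refines to six lesion subtypes. Illustration only.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class HierarchicalFLLClassifier(nn.Module):
    def __init__(self, n_subtypes: int = 6):
        super().__init__()
        backbone = resnet18(weights=None)    # placeholder backbone
        backbone.fc = nn.Identity()          # expose the 512-d pooled features
        self.backbone = backbone
        self.coarse_head = nn.Linear(512, 2)          # benign vs. malignant
        self.fine_head = nn.Linear(512, n_subtypes)   # six detailed classes

    def forward(self, x):
        f = self.backbone(x)
        return self.coarse_head(f), self.fine_head(f)

model = HierarchicalFLLClassifier()
coarse, fine = model(torch.randn(2, 3, 224, 224))
print(coarse.shape, fine.shape)  # torch.Size([2, 2]) torch.Size([2, 6])
```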
Affiliation(s)
- Jiarong Zhou
- Department of Hepatobiliary and Pancreatic Surgery, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China.,Key Laboratory of Precision Diagnosis and Treatment for Hepatobiliary and Pancreatic Tumor of Zhejiang Province, Hangzhou, China
| | - Wenzhe Wang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
| | - Biwen Lei
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
| | - Wenhao Ge
- Department of Hepatobiliary and Pancreatic Surgery, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China.,Key Laboratory of Precision Diagnosis and Treatment for Hepatobiliary and Pancreatic Tumor of Zhejiang Province, Hangzhou, China
| | - Yu Huang
- Department of Hepatobiliary and Pancreatic Surgery, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China.,Key Laboratory of Precision Diagnosis and Treatment for Hepatobiliary and Pancreatic Tumor of Zhejiang Province, Hangzhou, China
| | - Linshi Zhang
- Department of Hepatobiliary and Pancreatic Surgery, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China.,Key Laboratory of Precision Diagnosis and Treatment for Hepatobiliary and Pancreatic Tumor of Zhejiang Province, Hangzhou, China
| | - Yingcai Yan
- Department of Hepatobiliary and Pancreatic Surgery, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China.,Key Laboratory of Precision Diagnosis and Treatment for Hepatobiliary and Pancreatic Tumor of Zhejiang Province, Hangzhou, China
| | - Dongkai Zhou
- Department of Hepatobiliary and Pancreatic Surgery, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China.,Key Laboratory of Precision Diagnosis and Treatment for Hepatobiliary and Pancreatic Tumor of Zhejiang Province, Hangzhou, China
| | - Yuan Ding
- Department of Hepatobiliary and Pancreatic Surgery, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China.,Key Laboratory of Precision Diagnosis and Treatment for Hepatobiliary and Pancreatic Tumor of Zhejiang Province, Hangzhou, China.,Clinical Medicine Innovation Center of Precision Diagnosis and Treatment for Hepatobiliary and Pancreatic Diseases of Zhejiang University, Hangzhou, China.,Clinical Research Center of Hepatobiliary and Pancreatic Diseases of Zhejiang Province, Hangzhou, China.,Research Center of Diagnosis and Treatment Technology for Hepatocellular Carcinoma of Zhejiang Province, Hangzhou, China
| | - Jian Wu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
| | - Weilin Wang
- Department of Hepatobiliary and Pancreatic Surgery, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China.,Key Laboratory of Precision Diagnosis and Treatment for Hepatobiliary and Pancreatic Tumor of Zhejiang Province, Hangzhou, China.,Clinical Medicine Innovation Center of Precision Diagnosis and Treatment for Hepatobiliary and Pancreatic Diseases of Zhejiang University, Hangzhou, China.,Clinical Research Center of Hepatobiliary and Pancreatic Diseases of Zhejiang Province, Hangzhou, China.,Research Center of Diagnosis and Treatment Technology for Hepatocellular Carcinoma of Zhejiang Province, Hangzhou, China
|
59
|
|
60
|
Gomi T, Hara H, Watanabe Y, Mizukami S. Improved digital chest tomosynthesis image quality by use of a projection-based dual-energy virtual monochromatic convolutional neural network with super resolution. PLoS One 2020; 15:e0244745. [PMID: 33382766 PMCID: PMC7774945 DOI: 10.1371/journal.pone.0244745] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Accepted: 12/15/2020] [Indexed: 12/22/2022] Open
Abstract
We developed a novel dual-energy (DE) virtual monochromatic (VM) very-deep super-resolution (VDSR) method with an unsharp masking reconstruction algorithm (DE–VM–VDSR) that uses projection data to improve the nodule contrast and reduce ripple artifacts during chest digital tomosynthesis (DT). For estimating the residual errors from high-resolution and multiscale VM images from the projection space, the DE–VM–VDSR algorithm employs a training network (mini-batch stochastic gradient-descent algorithm with momentum) and a hybrid super-resolution (SR) image [simultaneous algebraic reconstruction technique (SART) total-variation (TV) first-iterative shrinkage–thresholding algorithm (FISTA); SART–TV–FISTA] that involves subjective reconstruction with bilateral filtering (BF) [DE–VM–VDSR with BF]. DE-DT imaging was accomplished by pulsed X-ray exposures rapidly switched between low (60 kV, 37 projection) and high (120 kV, 37 projection) tube-potential kVp by employing a 40° swing angle. This was followed by comparison of images obtained employing the conventional polychromatic filtered backprojection (FBP), SART, SART–TV–FISTA, and DE–VM–SART–TV–FISTA algorithms. The improvements in contrast, ripple artifacts, and resolution were compared using the signal-difference-to-noise ratio (SDNR), Gumbel distribution of the largest variations, radial modulation transfer function (radial MTF) for a chest phantom with simulated ground-glass opacity (GGO) nodules, and noise power spectrum (NPS) for uniform water phantom. The novel DE–VM–VDSR with BF improved the overall performance in terms of SDNR (DE–VM–VDSR with BF: 0.1603, without BF: 0.1517; FBP: 0.0521; SART: 0.0645; SART–TV–FISTA: 0.0984; and DE–VM–SART–TV–FISTA: 0.1004), obtained a Gumbel distribution that yielded good images showing the type of simulated GGO nodules used in the chest phantom, and reduced the ripple artifacts. The NPS of DE–VM–VDSR with BF showed the lowest noise characteristics in the high-frequency region (~0.8 cycles/mm). The DE–VM–VDSR without BF yielded an improved resolution relative to that of the conventional reconstruction algorithms for radial MTF analysis (0.2–0.3 cycles/mm). Finally, based on the overall image quality, DE–VM–VDSR with BF improved the contrast and reduced the high-frequency ripple artifacts and noise.
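The reconstruction pipeline above is intricate; as a small, generic illustration of the VDSR idea it builds on (a deep stack of 3 x 3 convolutions predicting a residual that is added back to the input), a sketch follows; depth and width are arbitrary choices, not the paper's.

```python
# Generic VDSR-style residual super-resolution sketch (illustration only, not the
# DE-VM-VDSR pipeline described above).
import torch
import torch.nn as nn

class TinyVDSR(nn.Module):
    def __init__(self, depth: int = 10, width: int = 64):
        super().__init__()
        layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: the network predicts high-frequency detail only.
        return x + self.body(x)

lowres_slice = torch.randn(1, 1, 256, 256)  # e.g. an interpolated projection or slice
restored = TinyVDSR()(lowres_slice)
```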
Affiliation(s)
- Tsutomu Gomi
- School of Allied Health Sciences, Kitasato University, Sagamihara, Kanagawa, Japan
| | - Hidetake Hara
- School of Allied Health Sciences, Kitasato University, Sagamihara, Kanagawa, Japan
| | - Yusuke Watanabe
- School of Allied Health Sciences, Kitasato University, Sagamihara, Kanagawa, Japan
| | - Shinya Mizukami
- School of Allied Health Sciences, Kitasato University, Sagamihara, Kanagawa, Japan
|
61
|
Zahia S, Garcia-Zapirain B, Saralegui I, Fernandez-Ruanova B. Dyslexia detection using 3D convolutional neural networks and functional magnetic resonance imaging. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 197:105726. [PMID: 32916543 DOI: 10.1016/j.cmpb.2020.105726] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/03/2019] [Accepted: 08/22/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVES: Dyslexia is a disorder of neurological origin which affects learning in those who suffer from it, mainly children, and causes difficulty in reading and writing. When undiagnosed, dyslexia leads to intimidation and frustration of the affected children and also of their family circles. If no early intervention is given, children may reach high school with serious achievement gaps. Hence, early detection and intervention services for dyslexic students are highly important and recommended in order to support children in developing positive self-esteem and reaching their maximum academic capacities. This paper presents a new approach for the automatic recognition of children with dyslexia using functional magnetic resonance imaging (fMRI). METHODS: Our proposed system is composed of a sequence of preprocessing steps to retrieve the brain activation areas during three different reading tasks. Conversion to NIfTI volumes, adjustment for head motion, normalization, and smoothing transformations were performed on the fMRI scans in order to bring all the subjects' brains into one single model, enabling voxel-wise comparison between subjects. Subsequently, using statistical parametric maps (SPMs), a total of 165 3D volumes containing the brain activation of 55 children were created. The classification of these volumes was handled by three parallel 3D convolutional neural networks (3D CNNs), each corresponding to the brain activation during one reading task, concatenated in the last two dense layers to form a single architecture devoted to optimized detection of dyslexic brain activation. Additionally, we used a 4-fold cross-validation method in order to assess the generalizability of our model and control overfitting. RESULTS: Our approach achieved an overall average classification accuracy of 72.73%, sensitivity of 75%, specificity of 71.43%, precision of 60%, and an F1-score of 67% in dyslexia detection. CONCLUSIONS: The proposed system demonstrates that the recognition of dyslexic children is feasible using deep learning and functional magnetic resonance imaging when performing phonological and orthographic reading tasks.
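The three-branch design described above, one small 3D CNN per reading task with outputs concatenated before the final dense layers, can be sketched as follows; shapes and layer sizes are invented for the example and are not taken from the paper.

```python
# Illustrative three-branch 3D CNN: one branch per reading task, fused by concatenation.
import torch
import torch.nn as nn

def branch():
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
        nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    )

class ThreeTaskDyslexiaNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList([branch() for _ in range(3)])
        self.classifier = nn.Sequential(
            nn.Linear(3 * 16, 32), nn.ReLU(inplace=True), nn.Linear(32, 2),
        )

    def forward(self, volumes):            # volumes: list of three activation volumes
        feats = [b(v) for b, v in zip(self.branches, volumes)]
        return self.classifier(torch.cat(feats, dim=1))

vols = [torch.randn(2, 1, 32, 32, 32) for _ in range(3)]  # one SPM volume per task
logits = ThreeTaskDyslexiaNet()(vols)
```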
Affiliation(s)
- Sofia Zahia
- eVida research laboratory, University of Deusto, Bilbao 48007, Spain.
| | | | - Ibone Saralegui
- Department of Neuroradiology, Osatek, Biocruces-Bizkaia; Galdakao-Usansolo Hospital / Osakidetza, Galdakao 48960, Spain
|
62
|
Debelee TG, Kebede SR, Schwenker F, Shewarega ZM. Deep Learning in Selected Cancers' Image Analysis-A Survey. J Imaging 2020; 6:121. [PMID: 34460565 PMCID: PMC8321208 DOI: 10.3390/jimaging6110121] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 10/19/2020] [Accepted: 10/26/2020] [Indexed: 02/08/2023] Open
Abstract
Deep learning algorithms have become the first choice as an approach to medical image analysis, face recognition, and emotion recognition. In this survey, several deep-learning-based approaches applied to breast cancer, cervical cancer, brain tumors, and colon and lung cancers are studied and reviewed. Deep learning has been applied to almost all of the imaging modalities used for cervical and breast cancers, and to MRI for brain tumors. The result of the review process indicates that deep learning methods have achieved state-of-the-art performance in tumor detection, segmentation, feature extraction, and classification. As presented in this paper, the deep learning approaches were used in three different modes: training from scratch, transfer learning through freezing some layers of the deep learning network, and modifying the architecture to reduce the number of parameters in the network. Moreover, the application of deep learning to imaging devices for the detection of various cancer cases has been studied by researchers affiliated with academic and medical institutes in economically developed countries, while the topic has not received much attention in Africa despite the dramatic rise in cancer risk on the continent.
Affiliation(s)
- Taye Girma Debelee
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia; (S.R.K.); (Z.M.S.)
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, 120611 Addis Ababa, Ethiopia
| | - Samuel Rahimeto Kebede
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia; (S.R.K.); (Z.M.S.)
- Department of Electrical and Computer Engineering, Debreberhan University, 445 Debre Berhan, Ethiopia
| | - Friedhelm Schwenker
- Institute of Neural Information Processing, University of Ulm, 89081 Ulm, Germany;
|
63
|
Lu X, Gu Y, Yang L, Zhang B, Zhao Y, Yu D, Zhao J, Gao L, Zhou T, Liu Y, Zhang W. Multi-level 3D Densenets for False-positive Reduction in Lung Nodule Detection Based on Chest Computed Tomography. Curr Med Imaging 2020; 16:1004-1021. [PMID: 33081662 DOI: 10.2174/1573405615666191113122840] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2019] [Revised: 10/11/2019] [Accepted: 10/19/2019] [Indexed: 12/31/2022]
Abstract
OBJECTIVE: False-positive nodule reduction is a crucial part of a computer-aided detection (CADe) system, which assists radiologists in accurate lung nodule detection. In this research, a novel scheme using a multi-level 3D DenseNet framework is proposed to implement the false-positive nodule reduction task. METHODS: Multi-level 3D DenseNet models were extended to differentiate lung nodules from false-positive candidates. First, different models were fed with 3D cubes of different sizes, encoding multi-level contextual information to meet the challenge of the large variations of lung nodules. In addition, image rotation and flipping were utilized to upsample the positive samples that constituted the positive sample set. Furthermore, the 3D DenseNets were designed to retain low-level information of nodules, as the densely connected structure in DenseNet can reuse features of lung nodules and thus boost feature propagation. Finally, the optimal weighted linear combination of all model scores yielded the best classification result in this research. RESULTS: The proposed method was evaluated on the LUNA16 dataset, which contains 888 thin-slice CT scans. The performance was validated via 10-fold cross-validation. Both the Free-response Receiver Operating Characteristic (FROC) curve and the Competition Performance Metric (CPM) score show that the proposed scheme achieves a satisfactory detection performance in the false-positive reduction track of the LUNA16 challenge. CONCLUSION: The results show that the proposed scheme can be significant for the false-positive nodule reduction task.
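The final fusion step mentioned above, a weighted linear combination of the per-model candidate scores, can be illustrated with the sketch below; the grid search is a simple stand-in for however the optimal weights were actually chosen.

```python
# Sketch of weighted score fusion across models trained on different cube sizes.
# The grid search over weights is illustrative, not the authors' selection procedure.
import itertools
import numpy as np
from sklearn.metrics import roc_auc_score

def fuse_scores(model_scores, weights):
    """model_scores: (n_models, n_candidates) array of nodule probabilities."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w @ model_scores

def best_weights(model_scores, labels, step=0.1):
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    best, best_auc = None, -1.0
    for w in itertools.product(grid, repeat=model_scores.shape[0]):
        if sum(w) == 0:
            continue
        auc = roc_auc_score(labels, fuse_scores(model_scores, w))
        if auc > best_auc:
            best, best_auc = w, auc
    return best, best_auc

# Toy example: three models (one per input cube size) scoring 200 candidates.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
scores = np.clip(labels + 0.8 * rng.normal(size=(3, 200)), 0, 1)
weights, auc = best_weights(scores, labels)
```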
Affiliation(s)
- Xiaoqi Lu
- College of Information Engineering, Inner Mongolia University of Technology, Hohhot, 010051, China; Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
| | - Yu Gu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
| | - Lidong Yang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Baohua Zhang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Ying Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Dahua Yu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Jianfeng Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Lixin Gao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; School of Foreign Languages, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Tao Zhou
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Yang Liu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Wei Zhang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
|
64
|
Hong J, Feng Z, Wang SH, Peet A, Zhang YD, Sun Y, Yang M. Brain Age Prediction of Children Using Routine Brain MR Images via Deep Learning. Front Neurol 2020; 11:584682. [PMID: 33193046 PMCID: PMC7604456 DOI: 10.3389/fneur.2020.584682] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2020] [Accepted: 09/04/2020] [Indexed: 01/26/2023] Open
Abstract
Predicting the brain age of children accurately and quantitatively can aid brain development analysis and brain disease diagnosis. Traditional methods to estimate brain age based on 3D magnetic resonance (MR) data, T1-weighted imaging (T1WI), and diffusion tensor imaging (DTI) need complex preprocessing and extra scanning time, which limits clinical practice, especially in children. This research aims at proposing an end-to-end AI system based on deep learning to predict brain age from routine brain MR imaging. We spent over 5 years enrolling 220 stacked 2D routine clinical brain MR T1-weighted images of healthy children aged 0 to 5 years old and randomly divided those images into training data including 176 subjects and test data including 44 subjects. Data augmentation, including scaling, image rotation, translation, and gamma correction, was employed to extend the training data. A 10-layer 3D convolutional neural network (CNN) was designed to predict the brain age of children, and it achieved reliable and accurate results on the test data with a mean absolute deviation (MAE) of 67.6 days, a root mean squared error (RMSE) of 96.1 days, a mean relative error (MRE) of 8.2%, a correlation coefficient (R) of 0.985, and a coefficient of determination (R2) of 0.971. In particular, the performance in predicting the age of children under 2 years old, with a MAE of 28.9 days, a RMSE of 37.0 days, a MRE of 7.8%, a R of 0.983, and a R2 of 0.967, was much better than that for children over 2, with a MAE of 110.0 days, a RMSE of 133.5 days, a MRE of 8.2%, a R of 0.883, and a R2 of 0.780.
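The evaluation metrics quoted above (MAE, RMSE, MRE, R, and R2) can be reproduced from predicted and true ages as in this generic sketch; it is not the authors' code.

```python
# Generic regression metrics for brain-age prediction (ages in days).
import numpy as np

def regression_metrics(y_true_days, y_pred_days):
    y, p = np.asarray(y_true_days, float), np.asarray(y_pred_days, float)
    err = p - y
    mae = np.mean(np.abs(err))              # mean absolute deviation, in days
    rmse = np.sqrt(np.mean(err ** 2))       # root mean squared error
    mre = np.mean(np.abs(err) / y)          # mean relative error
    r = np.corrcoef(y, p)[0, 1]             # correlation coefficient
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y - y.mean()) ** 2)  # coefficient of determination
    return mae, rmse, mre, r, r2

# Toy usage with a handful of test subjects (hypothetical values).
true_age = np.array([120.0, 400.0, 750.0, 1100.0, 1600.0])
pred_age = np.array([140.0, 380.0, 800.0, 1050.0, 1700.0])
print(regression_metrics(true_age, pred_age))
```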
Affiliation(s)
- Jin Hong
- School of Informatics, University of Leicester, Leicester, United Kingdom
- Department of Radiology, Children's Hospital of Nanjing Medical University, Nanjing, China
| | - Zhangzhi Feng
- Department of Radiology, Children's Hospital of Nanjing Medical University, Nanjing, China
| | - Shui-Hua Wang
- School of Architecture Building and Civil Engineering, Loughborough University, Loughborough, United Kingdom
- School of Mathematics and Actuarial Science, University of Leicester, Leicester, United Kingdom
| | - Andrew Peet
- Institute of Cancer & Genomic Science, University of Birmingham, Birmingham, United Kingdom
| | - Yu-Dong Zhang
- School of Informatics, University of Leicester, Leicester, United Kingdom
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
| | - Yu Sun
- Institute of Cancer & Genomic Science, University of Birmingham, Birmingham, United Kingdom
- International Laboratory for Children's Medical Imaging Research, School of Biology Science and Medical Engineering, Southeast University, Nanjing, China
| | - Ming Yang
- Department of Radiology, Children's Hospital of Nanjing Medical University, Nanjing, China
|
65
|
Singh SP, Wang L, Gupta S, Goli H, Padmanabhan P, Gulyás B. 3D Deep Learning on Medical Images: A Review. SENSORS (BASEL, SWITZERLAND) 2020; 20:E5097. [PMID: 32906819 PMCID: PMC7570704 DOI: 10.3390/s20185097] [Citation(s) in RCA: 162] [Impact Index Per Article: 40.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/10/2020] [Revised: 08/31/2020] [Accepted: 09/03/2020] [Indexed: 12/20/2022]
Abstract
The rapid advancements in machine learning, graphics processing technologies, and the availability of medical imaging data have led to a rapid increase in the use of deep learning models in the medical domain. This trend was accelerated by the rapid advancements in convolutional neural network (CNN) based architectures, which were adopted by the medical imaging community to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN was developed from its machine learning roots, provide a brief mathematical description of the 3D CNN, and describe the preprocessing steps required for medical images before feeding them to 3D CNNs. We review the significant research in the field of 3D medical imaging analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection, and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field.
Affiliation(s)
- Satya P. Singh
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 608232, Singapore; (S.P.S.); (B.G.)
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore 636921, Singapore
| | - Lipo Wang
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore;
| | - Sukrit Gupta
- School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore; (S.G.); (H.G.)
| | - Haveesh Goli
- School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore; (S.G.); (H.G.)
| | - Parasuraman Padmanabhan
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 608232, Singapore; (S.P.S.); (B.G.)
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore 636921, Singapore
| | - Balázs Gulyás
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 608232, Singapore; (S.P.S.); (B.G.)
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore 636921, Singapore
- Department of Clinical Neuroscience, Karolinska Institute, 17176 Stockholm, Sweden
|
66
|
Ohno Y, Aoyagi K, Yaguchi A, Seki S, Ueno Y, Kishida Y, Takenaka D, Yoshikawa T. Differentiation of Benign from Malignant Pulmonary Nodules by Using a Convolutional Neural Network to Determine Volume Change at Chest CT. Radiology 2020; 296:432-443. [DOI: 10.1148/radiol.2020191740] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
67
|
Farhat H, Sakr GE, Kilany R. Deep learning applications in pulmonary medical imaging: recent updates and insights on COVID-19. MACHINE VISION AND APPLICATIONS 2020; 31:53. [PMID: 32834523 PMCID: PMC7386599 DOI: 10.1007/s00138-020-01101-5] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 06/21/2020] [Accepted: 07/07/2020] [Indexed: 05/07/2023]
Abstract
Shortly after deep learning (DL) algorithms were applied to image analysis, and more importantly to medical imaging, their applications increased significantly to become a trend. Likewise, DL applications on pulmonary medical images emerged to achieve remarkable advances leading to promising clinical trials. Yet, the coronavirus may be the real trigger that opens the route for fast integration of DL in hospitals and medical centers. This paper reviews the development of deep learning applications in medical image analysis targeting pulmonary imaging and gives insights into contributions to COVID-19. It covers more than 160 contributions and surveys in this field, all issued between February 2017 and May 2020 inclusive, highlighting various deep learning tasks such as classification, segmentation, and detection, as well as different pulmonary pathologies like airway diseases, lung cancer, COVID-19, and other infections. It summarizes and discusses the current state-of-the-art approaches in this research domain, highlighting the challenges, especially given the current situation of the COVID-19 pandemic.
Affiliation(s)
- Hanan Farhat
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
| | - George E. Sakr
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
| | - Rima Kilany
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
|
68
|
Hussain AA, Bouachir O, Al-Turjman F, Aloqaily M. AI Techniques for COVID-19. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:128776-128795. [PMID: 34976554 PMCID: PMC8545328 DOI: 10.1109/access.2020.3007939] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 07/04/2020] [Indexed: 05/18/2023]
Abstract
Artificial intelligence (AI) aims to extend human capabilities. It is gaining a foothold in healthcare services, fueled by the growing availability of clinical data and the rapid progress of intelligent techniques. Motivated by the need to employ AI in battling the COVID-19 crisis, this survey summarizes the current state of AI applications in clinical services during the fight against COVID-19. Furthermore, we highlight the application of big data in understanding this virus. We also give an overview of various intelligent techniques and methods that can be applied to different types of medical information in a pandemic. We classify the existing AI techniques for clinical data analysis, including neural networks, classical SVMs, and deep learning. An emphasis is also placed on areas that utilize AI-oriented cloud computing in combating viruses similar to COVID-19. This survey is an attempt to help medical practitioners and medical researchers overcome the difficulties they face in handling COVID-19 big data. The investigated techniques advance medical data analysis with an accuracy of up to 90%. We end with a detailed discussion of how AI implementation can be a significant advantage in combating similar viruses.
Affiliation(s)
- Adedoyin Ahmed Hussain
- Department of Computer Engineering, Near East University, 99138 Nicosia, Mersin 10, Turkey
- Research Centre for AI and IoT, Department of Artificial Intelligence Engineering, Near East University, 99138 Nicosia, Mersin 10, Turkey
| | - Ouns Bouachir
- Department of Computer Engineering, Zayed University, Dubai, United Arab Emirates
- College of Technological Innovation, Zayed University, Dubai, United Arab Emirates
| | - Fadi Al-Turjman
- Research Centre for AI and IoT, Department of Artificial Intelligence Engineering, Near East University, 99138 Nicosia, Mersin 10, Turkey
| | - Moayad Aloqaily
- College of Engineering, Al Ain University, Al Ain, United Arab Emirates
|
69
|
Bharati S, Podder P, Mondal MRH. Hybrid deep learning for detecting lung diseases from X-ray images. INFORMATICS IN MEDICINE UNLOCKED 2020; 20:100391. [PMID: 32835077 PMCID: PMC7341954 DOI: 10.1016/j.imu.2020.100391] [Citation(s) in RCA: 73] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Revised: 06/29/2020] [Accepted: 06/30/2020] [Indexed: 02/08/2023] Open
Abstract
Lung disease is common throughout the world. These diseases include chronic obstructive pulmonary disease, pneumonia, asthma, tuberculosis, fibrosis, etc. Timely diagnosis of lung disease is essential. Many image processing and machine learning models have been developed for this purpose. Different forms of existing deep learning techniques, including the convolutional neural network (CNN), vanilla neural network, visual geometry group based neural network (VGG), and capsule network, have been applied for lung disease prediction. The basic CNN performs poorly on rotated, tilted, or otherwise abnormally oriented images. Therefore, we propose a new hybrid deep learning framework by combining VGG, data augmentation, and a spatial transformer network (STN) with a CNN. This new hybrid method is termed here VGG Data STN with CNN (VDSNet). As implementation tools, Jupyter Notebook, TensorFlow, and Keras are used. The new model is applied to the NIH chest X-ray image dataset collected from the Kaggle repository. Full and sample versions of the dataset are considered. For both the full and sample datasets, VDSNet outperforms existing methods in terms of a number of metrics, including precision, recall, F0.5 score, and validation accuracy. For the full dataset, VDSNet exhibits a validation accuracy of 73%, while vanilla gray, vanilla RGB, hybrid CNN and VGG, and modified capsule network have accuracy values of 67.8%, 69%, 69.5%, and 63.8%, respectively. When the sample dataset rather than the full dataset is used, VDSNet requires much lower training time at the expense of a slightly lower validation accuracy. Hence, the proposed VDSNet framework will simplify the detection of lung disease for experts as well as for doctors.
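The spatial transformer network (STN) component mentioned above can be sketched as a small localisation network that predicts an affine transform and re-samples the input, which is what helps with rotated or tilted X-rays; the module below is a generic STN, not VDSNet itself.

```python
# Generic spatial transformer module (PyTorch); layer sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSTN(nn.Module):
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, 7), nn.MaxPool2d(2), nn.ReLU(inplace=True),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(10 * 4 * 4, 32), nn.ReLU(inplace=True), nn.Linear(32, 6),
        )
        # Initialise the predicted affine transform to the identity.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

xray = torch.randn(2, 1, 128, 128)
aligned = SimpleSTN()(xray)  # same shape, spatially re-sampled
```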
Affiliation(s)
- Subrato Bharati
- Institute of Information and Communication Technology, Bangladesh University of Engineering and Technology, Dhaka, 1205, Bangladesh
| | - Prajoy Podder
- Institute of Information and Communication Technology, Bangladesh University of Engineering and Technology, Dhaka, 1205, Bangladesh
| | - M Rubaiyat Hossain Mondal
- Institute of Information and Communication Technology, Bangladesh University of Engineering and Technology, Dhaka, 1205, Bangladesh
|
70
|
Ardakani AA, Kanafi AR, Acharya UR, Khadem N, Mohammadi A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Comput Biol Med 2020; 121:103795. [PMID: 32568676 PMCID: PMC7190523 DOI: 10.1016/j.compbiomed.2020.103795] [Citation(s) in RCA: 361] [Impact Index Per Article: 90.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Revised: 04/10/2020] [Accepted: 04/27/2020] [Indexed: 12/23/2022]
Abstract
Fast diagnostic methods can control and prevent the spread of pandemic diseases like coronavirus disease 2019 (COVID-19) and assist physicians in better managing patients under high-workload conditions. Although a laboratory test is the current routine diagnostic tool, it is time-consuming, imposes a high cost, and requires a well-equipped laboratory for analysis. Computed tomography (CT) has thus far become a fast method to diagnose patients with COVID-19. However, the performance of radiologists in the diagnosis of COVID-19 was moderate. Accordingly, additional investigations are needed to improve the performance in diagnosing COVID-19. In this study, a rapid and valid method for COVID-19 diagnosis based on an artificial intelligence technique is suggested. A total of 1020 CT slices from 108 patients with laboratory-proven COVID-19 (the COVID-19 group) and 86 patients with other atypical and viral pneumonia diseases (the non-COVID-19 group) were included. Ten well-known convolutional neural networks were used to distinguish the COVID-19 group from the non-COVID-19 group: AlexNet, VGG-16, VGG-19, SqueezeNet, GoogleNet, MobileNet-V2, ResNet-18, ResNet-50, ResNet-101, and Xception. Among all networks, the best performance was achieved by ResNet-101 and Xception. ResNet-101 could distinguish COVID-19 from non-COVID-19 cases with an AUC of 0.994 (sensitivity, 100%; specificity, 99.02%; accuracy, 99.51%). Xception achieved an AUC of 0.994 (sensitivity, 98.04%; specificity, 100%; accuracy, 99.02%). In contrast, the performance of the radiologist was moderate, with an AUC of 0.873 (sensitivity, 89.21%; specificity, 83.33%; accuracy, 86.27%). ResNet-101 can be considered a high-sensitivity model for characterizing and diagnosing COVID-19 infections and can be used as an adjuvant tool in radiology departments.
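The per-network figures reported above (AUC, sensitivity, specificity, accuracy) can be computed from predictions as in this generic sketch; scikit-learn is assumed and the data below are synthetic.

```python
# Generic diagnostic metrics from slice-level predictions.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def diagnostic_metrics(y_true, y_prob, threshold=0.5):
    """y_true: 1 = COVID-19, 0 = non-COVID-19; y_prob: predicted probability of COVID-19."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return roc_auc_score(y_true, y_prob), sensitivity, specificity, accuracy

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 300)                       # synthetic labels
y_prob = np.clip(0.7 * y_true + 0.3 * rng.random(300), 0, 1)
print(diagnostic_metrics(y_true, y_prob))
```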
Affiliation(s)
- Ali Abbasian Ardakani
- Medical Physics Department, School of Medicine, Iran University of Medical Sciences (IUMS), Tehran, Iran.
| | | | - U Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Singapore; School of Medicine, Faculty of Health and Medical Sciences, Taylor's University, 47500, Subang Jaya, Malaysia; Department of Biomedical Informatics and Medical Engineering, Asia University, Taiwan.
| | - Nazanin Khadem
- Department of Radiology, Faculty of Medicine, Urmia University of Medical Science, Urmia, Iran.
| | - Afshin Mohammadi
- Department of Radiology, Faculty of Medicine, Urmia University of Medical Science, Urmia, Iran.
|
71
|
Transfer Learning with Deep Convolutional Neural Network (CNN) for Pneumonia Detection Using Chest X-ray. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10093233] [Citation(s) in RCA: 115] [Impact Index Per Article: 28.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Pneumonia is a life-threatening disease which occurs in the lungs and is caused by either bacterial or viral infection. It can be life-endangering if not acted upon at the right time, and thus early diagnosis of pneumonia is vital. This paper aims to automatically detect bacterial and viral pneumonia using digital chest X-ray images. It provides a detailed report on advances in the accurate detection of pneumonia and then presents the methodology adopted by the authors. Four different pre-trained deep convolutional neural networks (CNNs), AlexNet, ResNet18, DenseNet201, and SqueezeNet, were used for transfer learning. A total of 5247 chest X-ray images consisting of bacterial, viral, and normal chest X-ray images were preprocessed and used to train the transfer-learning-based classification task. In this study, the authors report three classification schemes: normal vs. pneumonia, bacterial vs. viral pneumonia, and normal vs. bacterial vs. viral pneumonia. The classification accuracies for normal and pneumonia images, bacterial and viral pneumonia images, and normal, bacterial, and viral pneumonia images were 98%, 95%, and 93.3%, respectively. These are the highest accuracies reported in the literature for any of these schemes. Therefore, the proposed study can be useful for faster diagnosis of pneumonia by the radiologist and can help in the fast airport screening of pneumonia patients.
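A minimal transfer-learning sketch in the spirit of this study: load an ImageNet-pretrained backbone, replace the final layer for the three-class task, and fine-tune; the backbone choice and hyper-parameters below are illustrative, not those of the paper (a recent torchvision is assumed and pretrained weights are downloaded on first use).

```python
# Illustrative transfer learning for a 3-class chest X-ray task (normal / bacterial / viral).
import torch
import torch.nn as nn
from torchvision import models

num_classes = 3
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)  # downloads weights
model.fc = nn.Linear(model.fc.in_features, num_classes)                 # new classification head

# Optionally freeze the backbone and train only the new head first.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of chest X-rays.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```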
|
72
|
Kuo CFJ, Huang CC, Siao JJ, Hsieh CW, Huy VQ, Ko KH, Hsu HH. Automatic lung nodule detection system using image processing techniques in computed tomography. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101659] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
|
73
|
A Novel Transfer Learning Based Approach for Pneumonia Detection in Chest X-ray Images. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10020559] [Citation(s) in RCA: 185] [Impact Index Per Article: 46.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Pneumonia is among the diseases that cause the most deaths all over the world. Viruses, bacteria, and fungi can all cause pneumonia. However, it is difficult to judge pneumonia just by looking at chest X-rays. The aim of this study is to simplify the pneumonia detection process for experts as well as for novices. We suggest a novel deep learning framework for the detection of pneumonia using the concept of transfer learning. In this approach, features from images are extracted using different neural network models pretrained on ImageNet, which are then fed into a classifier for prediction. We prepared five different models and analyzed their performance. Thereafter, we proposed an ensemble model that combines the outputs of all pretrained models, which outperformed the individual models, reaching state-of-the-art performance in pneumonia recognition. Our ensemble model reached an accuracy of 96.4% with a recall of 99.62% on unseen data from the Guangzhou Women and Children's Medical Center dataset.
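The ensembling idea described above can be sketched as several networks each producing class probabilities that are then averaged; the backbones and the simple averaging below are illustrative and do not reproduce the paper's own classifier heads.

```python
# Illustrative probability-averaging ensemble of pretrained-style backbones.
import torch
import torch.nn as nn
from torchvision import models

def pneumonia_head(backbone, in_features, num_classes=2):
    """Replace the ImageNet classifier with a 2-class (normal / pneumonia) head."""
    backbone.fc = nn.Linear(in_features, num_classes)
    return backbone

members = [
    pneumonia_head(models.resnet18(weights=None), 512),   # weights=None to avoid downloads here
    pneumonia_head(models.resnet34(weights=None), 512),
]
for m in members:
    m.eval()

@torch.no_grad()
def ensemble_predict(models_list, x):
    probs = [torch.softmax(m(x), dim=1) for m in models_list]
    return torch.stack(probs).mean(dim=0)   # average of member probabilities

x = torch.randn(4, 3, 224, 224)             # a mini-batch of chest X-rays
print(ensemble_predict(members, x).shape)   # torch.Size([4, 2])
```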
|
74
|
Kim TJ, Kim CH, Lee HY, Chung MJ, Shin SH, Lee KJ, Lee KS. Management of incidental pulmonary nodules: current strategies and future perspectives. Expert Rev Respir Med 2019; 14:173-194. [PMID: 31762330 DOI: 10.1080/17476348.2020.1697853] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Introduction: Detection and characterization of pulmonary nodules is an important issue, because the process is the first step in the management of lung cancers. Areas covered: A literature review was performed on 15 May 2019 using PubMed, the US National Library of Medicine National Institutes of Health, and the National Center for Biotechnology Information. CT features helping to identify druggable mutations and predict the prognosis of malignant nodules were presented. Technical advancements in MRI and PET/CT were introduced for providing functional information about malignant nodules. Advances in various tissue biopsy techniques enabling molecular analysis and histologic diagnosis of indeterminate nodules were also presented. New techniques such as radiomics, deep learning (DL) technology, and artificial intelligence showing promise in differentiating between malignant and benign nodules were summarized. Recently updated management guidelines for solid and subsolid nodules incidentally detected on CT were described. Risk stratification and prediction models for indeterminate nodules under active investigation were briefly summarized. Expert opinion: Advancement in CT knowledge has led to a better correlation between CT features and genomic alterations or tumor histology. Recent advances such as PET/CT, MRI, radiomics, and DL-based approaches have shown promising results in the characterization and prognostication of pulmonary nodules.
Collapse
Affiliation(s)
- Tae Jung Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
| | - Cho Hee Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
| | - Ho Yun Lee
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
| | - Myung Jin Chung
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
| | - Sun Hye Shin
- Respiratory and Critical Care Division of Department of Internal Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
| | - Kyung Jong Lee
- Respiratory and Critical Care Division of Department of Internal Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
| | - Kyung Soo Lee
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
| |
Collapse
|
75
|
Wang Q, Shen F, Shen L, Huang J, Sheng W. Lung Nodule Detection in CT Images Using a Raw Patch-Based Convolutional Neural Network. J Digit Imaging 2019; 32:971-979. [PMID: 31062113 PMCID: PMC6841817 DOI: 10.1007/s10278-019-00221-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
Abstract
Remarkable progress has been made in image classification and segmentation owing to recent advances in deep convolutional neural networks (CNNs). To address the related problem of diagnostic lung nodule detection in low-dose computed tomography (CT) scans, we propose a new computer-aided detection (CAD) system based on CNNs and CT image segmentation techniques. Unlike earlier studies that focus on classifying malignant nodule types or rely on prior image processing, in this work we feed raw CT image patches directly into CNNs to reduce the complexity of the system. Specifically, we split each CT image into several patches, which are divided into six types: three nodule types and three non-nodule types. We compare the performance of ResNet with different CNN architectures on CT images from a publicly available dataset, the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). Results show that our best model reaches a high detection sensitivity of 92.8% at 8 false positives per scan (FPs/scan), a state-of-the-art result compared with related work.
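A minimal sketch of the raw-patch pipeline described above, assuming square non-overlapping patches and a ResNet-18 classifier adapted to single-channel CT input with six output classes; the patch size and network depth are illustrative choices, not the paper's exact configuration.

# Sketch: split a CT slice into fixed-size patches and classify each patch into
# one of six categories (3 nodule types, 3 non-nodule types).
import torch
import torch.nn as nn
from torchvision import models

def extract_patches(ct_slice: torch.Tensor, patch: int = 64) -> torch.Tensor:
    """ct_slice: (H, W) tensor -> (N, 1, patch, patch) non-overlapping patches."""
    h, w = ct_slice.shape
    ct_slice = ct_slice[: h - h % patch, : w - w % patch]   # crop to a multiple of patch
    tiles = ct_slice.unfold(0, patch, patch).unfold(1, patch, patch)
    return tiles.reshape(-1, 1, patch, patch)

def build_patch_classifier(num_classes: int = 6) -> nn.Module:
    """ResNet-18 adapted to single-channel CT patches and six patch classes."""
    net = models.resnet18(weights=None)
    net.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

# Usage: patches = extract_patches(slice_tensor); logits = build_patch_classifier()(patches)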
Collapse
Affiliation(s)
- Qin Wang
- Shanghai Jiao Tong University, Shanghai, 201100, China
| | - Fengyi Shen
- Shanghai Jiao Tong University, Shanghai, 201100, China
| | - Linyao Shen
- Shanghai Jiao Tong University, Shanghai, 201100, China
| | - Jia Huang
- Shanghai Chest Hospital, Shanghai, 200030, China
| | | |
Collapse
|
76
|
Nasrullah N, Sang J, Alam MS, Mateen M, Cai B, Hu H. Automated Lung Nodule Detection and Classification Using Deep Learning Combined with Multiple Strategies. SENSORS 2019; 19:s19173722. [PMID: 31466261 PMCID: PMC6749467 DOI: 10.3390/s19173722] [Citation(s) in RCA: 120] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/16/2019] [Revised: 08/13/2019] [Accepted: 08/26/2019] [Indexed: 01/12/2023]
Abstract
Lung cancer is one of the major causes of cancer-related death owing to its aggressive nature and its frequent detection only at advanced stages. Early detection of lung cancer is very important for survival and remains a significant and challenging problem. Chest radiographs (X-rays) and computed tomography (CT) scans are generally used for the initial diagnosis of malignant nodules; however, the possible presence of benign nodules can lead to erroneous decisions, because at early stages benign and malignant nodules closely resemble each other. In this paper, a novel deep learning-based model with multiple strategies is proposed for the precise diagnosis of malignant nodules. Motivated by the recent achievements of deep convolutional neural networks (CNNs) in image analysis, we used two deep three-dimensional (3D) customized mixed link network (CMixNet) architectures for lung nodule detection and classification, respectively. Nodule detection was performed with Faster R-CNN on features learned by CMixNet and a U-Net-like encoder-decoder architecture. Classification of the nodules was performed with a gradient boosting machine (GBM) on features learned by the designed 3D CMixNet structure. To reduce false positives and misdiagnoses due to different types of errors, the final decision was made in conjunction with physiological symptoms and clinical biomarkers. With the advent of the Internet of Things (IoT) and electro-medical technology, wireless body area networks (WBANs) provide continuous monitoring of patients, which helps in the diagnosis of chronic diseases, especially metastatic cancers. The deep learning model for nodule detection and classification, combined with clinical factors, helps reduce misdiagnoses and false positive (FP) results in early-stage lung cancer diagnosis. The proposed system was evaluated on the LIDC-IDRI dataset, achieving a sensitivity of 94% and a specificity of 91%, better results than those of existing methods.
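A minimal sketch of the classification stage described above, a gradient boosting machine applied to nodule descriptors learned by a 3D network; the feature extractor is assumed to exist separately, and the split and hyper-parameters are illustrative rather than the paper's settings.

# Sketch: gradient boosting machine (GBM) on features learned by a 3D CNN,
# for benign-vs-malignant nodule classification. Hyper-parameters are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def train_nodule_gbm(features: np.ndarray, labels: np.ndarray):
    """features: (N, D) learned nodule descriptors; labels: 0 = benign, 1 = malignant."""
    x_tr, x_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, stratify=labels, random_state=0)
    gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                     max_depth=3, random_state=0)
    gbm.fit(x_tr, y_tr)
    return gbm, gbm.score(x_te, y_te)   # held-out accuracy as a quick sanity check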
Collapse
Affiliation(s)
- Nasrullah Nasrullah
- Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044, China
- School of Big Data & Software Engineering, Chongqing University, Chongqing 401331, China
- Department of Software Engineering, Foundation University Islamabad, Islamabad 44000, Pakistan
| | - Jun Sang
- Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044, China.
- School of Big Data & Software Engineering, Chongqing University, Chongqing 401331, China.
| | - Mohammad S Alam
- Frank H. Dotterweich College of Engineering, Texas A&M University-Kingsville, Kingsville, TX 78363-8202, USA
| | - Muhammad Mateen
- Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044, China
- School of Big Data & Software Engineering, Chongqing University, Chongqing 401331, China
| | - Bin Cai
- Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044, China
- School of Big Data & Software Engineering, Chongqing University, Chongqing 401331, China
| | - Haibo Hu
- Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044, China
- School of Big Data & Software Engineering, Chongqing University, Chongqing 401331, China
| |
Collapse
|
77
|
Deepak S, Ameer PM. Brain tumor classification using deep CNN features via transfer learning. Comput Biol Med 2019; 111:103345. [PMID: 31279167 DOI: 10.1016/j.compbiomed.2019.103345] [Citation(s) in RCA: 264] [Impact Index Per Article: 52.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2019] [Revised: 06/26/2019] [Accepted: 06/26/2019] [Indexed: 11/28/2022]
Abstract
Brain tumor classification is an important problem in computer-aided diagnosis (CAD) for medical applications. This paper focuses on a three-class classification problem: differentiating among glioma, meningioma, and pituitary tumors, three prominent types of brain tumor. The proposed classification system adopts the concept of deep transfer learning and uses a pretrained GoogLeNet to extract features from brain MRI images. Proven classifier models are integrated to classify the extracted features. The experiment follows a patient-level five-fold cross-validation protocol on an MRI dataset from figshare. The proposed system records a mean classification accuracy of 98%, outperforming all state-of-the-art methods. Other performance measures used in the study are the area under the curve (AUC), precision, recall, F-score, and specificity. In addition, the paper addresses a practical aspect by evaluating the system with fewer training samples. The observations of the study imply that transfer learning is a useful technique when the availability of medical images is limited. The paper also provides an analytical discussion of misclassifications.
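A minimal sketch of the feature-plus-classifier design described above, assuming 1024-dimensional GoogLeNet pooled features and an SVM head; note that the folds here are sample-level, whereas the paper uses patient-level cross-validation.

# Sketch: classify pretrained-GoogLeNet features into glioma / meningioma / pituitary
# classes with an SVM, evaluated by 5-fold cross-validation. The classifier choice
# and feature dimensionality are assumptions.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_transfer_features(features: np.ndarray, labels: np.ndarray) -> float:
    """features: (N, 1024) GoogLeNet descriptors; labels: 0/1/2 tumor classes."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return cross_val_score(clf, features, labels, cv=cv).mean()   # mean fold accuracy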
Collapse
Affiliation(s)
- S Deepak
- Department of Electronics & Communication Engineering, National Institute of Technology, Calicut, India.
| | - P M Ameer
- Department of Electronics & Communication Engineering, National Institute of Technology, Calicut, India
| |
Collapse
|
78
|
Automatic lung nodule detection using multi-scale dot nodule-enhancement filter and weighted support vector machines in chest computed tomography. PLoS One 2019; 14:e0210551. [PMID: 30629724 PMCID: PMC6328111 DOI: 10.1371/journal.pone.0210551] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2018] [Accepted: 12/27/2018] [Indexed: 01/15/2023] Open
Abstract
A novel CAD scheme for automated lung nodule detection is proposed to assist radiologists in detecting lung cancer on CT scans. The proposed scheme is composed of four major steps: (1) lung volume segmentation, (2) nodule candidate extraction and grouping, (3) false positive reduction for the non-vessel-tree group, and (4) classification for the vessel-tree group. Lung segmentation is performed first. Then, 3D labeling is used to divide nodule candidates into two groups. For the non-vessel-tree group, nodule candidates are classified as true nodules at the false positive reduction stage if they survive the rule-based classifier and are not screened out by the dot filter. For the vessel-tree group, nodule candidates are extracted using the dot filter. Next, RSFS feature selection is used to select the most discriminating features for classification. Finally, a weighted support vector machine (WSVM) with an undersampling approach is adopted to discriminate true nodules from vessel bifurcations in the vessel-tree group. The proposed method was evaluated on 154 thin-slice scans with 204 nodules from the LIDC database. The proposed CAD scheme yielded a high sensitivity (87.81%) while maintaining a low false positive rate (1.057 FPs/scan). The experimental results indicate that the performance of our method may be better than that of existing methods.
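A minimal sketch of the WSVM-with-undersampling stage described above; the undersampling ratio and class weights are illustrative assumptions rather than the paper's exact configuration, and the candidate features are assumed to be precomputed.

# Sketch: weighted SVM (WSVM) with random undersampling of the majority class,
# used to separate true nodules from vessel bifurcations in the vessel-tree group.
import numpy as np
from sklearn.svm import SVC

def undersample_majority(x, y, ratio=1.0, seed=0):
    """Keep all positives (nodules) and a random subset of negatives of size ratio * n_pos."""
    rng = np.random.default_rng(seed)
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    keep_neg = rng.choice(neg, size=min(len(neg), int(ratio * len(pos))), replace=False)
    idx = np.concatenate([pos, keep_neg])
    return x[idx], y[idx]

def train_weighted_svm(x, y):
    """Class-weighted RBF SVM that penalizes missed nodules more than false alarms."""
    x_bal, y_bal = undersample_majority(x, y)
    svm = SVC(kernel="rbf", class_weight={0: 1.0, 1: 5.0}, probability=True)
    svm.fit(x_bal, y_bal)
    return svm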
Collapse
|