1. Okumura E, Kato H, Honmoto T, Suzuki N, Okumura E, Higashigawa T, Kitamura S, Ando J, Ishida T. [Segmentation of Mass in Mammogram Using Gaze Search Patterns]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2024; 80:487-498. [PMID: 38479883] [DOI: 10.6009/jjrt.2024-1438]
Abstract
PURPOSE It is very difficult for radiologists to correctly detect small lesions, or lesions hidden in dense breast tissue, on a mammogram. Computer-aided detection (CAD) systems are therefore widely used to assist radiologists in interpreting images. In this study, we aimed to segment masses on mammograms with high accuracy by using focus images obtained from an eye-tracking device. METHODS We obtained focus images from two expert mammography radiologists and 19 mammography technologists for 8 abnormal and 8 normal mammograms published in the DDSM. Next, an auto-encoder, Pix2Pix, and UNIT each learned the relationship between the actual mammogram and the focus image, and generated focus images for unknown mammograms. Finally, we segmented mass regions on the mammograms using a U-Net for each focus image generated by the auto-encoder, Pix2Pix, and UNIT. RESULTS The Dice coefficient for UNIT was 0.64±0.14, higher than that for the auto-encoder and Pix2Pix, with a statistically significant difference (p<0.05). The Dice coefficient of the proposed method, which combines the focus images generated by UNIT with the original mammogram, was 0.66±0.15, equivalent to the method using the original mammogram alone. CONCLUSION In the future, it will be necessary to increase the number of cases and further improve the segmentation.
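The segmentation metric reported above, the Dice coefficient, can be sketched in a few lines (a minimal illustration with masks represented as sets of pixel coordinates, not the authors' code):

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks given as sets of (row, col) pixels."""
    if not pred and not truth:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

pred = {(0, 0), (0, 1), (1, 0)}   # pixels the model labels as mass
truth = {(0, 1), (1, 0), (1, 1)}  # pixels in the reference annotation
print(round(dice_coefficient(pred, truth), 2))  # 0.67
```

A Dice value of 1.0 means perfect overlap and 0.0 means no overlap, which is why the 0.64-0.66 figures above indicate moderate agreement.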
Affiliation(s)
- Eiichiro Okumura
- Department of Radiological Technology, Faculty of Health Sciences, Tsukuba International University
- Hideki Kato
- Department of Radiological Science, Faculty of Health Science, Gunma Paz University
- Tsuyoshi Honmoto
- Department of Radiological Technology, Ibaraki Children's Hospital
- Erika Okumura
- Department of Radiological Technology, Faculty of Health Sciences, Tsukuba International University
- Takuji Higashigawa
- Group of Visual Measurement, Department of Technology, Nac Image Technology
- Shigemi Kitamura
- Department of Radiological Technology, Faculty of Health Sciences, Tsukuba International University
- Jiro Ando
- Hospital Director, Tochigi Cancer Center
- Takayuki Ishida
- Division of Health Sciences, Graduate School of Medicine, Osaka University
2. Juan J, Monsó E, Lozano C, Cufí M, Subías-Beltrán P, Ruiz-Dern L, Rafael-Palou X, Andreu M, Castañer E, Gallardo X, Ullastres A, Sans C, Lujàn M, Rubiés C, Ribas-Ripoll V. Computer-assisted diagnosis for an early identification of lung cancer in chest X rays. Sci Rep 2023; 13:7720. [PMID: 37173327] [PMCID: PMC10182094] [DOI: 10.1038/s41598-023-34835-z]
Abstract
Computer-assisted diagnosis (CAD) algorithms have shown their usefulness for the identification of pulmonary nodules in chest x-rays, but their capability to diagnose lung cancer (LC) is unknown. A CAD algorithm for the identification of pulmonary nodules was created and used on a retrospective cohort of patients with x-rays performed in 2008 and not examined by a radiologist when obtained. X-rays were sorted according to the probability of a pulmonary nodule, read by a radiologist, and the evolution over the following three years was assessed. The CAD algorithm sorted 20,303 x-rays and defined four subgroups with 250 images each (percentiles ≥ 98, 66, 33 and 0). Fifty-eight pulmonary nodules were identified in the ≥ 98th percentile (23.2%), while only 64 were found in the lower percentiles (8.5%) (p < 0.001). A pulmonary nodule was confirmed by the radiologist in 39 of the 173 patients in the high-probability group who had follow-up information (22.5%), and in 5 of them an LC was diagnosed with a delay of 11 months (12.8%). In one quarter of the chest x-rays considered high-probability for a pulmonary nodule by the CAD algorithm, the finding is confirmed, and it corresponds to an undiagnosed LC in one tenth of the cases.
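One plausible reading of the subgroup construction above is to rank all images by CAD probability and take a fixed-size group at or just above each percentile cutoff. The function below is a hypothetical sketch of that reading, not the study's actual implementation:

```python
def percentile_subgroups(scores, cutoffs=(98, 66, 33, 0), group_size=250):
    """Rank CAD nodule probabilities (highest first) and take `group_size`
    scores sitting at or just above each percentile cutoff."""
    ranked = sorted(scores, reverse=True)
    n = len(ranked)
    groups = {}
    for p in cutoffs:
        boundary = int(n * (100 - p) / 100)         # descending-rank index of the cutoff
        start = max(0, min(boundary, n) - group_size)
        groups[p] = ranked[start:start + group_size]
    return groups

scores = list(range(100))                            # toy CAD probabilities, 0..99
groups = percentile_subgroups(scores, cutoffs=(90, 0), group_size=10)
print(groups[90][:3])  # [99, 98, 97]
```

With 20,303 x-rays and 250-image groups, the ≥98th-percentile group would be drawn from the top 2% of scores, which matches the high-probability subgroup described above.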
Affiliation(s)
- Judith Juan
- Innovation Department, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Eduard Monsó
- Airway Inflammation Research Group, Institut d'Investigació i Innovació Parc Taulí (I3PT), Parc Taulí 1, 08208, Sabadell, Spain
- Carme Lozano
- Diagnostic Imaging Department, Parc Taulí Hospital Universitari, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Marta Cufí
- Diagnostic Imaging Department, Parc Taulí Hospital Universitari, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Marta Andreu
- Diagnostic Imaging Department, Parc Taulí Hospital Universitari, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Eva Castañer
- Diagnostic Imaging Department, Parc Taulí Hospital Universitari, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Xavier Gallardo
- Diagnostic Imaging Department, Parc Taulí Hospital Universitari, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Anna Ullastres
- Innovation Department, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Carles Sans
- Eurecat, Centre Tecnològic de Catalunya, Barcelona, Spain
- Manel Lujàn
- Respiratory Diseases Department, Parc Taulí Hospital Universitari, Institut d'Investigació i Innovació Parc Taulí (I3PT), Sabadell, Spain
- Carles Rubiés
- Informatics and Systems Department, Granollers General Hospital, Granollers, Barcelona, Spain
3. Agrawal T, Choudhary P. Segmentation and classification on chest radiography: a systematic survey. The Visual Computer 2022; 39:875-913. [PMID: 35035008] [PMCID: PMC8741572] [DOI: 10.1007/s00371-021-02352-7]
Abstract
Chest radiography (X-ray) is the most common diagnostic method for pulmonary disorders. A trained radiologist is required to interpret the radiographs, but even experienced radiologists can sometimes misinterpret the findings. This leads to the need for computer-aided detection and diagnosis. For decades, researchers automatically detected pulmonary disorders using traditional computer vision (CV) methods. The availability of large annotated datasets and computing hardware has now made it possible for deep learning to dominate the area; it is the modus operandi for feature extraction, segmentation, detection, and classification tasks in medical imaging analysis. This paper focuses on research conducted using chest X-rays for lung segmentation and the detection/classification of pulmonary disorders on publicly available datasets. Studies using generative adversarial network (GAN) models for segmentation and classification on chest X-rays are also included, as GANs have gained the interest of the CV community for their ability to mitigate medical data scarcity. We have also included research conducted before the rise of deep learning models to give a clear picture of the field. Many surveys have been published, but none of them is dedicated to chest X-rays. This study will help readers learn about the existing techniques and approaches and their significance.
Affiliation(s)
- Tarun Agrawal
- Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
- Prakash Choudhary
- Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
4. Kavithaa G, Balakrishnan P, Yuvaraj SA. Lung Cancer Detection and Improving Accuracy Using Linear Subspace Image Classification Algorithm. Interdiscip Sci 2021; 13:779-786. [PMID: 34351570] [DOI: 10.1007/s12539-021-00468-x]
Abstract
The ability to identify lung cancer at an early stage is critical, because it can help patients live longer. However, predicting the affected area while diagnosing cancer is a huge challenge. An intelligent computer-aided diagnostic system can be utilized to detect and diagnose lung cancer by detecting the damaged region. The suggested Linear Subspace Image Classification Algorithm (LSICA) classifies images in a linear subspace. The methodology is used to accurately identify the damaged region and involves three steps: image enhancement, segmentation, and classification. A spatial image clustering technique is used to quickly segment and identify the affected area in the image, and LSICA is then used to determine the accuracy of the affected region for classification purposes. The resulting lung cancer detection system, with classification-dependent image processing, is applied to lung cancer CT imaging. All programs were implemented in MATLAB. The proposed system is designed to easily identify the affected region with the help of the classification technique, yielding more accurate results.
Affiliation(s)
- G Kavithaa
- Department of Electronics and Communication Engineering, Government College of Engineering, Salem, Tamilnadu, India
- P Balakrishnan
- Malla Reddy Engineering College for Women (Autonomous), Hyderabad, 500100, India
- S A Yuvaraj
- Department of ECE, GRT Institute of Engineering and Technology, Tiruttani, Tamilnadu, India
5. Agrawal T, Choudhary P. FocusCovid: automated COVID-19 detection using deep learning with chest X-ray images. Evolving Systems 2021; 13:519-533. [PMID: 38624806] [PMCID: PMC8106902] [DOI: 10.1007/s12530-021-09385-2]
Abstract
COVID-19 is an acronym for coronavirus disease 2019. Initially the virus was called 2019-nCoV, and the International Committee on Taxonomy of Viruses (ICTV) later termed it SARS-CoV-2. On 30th January 2020, the World Health Organization (WHO) declared it a pandemic. With an increasing number of COVID-19 cases, the available medical infrastructure is essential for detecting suspected cases. Medical imaging techniques such as computed tomography (CT) and chest radiography can play an important role in the early screening and detection of COVID-19 cases, and it is important to identify and isolate cases to stop the further spread of the virus. Artificial intelligence can play an important role in COVID-19 detection and can decrease the workload on overburdened medical infrastructure. In this paper, a deep convolutional neural network-based architecture is proposed for COVID-19 detection using chest radiographs. The datasets used to train and test the model are available in public repositories. Despite the high accuracy of the model, the decision on COVID-19 should be made in consultation with a trained medical clinician.
Affiliation(s)
- Tarun Agrawal
- Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
- Prakash Choudhary
- Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
6. Rajagopalan K, Babu S. The detection of lung cancer using massive artificial neural network based on soft tissue technique. BMC Med Inform Decis Mak 2020; 20:282. [PMID: 33129343] [PMCID: PMC7602294] [DOI: 10.1186/s12911-020-01220-z]
Abstract
BACKGROUND Proposed computer-aided detection (CAD) schemes face major difficulty in recognizing subtle nodules: radiologists may not notice subtle nodules at the beginning stage of lung cancer, while a typical CAD scheme recognizes only non-subtle nodules in x-ray images. METHOD This issue was addressed by creating a massive artificial neural network (MANN)-based soft tissue technique applied to the lung-segmented x-ray image. The resulting soft tissue image is used to recognize nodule candidates for feature extraction and classification. X-ray images were taken from the Japanese Society of Radiological Technology (JSRT) image set, which includes 233 images (140 nodule x-ray images and 93 normal x-ray images). The mean nodule size is 17.8 mm, validated against computed tomography (CT) images. Thirty percent (42/140) of the abnormal images contain subtle nodules, which radiologists split into five grades (extremely subtle, very subtle, subtle, observable, relatively observable). RESULT The proposed CAD scheme without the soft tissue technique attained 66.42% (93/140) sensitivity and 66.76% accuracy at 2.5 false positives per image. Using the soft tissue technique, many nodules superimposed by ribs and clavicles were identified (sensitivity 72.85% (102/140) and accuracy 72.96% at one false positive per image). CONCLUSION In particular, the proposed CAD system's sensitivity and accuracy for subtle nodules (sensitivity 14/42 = 33.33%, accuracy 33.66%) are statistically higher than those of the CAD scheme without the soft tissue technique (sensitivity 13/42 = 30.95%, accuracy 30.97%). The proposed CAD scheme attained an extremely low false positive rate and is a promising technique for cancer recognition owing to its improved sensitivity and specificity.
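The per-lesion sensitivity and false-positive-rate figures quoted above follow directly from the counts; a trivial helper makes the arithmetic explicit (the false-positive total below, 233, i.e. one per image over the 233 JSRT images, is an assumed illustration, not a number from the paper):

```python
def cad_summary(true_positives, total_nodules, false_positives, n_images):
    """Per-lesion sensitivity and false positives per image for one CAD run."""
    return {
        "sensitivity": true_positives / total_nodules,
        "fps_per_image": false_positives / n_images,
    }

# Counts reported above for the soft-tissue variant on the JSRT set;
# the false-positive count is an assumed "one per image" illustration.
s = cad_summary(true_positives=102, total_nodules=140,
                false_positives=233, n_images=233)
print(round(s["sensitivity"], 4), s["fps_per_image"])  # 0.7286 1.0
```

Varying the decision threshold trades these two numbers against each other, which is why CAD papers report sensitivity at a stated FPs/image operating point.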
Affiliation(s)
- Kishore Rajagopalan
- Department of Electronics and Communication Engineering (ECE), Kamaraj College of Engineering and Technology (Autonomous), Virudhunagar, India
- Suresh Babu
- Department of Electronics and Communication Engineering (ECE), Kamaraj College of Engineering and Technology (Autonomous), Virudhunagar, India
7. Chen S, Han Y, Lin J, Zhao X, Kong P. Pulmonary nodule detection on chest radiographs using balanced convolutional neural network and classic candidate detection. Artif Intell Med 2020; 107:101881. [DOI: 10.1016/j.artmed.2020.101881]
8. Mendoza J, Pedrini H. Detection and classification of lung nodules in chest X-ray images using deep convolutional neural networks. Comput Intell 2020. [DOI: 10.1111/coin.12241]
Affiliation(s)
- Julio Mendoza
- Institute of Computing, University of Campinas, Campinas-SP, Brazil
- Helio Pedrini
- Institute of Computing, University of Campinas, Campinas-SP, Brazil
9. Li X, Shen L, Xie X, Huang S, Xie Z, Hong X, Yu J. Multi-resolution convolutional networks for chest X-ray radiograph based lung nodule detection. Artif Intell Med 2019; 103:101744. [PMID: 31732411] [DOI: 10.1016/j.artmed.2019.101744]
Abstract
Lung cancer is the leading cause of cancer death worldwide. Early detection of lung cancer helps provide the best possible clinical treatment for patients. Given the limited number of radiologists and the huge number of chest x-ray radiographs (CXRs) available for observation, a computer-aided detection scheme should be developed to assist radiologists in decision-making. While deep learning has shown state-of-the-art performance in several computer vision applications, it had not been used for lung nodule detection on CXRs. In this paper, a deep learning-based lung nodule detection method is proposed. We employed patch-based multi-resolution convolutional networks to extract features and employed four different fusion methods for classification. The proposed method shows much better performance and is much more robust than previously reported approaches. For the publicly available Japanese Society of Radiological Technology (JSRT) database, more than 99% of lung nodules can be detected at 0.2 false positives per image (FPs/image). The FAUC and R-CPM of the proposed method were 0.982 and 0.987, respectively. The proposed approach has potential applications in clinical practice.
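The patch-based multi-resolution idea mentioned above can be illustrated with a simple image pyramid: the same location is cropped at successively halved resolutions, so each patch covers a wider context. This is a generic sketch with images as nested lists, not the authors' input pipeline:

```python
def downsample(img):
    """Halve resolution by averaging 2x2 blocks (img: list of rows of gray levels)."""
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, len(img[0]) - 1, 2)]
            for y in range(0, len(img) - 1, 2)]

def multi_resolution_patches(img, cy, cx, size=2, levels=3):
    """Extract a size x size patch centred on (cy, cx) at several pyramid levels."""
    patches = []
    for _ in range(levels):
        half = size // 2
        patches.append([row[cx - half:cx - half + size]
                        for row in img[cy - half:cy - half + size]])
        img, cy, cx = downsample(img), cy // 2, cx // 2
    return patches
```

In a multi-resolution network, each such patch would feed a separate convolutional stream whose outputs are fused for the final classification.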
Affiliation(s)
- Xuechen Li
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong Province, PR China; Shenzhen Institute of Artificial Intelligence and Robotics for Society, PR China; Guangdong Key Laboratory of Intelligent Information Processing, Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen University, PR China
- Linlin Shen
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong Province, PR China; Shenzhen Institute of Artificial Intelligence and Robotics for Society, PR China; Guangdong Key Laboratory of Intelligent Information Processing, Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen University, PR China
- Xinpeng Xie
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong Province, PR China
- Shiyun Huang
- Sun Yat-Sen University Public Health Institute, Guangzhou, Guangdong Province, PR China
- Zhien Xie
- Guangzhou Thoracic Hospital, Guangzhou, Guangdong Province, PR China
- Xian Hong
- Guangzhou Thoracic Hospital, Guangzhou, Guangdong Province, PR China
- Juan Yu
- Imaging Department of Shenzhen University Health Science Center, Shenzhen University School of Medicine, Shenzhen Second People's Hospital, First Affiliated Hospital of Shenzhen University, Shenzhen, Guangdong, PR China
10. Kar S, Das Sharma K, Maitra M. Adaptive weighted aggregation in Group Improvised Harmony Search for lung nodule classification. J Exp Theor Artif Intell 2019. [DOI: 10.1080/0952813x.2019.1647561]
Affiliation(s)
- Subhajit Kar
- Department of Electrical Engineering, Future Institute of Engineering and Management, Kolkata, India
- Madhubanti Maitra
- Department of Electrical Engineering, Jadavpur University, Kolkata, India
11. Soft Tissue/Bone Decomposition of Conventional Chest Radiographs Using Nonparametric Image Priors. Appl Bionics Biomech 2019; 2019:9806464. [PMID: 31341514] [PMCID: PMC6613034] [DOI: 10.1155/2019/9806464]
Abstract
Background and Objective When radiologists diagnose lung diseases on chest radiographs, they can miss lung nodules overlapped with ribs or clavicles. Dual-energy subtraction (DES) imaging performs well because it can produce soft tissue images in which the bone components of the chest radiograph are almost suppressed while the visibility of nodules and lung vessels is maintained. However, most routinely available X-ray machines do not possess the DES function. We therefore present a data-driven decomposition model that performs a virtual DES function, decomposing a single conventional chest radiograph into soft tissue and bone images. Methods For a given chest radiograph, similar chest radiographs with corresponding DES soft tissue and bone images are selected from the training database as exemplars for decomposition. The corresponding fields between the observed chest radiograph and the exemplars are solved by a hierarchically dense matching algorithm. Nonparametric priors of the soft tissue and bone components are then constructed by sampling image patches from the selected soft tissue and bone images according to the corresponding fields. Finally, these nonparametric priors are integrated into our decomposition model, whose energy function is efficiently optimized by an iteratively reweighted least-squares (IRLS) scheme. Results The decomposition method was evaluated on a data set of posterior-anterior DES radiographs (503 cases), as well as on the JSRT data set. The proposed method produces soft tissue and bone images similar to those produced by an actual DES system. Conclusions The proposed method can markedly reduce the visibility of bony structures in chest radiographs and shows potential to enhance diagnosis.
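The IRLS optimizer named above is a generic tool: at each step a weighted least-squares problem is solved, then the weights are recomputed from the residuals. The sketch below applies it to a simple robust line fit with Huber weights, purely to illustrate the scheme; the paper's decomposition energy is of course different:

```python
def irls_line_fit(xs, ys, iters=100, delta=1.0):
    """Fit y ~ a*x + b by iteratively reweighted least squares:
    solve a weighted least-squares problem, then recompute Huber
    weights from the residuals, and repeat until convergence."""
    w = [1.0] * len(xs)
    a = b = 0.0
    for _ in range(iters):
        # weighted normal equations for a straight line
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        a = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
        b = (sy - a * sx) / sw
        # Huber reweighting: residuals larger than delta are down-weighted
        w = [1.0 if abs(y - (a * x + b)) <= delta
             else delta / abs(y - (a * x + b))
             for x, y in zip(xs, ys)]
    return a, b

# Four points on y = x plus one gross outlier; the outlier's influence
# is bounded, so the fit stays near the inlier trend.
a, b = irls_line_fit([0, 1, 2, 3, 4], [0, 1, 2, 3, 100])
```

Each reweighted solve is cheap, which is what makes IRLS attractive for large patch-based energies like the one described above.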
12. Tayal U, King L, Schofield R, Castellano I, Stirrup J, Pontana F, Earls J, Nicol E. Image reconstruction in cardiovascular CT: Part 2 - Iterative reconstruction; potential and pitfalls. J Cardiovasc Comput Tomogr 2019; 13:3-10. [PMID: 31014928] [DOI: 10.1016/j.jcct.2019.04.009]
Abstract
The use of iterative reconstruction (IR) in CT was previously prohibitively complicated and time consuming; however, improvements in computer processing power now make it possible on almost all CT scanners. Because of its potential to allow scanning at lower doses, IR has received considerable attention in the medical literature and has become a successful commercial product. Its use in cardiovascular CT has been driven in part by concerns about radiation dose and image quality. This manuscript discusses the various vendor permutations of IR in detail and critically appraises the current clinical research on the IR techniques used in cardiovascular CT.
Affiliation(s)
- U Tayal
- Department of Cardiovascular CT, Royal Brompton Hospital, London, UK
- L King
- Joint Department of Physics, The Royal Marsden, London, UK
- R Schofield
- Department of Cardiovascular CT, Royal Brompton Hospital, London, UK
- I Castellano
- Joint Department of Physics, The Royal Marsden, London, UK
- J Stirrup
- Department of Cardiology, Royal Berkshire Hospital, Reading, UK
- F Pontana
- Department of Cardiovascular Imaging, Lille University Hospital, France
- J Earls
- George Washington University Hospital, Washington DC, USA
- E Nicol
- Department of Cardiovascular CT, Royal Brompton Hospital, London, UK
13. Zarshenas A, Liu J, Forti P, Suzuki K. Separation of bones from soft tissue in chest radiographs: Anatomy-specific orientation-frequency-specific deep neural network convolution. Med Phys 2019; 46:2232-2242. [PMID: 30848498] [DOI: 10.1002/mp.13468]
Abstract
PURPOSE Lung nodules that are missed by radiologists as well as by computer-aided detection (CAD) systems mostly overlap with ribs and clavicles. Removing the bony structures would result in better visualization of undetectable lesions. Our purpose in this study was to develop a virtual dual-energy imaging system to separate ribs and clavicles from soft tissue in chest radiographs. METHODS We developed a mixture of anatomy-specific, orientation-frequency-specific (ASOFS) deep neural network convolution (NNC) experts. Anatomy-specific (AS) NNC was designed to separate the bony structures from soft tissue in different lung segments. While an AS design was proposed previously under our massive-training artificial neural network (MTANN) framework, in this work we mathematically define an AS experts model, together with its learning and inference strategies, in a probabilistic deep-learning framework. In addition, in combination with our AS experts design, we propose orientation-frequency-specific (OFS) NNC models that decompose bone and soft-tissue structures into specific orientation-frequency components of different scales using a multi-resolution decomposition technique. We trained multiple NNC models, each an expert for a specific orientation-frequency component in a particular anatomic segment. A perfect-reconstruction discrete wavelet transform was used for OFS decomposition/reconstruction, and we introduced a soft-gating layer to merge the predictions of the AS NNC experts. To train our model, we used bone images obtained from a dual-energy system as the target (teaching) images, with standard chest radiographs as the input. Training, validation, and testing were performed in a nested two-fold cross-validation manner. RESULTS We used a database of 118 chest radiographs with pulmonary nodules to evaluate our NNC scheme, performing quantitative and qualitative evaluation of the predicted bone and soft-tissue images from our model and from a state-of-the-art technique, with the "gold-standard" dual-energy bone and soft-tissue images as references. Both evaluations demonstrated that our ASOFS NNC was superior to the state-of-the-art bone-suppression technique. In particular, our scheme better maintained the conspicuity of nodules and lung vessels while separating ribs and clavicles from soft tissue. Compared with the state-of-the-art bone-suppression technique, our bone images had substantially higher (t-test; P < 0.01) similarity, in terms of the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR), to the "gold-standard" dual-energy bone images. CONCLUSIONS Our deep ASOFS NNC scheme can accurately decompose chest radiographs into bone and soft-tissue images, offering improved conspicuity of lung nodules and vessels, and would therefore be useful for radiologists as well as CAD systems in detecting lung nodules in chest radiographs.
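Of the two similarity metrics used in the comparison above, PSNR is the simpler; it can be computed directly from the mean squared error (a stdlib sketch for grayscale images stored as lists of rows; SSIM requires local statistics and is omitted):

```python
import math

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio between two same-sized grayscale images,
    stored as lists of rows of gray levels."""
    sq_errors = [(a - b) ** 2
                 for row_a, row_b in zip(img_a, img_b)
                 for a, b in zip(row_a, row_b)]
    mse = sum(sq_errors) / len(sq_errors)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

a = [[0, 0], [0, 0]]
b = [[10, 0], [0, 0]]                # one pixel differs by 10 gray levels
print(round(psnr(a, b), 1))          # 34.2
```

Higher PSNR against the "gold-standard" dual-energy images means the predicted bone image deviates less, pixel for pixel, from the reference.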
Affiliation(s)
- Amin Zarshenas
- Medical Imaging Research Center and Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL, 60616, USA
- Junchi Liu
- Medical Imaging Research Center and Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL, 60616, USA
- Paul Forti
- Medical Imaging Research Center and Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL, 60616, USA
- Kenji Suzuki
- Medical Imaging Research Center and Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL, 60616, USA
14. Zia ur Rehman M, Javaid M, Shah SIA, Gilani SO, Jamil M, Butt SI. An appraisal of nodules detection techniques for lung cancer in CT images. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2017.11.017]
15. Li X, Shen L, Luo S. A Solitary Feature-Based Lung Nodule Detection Approach for Chest X-Ray Radiographs. IEEE J Biomed Health Inform 2018; 22:516-524. [DOI: 10.1109/jbhi.2017.2661805]
16. Suzuki K. Overview of deep learning in medical imaging. Radiol Phys Technol 2017; 10:257-273. [PMID: 28689314] [DOI: 10.1007/s12194-017-0406-5]
Abstract
The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a lesser number of training cases than did CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.
Affiliation(s)
- Kenji Suzuki
- Medical Imaging Research Center and Department of Electrical and Computer Engineering, Illinois Institute of Technology, 3440 South Dearborn Street, Chicago, IL, 60616, USA; World Research Hub Initiative (WRHI), Tokyo Institute of Technology, Tokyo, Japan.
17
Cascade of multi-scale convolutional neural networks for bone suppression of chest radiographs in gradient domain. Med Image Anal 2017; 35:421-433. [PMID: 27589577 DOI: 10.1016/j.media.2016.08.004] [Citation(s) in RCA: 49] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2016] [Revised: 07/24/2016] [Accepted: 08/15/2016] [Indexed: 11/23/2022]
18
Chen S, Yao L, Chen B. A parameterized logarithmic image processing method with Laplacian of Gaussian filtering for lung nodule enhancement in chest radiographs. Med Biol Eng Comput 2016; 54:1793-1806. [DOI: 10.1007/s11517-016-1469-x] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2015] [Accepted: 02/15/2016] [Indexed: 12/17/2022]
19
Chen S, Zhong S, Yao L, Shang Y, Suzuki K. Enhancement of chest radiographs obtained in the intensive care unit through bone suppression and consistent processing. Phys Med Biol 2016; 61:2283-301. [PMID: 26930386 DOI: 10.1088/0031-9155/61/6/2283] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Portable chest radiographs (CXRs) are commonly used in the intensive care unit (ICU) to detect subtle pathological changes. However, exposure settings or patient and apparatus positioning deteriorate image quality in the ICU. Chest x-rays of patients in the ICU are often hazy and show low contrast and increased noise. To aid clinicians in detecting subtle pathological changes, we proposed a consistent processing and bone structure suppression method to decrease variations in image appearance and improve the diagnostic quality of images. We applied a region of interest-based look-up table to process original ICU CXRs such that they appeared consistent with each other and the standard CXRs. Then, an artificial neural network was trained by standard CXRs and the corresponding dual-energy bone images for the generation of a bone image. Once the neural network was trained, the real dual-energy image was no longer necessary, and the trained neural network was applied to the consistent processed ICU CXR to output the bone image. Finally, a gray level-based morphological method was applied to enhance the bone image by smoothing other structures on this image. This enhanced image was subtracted from the consistent, processed ICU CXR to produce a soft tissue image. This method was tested for 20 patients with a total of 87 CXRs. The findings indicated that our method suppressed bone structures on ICU CXRs and standard CXRs, simultaneously maintaining subtle pathological changes.
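The pipeline this abstract describes (consistency processing, a neural network that regresses a bone image from the CXR, morphological smoothing, and subtraction) can be illustrated with a toy sketch. This is not the authors' code: the trained ANN is replaced here by a plain least-squares regression from 3×3 pixel neighborhoods to the "dual-energy" bone value, the images are synthetic, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a "soft-tissue" background plus a horizontal "rib" ridge.
soft = rng.normal(0.5, 0.02, (32, 32))
bone = np.zeros((32, 32))
bone[12:16, :] = 0.4                      # simulated rib band
cxr = soft + bone                         # simulated portable CXR

# Stand-in for the ANN: linear regression from 3x3 neighborhoods of the
# CXR to the "dual-energy" bone value at the center pixel.
def patches(img):
    rows = []
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            rows.append(img[i-1:i+2, j-1:j+2].ravel())
    return np.array(rows)

X = np.c_[patches(cxr), np.ones((30 * 30,))]   # features + bias term
y = bone[1:-1, 1:-1].ravel()                   # bone target for interior pixels
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Apply the "trained network": predict the bone image and subtract it.
pred_bone = (X @ w).reshape(30, 30)
soft_pred = cxr[1:-1, 1:-1] - pred_bone        # soft-tissue image

# The rib band should be strongly attenuated in the soft-tissue image.
# (Interior row i of soft_pred corresponds to row i + 1 of the original.)
rib_before = cxr[12:16, 1:-1].mean() - cxr[20:24, 1:-1].mean()
rib_after = soft_pred[11:15, :].mean() - soft_pred[19:23, :].mean()
```

The real method trains on standard CXRs paired with actual dual-energy bone images and applies the network to consistency-processed ICU images; the sketch only shows the regress-then-subtract structure.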
Affiliation(s)
- Sheng Chen
- School of Optical-Electrical and Computer Engineering & Engineering Research Center of Optical Instrument and System, Ministry of Education, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
20
Advanced imaging tools in pulmonary nodule detection and surveillance. Clin Imaging 2016; 40:296-301. [PMID: 26916752 DOI: 10.1016/j.clinimag.2016.01.015] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2015] [Revised: 01/27/2016] [Accepted: 01/29/2016] [Indexed: 11/23/2022]
Abstract
Lung cancer is a leading cause of death worldwide. The National Lung Screening Trial has demonstrated that lung cancer screening can reduce lung cancer-specific and all-cause mortality. With approval of national coverage for lung cancer screening, an increase in exams related to pulmonary nodule detection and surveillance is expected. Advanced imaging technologies for nodule detection and surveillance will be more important than ever. While computed tomography (CT) remains the modality of choice, emerging modalities such as magnetic resonance imaging provide viable alternatives to CT.
21
Maduskar P, Hogeweg L, de Jong PA, Peters-Bax L, Dawson R, Ayles H, Sánchez CI, van Ginneken B. Cavity contour segmentation in chest radiographs using supervised learning and dynamic programming. Med Phys 2014; 41:071912. [PMID: 24989390 DOI: 10.1118/1.4881096] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023] Open
Abstract
PURPOSE Efficacy of tuberculosis (TB) treatment is often monitored using chest radiography. Monitoring the size of cavities in pulmonary tuberculosis is important, as the size predicts severity of the disease and its persistence under therapy predicts relapse. The authors present a method for automatic cavity segmentation in chest radiographs. METHODS A two-stage method is proposed to segment the cavity borders, given a user-defined seed point close to the center of the cavity. First, a supervised learning approach is employed to train a pixel classifier using texture and radial features to identify the border pixels of the cavity. A likelihood value of belonging to the cavity border is assigned to each pixel by the classifier. The authors experimented with four different classifiers: k-nearest neighbor (kNN), linear discriminant analysis (LDA), GentleBoost (GB), and random forest (RF). Next, the constructed likelihood map was used as an input cost image in the polar-transformed image space for dynamic programming to trace the optimal maximum-cost path. This constructed path corresponds to the segmented cavity contour in image space. RESULTS The method was evaluated on 100 chest radiographs (CXRs) containing 126 cavities. The reference segmentation was manually delineated by an experienced chest radiologist. An independent observer (a chest radiologist) also delineated all cavities to estimate interobserver variability. The Jaccard overlap measure Ω was computed between the reference segmentation and the automatic segmentation, and between the reference segmentation and the independent observer's segmentation for all cavities. A median overlap Ω of 0.81 (0.76 ± 0.16) and 0.85 (0.82 ± 0.11) was achieved between the reference segmentation and the automatic segmentation, and between the segmentations by the two radiologists, respectively.
The best reported mean contour distance and Hausdorff distance between the reference and the automatic segmentation were, respectively, 2.48 ± 2.19 and 8.32 ± 5.66 mm, whereas these distances were 1.66 ± 1.29 and 5.75 ± 4.88 mm between the segmentations by the reference reader and the independent observer, respectively. The automatic segmentations were also visually assessed by two trained CXR readers as "excellent," "adequate," or "insufficient." The readers had good agreement in assessing the cavity outlines and 84% of the segmentations were rated as "excellent" or "adequate" by both readers. CONCLUSIONS The proposed cavity segmentation technique produced results with a good degree of overlap with manual expert segmentations. The evaluation measures demonstrated that the results approached the results of the experienced chest radiologists, in terms of overlap measure and contour distance measures. Automatic cavity segmentation can be employed in TB clinics for treatment monitoring, especially in resource limited settings where radiologists are not available.
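The second stage described above, tracing the maximum-cost path through the polar-transformed likelihood map, can be sketched with a small dynamic program. The likelihood map below is synthetic, and the smoothness constraint (the radius may change by at most one step between adjacent angles) is an illustrative choice, not necessarily the paper's exact formulation.

```python
# Dynamic programming over a polar cost map (rows = angles, cols = radii):
# pick one radius per angle so the summed likelihood along the path is maximal.
def trace_contour(cost):
    n_ang, n_rad = len(cost), len(cost[0])
    best = [row[:] for row in cost]            # best[i][r]: max path cost ending at (i, r)
    back = [[0] * n_rad for _ in range(n_ang)] # back[i][r]: predecessor radius
    for i in range(1, n_ang):
        for r in range(n_rad):
            cands = [(best[i-1][p], p) for p in (r-1, r, r+1) if 0 <= p < n_rad]
            val, p = max(cands)
            best[i][r] = cost[i][r] + val
            back[i][r] = p
    # Backtrack from the best end radius.
    r = max(range(n_rad), key=lambda k: best[-1][k])
    path = [r]
    for i in range(n_ang - 1, 0, -1):
        r = back[i][r]
        path.append(r)
    path.reverse()
    return path  # one radius index per angle: the cavity contour

# Synthetic likelihood map: the border sits near radius 3, drifting to 4.
cost = [[1.0 if r == (3 if i < 4 else 4) else 0.1 for r in range(8)]
        for i in range(8)]
contour = trace_contour(cost)
```

In the paper the cost image comes from the pixel classifier's border likelihoods, and the traced path is mapped back from polar to image coordinates to yield the contour.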
Affiliation(s)
- Pragnya Maduskar
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
- Laurens Hogeweg
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
- Pim A de Jong
- Department of Radiology, University Medical Center Utrecht, 3584 CX, The Netherlands
- Liesbeth Peters-Bax
- Department of Radiology, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
- Rodney Dawson
- University of Cape Town Lung Institute, Cape Town 7700, South Africa
- Helen Ayles
- Department of Infectious and Tropical Diseases, London School of Hygiene and Tropical Medicine, London WC1E 7HT, United Kingdom
- Clara I Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
22
Hogeweg L, Sanchez CI, van Ginneken B. Suppression of translucent elongated structures: applications in chest radiography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2013; 32:2099-2113. [PMID: 23880041 DOI: 10.1109/tmi.2013.2274212] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Projection images, such as those routinely acquired in radiological practice, are difficult to analyze because multiple 3-D structures superimpose at a single point in the 2-D image. Removal of particular superimposed structures may improve interpretation of these images, both by humans and by computers. This work therefore presents a general method to isolate and suppress structures in 2-D projection images. The focus is on elongated structures, which allows an intensity model of a structure of interest to be extracted using local information only. The model is created from profiles sampled perpendicular to the structure. Profiles containing other structures are detected and removed to reduce the influence on the model. Subspace filtering, using blind source separation techniques, is applied to separate the structure to be suppressed from other structures. By subtracting the modeled structure from the original image a structure suppressed image is created. The method is evaluated in four experiments. In the first experiment ribs are suppressed in 20 artificial radiographs simulated from 3-D lung computed tomography (CT) images. The proposed method with blind source separation and outlier detection shows superior suppression of ribs in simulated radiographs, compared to a simplified approach without these techniques. Additionally, the ability of three observers to discriminate between patches containing ribs and containing no ribs, as measured by the area under the receiver operating characteristic curve (AUC), reduced from 0.99-1.00 on original images to 0.75-0.84 on suppressed images. In the second experiment clavicles are suppressed in 253 chest radiographs. The effect of suppression on clavicle visibility is evaluated using the clavicle contrast and border response, showing a reduction of 78% and 34%, respectively. In the third experiment nodules extracted from CT were simulated close to the clavicles in 100 chest radiographs. 
It was found that after suppression the contrast of the nodules was higher than that of the clavicles (1.35 vs. 0.55), whereas on the original images it was lower (1.83 vs. 2.46). In the fourth experiment catheters were suppressed in chest radiographs. The ability of three observers to discriminate between patches originating from 36 images with and 21 images without catheters, as measured by the AUC, reduced from 0.98-0.99 on original images to 0.64-0.74 on suppressed images. We conclude that the presented method can markedly reduce the visibility of elongated structures in chest radiographs and shows potential to enhance diagnosis.
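The core profile-model-and-subtract idea can be sketched for a simple horizontal structure. The outlier-profile removal and blind-source-separation steps that the paper adds are omitted, the image is synthetic, and the per-row median is only a stand-in for the paper's profile-based intensity model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic image: low-amplitude background plus a translucent
# elongated horizontal structure (e.g. a catheter or clavicle edge).
img = rng.normal(0.0, 0.05, (20, 40))
img[8:11, :] += 0.6

# Profiles perpendicular to a horizontal structure are the image columns.
profiles = img.T                          # one profile per column
model = np.median(profiles, axis=0)       # robust per-row intensity model
suppressed = img - model[:, None]         # structure-suppressed image

# Contrast of the structure band against a background band, before/after.
band_before = img[8:11].mean() - img[0:3].mean()
band_after = suppressed[8:11].mean() - suppressed[0:3].mean()
```

Using the median makes the model robust to a few profiles crossed by other structures, which is a simplified analogue of the paper's outlier handling.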