1. Chowa SS, Azam S, Montaha S, Bhuiyan MRI, Jonkman M. Improving the Automated Diagnosis of Breast Cancer with Mesh Reconstruction of Ultrasound Images Incorporating 3D Mesh Features and a Graph Attention Network. Journal of Imaging Informatics in Medicine 2024; 37:1067-1085. PMID: 38361007. DOI: 10.1007/s10278-024-00983-5.
Abstract
This study proposes a novel approach for classifying breast tumors in ultrasound images as benign or malignant by converting the region of interest (ROI) of a 2D ultrasound image into a 3D representation using the point-e system, allowing for in-depth analysis of underlying characteristics. Instead of relying solely on 2D imaging features, this method extracts 3D mesh features that describe tumor patterns more precisely. Ten informative and medically relevant mesh features are extracted and assessed with two feature selection techniques, and a feature pattern analysis is conducted to determine each feature's significance. A feature table with dimensions of 445 × 12 is generated and a graph is constructed, treating the rows as nodes and the relationships among them as edges. The Spearman correlation coefficient is employed to place edges between strongly connected nodes (correlation score greater than or equal to 0.7), resulting in a graph of 445 nodes and 56,054 edges. A graph attention network (GAT) is proposed for the classification task, and the model is optimized through an ablation study, reaching a highest accuracy of 99.34%. The performance of the proposed model is compared with ten machine learning (ML) models and a one-dimensional convolutional neural network, whose test accuracies range from 73% to 91%. Our novel 3D mesh-based approach, coupled with the GAT, yields promising performance for breast tumor classification, outperforms traditional models, and has the potential to reduce radiologists' time and effort by providing a reliable diagnostic system.
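The graph-construction step described above (rows of the feature table as nodes, edges between rows whose Spearman correlation is at least 0.7) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tiny feature table is made up, the ranking helper assumes no tied values, and the brute-force pair loop is for clarity only.

```python
from itertools import combinations

def rank(values):
    # simple ranking helper; assumes no tied values for clarity
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for position, index in enumerate(order):
        ranks[index] = float(position)
    return ranks

def spearman(x, y):
    # Spearman correlation = Pearson correlation of the ranks
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def build_edges(feature_table, threshold=0.7):
    # nodes are rows of the feature table; connect strongly correlated pairs
    return [(i, j)
            for i, j in combinations(range(len(feature_table)), 2)
            if spearman(feature_table[i], feature_table[j]) >= threshold]

# toy 3-row feature table: rows 0 and 1 are perfectly rank-correlated
edges = build_edges([[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]])
```

On the toy table only rows 0 and 1 are connected; applied to 445 rows, this kind of procedure yields the 445-node, 56,054-edge graph reported in the abstract.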
Affiliation(s)
- Sadia Sultana Chowa: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sami Azam: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sidratul Montaha: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Md Rahad Islam Bhuiyan: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Mirjam Jonkman: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
2. Mohammadi M, Fell C, Morrison D, Syed S, Konanahalli P, Bell S, Bryson G, Arandjelović O, Harrison DJ, Harris-Birtill D. Automated reporting of cervical biopsies using artificial intelligence. PLOS Digital Health 2024; 3:e0000381. PMID: 38648217. PMCID: PMC11034655. DOI: 10.1371/journal.pdig.0000381.
Abstract
When detected at an early stage, the 5-year survival rate for people with invasive cervical cancer is 92%. Awareness of the signs and symptoms of cervical cancer and early detection greatly improve the chances of successful treatment. We have developed an Artificial Intelligence (AI) algorithm, trained and evaluated on cervical biopsies, for automated reporting of digital diagnostics. The aim is to increase the overall efficiency of pathological diagnosis, with performance tuned for high sensitivity on malignant cases. A triage tool that identifies cancer and high-grade lesions may reduce reporting time by highlighting areas of interest on a slide for the pathologist, thereby improving efficiency. We trained and validated our algorithm on 1738 cervical whole slide images (WSIs), with one WSI per patient. On an independent test set of 811 WSIs, we achieved 93.4% malignant sensitivity for classifying slides. Processing a WSI with our algorithm takes approximately 1.5 minutes on an NVIDIA Tesla V100 GPU. Whole slide images of different formats (TIFF, iSyntax, and CZI) can be processed by this code, and it is easily extendable to other formats.
Affiliation(s)
- Mahnaz Mohammadi: School of Computer Science, University of St Andrews, St Andrews, United Kingdom
- Christina Fell: School of Computer Science, University of St Andrews, St Andrews, United Kingdom
- David Morrison: School of Computer Science, University of St Andrews, St Andrews, United Kingdom
- Sheeba Syed: Department of Pathology, Queen Elizabeth University Hospital, Glasgow, United Kingdom
- Prakash Konanahalli: Department of Pathology, Queen Elizabeth University Hospital, Glasgow, United Kingdom
- Sarah Bell: Department of Pathology, Queen Elizabeth University Hospital, Glasgow, United Kingdom
- Gareth Bryson: Department of Pathology, Queen Elizabeth University Hospital, Glasgow, United Kingdom
- Ognjen Arandjelović: School of Computer Science, University of St Andrews, St Andrews, United Kingdom
- David J. Harrison: School of Medicine, University of St Andrews, United Kingdom; Pathology, Division of Laboratory Medicine, Royal Infirmary of Edinburgh, United Kingdom
- David Harris-Birtill: School of Computer Science, University of St Andrews, St Andrews, United Kingdom
3. Kirelli Y, Arslankaya S, Koçer HB, Harmantepe T. CNN-based deep learning method for predicting the disease response to the Neoadjuvant Chemotherapy (NAC) treatment in breast cancer. Heliyon 2023; 9:e16812. PMID: 37303531. PMCID: PMC10248274. DOI: 10.1016/j.heliyon.2023.e16812.
Abstract
Objective: The objective of the study is to evaluate the performance of the proposed CNN-based models for predicting patients' response to NAC treatment and the disease development process in the pathological area, and to determine the main criteria that affect the models' success during training, such as the number of convolutional layers, dataset quality, and the dependent variable. Method: The study uses pathological data frequently used in the healthcare industry to evaluate the proposed CNN-based models. The classification performances of the models are analyzed and their success during training is evaluated. Results: The study shows that deep learning methods, particularly CNN models, can offer strong feature representation and lead to accurate predictions of patients' response to NAC treatment and the disease development process in the pathological area. A model has been created that predicts the 'Miller coefficient', 'tumor lymph node value', and 'complete response in both tumor and axilla', values considered effective indicators of a complete response to treatment, with estimation performance of 87%, 77%, and 91%, respectively. Conclusion: The study concludes that interpreting pathological test results with deep learning methods is an effective way of determining the correct diagnosis and treatment method, as well as following up the patient's prognosis. It offers clinicians a practical solution, particularly for large, heterogeneous datasets that are challenging to manage with traditional methods, and suggests that machine learning and deep learning can significantly improve the interpretation and management of healthcare data.
Affiliation(s)
- Yasin Kirelli: Management Information Systems, Kutahya Dumlupinar University, Kutahya, Turkey
- Seher Arslankaya: Industrial Engineering Department, Sakarya University, Sakarya, Turkey
4. Kusk MW, Lysdahlgaard S. The effect of Gaussian noise on pneumonia detection on chest radiographs, using convolutional neural networks. Radiography (Lond) 2023; 29:38-43. PMID: 36274315. DOI: 10.1016/j.radi.2022.09.011.
Abstract
INTRODUCTION: Under-exposed chest X-rays (CXR) have increased image noise, which may affect convolutional neural network (CNN) performance. This study aimed to train and validate CNNs for classifying CXR as normal or pneumonia at different image noise levels. METHODS: The study used the curated, publicly available "Chest X-Ray Pneumonia" dataset of 5856 AP CXR, comprising 1583 normal and 4273 viral and bacterial pneumonia cases. Gaussian noise with zero mean was added to the images at five noise variance levels, corresponding to decreasing exposure. Each noise-level dataset was split into 80% training, 10% validation, and 10% test data and classified using a custom-trained sequential CNN architecture. Six classification tasks were developed: one for each of the five Gaussian noise levels and one for the original dataset. Sensitivity, specificity, predictive values, and accuracy were used as evaluation metrics. RESULTS: CNN evaluation revealed no performance drop from the original dataset to the five noise-level datasets. Sensitivity, specificity, and accuracy on the original dataset were 98.7%, 76.1%, and 90.2%. Across the five Gaussian noise levels, sensitivity, specificity, and accuracy ranged from 96.9% to 98.2%, 94.4% to 98.7%, and 96.8% to 97.6%, respectively. A heat map was used for visual explanation of the CNNs. CONCLUSION: The CNNs' sensitivity was maintained, and their specificity increased, in distinguishing between normal and pneumonia CXR after the introduction of image noise. IMPLICATIONS FOR PRACTICE: No performance drop was observed for CNNs distinguishing CXR with and without pneumonia at different Gaussian noise levels. This has potential for decreasing radiation dose to patients or maintaining exposure parameters for patients who require additional radiographs.
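The noise-injection step can be sketched as follows, assuming pixel intensities normalized to [0, 1]; the five variance values listed here are hypothetical placeholders, not the levels used in the study.

```python
import random

def add_gaussian_noise(image, variance, seed=None):
    """Add zero-mean Gaussian noise to a 2D image (values in [0, 1]), clamping the result."""
    rng = random.Random(seed)
    sigma = variance ** 0.5
    return [[min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in row]
            for row in image]

# one noisy copy of the dataset per level, plus the original: six tasks in total
noise_levels = [0.001, 0.005, 0.01, 0.05, 0.1]  # hypothetical variance values
```

Increasing the variance mimics decreasing exposure, so a classifier trained per level can be compared against the noise-free baseline, as in the study design.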
Affiliation(s)
- M W Kusk: Department of Radiology and Nuclear Medicine, Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark
- S Lysdahlgaard: Department of Radiology and Nuclear Medicine, Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark; Department of Regional Health Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark; Imaging Research Initiative Southwest (IRIS), Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark
5. Xu X, Wang X, Lin J, Xiong H, Wang M, Tan H, Xiong K, Han D. Automatic Segmentation and Measurement of Choroid Layer in High Myopia for OCT Imaging Using Deep Learning. J Digit Imaging 2022; 35:1153-1163. PMID: 35581408. PMCID: PMC9582076. DOI: 10.1007/s10278-021-00571-x.
Abstract
Automatic segmentation and measurement of the choroid layer is useful in the study of related fundus diseases, such as diabetic retinopathy and high myopia. However, most algorithms struggle with choroid layer segmentation because of its blurred boundaries and complex gradients. This paper therefore proposes a novel choroid segmentation method that combines image enhancement with an attention-based dense (AD) U-Net network. The choroidal images obtained from optical coherence tomography (OCT) are pre-enhanced by algorithms that include flattening, filtering, and exponential and linear enhancement to reduce choroid-independent information. Experimental results on 800 OCT B-scans of choroid layers from both normal and highly myopic eyes showed that image enhancement significantly increased the performance of the ADU-Net, yielding an AUC of 99.51% and a DSC of 97.91%; the segmentation accuracy of the ADU-Net with image enhancement is superior to that of existing networks. In addition, we describe algorithms that automatically measure choroidal foveal thickness and the volume of adjacent areas. Statistical analyses of the variation in choroidal parameters indicated that, compared with normal eyes, highly myopic eyes show a reduction of 86.3% in choroidal foveal thickness and 90% in adjacent volume, suggesting that high myopia is likely to cause choroid layer attenuation. These algorithms could find wide application in the diagnosis and prevention of related fundus lesions caused by choroidal thinning in high myopia.
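A minimal sketch of what the linear and exponential enhancement steps can look like on normalized intensities. The gain and exponent values are hypothetical, and the authors' full pre-processing pipeline (including flattening and filtering) is not reproduced here.

```python
def enhance(intensities, gain=1.2, gamma=0.8):
    """Linear stretch followed by power-law (exponential) enhancement on [0, 1] intensities."""
    out = []
    for p in intensities:
        q = min(1.0, gain * p)   # linear enhancement, clamped to the valid range
        out.append(q ** gamma)   # gamma < 1 brightens mid-range values
    return out
```

Boosting mid-range intensities in this way increases contrast near the dim choroid-sclera boundary while leaving the extremes of the intensity range fixed.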
Affiliation(s)
- Xiangcong Xu: School of Physics and Optoelectronic Engineering, Foshan University, Foshan, Guangdong, China; Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, Foshan, People’s Republic of China; School of Mechatronic Engineering and Automation, Foshan University, Foshan, Guangdong, China
- Xuehua Wang: School of Physics and Optoelectronic Engineering, Foshan University, Foshan, Guangdong, China; Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, Foshan, People’s Republic of China
- Jingyi Lin: School of Physics and Optoelectronic Engineering, Foshan University, Foshan, Guangdong, China; Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, Foshan, People’s Republic of China
- Honglian Xiong: School of Physics and Optoelectronic Engineering, Foshan University, Foshan, Guangdong, China; Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, Foshan, People’s Republic of China
- Mingyi Wang: School of Physics and Optoelectronic Engineering, Foshan University, Foshan, Guangdong, China; Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, Foshan, People’s Republic of China
- Haishu Tan: School of Physics and Optoelectronic Engineering, Foshan University, Foshan, Guangdong, China; Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, Foshan, People’s Republic of China
- Ke Xiong: Department of Ophthalmology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Dingan Han: School of Physics and Optoelectronic Engineering, Foshan University, Foshan, Guangdong, China; Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic, Foshan, People’s Republic of China
6. Ding Y, Wang T. Mental Health Management of English Teachers in English Teaching Under the COVID-19 Era. Front Psychol 2022; 13:916886. PMID: 35756224. PMCID: PMC9226886. DOI: 10.3389/fpsyg.2022.916886.
Abstract
Background: The COVID-19 pandemic has brought new challenges and attention to the mental health of all social groups, making attention to mental health increasingly necessary and important. However, attention has focused mainly on the mental health of undergraduates, while the mental health of teachers has received little attention from society. College teachers are the backbone of the teaching profession, and their mental health affects not only teaching quality and research output but also the mental health and personality development of undergraduates. Method: During the COVID-19 pandemic, online teaching has been a major challenge for college teachers, especially English teachers. To this end, this article proposes a bipartite graph convolutional network (BGCN) model, based on a psychological test questionnaire and its structural characteristics, for recognizing mental health crises. Results: Experimental results show that the proposed BGCN model is superior to neural network algorithms and other machine learning algorithms in accuracy, precision, F1, and recall, and can be well applied to the mental health management of English teachers in the COVID-19 era.
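The reported metrics (accuracy, precision, recall, F1) follow the standard confusion-matrix definitions, sketched here for the binary case:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted positives, how many are real
    recall = tp / (tp + fn)             # of real positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

For a crisis-detection task like this one, recall is the safety-critical figure: a false negative means a teacher in crisis goes unflagged.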
Affiliation(s)
- Yiling Ding: Heilongjiang University, Harbin, China; Harbin Normal University, Harbin, China
|
7
|
Mitsutake H, Watanabe H, Sakaguchi A, Uchiyama K, Lee Y, Hayashi N, Shimosegawa M, Ogura T. [Evaluation of Radiograph Accuracy in Skull X-ray Images Using Deep Learning]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2022; 78:23-32. [PMID: 35046219 DOI: 10.6009/jjrt.780104] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
PURPOSE: Accurate positioning is essential in radiography, and it is especially important for maintaining image reproducibility in follow-up examinations. The decision to retake a radiograph is entrusted to the individual radiological technologist; this evaluation is visual and qualitative, and acceptance criteria vary between individuals. In this study, we propose an image evaluation method for skull X-ray images using a deep convolutional neural network (DCNN). METHOD: Radiographs obtained from 5 skull phantoms were classified by a simple network and by VGG16. The discrimination ability of the DCNNs was verified on recognizing the X-ray projection angle and whether a radiograph needed retaking. The DCNN architectures were used with different input image sizes and were evaluated by 5-fold cross-validation and leave-one-out cross-validation. RESULT: Under 5-fold cross-validation with small input images, classification accuracy was 99.75% for the simple network and 80.00% for VGG16; at a standard input image size, the simple network and VGG16 achieved 79.58% and 80.00%, respectively. CONCLUSION: The experimental results showed that the combination of a small input image size and a shallow DCNN architecture was suitable for the four-category classification of X-ray projection angles, with classification accuracy up to 99.75%. The proposed method has the potential to automatically recognize slight projection-angle deviations and identify images that need retaking against the acceptance criteria. It could provide feedback on retakes and reduce the radiation dose attributable to individual subjectivity.
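The 5-fold cross-validation used for evaluation can be sketched with a generic index-splitting helper; this is an illustration of the scheme, not the authors' code.

```python
def k_fold_indices(n_samples, k=5):
    """Split sample indices into k non-overlapping folds; each fold serves once as validation."""
    indices = list(range(n_samples))
    # distribute any remainder over the first few folds
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, val
```

With k equal to the number of samples, the same helper degenerates into the leave-one-out cross-validation the study also reports.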
Affiliation(s)
- Haruyuki Watanabe: School of Radiological Technology, Gunma Prefectural College of Health Sciences
- Aya Sakaguchi: School of Radiological Technology, Gunma Prefectural College of Health Sciences (current address: Department of Radiological Technology, Seikei-kai Chiba Medical Center)
- Kiyoshi Uchiyama: Department of Radiological Technology, Teikyo University Hospital
- Yongbum Lee: School of Health Sciences, Faculty of Medicine, Niigata University
- Norio Hayashi: School of Radiological Technology, Gunma Prefectural College of Health Sciences
- Toshihiro Ogura: School of Radiological Technology, Gunma Prefectural College of Health Sciences
|
8
|
Gong XQ, Tao YY, Wu YK, Liu N, Yu X, Wang R, Zheng J, Liu N, Huang XH, Li JD, Yang G, Wei XQ, Yang L, Zhang XM. Progress of MRI Radiomics in Hepatocellular Carcinoma. Front Oncol 2021; 11:698373. [PMID: 34616673 PMCID: PMC8488263 DOI: 10.3389/fonc.2021.698373] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Accepted: 08/31/2021] [Indexed: 02/05/2023] Open
Abstract
Background: Hepatocellular carcinoma (HCC) is the sixth most common cancer in the world and the third leading cause of cancer-related death. Although the diagnostic scheme for HCC is being refined, its prognosis is still unsatisfactory. In addition to factors displayed on traditional imaging, such as tumor size, tumor number, and vascular invasion, some histopathological features and gene expression parameters are also important for the prognosis of HCC patients. However, most of these parameters are based on postoperative pathological examination and cannot inform preoperative decision-making. As a new field, radiomics extracts high-throughput imaging data from different types of images to build models and predict clinical outcomes noninvasively before surgery, rendering it a powerful aid for personalized preoperative treatment decisions. Objective: This study reviewed the workflow of radiomics and the research progress of magnetic resonance imaging (MRI) radiomics in the diagnosis and treatment of HCC. Methods: A literature review was conducted by searching PubMed for relevant peer-reviewed articles published from May 2017 to June 2021. The search keywords included HCC, MRI, radiomics, deep learning, artificial intelligence, machine learning, neural network, texture analysis, diagnosis, histopathology, microvascular invasion, surgical resection, radiofrequency, recurrence, relapse, transarterial chemoembolization, targeted therapy, immunotherapy, therapeutic response, and prognosis. Results: Radiomics features on MRI can be used as biomarkers to determine the differential diagnosis, histological grade, microvascular invasion status, gene expression status, local and systemic therapeutic responses, and prognosis of HCC patients. Conclusion: Radiomics is a promising new imaging method, and MRI radiomics has high application value in the diagnosis and treatment of HCC.
Affiliation(s)
- Xue-Qin Gong: Medical Imaging Key Laboratory of Sichuan Province, Department of Radiology, Medical Research Center, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Yun-Yun Tao: Medical Imaging Key Laboratory of Sichuan Province, Department of Radiology, Medical Research Center, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Yao-Kun Wu: Medical Imaging Key Laboratory of Sichuan Province, Department of Radiology, Medical Research Center, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Ning Liu: Medical Imaging Key Laboratory of Sichuan Province, Department of Radiology, Medical Research Center, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Xi Yu: Medical Imaging Key Laboratory of Sichuan Province, Department of Radiology, Medical Research Center, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Ran Wang: Medical Imaging Key Laboratory of Sichuan Province, Department of Radiology, Medical Research Center, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Jing Zheng: Medical Imaging Key Laboratory of Sichuan Province, Department of Radiology, Medical Research Center, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Nian Liu: Medical Imaging Key Laboratory of Sichuan Province, Department of Radiology, Medical Research Center, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Xiao-Hua Huang: Medical Imaging Key Laboratory of Sichuan Province, Department of Radiology, Medical Research Center, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Jing-Dong Li: Department of Hepatocellular Surgery, Institute of Hepato-Biliary-Intestinal Disease, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Gang Yang: Department of Hepatocellular Surgery, Institute of Hepato-Biliary-Intestinal Disease, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Xiao-Qin Wei: School of Medical Imaging, North Sichuan Medical College, Nanchong, China
- Lin Yang: Medical Imaging Key Laboratory of Sichuan Province, Department of Radiology, Medical Research Center, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Xiao-Ming Zhang: Medical Imaging Key Laboratory of Sichuan Province, Department of Radiology, Medical Research Center, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
9. Chiang CH, Weng CL, Chiu HW. Automatic classification of medical image modality and anatomical location using convolutional neural network. PLoS One 2021; 16:e0253205. PMID: 34115822. PMCID: PMC8195382. DOI: 10.1371/journal.pone.0253205.
Abstract
Modern radiologic images comply with the DICOM (Digital Imaging and Communications in Medicine) standard, and conversion to other image formats loses image detail as well as information, such as patient demographics and image modality, that the DICOM format carries. As there is growing interest in using large amounts of image data for research, and acquisition of large volumes of medical images is now standard practice in the clinical setting, efficient handling and storage of image data are important in both clinical and research settings. In this study, four classes of images were created: CT (computed tomography) of the abdomen, CT of the brain, MRI (magnetic resonance imaging) of the brain, and MRI of the spine. After converting these images into JPEG (Joint Photographic Experts Group) format, our proposed CNN architecture could automatically classify these four groups of medical images by both image modality and anatomic location. We achieved excellent overall classification accuracy (>99.5%) in both the validation and test sets, and specificity and F1 scores above 99% in each category of this dataset, which contained both diseased and normal images. Our study shows that CNN-based medical image classification is a promising methodology that works on non-DICOM images, potentially saving image processing time and storage space.
Affiliation(s)
- Chen-Hua Chiang: Department of Radiology, Shuang Ho Hospital, Taipei Medical University, New Taipei City, Taiwan; Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Hung-Wen Chiu: Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
|
10
|
Comparison and Validation of Deep Learning Models for the Diagnosis of Pneumonia. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2020; 2020:8876798. [PMID: 33014032 PMCID: PMC7520009 DOI: 10.1155/2020/8876798] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/12/2020] [Revised: 07/26/2020] [Accepted: 09/09/2020] [Indexed: 01/13/2023]
Abstract
As a respiratory infection, pneumonia has gained great attention from countries all over the world because of its strong transmissibility and relatively high mortality; early detection and treatment significantly reduce its mortality rate. Currently, X-ray diagnosis is recognized as a relatively effective method, but visual analysis of a patient's chest X-ray radiograph by an experienced doctor takes about 5 to 15 minutes, and when cases are concentrated this puts tremendous pressure on clinical diagnosis. Relying on the naked eye of the imaging doctor is therefore inefficient, and the use of artificial intelligence for clinical image diagnosis of pneumonia is necessary. In addition, artificial intelligence recognition is very fast, and convolutional neural networks (CNNs) have achieved better performance than human beings in image identification. We therefore used the chest X-ray classification dataset made available by Kaggle, with a total of 5216 training and 624 test images in two classes, normal and pneumonia. We performed studies using five mainstream network algorithms to classify these diseases and compared the results, based on which we improved MobileNet's network structure and achieved a higher accuracy rate than the other methods. The improved MobileNet network could also be extended to other application areas.
11. Kriegsmann M, Haag C, Weis CA, Steinbuss G, Warth A, Zgorzelski C, Muley T, Winter H, Eichhorn ME, Eichhorn F, Kriegsmann J, Christopolous P, Thomas M, Witzens-Harig M, Sinn P, von Winterfeld M, Heussel CP, Herth FJF, Klauschen F, Stenzinger A, Kriegsmann K. Deep Learning for the Classification of Small-Cell and Non-Small-Cell Lung Cancer. Cancers (Basel) 2020; 12:1604. PMID: 32560475. PMCID: PMC7352768. DOI: 10.3390/cancers12061604.
Abstract
Reliable entity subtyping is paramount for therapy stratification in lung cancer. Morphological evaluation remains the basis for entity subtyping and directs the application of additional methods such as immunohistochemistry (IHC). The decision of whether to perform IHC for subtyping is subjective, and access to IHC is not available worldwide, so additional methods to support morphological entity subtyping are desirable. We therefore evaluated the ability of convolutional neural networks (CNNs) to classify the most common lung cancer subtypes: pulmonary adenocarcinoma (ADC), pulmonary squamous cell carcinoma (SqCC), and small-cell lung cancer (SCLC). A cohort of 80 ADC, 80 SqCC, 80 SCLC, and 30 skeletal muscle specimens was assembled; slides were scanned, tumor areas were annotated, image patches were extracted, and cases were randomly assigned to a training, validation, or test set. Multiple CNN architectures (VGG16, InceptionV3, and InceptionResNetV2) were trained and optimized to classify the four entities, and a quality control (QC) metric was established. An optimized InceptionV3 architecture yielded the highest classification accuracy and was used for the test set, where image patch-based and patient-based classification accuracies were 95% and 100%, respectively, after the application of strict QC. Misclassified cases mainly involved ADC and SqCC. The QC metric identified cases that needed further IHC for definitive entity subtyping. The study highlights the potential and limitations of CNN image classification models for tumor differentiation.
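The random assignment of cases to training, validation, and test sets can be sketched as a patient-level split. The helper and its split fractions are illustrative, not the authors' code; the key point is that whole patients, rather than individual image patches, are assigned to a single set, which prevents patches from the same slide leaking across sets.

```python
import random

def split_patients(patient_ids, train_frac=0.7, val_frac=0.15, seed=0):
    """Shuffle patient IDs and assign each whole patient to exactly one split."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)   # fixed seed for a reproducible split
    n_train = int(len(ids) * train_frac)
    n_val = int(len(ids) * val_frac)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])      # remainder becomes the test set
```

Patch-level predictions can then be aggregated per patient, which is why the patient-based accuracy (100%) can exceed the patch-based accuracy (95%).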
Collapse
Affiliation(s)
- Mark Kriegsmann
  - Institute of Pathology, Heidelberg University, 69120 Heidelberg, Germany
  - Translational Lung Research Centre Heidelberg, Member of the German Centre for Lung Research (DZL), 69120 Heidelberg, Germany
  - Correspondence: (M.K.); (K.K.); Tel.: +49-6221-56-36930 (M.K.); +49-6221-56-37238 (K.K.)
- Christian Haag
  - Institute of Pathology, Heidelberg University, 69120 Heidelberg, Germany
  - Department of Hematology, Oncology and Rheumatology, Heidelberg University, 69120 Heidelberg, Germany
- Cleo-Aron Weis
  - Institute of Pathology, University Medical Centre Mannheim, Heidelberg University, 68782 Mannheim, Germany
- Georg Steinbuss
  - Institute of Pathology, Heidelberg University, 69120 Heidelberg, Germany
  - Department of Hematology, Oncology and Rheumatology, Heidelberg University, 69120 Heidelberg, Germany
- Arne Warth
  - Institute of Pathology, Cytopathology, and Molecular Pathology, UEGP MVZ Gießen/Wetzlar/Limburg, 65549 Limburg, Germany
- Christiane Zgorzelski
  - Institute of Pathology, Heidelberg University, 69120 Heidelberg, Germany
- Thomas Muley
  - Translational Lung Research Centre Heidelberg, Member of the German Centre for Lung Research (DZL), 69120 Heidelberg, Germany
  - Department of Thoracic Surgery, Thoraxklinik, Heidelberg University, 69126 Heidelberg, Germany
- Hauke Winter
  - Translational Lung Research Centre Heidelberg, Member of the German Centre for Lung Research (DZL), 69120 Heidelberg, Germany
  - Department of Thoracic Surgery, Thoraxklinik, Heidelberg University, 69126 Heidelberg, Germany
- Martin E. Eichhorn
  - Translational Lung Research Centre Heidelberg, Member of the German Centre for Lung Research (DZL), 69120 Heidelberg, Germany
  - Department of Thoracic Surgery, Thoraxklinik, Heidelberg University, 69126 Heidelberg, Germany
- Florian Eichhorn
  - Translational Lung Research Centre Heidelberg, Member of the German Centre for Lung Research (DZL), 69120 Heidelberg, Germany
  - Department of Thoracic Surgery, Thoraxklinik, Heidelberg University, 69126 Heidelberg, Germany
- Joerg Kriegsmann
  - Molecular Pathology Trier, 54296 Trier, Germany
  - Danube Private University Krems, 3500 Krems, Austria
- Petros Christopoulos
  - Translational Lung Research Centre Heidelberg, Member of the German Centre for Lung Research (DZL), 69120 Heidelberg, Germany
  - Department of Thoracic Oncology, Thoraxklinik, Heidelberg University, 69126 Heidelberg, Germany
- Michael Thomas
  - Translational Lung Research Centre Heidelberg, Member of the German Centre for Lung Research (DZL), 69120 Heidelberg, Germany
  - Department of Thoracic Oncology, Thoraxklinik, Heidelberg University, 69126 Heidelberg, Germany
- Peter Sinn
  - Institute of Pathology, Heidelberg University, 69120 Heidelberg, Germany
- Moritz von Winterfeld
  - Institute of Pathology, Heidelberg University, 69120 Heidelberg, Germany
- Claus Peter Heussel
  - Translational Lung Research Centre Heidelberg, Member of the German Centre for Lung Research (DZL), 69120 Heidelberg, Germany
  - Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Thoraxklinik, Heidelberg University, 69120 Heidelberg, Germany
  - Department of Diagnostic and Interventional Radiology, Thoraxklinik, Heidelberg University, 69120 Heidelberg, Germany
- Felix J. F. Herth
  - Translational Lung Research Centre Heidelberg, Member of the German Centre for Lung Research (DZL), 69120 Heidelberg, Germany
  - Department of Pneumology and Critical Care Medicine, Thoraxklinik, Heidelberg University, 69126 Heidelberg, Germany
- Albrecht Stenzinger
  - Institute of Pathology, Heidelberg University, 69120 Heidelberg, Germany
  - Translational Lung Research Centre Heidelberg, Member of the German Centre for Lung Research (DZL), 69120 Heidelberg, Germany
- Katharina Kriegsmann
  - Department of Hematology, Oncology and Rheumatology, Heidelberg University, 69120 Heidelberg, Germany
12
Rahaman MM, Li C, Yao Y, Kulwa F, Rahman MA, Wang Q, Qi S, Kong F, Zhu X, Zhao X. Identification of COVID-19 samples from chest X-Ray images using deep learning: A comparison of transfer learning approaches. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2020; 28:821-839. [PMID: 32773400 PMCID: PMC7592691 DOI: 10.3233/xst-200715] [Citation(s) in RCA: 81] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Revised: 06/29/2020] [Accepted: 07/11/2020] [Indexed: 05/18/2023]
Abstract
BACKGROUND The novel coronavirus disease 2019 (COVID-19) constitutes a global public health emergency. The numbers of infected people and deaths are growing every day, putting tremendous pressure on social and healthcare systems. Rapid detection of COVID-19 cases is a significant step in fighting the virus and relieving pressure on the healthcare system. OBJECTIVE One of the critical factors behind the rapid spread of the COVID-19 pandemic is the lengthy clinical testing time. Imaging tools such as chest X-ray (CXR) can speed up the identification process. Our objective is therefore to develop an automated computer-aided diagnosis (CAD) system for the detection of COVID-19 samples from healthy and pneumonia cases using CXR images. METHODS Due to the scarcity of COVID-19 benchmark datasets, we employed deep transfer learning techniques, examining 15 different pre-trained CNN models to find the most suitable one for this task. RESULTS A total of 860 images (260 COVID-19, 300 healthy, and 300 pneumonia cases) were used to investigate the performance of the proposed algorithm, with 70% of the images of each class used for training, 15% for validation, and the remainder for testing. VGG19 obtained the highest classification accuracy of 89.3%, with an average precision, recall, and F1 score of 0.90, 0.89, and 0.90, respectively. CONCLUSION This study demonstrates the effectiveness of deep transfer learning techniques for the identification of COVID-19 cases using CXR images.
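The 70/15/15 stratified split described in this abstract can be sketched as follows. This is a minimal illustration of the data protocol, not the authors' code; the function name, the fixed seed, and the rounding choice are assumptions.

```python
import random

def stratified_split(files_by_class, train=0.70, val=0.15, seed=0):
    """Split per-class file lists into train/val/test sets (70/15/15 by
    default), so each split keeps the same class proportions."""
    rng = random.Random(seed)
    splits = {"train": [], "val": [], "test": []}
    for label, files in files_by_class.items():
        files = files[:]  # copy so the caller's list is not mutated
        rng.shuffle(files)
        n_train = round(len(files) * train)
        n_val = round(len(files) * val)
        splits["train"] += [(f, label) for f in files[:n_train]]
        splits["val"] += [(f, label) for f in files[n_train:n_train + n_val]]
        splits["test"] += [(f, label) for f in files[n_train + n_val:]]
    return splits
```

Applied to the study's class sizes (260 COVID-19, 300 healthy, 300 pneumonia), this yields 602 training, 129 validation, and 129 test images.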
Affiliation(s)
- Md Mamunur Rahaman
  - Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Chen Li
  - Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yudong Yao
  - Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, USA
- Frank Kulwa
  - Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Qian Wang
  - Liaoning Hospital and Institute, Cancer Hospital of China Medical University, Shenyang, China
- Shouliang Qi
  - Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Fanjie Kong
  - Electrical Engineering Department, Pratt School of Engineering, Duke University, Durham, NC, USA
- Xuemin Zhu
  - Whiting School of Engineering, Johns Hopkins University, 500 W University Parkway, Baltimore, MD, USA
- Xin Zhao
  - Environmental Engineering Department, Northeastern University, Shenyang, China
13
Deep Neural Network-Based Method for Detecting Central Retinal Vein Occlusion Using Ultrawide-Field Fundus Ophthalmoscopy. J Ophthalmol 2018; 2018:1875431. [PMID: 30515316 PMCID: PMC6236766 DOI: 10.1155/2018/1875431] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2018] [Accepted: 10/17/2018] [Indexed: 11/17/2022] Open
Abstract
The aim of this study was to assess the performance of two machine-learning approaches, a deep learning (DL) algorithm and a support vector machine (SVM), for detecting central retinal vein occlusion (CRVO) in ultrawide-field fundus images. Images from 125 CRVO patients (n=125 images) and 202 non-CRVO normal subjects (n=238 images) were included. The DL model was trained on ultrawide-field fundus images using deep convolutional neural network algorithms; the SVM was implemented with the scikit-learn library using a radial basis function kernel. The diagnostic abilities of the DL and SVM models were compared by assessing their sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) for CRVO. For diagnosing CRVO, the DL model had a sensitivity of 98.4% (95% confidence interval (CI), 94.3–99.8%) and a specificity of 97.9% (95% CI, 94.6–99.1%), with an AUC of 0.989 (95% CI, 0.980–0.999). In contrast, the SVM model had a sensitivity of 84.0% (95% CI, 76.3–89.3%) and a specificity of 87.5% (95% CI, 82.7–91.1%), with an AUC of 0.895 (95% CI, 0.859–0.931). Thus, the DL model outperformed the SVM model on all indices assessed (P < 0.001 for all). These data suggest that a DL model derived from ultrawide-field fundus images can distinguish between normal and CRVO images with a high level of accuracy, and that automatic CRVO detection in ultrawide-field fundus ophthalmoscopy is feasible. The proposed DL-based model could also be used in ultrawide-field fundus ophthalmoscopy to accurately diagnose CRVO and improve medical care in remote locations where it is difficult for patients to attend an ophthalmic medical center.
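The evaluation metrics reported in this abstract can be sketched as follows. This is a minimal illustration, not the study's code: sensitivity and specificity come straight from the confusion-matrix counts, and the AUC is computed via the Mann-Whitney rank interpretation (the probability that a randomly chosen positive image scores higher than a randomly chosen negative one).

```python
def sensitivity_specificity(y_true, y_pred):
    """y_true / y_pred: 1 = CRVO, 0 = normal.
    Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney U statistic divided by n_pos * n_neg;
    ties between a positive and a negative score count as half a win."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.989, as reported for the DL model, means a randomly chosen CRVO image outscores a randomly chosen normal image 98.9% of the time.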