1. Asiri AA, Shaf A, Ali T, Aamir M, Irfan M, Alqahtani S. Enhancing brain tumor diagnosis: an optimized CNN hyperparameter model for improved accuracy and reliability. PeerJ Comput Sci 2024;10:e1878. PMID: 38660148; PMCID: PMC11041936; DOI: 10.7717/peerj-cs.1878.
Abstract
Hyperparameter tuning plays a pivotal role in the accuracy and reliability of convolutional neural network (CNN) models used in brain tumor diagnosis. These hyperparameters control various aspects of the neural network, encompassing feature extraction, spatial resolution, non-linear mapping, convergence speed, and model complexity. We propose a meticulously refined CNN hyperparameter model designed to optimize critical parameters, including filter number and size, stride, padding, pooling techniques, activation functions, learning rate, batch size, and the number of layers. Our approach leverages two publicly available brain tumor MRI datasets. The first dataset comprises 7,023 human brain images, categorized into four classes: glioma, meningioma, no tumor, and pituitary. The second dataset contains 253 images classified as "yes" and "no." Our approach delivers exceptional results: an average 94.25% precision, recall, and F1-score with 96% accuracy for dataset 1, and an average 87.5% precision, recall, and F1-score with 88% accuracy for dataset 2. To affirm the robustness of our findings, we perform a comprehensive comparison with existing techniques, revealing that our method consistently outperforms them. By systematically fine-tuning these critical hyperparameters, our model not only enhances its performance but also bolsters its generalization capabilities. This optimized CNN model provides medical experts with a more precise and efficient tool for supporting their decision-making in brain tumor diagnosis.
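The kind of tuning loop described above can be sketched as a plain grid search over the hyperparameters the abstract lists. This is an illustrative sketch only, not the authors' procedure: `evaluate` is a hypothetical stand-in for training a CNN with a given configuration and returning validation accuracy.

```python
from itertools import product

def evaluate(config):
    # Hypothetical scoring stand-in: a real implementation would train a
    # CNN with these settings and return validation accuracy. This dummy
    # favors moderate filter counts and small learning rates.
    return 1.0 / (1 + abs(config["filters"] - 64) / 64 + config["lr"] * 10)

search_space = {
    "filters": [32, 64, 128],   # number of convolutional filters
    "kernel_size": [3, 5],      # filter size
    "lr": [1e-2, 1e-3],         # learning rate
    "batch_size": [16, 32],     # mini-batch size
}

def grid_search(space, score_fn):
    """Exhaustively score every combination and return the best config."""
    keys = list(space)
    best_cfg, best_score = None, float("-inf")
    for values in product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        s = score_fn(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

best, score = grid_search(search_space, evaluate)
print(best)
```

In practice, exhaustive grids grow combinatorially, which is why random search or Bayesian optimization is often substituted for larger spaces.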
Affiliation(s)
- Abdullah A. Asiri
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran, Najran, Saudi Arabia
- Ahmad Shaf
- Department of Computer Science, COMSATS University Islamabad, Sahiwal, Punjab, Pakistan
- Tariq Ali
- Department of Computer Science, COMSATS University Islamabad, Sahiwal, Punjab, Pakistan
- Muhammad Aamir
- Department of Computer Science, COMSATS University Islamabad, Sahiwal, Punjab, Pakistan
- Muhammad Irfan
- Electrical Engineering Department, College of Engineering, Najran University, Najran, Najran, Saudi Arabia
- Saeed Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran, Najran, Saudi Arabia
2. Bibault JE, Giraud P. Deep learning for automated segmentation in radiotherapy: a narrative review. Br J Radiol 2024;97:13-20. PMID: 38263838; PMCID: PMC11027240; DOI: 10.1093/bjr/tqad018.
Abstract
The segmentation of organs and structures is a critical component of radiation therapy planning, and manual segmentation is a laborious, time-consuming task. Interobserver variability can also affect the outcomes of radiation therapy. Deep neural networks have recently gained attention for their ability to automate segmentation tasks, with convolutional neural networks (CNNs) being a popular approach. This article provides a descriptive review of the literature on deep learning (DL) techniques for segmentation in radiation therapy planning. The review focuses on five clinical sub-sites, covering studies that used DL for image segmentation in brain, head and neck, lung, abdominal, and pelvic cancers, and finds that U-Net is the most commonly used CNN architecture. The majority of DL segmentation articles in radiation therapy planning have concentrated on normal tissue structures. N-fold cross-validation was commonly employed, without external validation. This research area is expanding quickly, and standardization of metrics and independent validation are critical to benchmarking and comparing proposed methods.
Affiliation(s)
- Jean-Emmanuel Bibault
- Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique—Hôpitaux de Paris, Université de Paris Cité, Paris, 75015, France
- INSERM UMR 1138, Centre de Recherche des Cordeliers, Paris, 75006, France
- Paul Giraud
- INSERM UMR 1138, Centre de Recherche des Cordeliers, Paris, 75006, France
- Radiation Oncology Department, Pitié Salpêtrière Hospital, Assistance Publique—Hôpitaux de Paris, Paris Sorbonne Universités, Paris, 75013, France
3. Aggarwal M, Tiwari AK, Sarathi MP, Bijalwan A. An early detection and segmentation of brain tumor using deep neural network. BMC Med Inform Decis Mak 2023;23:78. PMID: 37101176; PMCID: PMC10134539; DOI: 10.1186/s12911-023-02174-8.
Abstract
BACKGROUND Magnetic resonance imaging (MRI) brain tumor segmentation is crucial in the medical field: it supports diagnosis and prognosis, overall growth prediction, tumor density measurement, and patient care planning. Segmenting brain tumors is difficult primarily because of the wide range of tumor structures, shapes, frequencies, positions, and visual characteristics, such as intensity, contrast, and visual variation. With recent advancements in deep neural networks (DNNs) for image classification tasks, intelligent medical image segmentation is an exciting direction for brain tumor research. DNNs require substantial time and processing capability to train because of gradient diffusion difficulties and model complexity. METHODS To overcome the gradient issue of DNNs, this work provides an efficient method for brain tumor segmentation based on an improved residual network (ResNet). The existing ResNet is improved by maintaining the details of all available connection links and by improving the projection shortcuts. These details are fed to later phases, so the improved ResNet achieves higher precision and a faster learning process. RESULTS The proposed improved ResNet addresses all three main components of the existing ResNet: the flow of information through the network layers, the residual building block, and the projection shortcut. This approach minimizes computational cost and speeds up the process. CONCLUSION An experimental analysis of the BraTS 2020 MRI sample data reveals that the proposed methodology achieves competitive performance over traditional methods such as CNN and fully convolutional neural network (FCN), with more than 10% improvement in accuracy, recall, and F-measure.
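The projection shortcut mentioned in the METHODS section can be illustrated with a toy fully-connected residual block in NumPy. This is a schematic sketch of the generic ResNet idea, not the paper's improved architecture; the weight shapes and random inputs here are arbitrary assumptions for demonstration.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2, w_proj=None):
    """Toy residual block: out = relu(F(x) + shortcut(x)).

    A projection shortcut (w_proj) is used when input and output
    dimensions differ, as in ResNet; otherwise the identity is used.
    """
    f = relu(x @ w1) @ w2                      # two-layer residual branch F(x)
    shortcut = x if w_proj is None else x @ w_proj
    return relu(f + shortcut)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
# Identity shortcut: input and output dimensions match (8 -> 8).
y_same = residual_block(x, rng.standard_normal((8, 8)),
                        rng.standard_normal((8, 8)))
# Projection shortcut: dimensions change (8 -> 16), so x is projected.
y_proj = residual_block(x, rng.standard_normal((8, 16)),
                        rng.standard_normal((16, 16)),
                        w_proj=rng.standard_normal((8, 16)))
print(y_same.shape, y_proj.shape)
```

Real ResNets use convolutions and batch normalization in place of these dense layers; the shortcut arithmetic is the same.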
Affiliation(s)
- Mukul Aggarwal
- Dr. A.P.J. Abdul Kalam Technical University, Lucknow, Uttar Pradesh, India
- M Partha Sarathi
- Amity School of Engineering and Technology, Amity University, Noida, Uttar Pradesh, India
- Anchit Bijalwan
- Faculty of Electrical and Computer Engineering, Arba Minch University, Arba Minch, Ethiopia
4. A comprehensive analysis of recent deep and federated-learning-based methodologies for brain tumor diagnosis. J Pers Med 2022;12:275. PMID: 35207763; PMCID: PMC8880689; DOI: 10.3390/jpm12020275.
Abstract
Brain tumors are a deadly disease with a high mortality rate. Early diagnosis of brain tumors improves treatment, which results in a better survival rate for patients. Artificial intelligence (AI) has recently emerged as an assistive technology for the early diagnosis of tumors, and AI is the primary focus of researchers in the diagnosis of brain tumors. This study provides an overview of recent research on the diagnosis of brain tumors using federated and deep learning methods. The primary objective is to explore the performance of deep and federated learning methods and evaluate their accuracy in the diagnosis process. A systematic literature review is provided, discussing the open issues and challenges, which are likely to guide future researchers working in the field of brain tumor diagnosis.
5. Wen H, Wu W, Fan F, Liao P, Chen H, Zhang Y, Deng Z, Lv W. Human identification performed with skull's sphenoid sinus based on deep learning. Int J Legal Med 2022;136:1067-1074. PMID: 35022840; DOI: 10.1007/s00414-021-02761-2.
Abstract
Human identification plays a significant role in the investigation of disasters and criminal cases. It can be achieved quickly and efficiently with 3D sphenoid sinus models and customized convolutional neural networks. In this retrospective study, a deep learning neural network was proposed to achieve human identification from 1475 noncontrast thin-slice CT scans. A total of 732 patients were retrieved and studied (82% for model training and 18% for testing). By establishing an individual recognition framework, the anonymous sphenoid sinus model was matched and cross-tested, and the performance of the framework was evaluated on the test set using the recognition rate, the ROC curve, and identification speed. Finally, manual matching was performed based on the framework results in the test set. Of the 732 subjects (mean age 46.45 years ± 14.92 (SD); 349 women), 600 were used for training and 132 for testing. The automatic human identification achieved Rank 1 and Rank 5 accuracy values of 93.94% and 99.24%, respectively, in the test set, and all identifications were completed within 55 s, demonstrating the inference speed on the test set. The comparison results of the MVSS-Net were used to exclude sphenoid sinus models with low similarity, and traditional visual comparison of the CT anatomical aspects of the sphenoid sinus of the 132 individuals reached an accuracy of 100%. The customized deep learning framework achieves reliable and fast human identification based on the 3D sphenoid sinus and can assist forensic radiologists in human identification.
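The Rank 1 / Rank 5 metric reported above can be sketched as follows. The similarity matrix here is invented toy data (an assumption for illustration); in the actual system, each query scan would be scored against a gallery of sphenoid sinus models.

```python
import numpy as np

def rank_k_accuracy(scores, true_ids, k):
    """Fraction of queries whose true identity appears among the k
    highest-scoring gallery candidates; scores[i, j] is the similarity
    between query i and gallery identity j."""
    # Sort candidates by descending score, keep the top-k per query.
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = [true_ids[i] in topk[i] for i in range(len(true_ids))]
    return float(np.mean(hits))

scores = np.array([
    [0.9, 0.1, 0.3],   # query 0: best match is identity 0
    [0.2, 0.4, 0.8],   # query 1: best match is identity 2
    [0.5, 0.6, 0.1],   # query 2: best match is identity 1
])
true_ids = np.array([0, 2, 0])   # query 2's true identity is only ranked 2nd
print(rank_k_accuracy(scores, true_ids, 1))  # 2/3 of queries correct at Rank 1
print(rank_k_accuracy(scores, true_ids, 2))  # all queries correct within top 2
```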
Affiliation(s)
- Hanjie Wen
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Wei Wu
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Fei Fan
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Peixi Liao
- Department of Scientific Research and Education, The Sixth People's Hospital of Chengdu, Chengdu, 610065, People's Republic of China
- Hu Chen
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Yi Zhang
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Zhenhua Deng
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, People's Republic of China
- Weiqiang Lv
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
Collapse
|
6. Risk factors of restroke in patients with lacunar cerebral infarction using magnetic resonance imaging image features under deep learning algorithm. Contrast Media Mol Imaging 2021;2021:2527595. PMID: 34887708; PMCID: PMC8616697; DOI: 10.1155/2021/2527595.
Abstract
This study aimed to explore magnetic resonance imaging (MRI) image features based on the fuzzy local information C-means (FLICM) image segmentation method to analyze the risk factors of restroke in patients with lacunar infarction. Based on the FLICM algorithm, the Canny edge detection algorithm and the Fourier shape descriptor were introduced to optimize the algorithm. The Jaccard coefficient, Dice coefficient, peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), running time, and segmentation accuracy of the optimized FLICM algorithm and other algorithms on brain tissue MRI images were compared. Thirty-six patients with lacunar infarction were selected as the research objects and divided into a control group (no restroke, 20 cases) and a stroke group (restroke, 16 cases) according to whether the patients had restroke. The differences in MRI imaging characteristics of the two groups were compared, and the risk factors for restroke in lacunar infarction were analyzed by multivariate logistic regression. The results showed that the Jaccard coefficient, Dice coefficient, PSNR, and SSIM of the optimized FLICM algorithm for segmenting brain tissue were all higher than those of the other algorithms; the shortest running time was 26 s, and the highest accuracy rate was 97.86%. The proportion of patients with a history of hypertension, the proportion with a paraventricular white matter lesion (WML) score greater than 2, the proportion with a deep WML score of 2, and the average age were all much higher in the stroke group than in the control group (P < 0.05). Multivariate logistic regression showed that age and history of hypertension were risk factors for restroke after lacunar infarction (P < 0.05). The optimized FLICM algorithm can thus effectively segment brain MRI images, and the risk factors for restroke in patients with lacunar infarction were age and history of hypertension. This study could provide a reference for the diagnosis and prognosis of lacunar infarction.
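Two of the overlap metrics used in this study, the Dice and Jaccard coefficients, can be sketched for binary segmentation masks as follows. The masks below are toy data for illustration, not the study's images.

```python
import numpy as np

def dice_jaccard(pred, truth):
    """Dice and Jaccard overlap coefficients for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    jaccard = inter / np.logical_or(pred, truth).sum()
    return dice, jaccard

truth = np.zeros((4, 4), dtype=int)
truth[1:3, 1:3] = 1                  # 4-pixel ground-truth region
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1                   # 6-pixel prediction covering all 4
dice, jac = dice_jaccard(pred, truth)
print(round(dice, 3), round(jac, 3))  # 0.8 and 0.667
```

Dice is always at least as large as Jaccard for the same masks (Dice = 2J / (1 + J)), which is worth remembering when comparing scores across papers.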
7. Chen M, Wu S, Zhao W, Zhou Y, Zhou Y, Wang G. Application of deep learning to auto-delineation of target volumes and organs at risk in radiotherapy. Cancer Radiother 2021;26:494-501. PMID: 34711488; DOI: 10.1016/j.canrad.2021.08.020.
Abstract
Technological advances heralded the arrival of precision radiotherapy (RT), increasing the therapeutic ratio and decreasing side effects from treatment. Contouring of target volumes (TV) and organs at risk (OARs) in RT is a complicated process. In recent years, automatic contouring of TV and OARs has developed rapidly owing to advances in deep learning (DL). This technology has the potential to save time and to reduce intra- and inter-observer variability. In this paper, the authors provide an overview of RT, introduce the concept of DL, summarize the data characteristics of the included literature, summarize possible future challenges for DL, and discuss possible research directions.
Affiliation(s)
- M Chen
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- S Wu
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- W Zhao
- Bengbu Medical College, Bengbu, Anhui 233030, China
- Y Zhou
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- Y Zhou
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- G Wang
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
Collapse
|
8. Yan Y, Yao XJ, Wang SH, Zhang YD. A survey of computer-aided tumor diagnosis based on convolutional neural network. Biology (Basel) 2021;10:1084. PMID: 34827077; PMCID: PMC8615026; DOI: 10.3390/biology10111084.
Simple Summary
One of the hottest areas in deep learning is computerized tumor diagnosis and treatment, which frequently includes the identification of tumor markers, the outline of tumor growth activity, and the staging of various tumor kinds. Several deep learning models based on convolutional neural networks offer high performance and accurate identification, with the potential to improve medical tasks. With breakthroughs in computer algorithms and hardware devices, intelligent algorithms applied to medical images can reach a diagnostic accuracy that doctors cannot match for some diseases. This paper reviews the progress of tumor detection from traditional computer-aided methods to convolutional neural networks and, through practical cases, demonstrates the potential of moving the detection model from experiment to clinical application.
Abstract
Tumors are new tissues that are harmful to human health, and malignant tumors are among the main diseases that seriously affect human health and threaten human life. For cancer treatment, early detection of pathological features is essential to reduce cancer mortality effectively. Traditional diagnostic methods include routine laboratory tests of the patient's secretions and serum, as well as immune and genetic tests. The commonly used clinical imaging examinations include X-ray, CT, MRI, and SPECT scans. With the emergence of new problems in radiation noise reduction, medical image denoising technology is increasingly investigated by researchers. At the same time, doctors often need to rely on clinical experience and academic background knowledge in the follow-up diagnosis of lesions, and it is challenging to advance clinical diagnosis technology. These medical needs motivated research on medical imaging technology and computer-aided diagnosis. The advantages of convolutional neural networks in tumor diagnosis are increasingly obvious, and computer-aided diagnosis based on medical images of tumors has become a focus of the industry. Neural networks have been commonly used to research intelligent methods to assist medical image diagnosis and have made significant progress. This paper introduces the traditional methods of computer-aided diagnosis of tumors, the segmentation and classification of tumor images, and CNN-based diagnosis methods that help doctors detect tumors. It provides a reference for developing a CNN-based computer-aided system for tumor detection research in the future.
9. Huang D, Bai H, Wang L, Hou Y, Li L, Xia Y, Yan Z, Chen W, Chang L, Li W. The application and development of deep learning in radiotherapy: a systematic review. Technol Cancer Res Treat 2021;20:15330338211016386. PMID: 34142614; PMCID: PMC8216350; DOI: 10.1177/15330338211016386.
Abstract
With the massive use of computers, the growth and explosion of data has greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNN), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process in the clinical work of radiation oncology, improve the accuracy and objectivity of diagnosis, and reduce the workload, thus enabling clinicians to spend more time on advanced decision-making tasks. As the development of DL gets closer to clinical practice, radiation oncologists will need to be more familiar with its principles to properly evaluate and use this powerful tool. In this paper, we explain the development and basic concepts of AI and discuss its application in radiation oncology based on different task categories of DL algorithms. This work clarifies the possibility of further development of DL in radiation oncology.
Affiliation(s)
- Danju Huang
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Han Bai
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Li Wang
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Yu Hou
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Lan Li
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Yaoxiong Xia
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Zhirui Yan
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Wenrui Chen
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Li Chang
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Wenhui Li
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
Collapse
|
10. Brain tumour image classification using improved convolution neural networks. Appl Nanosci 2021. DOI: 10.1007/s13204-021-01906-4.
11. Zegers C, Posch J, Traverso A, Eekers D, Postma A, Backes W, Dekker A, van Elmpt W. Current applications of deep-learning in neuro-oncological MRI. Phys Med 2021;83:161-173. DOI: 10.1016/j.ejmp.2021.03.003.
12. Artificial intelligence: deep learning in oncological radiomics and challenges of interpretability and data harmonization. Phys Med 2021;83:108-121. DOI: 10.1016/j.ejmp.2021.03.009.
13. MRI brain tumor medical images analysis using deep learning techniques: a systematic review. Health Technol 2021. DOI: 10.1007/s12553-020-00514-6.
14. Zadeh Shirazi A, Fornaciari E, McDonnell MD, Yaghoobi M, Cevallos Y, Tello-Oquendo L, Inca D, Gomez GA. The application of deep convolutional neural networks to brain cancer images: a survey. J Pers Med 2020;10:224. PMID: 33198332; PMCID: PMC7711876; DOI: 10.3390/jpm10040224.
Abstract
In recent years, improved deep learning techniques have been applied to biomedical image processing for the classification and segmentation of different tumors based on magnetic resonance imaging (MRI) and histopathological (H&E) clinical imaging. Deep convolutional neural network (DCNN) architectures include tens to hundreds of processing layers that can extract multiple levels of features from image-based data, which would otherwise be very difficult and time-consuming for experts to recognize and extract for tumor classification and for the segmentation of tumor images. This article summarizes the latest studies of deep learning techniques applied to three kinds of brain cancer medical images (histology, magnetic resonance, and computed tomography) and highlights current challenges for the broader applicability of DCNNs in personalized brain cancer care, focusing on their two main applications: classification and segmentation of brain cancer tumor images.
Affiliation(s)
- Amin Zadeh Shirazi
- Centre for Cancer Biology, SA Pathology and the University of South Australia, Adelaide, SA 5000, Australia
- Computational Learning Systems Laboratory, UniSA STEM, University of South Australia, Mawson Lakes, SA 5095, Australia
- Eric Fornaciari
- Department of Mathematics of Computation, University of California, Los Angeles (UCLA), Los Angeles, CA 90095, USA
- Mark D. McDonnell
- Computational Learning Systems Laboratory, UniSA STEM, University of South Australia, Mawson Lakes, SA 5095, Australia
- Mahdi Yaghoobi
- Electrical and Computer Engineering Department, Islamic Azad University, Mashhad Branch, Mashhad 917794-8564, Iran
- Yesenia Cevallos
- College of Engineering, Universidad Nacional de Chimborazo, Riobamba 060150, Ecuador
- Luis Tello-Oquendo
- College of Engineering, Universidad Nacional de Chimborazo, Riobamba 060150, Ecuador
- Deysi Inca
- College of Engineering, Universidad Nacional de Chimborazo, Riobamba 060150, Ecuador
- Guillermo A. Gomez
- Centre for Cancer Biology, SA Pathology and the University of South Australia, Adelaide, SA 5000, Australia
15. Radiomics in radiation oncology - basics, methods, and limitations. Strahlenther Onkol 2020;196:848-855. PMID: 32647917; PMCID: PMC7498498; DOI: 10.1007/s00066-020-01663-3.
Abstract
Over the past years, the quantity and complexity of imaging data available for the clinical management of patients with solid tumors have increased substantially. Without the support of methods from the field of artificial intelligence (AI) and machine learning, a complete evaluation of the available image information is hardly feasible in clinical routine. Especially in radiotherapy planning, manual detection and segmentation of lesions are laborious and time consuming and show significant variability among observers. Here, AI already offers techniques to support radiation oncologists, ultimately increasing productivity and quality and potentially leading to improved patient outcomes. Besides detection and segmentation of lesions, AI allows the extraction of a vast number of quantitative imaging features from structural or functional imaging data that are typically not accessible by means of human perception. These features can be used alone or in combination with other clinical parameters to generate mathematical models that allow, for example, prediction of the response to radiotherapy. Within the large field of AI, radiomics is the subdiscipline that deals with the extraction of quantitative image features as well as the generation of predictive or prognostic mathematical models. This review gives an overview of the basics, methods, and limitations of radiomics, with a focus on patients with brain tumors treated by radiation therapy.
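A handful of first-order quantitative image features of the kind this review describes can be sketched in NumPy. This is illustrative only: real radiomics pipelines compute far more features with standardized intensity binning (e.g., IBSI-style definitions), and the region of interest here is random toy data.

```python
import numpy as np

def first_order_features(roi):
    """A few first-order radiomic features over a region of interest."""
    roi = roi.astype(float).ravel()
    # Discretize intensities into 16 bins for the histogram-based feature.
    hist, _ = np.histogram(roi, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before the log
    return {
        "mean": roi.mean(),
        "variance": roi.var(),
        "skewness": ((roi - roi.mean()) ** 3).mean() / roi.std() ** 3,
        "entropy": -(p * np.log2(p)).sum(),   # Shannon entropy of intensities
    }

rng = np.random.default_rng(42)
roi = rng.normal(loc=100.0, scale=10.0, size=(32, 32))  # toy "tumor" ROI
feats = first_order_features(roi)
print(sorted(feats))
```

Such feature vectors, computed per lesion, are what downstream predictive or prognostic models in radiomics are trained on.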
16. Lohmann P, Galldiks N, Kocher M, Heinzel A, Filss CP, Stegmayr C, Mottaghy FM, Fink GR, Jon Shah N, Langen KJ. Radiomics in neuro-oncology: basics, workflow, and applications. Methods 2020;188:112-121. PMID: 32522530; DOI: 10.1016/j.ymeth.2020.06.003.
Abstract
Over the last years, the amount, variety, and complexity of neuroimaging data acquired in patients with brain tumors for routine clinical purposes and the resulting number of imaging parameters have substantially increased. Consequently, a timely and cost-effective evaluation of imaging data is hardly feasible without the support of methods from the field of artificial intelligence (AI). AI can facilitate and shorten various time-consuming steps in the image processing workflow, e.g., tumor segmentation, thereby optimizing productivity. Besides, the automated and computer-based analysis of imaging data may help to increase data comparability as it is independent of the experience level of the evaluating clinician. Importantly, AI offers the potential to extract new features from the routinely acquired neuroimages of brain tumor patients. In combination with patient data such as survival, molecular markers, or genomics, mathematical models can be generated that allow, for example, the prediction of treatment response or prognosis, as well as the noninvasive assessment of molecular markers. The subdiscipline of AI dealing with the computation, identification, and extraction of image features, as well as the generation of prognostic or predictive mathematical models, is termed radiomics. This review article summarizes the basics, the current workflow, and methods used in radiomics with a focus on feature-based radiomics in neuro-oncology and provides selected examples of its clinical application.
Affiliation(s)
- Philipp Lohmann
- Institute of Neuroscience and Medicine (INM-3, -4, -11), Research Center Juelich, Wilhelm-Johnen-Str., 52428 Juelich, Germany; Department of Stereotaxy and Functional Neurosurgery, Center for Neurosurgery, Faculty of Medicine and University Hospital Cologne, Kerpener Str. 62, 50937 Cologne, Germany.
- Norbert Galldiks
- Institute of Neuroscience and Medicine (INM-3, -4, -11), Research Center Juelich, Wilhelm-Johnen-Str., 52428 Juelich, Germany; Department of Neurology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany; Center of Integrated Oncology (CIO), Universities of Aachen, Bonn, Cologne and Duesseldorf, Kerpener Str. 62, 50937 Cologne, Germany
- Martin Kocher
- Institute of Neuroscience and Medicine (INM-3, -4, -11), Research Center Juelich, Wilhelm-Johnen-Str., 52428 Juelich, Germany; Department of Stereotaxy and Functional Neurosurgery, Center for Neurosurgery, Faculty of Medicine and University Hospital Cologne, Kerpener Str. 62, 50937 Cologne, Germany; Center of Integrated Oncology (CIO), Universities of Aachen, Bonn, Cologne and Duesseldorf, Kerpener Str. 62, 50937 Cologne, Germany
- Alexander Heinzel
- Institute of Neuroscience and Medicine (INM-3, -4, -11), Research Center Juelich, Wilhelm-Johnen-Str., 52428 Juelich, Germany; Department of Nuclear Medicine and Comprehensive Diagnostic Center Aachen (CDCA), RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Christian P Filss
- Institute of Neuroscience and Medicine (INM-3, -4, -11), Research Center Juelich, Wilhelm-Johnen-Str., 52428 Juelich, Germany; Department of Nuclear Medicine and Comprehensive Diagnostic Center Aachen (CDCA), RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Carina Stegmayr
- Institute of Neuroscience and Medicine (INM-3, -4, -11), Research Center Juelich, Wilhelm-Johnen-Str., 52428 Juelich, Germany
- Felix M Mottaghy
- Center of Integrated Oncology (CIO), Universities of Aachen, Bonn, Cologne and Duesseldorf, Kerpener Str. 62, 50937 Cologne, Germany; Department of Nuclear Medicine and Comprehensive Diagnostic Center Aachen (CDCA), RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany; Department of Radiology and Nuclear Medicine, Maastricht University Medical Center (MUMC+), P.Debeylaan 25, 6229 HX Maastricht, P.O. Box 5800, 6202 AZ Maastricht, the Netherlands
- Gereon R Fink
- Institute of Neuroscience and Medicine (INM-3, -4, -11), Research Center Juelich, Wilhelm-Johnen-Str., 52428 Juelich, Germany; Department of Neurology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany
- N Jon Shah
- Institute of Neuroscience and Medicine (INM-3, -4, -11), Research Center Juelich, Wilhelm-Johnen-Str., 52428 Juelich, Germany; JARA - BRAIN - Translational Medicine, Aachen, Germany; Department of Neurology, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Karl-Josef Langen
- Institute of Neuroscience and Medicine (INM-3, -4, -11), Research Center Juelich, Wilhelm-Johnen-Str., 52428 Juelich, Germany; Center of Integrated Oncology (CIO), Universities of Aachen, Bonn, Cologne and Duesseldorf, Kerpener Str. 62, 50937 Cologne, Germany; Department of Nuclear Medicine and Comprehensive Diagnostic Center Aachen (CDCA), RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany; JARA - BRAIN - Translational Medicine, Aachen, Germany
17
Ting HW, Chung SL, Chen CF, Chiu HY, Hsieh YW. A drug identification model developed using deep learning technologies: experience of a medical center in Taiwan. BMC Health Serv Res 2020; 20:312. [PMID: 32293426] [PMCID: PMC7158008] [DOI: 10.1186/s12913-020-05166-w]
Abstract
BACKGROUND Issuing correct prescriptions is a foundation of patient safety. Medication errors are among the most important problems in health care, with 'look-alike and sound-alike' (LASA) drugs the leading cause. Existing solutions to prevent LASA errors still have limitations. Deep learning techniques have revolutionized identification classifiers in many fields. In search of better image-based solutions to the blister-package identification problem, this study uses a baseline deep learning drug identification (DLDI) model to understand how humans confuse look-alike images, through the cognitive counterpart offered by deep learning solutions, and thereby to suggest further approaches. METHODS We collected images of 250 types of blister-packaged drugs from the Out-Patient Department (OPD) of a medical center. The You Only Look Once (YOLO) deep learning framework was adopted to implement the proposed model. The commonly used F1 score, defined in terms of precision and recall over a large number of identification tests, was the performance criterion. The proposed models were trained and compared on images of either the front side or the back side of the blister packages. RESULTS Total training time was 5 h 34 min for the front-side model and 7 h 42 min for the back-side model. The F1 score of the back-side model (95.99%) was better than that of the front-side model (93.72%). CONCLUSIONS This study constructed a deep learning-based model for blister-packaged drug identification with an accuracy greater than 90%. The model outperformed identification using conventional computer-vision solutions and could assist pharmacists in identifying drugs while preventing medication errors caused by look-alike blister packages. Integrated into existing hospital prescription systems, the model could verify dispensed drugs, supporting automated prescription and dispensing.
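The F1 criterion used in this study is the harmonic mean of precision and recall. A minimal sketch of the computation (the true-positive, false-positive, and false-negative counts below are hypothetical, not taken from the study):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN);
    F1 is their harmonic mean."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Hypothetical identification counts for one drug class.
p, r, f = precision_recall_f1(tp=230, fp=10, fn=20)
```

Because F1 penalizes an imbalance between precision and recall, it is a stricter summary than accuracy for identification tests with many classes.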
Affiliation(s)
- Hsien-Wei Ting
- Department of Neurosurgery, Taipei Hospital, Ministry of Health and Welfare, New Taipei City, Taiwan; Graduate Program in Biomedical Informatics, Yuan Ze University, Taoyuan City, Taiwan
- Sheng-Luen Chung
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei City, Taiwan
- Chih-Fang Chen
- Pharmaceutical Department, Mackay Memorial Hospital, No. 92, Sec. 2, Zhongshan N. Rd, Taipei City, 10449, Taiwan.
- Hsin-Yi Chiu
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei City, Taiwan
- Yow-Wen Hsieh
- Department of Pharmacy, China Medical University Hospital, Taichung, Taiwan
18
Nadeem MW, Ghamdi MAA, Hussain M, Khan MA, Khan KM, Almotiri SH, Butt SA. Brain Tumor Analysis Empowered with Deep Learning: A Review, Taxonomy, and Future Challenges. Brain Sci 2020; 10:118. [PMID: 32098333] [PMCID: PMC7071415] [DOI: 10.3390/brainsci10020118]
Abstract
Deep Learning (DL) algorithms enable computational models consisting of multiple processing layers that represent data at multiple levels of abstraction. In recent years, the use of deep learning has proliferated rapidly in almost every domain, especially in medical image processing, medical image analysis, and bioinformatics. Consequently, deep learning has dramatically changed and improved the means of recognition, prediction, and diagnosis in numerous areas of healthcare, such as pathology, brain tumors, lung cancer, the abdomen, cardiac imaging, and the retina. Given this wide range of applications, the objective of this article is to review the major deep learning concepts pertinent to brain tumor analysis (e.g., segmentation, classification, prediction, and evaluation). The review summarizes a large number of scientific contributions to the field of deep learning in brain tumor analysis. A coherent taxonomy of the research landscape is mapped from the literature, and the major aspects of this emerging field are discussed and analyzed. A critical discussion of the limitations of deep learning techniques is included at the end to elaborate open research challenges and directions for future work in this emergent area.
Affiliation(s)
- Muhammad Waqas Nadeem
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan
- Correspondence:
- Mohammed A. Al Ghamdi
- Department of Computer Science, Umm Al-Qura University, Makkah 23500, Saudi Arabia
- Muzammil Hussain
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan
- Muhammad Adnan Khan
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
- Khalid Masood Khan
- Department of Computer Science, Lahore Garrison University, Lahore 54000, Pakistan
- Sultan H. Almotiri
- Department of Computer Science, Umm Al-Qura University, Makkah 23500, Saudi Arabia
- Suhail Ashfaq Butt
- Department of Information Sciences, Division of Science and Technology, University of Education Township, Lahore 54700, Pakistan