1
Gong EJ, Bang CS, Lee JJ, Baik GH, Lim H, Jeong JH, Choi SW, Cho J, Kim DY, Lee KB, Shin SI, Sigmund D, Moon BI, Park SC, Lee SH, Bang KB, Son DS. Deep learning-based clinical decision support system for gastric neoplasms in real-time endoscopy: development and validation study. Endoscopy 2023; 55:701-708. [PMID: 36754065] [DOI: 10.1055/a-2031-0691]
Abstract
BACKGROUND: Deep learning models have previously been established to predict the histopathology and invasion depth of gastric lesions from endoscopic images. This study aimed to establish and validate a deep learning-based clinical decision support system (CDSS) for the automated detection and classification (diagnosis and invasion depth prediction) of gastric neoplasms in real-time endoscopy. METHODS: The same 5017 endoscopic images used to establish the previous models served as the training data. The primary outcomes were: (i) the lesion detection rate for the detection model, and (ii) the lesion classification accuracy for the classification model. To validate the lesion detection model, 2524 real-time procedures were tested in a randomized pilot study: consecutive patients were allocated either to CDSS-assisted or conventional screening endoscopy, and the lesion detection rate was compared between the groups. To validate the lesion classification model, a prospective multicenter external test was conducted using 3976 novel images from five institutions. RESULTS: The lesion detection rate was 95.6% (internal test). On performance validation, CDSS-assisted endoscopy showed a higher lesion detection rate than conventional screening endoscopy, although the difference was not statistically significant (2.0% vs. 1.3%; P = 0.21) (randomized study). The lesion classification accuracy was 89.7% in the four-class classification (advanced gastric cancer, early gastric cancer, dysplasia, and non-neoplastic) and 89.2% in the invasion depth prediction (mucosa-confined or submucosa-invaded; internal test). On performance validation, the CDSS reached 81.5% accuracy in the four-class classification and 86.4% accuracy in the binary classification (prospective multicenter external test).
CONCLUSIONS: The CDSS demonstrated high performance in detecting and classifying gastric lesions, supporting its potential for real-life clinical application.
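The randomized comparison above (2.0% vs. 1.3%; P = 0.21) can be sanity-checked with a standard two-proportion z-test. The sketch below is illustrative only: it assumes an even 1262/1262 split of the 2524 procedures and detection counts rounded from the reported rates, neither of which the abstract states, and the study's own statistical test may differ.

```python
# Hedged sketch: two-proportion z-test for the lesion detection rates.
# The 1262/1262 split and the detection counts (25 vs. 16, rounded from
# the reported 2.0% and 1.3%) are assumptions, not the study's raw data.
from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

z, p = two_proportion_z(25, 1262, 16, 1262)  # ~2.0% vs. ~1.3%
print(round(z, 2), round(p, 2))  # modest z; p well above 0.05
```

Under these assumed counts the test likewise finds no significant difference, consistent with the reported result.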
Affiliation(s)
- Eun Jeong Gong
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, South Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, South Korea
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, South Korea
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, South Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, South Korea
- Jae Jun Lee
- Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, South Korea
- Division of Big Data and Artificial Intelligence, Chuncheon Sacred Heart Hospital, South Korea
- Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Gwang Ho Baik
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, South Korea
- Hyun Lim
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, South Korea
- Sung Chul Park
- Department of Internal Medicine, School of Medicine, Kangwon National University, Chuncheon, South Korea
- Sang Hoon Lee
- Department of Internal Medicine, School of Medicine, Kangwon National University, Chuncheon, South Korea
- Ki Bae Bang
- Department of Internal Medicine, Dankook University College of Medicine, Cheonan, South Korea
- Dae-Soon Son
- Division of Data Science, Data Science Convergence Research Center, Hallym University, Chuncheon, South Korea
2
Demir F, Akbulut Y, Taşcı B, Demir K. Improving brain tumor classification performance with an effective approach based on new deep learning model named 3ACL from 3D MRI data. Biomed Signal Process Control 2023; 81:104424. [DOI: 10.1016/j.bspc.2022.104424]
3
Koteswara Rao Chinnam S, Sistla V, Krishna Kishore Kolli V. Multimodal attention-gated cascaded U-Net model for automatic brain tumor detection and segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103907]
4
El Kader IA, Xu G, Shuai Z, Brahim EMS, Saminu S. An efficient convolutional neural network model for brain MRI segmentation. WSEAS Transactions on Biology and Biomedicine 2022; 19:77-84. [DOI: 10.37394/23208.2022.19.10]
Abstract
Medical image analysis is an active research area and a significant challenge. Owing to the complexity of brain structure, accurate diagnosis of brain tumors is extremely difficult. In recent years, research has focused on solving this problem through medical image processing with deep learning techniques, which have achieved good results in this field. This paper proposes an efficient convolutional neural network (efficient-CNN) model for MR brain image segmentation and analysis. The model consists of a pre-efficient-CNN block for dataset reduction and enhancement and a segmentation efficient-CNN block. The efficient-CNN is designed along the lines of an application-specific CNN (ASCNN) to perform unidirectional and transverse feature extraction and tumor/pixel classification. The proposed Full-ReLU activation function halves the number of kernels in a heavily filtered convolution layer without reducing processing quality. The efficient-CNN comprises 8 convolutional layers and 110 kernels. Experiments were conducted on the MR brain database from the University of Arizona, including images with and without tumors. The proposed model achieved an accuracy of 97.2% to 98%, demonstrating its efficiency and its ability to assist in the early diagnosis of brain tumors with sufficient accuracy to support doctors' decisions during diagnosis.
Affiliation(s)
- Isselmou Abd El Kader
- School of Health Science and Biomedical Engineering, Hebei University of Technology, Tianjin, China
- Guizhi Xu
- School of Health Science and Biomedical Engineering, Hebei University of Technology, Tianjin, China
- Zhang Shuai
- School of Health Science and Biomedical Engineering, Hebei University of Technology, Tianjin, China
- Sani Saminu
- School of Health Science and Biomedical Engineering, Hebei University of Technology, Tianjin, China
5
Qiao Z, Ge J, He W, Xu X, He J. Artificial intelligence algorithm-based computerized tomography image features combined with serum tumor markers for diagnosis of pancreatic cancer. Computational and Mathematical Methods in Medicine 2022; 2022:8979404. [PMID: 35281945] [PMCID: PMC8906968] [DOI: 10.1155/2022/8979404]
Abstract
The objective of this study was to analyze the value of artificial intelligence algorithm-based computerized tomography (CT) imaging combined with serum tumor markers for the diagnosis of pancreatic cancer. In the study, 68 hospitalized patients with pancreatic cancer were selected as the experimental group and 68 hospitalized patients with chronic pancreatitis as the control group; all underwent CT imaging. An image segmentation algorithm based on a combined two-dimensional (2D)-three-dimensional (3D) convolutional neural network (CNN) was proposed, with the fully convolutional network (FCN) and U-Net algorithms introduced for comparison. The diagnostic performance of CT, serum carbohydrate antigen-50 (CA-50), carbohydrate antigen-199 (CA-199), carbohydrate antigen-242 (CA-242), combined detection of tumor markers, and CT combined with serum tumor marker testing (CT-STUM) for pancreatic cancer was compared and analyzed. The results showed that the average Dice coefficient of 2D-3D training was 84.27%, higher than that of the 2D and 3D CNNs. During testing, the maximum and average Dice coefficients of the 2D-3D CNN algorithm were 90.75% and 84.32%, respectively, higher than those of the other two algorithms, and the differences were statistically significant (P < 0.05). The penetration ratio of the pancreatic duct in the experimental group was lower than in the control group, while the remaining features were higher, with statistically significant differences (P < 0.05). CA-50, CA-199, and CA-242 in the experimental group were 141.72 U/mL, 1548.24 U/mL, and 83.65 U/mL, respectively, higher than in the control group, with statistically significant differences (P < 0.05). The sensitivity, specificity, positive predictive value, and authenticity of the combined detection of serum tumor markers were higher than those of CA-50, CA-199, and CA-242 individually, and the differences were statistically significant (P < 0.05).
The results showed that the proposed 2D-3D CNN algorithm had good stability and image segmentation performance, and that CT-STUM had high sensitivity and specificity in the diagnosis of pancreatic cancer.
Affiliation(s)
- Zhengmei Qiao
- Department of Clinical Laboratory, Baoji Hi-Tech Hospital, Baoji, 721013 Shaanxi, China
- Junli Ge
- Department of Clinical Laboratory, Baoji Hi-Tech Hospital, Baoji, 721013 Shaanxi, China
- Wenping He
- Liver and Gallbladder Surgery, Ankang Hospital of Traditional Chinese Medicine, Ankang, 725000 Shaanxi, China
- Xinye Xu
- Emergency Surgery, Ankang Hospital of Traditional Chinese Medicine, Ankang, 725000 Shaanxi, China
- Jianxin He
- Liver and Gallbladder Surgery, Ankang Hospital of Traditional Chinese Medicine, Ankang, 725000 Shaanxi, China
6
Bhalodiya JM, Lim Choi Keung SN, Arvanitis TN. Magnetic resonance image-based brain tumour segmentation methods: a systematic review. Digit Health 2022; 8:20552076221074122. [PMID: 35340900] [PMCID: PMC8943308] [DOI: 10.1177/20552076221074122]
Abstract
Background: Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation (BraTS) dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital in method development. Purpose: To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared with manual segmentation. Methods: We conducted a systematic review of 572 brain tumour segmentation studies published during 2015-2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics- or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods as applied to magnetic resonance imaging techniques. In particular, we synthesised each method by the magnetic resonance imaging sequences used, study population, technical approach (such as deep learning) and performance measures (such as Dice score). Statistical tests: We compared median Dice scores in segmenting the whole tumour, tumour core and enhanced tumour. Results: We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are used the most in segmentation algorithms, whereas perfusion-weighted and diffusion-weighted magnetic resonance imaging see limited use. Moreover, we found that the U-Net deep learning architecture is cited the most and achieves high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation.
Conclusion U-Net is a promising deep learning technology for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where there are limited datasets available.
Affiliation(s)
- Jayendra M Bhalodiya
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Sarah N Lim Choi Keung
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Theodoros N Arvanitis
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
7
Ottom MA, Rahman HA, Dinov ID. Znet: deep learning approach for 2D MRI brain tumor segmentation. IEEE Journal of Translational Engineering in Health and Medicine 2022; 10:1800508. [PMID: 35774412] [PMCID: PMC9236306] [DOI: 10.1109/jtehm.2022.3176737]
Abstract
Background: Detection and segmentation of brain tumors using MR images are challenging and valuable tasks in the medical field. Early diagnosis and localization of brain tumors can save lives and give physicians timely options to select efficient treatment plans. Deep learning approaches have attracted researchers in medical imaging due to their capacity, performance, and potential to assist in accurate diagnosis, prognosis, and medical treatment. Methods and procedures: This paper presents a novel framework for segmenting 2D brain tumors in MR images using deep neural networks (DNN) and data augmentation strategies. The proposed approach (Znet) is based on skip connections, encoder-decoder architectures, and data amplification to propagate the intrinsic affinities of a relatively small number of expert-delineated tumors, e.g., hundreds of low-grade glioma (LGG) patients, to many thousands of synthetic cases. Results: Our experiments showed high mean Dice similarity coefficients (Dice = 0.96 during model training and Dice = 0.92 on the independent testing dataset). Other evaluation measures were also relatively high, e.g., pixel accuracy = 0.996, F1 score = 0.81, and Matthews correlation coefficient MCC = 0.81. The results and visualization of the DNN-derived tumor masks in the testing dataset showcase the Znet model's capability to localize and auto-segment brain tumors in MR images. The approach can further be generalized to 3D brain volumes, other pathologies, and a wide range of image modalities. Conclusion: We confirm the ability of deep learning methods and the proposed Znet framework to detect and segment tumors in MR images. Furthermore, pixel accuracy may not be a suitable evaluation measure for semantic segmentation when classes are imbalanced, because the dominant class in ground-truth masks is the background.
Therefore, a high pixel accuracy can be misleading in some computer vision applications, whereas alternative metrics such as Dice and IoU (intersection over union) are more informative for semantic segmentation. Clinical impact: Artificial intelligence (AI) applications in medicine are advancing swiftly; however, few techniques are deployed in clinical practice. This research demonstrates a practical example of AI in medical imaging that can be deployed as a tool for auto-segmentation of tumors in MR images.
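The point about misleading pixel accuracy under class imbalance is easy to demonstrate. The toy masks below are invented for illustration; they are not from the Znet datasets.

```python
# Toy illustration of why pixel accuracy misleads under class imbalance
# while Dice and IoU track overlap with the small foreground class.
# Masks are flat lists of 0/1 pixels; all values here are invented.

def pixel_accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def dice(pred, truth):
    inter = sum(p & t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def iou(pred, truth):
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union

# A 100-pixel image with a 4-pixel "tumor"; the model finds half of it.
truth = [1] * 4 + [0] * 96
pred = [1] * 2 + [0] * 98

print(pixel_accuracy(pred, truth))  # 0.98 -- looks excellent
print(round(dice(pred, truth), 3))  # 0.667 -- reveals the missed half
print(iou(pred, truth))  # 0.5
```

Because the background dominates, a predictor that misses half the tumor still scores 98% pixel accuracy, while Dice and IoU expose the error.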
Affiliation(s)
- Hanif Abdul Rahman
- Departments of Health Behavior and Biological Sciences and Computational Medicine and Bioinformatics, Statistics Online Computational Resource, University of Michigan, Ann Arbor, MI, USA
- Ivo D. Dinov
- Departments of Health Behavior and Biological Sciences and Computational Medicine and Bioinformatics, Statistics Online Computational Resource, University of Michigan, Ann Arbor, MI, USA
8
Valizadeh A, Shariatee M. The progress of medical image semantic segmentation methods for application in COVID-19 detection. Computational Intelligence and Neuroscience 2021; 2021:7265644. [PMID: 34840563] [PMCID: PMC8611358] [DOI: 10.1155/2021/7265644]
Abstract
Medical image semantic segmentation has been employed in various areas, including medical imaging, computer vision, and intelligent transportation. In this study, semantic segmentation methods are split into two groups: earlier traditional methods and deep neural network methods. Traditional methods and the published datasets for segmentation are reviewed first. Current deep neural network methods are then thoroughly explored, covering fully convolutional networks, sampling methods, FCN-CRF combinations, dilated convolution methods, improvements in network structure, pyramid methods, multistage and multifeature methods, and supervised, semi-supervised, and unsupervised methods. Finally, a general conclusion on the use of deep neural network advances in semantic segmentation is presented.
Affiliation(s)
- Amin Valizadeh
- Department of Mechanical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
- Morteza Shariatee
- Department of Mechanical Engineering, Iowa State University, Ames, IA, USA
9
Kano Y, Ikushima H, Sasaki M, Haga A. Automatic contour segmentation of cervical cancer using artificial intelligence. Journal of Radiation Research 2021; 62:934-944. [PMID: 34401914] [PMCID: PMC8438257] [DOI: 10.1093/jrr/rrab070]
Abstract
In cervical cancer treatment, radiation therapy is selected based on the degree of tumor progression, and radiation oncologists are required to delineate tumor contours. To reduce this burden, automatic segmentation of tumor contours would prove useful. To the best of our knowledge, automatic tumor contour segmentation has rarely been applied to cervical cancer treatment. In this study, diffusion-weighted images (DWI) of 98 patients with cervical cancer were acquired. We trained automatic tumor contour segmentation models using 2D U-Net and 3D U-Net to investigate the possibility of applying such models in clinical practice. All 98 cases were employed for training, and predictions were obtained by swapping the training and test images. For each case, six prediction images were obtained after six training sessions; these were summed and binarized to output a final automatically segmented contour. For evaluation, the Dice similarity coefficient (DSC) and Hausdorff distance (HD) were applied to analyze the difference between the tumor contours delineated by radiation oncologists and the output image. The DSC ranged from 0.13 to 0.93 (median 0.83, mean 0.77). Cases with DSC < 0.65 included tumors with a maximum diameter < 40 mm and heterogeneous intracavitary intensity due to necrosis. The HD ranged from 2.7 to 9.6 mm (median 4.7 mm). Thus, the study confirmed that tumor contours of cervical cancer can be automatically segmented with high accuracy.
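The two agreement measures used above, DSC and HD, can be sketched in a few lines. The masks and coordinates below are made up for illustration and are far smaller than real DWI segmentations.

```python
# Minimal sketch of the two agreement measures: Dice similarity
# coefficient (DSC) over binary masks stored as sets of foreground
# pixel coordinates, and the symmetric Hausdorff distance (HD) between
# two point sets. All coordinates below are invented for illustration.
from math import dist

def dsc(a, b):
    inter = len(a & b)
    return 2 * inter / (len(a) + len(b))

def hausdorff(pts_a, pts_b):
    # Directed distance: worst-case nearest-neighbor gap; then symmetrize.
    d = lambda xs, ys: max(min(dist(x, y) for y in ys) for x in xs)
    return max(d(pts_a, pts_b), d(pts_b, pts_a))

truth = {(r, c) for r in range(4) for c in range(4)}    # 4x4 "tumor"
pred = {(r, c) for r in range(4) for c in range(1, 5)}  # shifted right by 1

print(dsc(pred, truth))        # 0.75
print(hausdorff(pred, truth))  # 1.0
```

DSC rewards pixel overlap, while HD reports the worst boundary disagreement in distance units (here pixels; mm in the study), which is why the paper quotes both.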
Affiliation(s)
- Yosuke Kano
- Department of Radiological Technology, Tokushima Prefecture Naruto Hospital, 32 Kotani, Muyacho, Kurosaki, Naruto-shi, Tokushima 772-8503, Japan
- Hitoshi Ikushima
- Department of Therapeutic Radiology, Institute of Biomedical Sciences, Tokushima University Graduate School, 3-18-15 Kuramoto-Cho, Tokushima, Tokushima 770-8503, Japan
- Motoharu Sasaki
- Department of Therapeutic Radiology, Institute of Biomedical Sciences, Tokushima University Graduate School, 3-18-15 Kuramoto-Cho, Tokushima, Tokushima 770-8503, Japan (corresponding author)
- Akihiro Haga
- Department of Medical Image Informatics, Institute of Biomedical Sciences, Tokushima University Graduate School, 3-18-15 Kuramoto-Cho, Tokushima, Tokushima 770-8503, Japan
10
Hasan SMK, Simon RA, Linte CA. Segmentation and removal of surgical instruments for background scene visualization from endoscopic/laparoscopic video. Proceedings of SPIE--The International Society for Optical Engineering 2021; 11598:115980A. [PMID: 34079156] [PMCID: PMC8168980] [DOI: 10.1117/12.2580668]
Abstract
Surgical tool segmentation is becoming imperative for providing detailed information during intra-operative execution. These tools can obscure surgeons' dexterity control due to the narrow working space and visual field-of-view, which increases the risk of complications resulting from tissue injuries (e.g., tissue scars and tears). This paper demonstrates a novel application of segmenting and removing surgical instruments from laparoscopic/endoscopic video using digital inpainting algorithms. To segment the surgical instruments, we use a modified U-Net architecture (U-NetPlus) composed of a pre-trained VGG11 or VGG16 encoder and a redesigned decoder. The decoder is modified by replacing the transposed convolution operation with an up-sampling operation based on nearest-neighbor (NN) interpolation. This modification removes the artifacts generated by the transposed convolution and, furthermore, the new interpolation weights require no learning for the up-sampling operation. The tool removal algorithms use the tool segmentation mask and either instrument-free reference frames or previous instrument-containing frames to fill in (i.e., inpaint) the instrument segmentation mask with the background tissue underneath. We have demonstrated the performance of the proposed surgical tool segmentation/removal algorithms on a robotic instrument dataset from the MICCAI 2015 EndoVis Challenge. We also showed successful performance of the tool removal algorithm on synthetically generated instrument-containing videos obtained by embedding a moving surgical tool into surgical tool-free videos. Our application successfully segments and removes the surgical tool to unveil the background tissue view otherwise obstructed by the tool, producing visually comparable results to the ground truth.
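The nearest-neighbor upsampling that replaces the transposed convolution in the modified decoder can be sketched directly: each pixel is repeated along both axes, so the operation needs no learned weights and cannot produce transposed-convolution checkerboard artifacts. The tiny feature map below is illustrative only; in U-NetPlus this step would be followed by ordinary convolutions.

```python
# Sketch of 2x nearest-neighbor (NN) upsampling over a nested-list
# feature map: every pixel is repeated along both axes, so no weights
# are learned and no checkerboard artifacts are introduced.

def nn_upsample(img, factor=2):
    return [
        [px for px in row for _ in range(factor)]  # repeat columns
        for row in img
        for _ in range(factor)                     # repeat rows
    ]

feature_map = [[1, 2],
               [3, 4]]
print(nn_upsample(feature_map))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```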
Affiliation(s)
- S. M. Kamrul Hasan
- Biomedical Modeling, Visualization and Image-guided Navigation (BiMVisIGN) Lab, RIT
- Center for Imaging Science, Rochester Institute of Technology, NY, USA
- Richard A. Simon
- Biomedical Modeling, Visualization and Image-guided Navigation (BiMVisIGN) Lab, RIT
- Biomedical Engineering, Rochester Institute of Technology, NY, USA
- Cristian A. Linte
- Biomedical Modeling, Visualization and Image-guided Navigation (BiMVisIGN) Lab, RIT
- Center for Imaging Science, Rochester Institute of Technology, NY, USA
- Biomedical Engineering, Rochester Institute of Technology, NY, USA
11
Bhandari A, Koppen J, Agzarian M. Convolutional neural networks for brain tumour segmentation. Insights Imaging 2020; 11:77. [PMID: 32514649] [PMCID: PMC7280397] [DOI: 10.1186/s13244-020-00869-4]
Abstract
The introduction of quantitative image analysis has given rise to fields such as radiomics, which have been used to predict clinical sequelae. One growing area of interest is brain tumours, in particular glioblastoma multiforme (GBM). Tumour segmentation is an important step in the analysis pipeline for this pathology. Manual segmentation is often inconsistent, as it varies between observers. Automated segmentation has been proposed to combat this issue. Methodologies such as convolutional neural networks (CNNs), machine learning pipelines modelled on the biological processes of neurons (called nodes) and synapses (connections), have been of interest in the literature. We investigate the role of CNNs in segmenting brain tumours by first taking an educational look at CNNs and performing a literature search to determine an example pipeline for segmentation. We then investigate the future use of CNNs by exploring a novel field: radiomics, which examines quantitative features of brain tumours such as shape, texture, and signal intensity to predict clinical outcomes such as survival and response to therapy.
Affiliation(s)
- Abhishta Bhandari
- Townsville University Hospital, Townsville, Queensland, Australia; Department of Anatomy, James Cook University, Townsville, Queensland, Australia
- Jarrad Koppen
- Townsville University Hospital, Townsville, Queensland, Australia
- Marc Agzarian
- South Australia Medical Imaging, Flinders Medical Centre, Adelaide, Australia; College of Medicine & Public Health, Flinders University, Adelaide, Australia
12
Improved U-Net: fully convolutional network model for skin-lesion segmentation. Applied Sciences-Basel 2020. [DOI: 10.3390/app10103658]
Abstract
The early and accurate diagnosis of skin cancer is crucial for providing patients with advanced treatment by focusing medical personnel on specific parts of the skin. Networks based on encoder-decoder architectures have been effectively implemented for numerous computer-vision applications. U-Net, a CNN architecture based on the encoder-decoder network, has achieved successful performance for skin-lesion segmentation. However, this network has several drawbacks caused by its upsampling method and activation function. In this paper, a fully convolutional network and its architecture are proposed with a modified U-Net, in which a bilinear interpolation method is used for upsampling, followed by a block of convolution layers with parametric rectified linear-unit (PReLU) non-linearity. To avoid overfitting, dropout is applied after each convolution block. The results demonstrate that the proposed technique achieves state-of-the-art performance for skin-lesion segmentation, with 94% pixel accuracy and an 88% Dice coefficient.
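The bilinear upsampling this modified U-Net substitutes for transposed convolution can be sketched as two passes of 1-D linear interpolation (here with the align-corners index mapping, one common convention; the paper's exact variant is not stated). The 2x2 input is illustrative only; in the network this step precedes the convolution and PReLU block.

```python
# Sketch of separable bilinear upsampling: one 1-D linear-interpolation
# pass across rows, then one down columns. Uses the align-corners index
# mapping, one common convention; sizes and values are illustrative.

def lerp_row(row, out_len):
    n = len(row)
    out = []
    for i in range(out_len):
        x = i * (n - 1) / (out_len - 1)  # align-corners source coordinate
        lo = int(x)
        hi = min(lo + 1, n - 1)
        frac = x - lo
        out.append(row[lo] * (1 - frac) + row[hi] * frac)
    return out

def bilinear_upsample(img, out_h, out_w):
    rows = [lerp_row(r, out_w) for r in img]               # horizontal pass
    cols = [lerp_row(list(c), out_h) for c in zip(*rows)]  # vertical pass
    return [list(r) for r in zip(*cols)]

print(bilinear_upsample([[1, 3], [5, 7]], 3, 3))
# [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0]]
```

Unlike transposed convolution, the interpolation weights are fixed by the geometry, which is the property the abstract leans on to avoid upsampling artifacts.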