1. Variational Autoencoders-Based Self-Learning Model for Tumor Identification and Impact Analysis from 2-D MRI Images. Journal of Healthcare Engineering 2023; 2023:1566123. [PMID: 36704578] [PMCID: PMC9873460] [DOI: 10.1155/2023/1566123]
Abstract
Over the past few years, a tremendous change has occurred in computer-aided diagnosis (CAD) technology. The evolution of numerous medical imaging techniques has enhanced the accuracy of the preliminary analysis of several diseases. Magnetic resonance imaging (MRI) is a prevalent technology extensively used in evaluating the spread of malignant tissues or abnormalities in the human body. This article aims to automate a computationally efficient mechanism that can accurately identify a tumor from MRI images and analyze its impact. The proposed model is robust enough to classify tumors with minimal training data. Generative variational autoencoder models are efficient in reconstructing images nearly identical to the originals, which are used to train the model adequately. The proposed self-learning algorithm can learn from the insights of both the autogenerated and the original images. Incorporating long short-term memory (LSTM) enables faster processing of the high-dimensional imaging data, making it easier for radiologists and practitioners to assess the tumor's progress. Self-learning models need comparatively less training data and are more resource-efficient than various state-of-the-art models. The efficiency of the proposed model has been assessed using various benchmark metrics, and the obtained results exhibited an accuracy of 89.7%. The analysis of tumor growth progress is presented in the current study. The obtained accuracy is modest for the healthcare domain, yet the model deals reasonably well with a smaller dataset by making use of an image generation mechanism. The study outlines the role of an autoencoder in self-learning models. Future work may include sturdier feature engineering models and optimized activation functions to yield better results.
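As a rough illustration of the image-generation mechanism this abstract relies on, the sketch below implements the reparameterization step at the heart of a variational autoencoder in plain NumPy. The shapes and latent dimension are invented for the example, not taken from the paper; a real model would wrap this step between trainable encoder and decoder networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    This 'reparameterization trick' keeps the sampling step
    differentiable with respect to the encoder outputs mu and log_var,
    so the decoder can generate new (augmented) images from sampled z.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Toy encoder outputs for a batch of 4 images with a 16-d latent space.
mu = np.zeros((4, 16))
log_var = np.zeros((4, 16))  # log_var = 0 means sigma = 1 everywhere

z = reparameterize(mu, log_var, rng)
print(z.shape)  # (4, 16)
```

Decoding such sampled latents yields the "autogenerated" images the self-learning algorithm trains on alongside the originals.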
2. Raj R, Londhe ND, Sonawane R. PsLSNetV2: End to end deep learning system for measurement of area score of psoriasis regions in color images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104138]
3. Rao VM, Wan Z, Arabshahi S, Ma DJ, Lee PY, Tian Y, Zhang X, Laine AF, Guo J. Improving across-dataset brain tissue segmentation for MRI imaging using transformer. Frontiers in Neuroimaging 2022; 1:1023481. [PMID: 37555170] [PMCID: PMC10406272] [DOI: 10.3389/fnimg.2022.1023481]
Abstract
Brain tissue segmentation has demonstrated great utility in quantifying MRI data by serving as a precursor to further post-processing analysis. However, manual segmentation is highly labor-intensive, and automated approaches, including convolutional neural networks (CNNs), have struggled to generalize well due to properties inherent to MRI acquisition, leaving a great need for an effective segmentation tool. This study introduces a novel CNN-Transformer hybrid architecture designed to improve brain tissue segmentation by taking advantage of the increased performance and generality conferred by Transformers for 3D medical image segmentation tasks. We first demonstrate the superior performance of our model on various T1w MRI datasets. Then, we rigorously validate our model's generality across four multi-site T1w MRI datasets, covering different vendors, field strengths, scan parameters, and neuropsychiatric conditions. Finally, we highlight the reliability of our model on test-retest scans taken at different time points. In all situations, our model achieved the greatest generality and reliability compared to the benchmarks. As such, our method is inherently robust and can serve as a valuable tool for brain-related T1w MRI studies. The code for the TABS network is available at: https://github.com/raovish6/TABS.
Affiliation(s)
- Vishwanatha M. Rao
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Zihan Wan
- Department of Applied Mathematics, Columbia University, New York, NY, United States
- Soroush Arabshahi
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- David J. Ma
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Pin-Yu Lee
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Ye Tian
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Xuzhe Zhang
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Andrew F. Laine
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Jia Guo
- Department of Psychiatry, Columbia University, New York, NY, United States
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
4. SIP-UNet: Sequential Inputs Parallel UNet Architecture for Segmentation of Brain Tissues from Magnetic Resonance Images. Mathematics 2022. [DOI: 10.3390/math10152755]
Abstract
Proper analysis of changes in brain structure can lead to a more accurate diagnosis of specific brain disorders. The accuracy of segmentation is crucial for quantifying changes in brain structure. In recent studies, UNet-based architectures have outperformed other deep learning architectures in biomedical image segmentation. However, improving segmentation accuracy is challenging due to the low resolution of medical images and insufficient data. In this study, we present a novel architecture that combines three parallel UNets using a residual network. This architecture improves upon the baseline methods in three ways. First, instead of using a single image as input, we use three consecutive images. This gives our model the freedom to learn from neighboring images as well. Additionally, the images are individually compressed and decompressed using three different UNets, which prevents the model from merging the features of the images. Finally, following the residual network architecture, the outputs of the UNets are combined in such a way that the features of the image corresponding to the output are enhanced by a skip connection. The proposed architecture performed better than using a single conventional UNet and other UNet variants.
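The "three consecutive images" input scheme described above can be sketched as a simple preprocessing step. The function below is a hypothetical illustration (not the authors' code) that builds one neighbor triplet per slice of a volume, repeating edge slices where a neighbor is missing; each triplet would then feed the three parallel UNets.

```python
import numpy as np

def make_sequential_inputs(volume):
    """For each slice i, return the (i-1, i, i+1) triplet of neighbors.

    Edge slices reuse themselves as their missing neighbor, so the
    output has one triplet per slice: shape (n_slices, 3, H, W).
    """
    padded = np.concatenate([volume[:1], volume, volume[-1:]], axis=0)
    return np.stack([padded[:-2], padded[1:-1], padded[2:]], axis=1)

# Toy 5-slice volume of 4 x 4 images.
volume = np.arange(5 * 4 * 4, dtype=float).reshape(5, 4, 4)
triplets = make_sequential_inputs(volume)
print(triplets.shape)  # (5, 3, 4, 4)
```

Keeping the three slices as separate channels (rather than averaging them) matches the paper's point that each image is compressed and decompressed by its own UNet, so features are not merged prematurely.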
5. SM-SegNet: A Lightweight Squeeze M-SegNet for Tissue Segmentation in Brain MRI Scans. Sensors 2022; 22:5148. [PMID: 35890829] [PMCID: PMC9319649] [DOI: 10.3390/s22145148]
Abstract
In this paper, we propose a novel squeeze M-SegNet (SM-SegNet) architecture featuring a fire module to perform accurate as well as fast segmentation of the brain on magnetic resonance imaging (MRI) scans. The proposed model utilizes uniform input patches, combined-connections, long skip connections, and squeeze-expand convolutional layers from the fire module to segment brain MRI data. The proposed SM-SegNet architecture involves a multi-scale deep network on the encoder side and deep supervision on the decoder side, which uses combined-connections (skip connections and pooling indices) from the encoder to the decoder layer. The multi-scale side input layers support the deep network layers' extraction of discriminative feature information, and the decoder side provides deep supervision to reduce the gradient problem. By using combined-connections, extracted features can be transferred from the encoder to the decoder, recovering spatial information and making the model converge faster. Long skip connections were used to stabilize the gradient updates in the network. Owing to the adoption of the fire module, the proposed model was significantly faster to train and offered more efficient memory usage, with 83% fewer parameters than previously developed methods. The proposed method was evaluated using the Open Access Series of Imaging Studies (OASIS) and the Internet Brain Segmentation Repository (IBSR) datasets. The experimental results demonstrate that the proposed SM-SegNet architecture achieves segmentation accuracies of 95% for cerebrospinal fluid, 95% for gray matter, and 96% for white matter, outperforming the existing methods in both subjective and objective metrics in brain MRI segmentation.
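The parameter savings of a squeeze-expand fire module can be checked with a little arithmetic. The channel sizes below are illustrative, not the paper's actual configuration, but they show how a small 1x1 "squeeze" layer in front of parallel 1x1/3x3 "expand" branches cuts the weight count of a plain 3x3 convolution by a comparable margin to the 83% reported.

```python
def conv_params(c_in, c_out, k):
    """Weight count of a k x k convolution (biases omitted for simplicity)."""
    return c_in * c_out * k * k

def fire_params(c_in, squeeze, expand):
    """SqueezeNet-style fire module: a 1x1 squeeze layer, then parallel
    1x1 and 3x3 expand branches, each producing `expand` channels."""
    return (conv_params(c_in, squeeze, 1)       # squeeze 1x1
            + conv_params(squeeze, expand, 1)   # expand 1x1
            + conv_params(squeeze, expand, 3))  # expand 3x3

# A plain 3x3 conv mapping 64 -> 128 channels, vs. a fire module that
# also emits 128 channels (64 per expand branch) through a 16-ch squeeze.
plain = conv_params(64, 128, 3)                # 73728
fire = fire_params(64, squeeze=16, expand=64)  # 1024 + 1024 + 9216 = 11264
print(1 - fire / plain)  # ~0.847, i.e. roughly 85% fewer parameters
```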
6. Marwa F, Zahzah EH, Bouallegue K, Machhout M. Deep learning based neural network application for automatic ultrasonic computed tomographic bone image segmentation. Multimedia Tools and Applications 2022; 81:13537-13562. [PMID: 35194385] [PMCID: PMC8853291] [DOI: 10.1007/s11042-022-12322-3]
Abstract
Deep-learning techniques have led to technological progress in medical image segmentation, especially in the ultrasound domain. The main goal of this study is to optimize a deep-learning-based neural network architecture for automatic segmentation of Ultrasonic Computed Tomography (USCT) bone images with a short processing time. The proposed method is based on an end-to-end neural network architecture. First, the novelty is shown by the improvement of the Variable Structure Model of Neuron (VSMN), which is trained for both USCT noise removal and dataset augmentation. Second, a VGG-SegNet neural network architecture is trained and tested on new USCT images not seen before for automatic bone segmentation. In addition, we offer a free USCT dataset. The proposed model is implemented on both the CPU and the GPU, surpassing previous works with accuracies of 97.38% and 96% for training and validation, respectively, and achieving high segmentation accuracy for testing with a small error of 0.006 in a short processing time. The suggested method demonstrates its ability to augment USCT data and then automatically segment USCT bone structures with excellent accuracy, outperforming the state of the art.
Affiliation(s)
- Fradi Marwa
- Physic Department of Faculty of Sciences of Monastir, Monastir University, Monastir, Tunisia
- Laboratory of Informatics, Image and Interaction (L3i, France), La Rochelle University, La Rochelle, France
- El-hadi Zahzah
- Laboratory of Informatics, Image and Interaction (L3i, France), La Rochelle University, La Rochelle, France
- Mohsen Machhout
- Physic Department of Faculty of Sciences of Monastir, Monastir University, Monastir, Tunisia
7. Real-time application based CNN architecture for automatic USCT bone image segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103123]
8. Automated segmentation of epidermis in high-frequency ultrasound of pathological skin using a cascade of DeepLab v3+ networks and fuzzy connectedness. Comput Med Imaging Graph 2021; 95:102023. [PMID: 34883364] [DOI: 10.1016/j.compmedimag.2021.102023]
Abstract
This study proposes a novel, fully automated framework for epidermal layer segmentation in different skin diseases based on 75 MHz high-frequency ultrasound (HFUS) image data. A robust epidermis segmentation is a vital first step to detect changes in thickness, shape, and intensity and therefore support diagnosis and treatment monitoring in inflammatory and neoplastic skin lesions. Our framework links deep learning and fuzzy connectedness for image analysis. It consists of a cascade of two DeepLab v3+ models with a ResNet-50 backbone and a fuzzy connectedness analysis module for fine segmentation. Both deep models are pre-trained on the ImageNet dataset and subjected to transfer learning using our HFUS database of 580 images with atopic dermatitis, psoriasis and non-melanocytic skin tumors. The first deep model is used to detect the appropriate region of interest, while the second stands for the main segmentation procedure. We use the softmax layer of the latter twofold to prepare the input data for fuzzy connectedness analysis: as a reservoir of seed points and a direct contribution to the input image. In the experiments, we analyze different configurations of the framework, including region of interest detection, deep model backbones and training loss functions, or fuzzy connectedness analysis with parameter settings. We also use the Dice index and epidermis thickness to compare our results to state-of-the-art approaches. The Dice index of 0.919 yielded by our model over the entire dataset (and exceeding 0.93 in inflammatory diseases) proves its superiority over the other methods.
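The Dice index used to compare methods here (and in several of the other entries in this list) is straightforward to compute for binary masks; a minimal NumPy version, with toy masks rather than real segmentations:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), with eps guarding empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy example: a 4-pixel prediction against a 6-pixel ground truth
# that share 4 pixels -> Dice = 2*4 / (4+6) = 0.8.
a = np.zeros((4, 4), dtype=int)
b = np.zeros((4, 4), dtype=int)
a[1:3, 1:3] = 1
b[1:3, 1:4] = 1
print(round(dice(a, b), 3))  # 0.8
```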
9. Deep Learning Framework to Detect Ischemic Stroke Lesion in Brain MRI Slices of Flair/DW/T1 Modalities. Symmetry (Basel) 2021. [DOI: 10.3390/sym13112080]
Abstract
Ischemic stroke lesion (ISL) is a brain abnormality. Studies have proved that early detection and treatment could reduce the disease impact. This research aimed to develop a deep learning (DL) framework to detect ISL in multi-modality magnetic resonance image (MRI) slices. It proposed a convolutional neural network (CNN)-supported segmentation and classification to execute a consistent disease detection framework. The developed framework consisted of the following phases: (i) ISL mining using the Visual Geometry Group's VGG16 scheme supporting SegNet (VGG-SegNet), (ii) handcrafted feature extraction, (iii) deep feature extraction using the chosen DL scheme, (iv) feature ranking and serial feature concatenation, and (v) classification using binary classifiers. Fivefold cross-validation was employed in this work, and the best result was selected as the final one. The attained results were separately examined for (i) segmentation, (ii) deep-feature-based classification, and (iii) concatenated-feature-based classification. The experimental investigation is presented using the Ischemic Stroke Lesion Segmentation (ISLES2015) database. The attained results confirm that the proposed ISL detection framework gives better segmentation and classification results. The VGG16 scheme helped obtain a better result with deep features (accuracy > 97%) and concatenated features (accuracy > 98%).
10. Al-Mohannadi A, Al-Maadeed S, Elharrouss O, Sadasivuni KK. Encoder-Decoder Architecture for Ultrasound IMC Segmentation and cIMT Measurement. Sensors (Basel, Switzerland) 2021; 21:6839. [PMID: 34696054] [PMCID: PMC8541435] [DOI: 10.3390/s21206839]
Abstract
Cardiovascular diseases (CVDs) have a huge impact on the number of deaths worldwide. Thus, common carotid artery (CCA) segmentation and intima-media thickness (IMT) measurement have been widely implemented to perform early diagnosis of CVDs by analyzing IMT features. Computer vision algorithms are not widely applied to CCA images for this type of diagnosis, due to the complexity of the task and the lack of datasets. The advancement of deep learning techniques has made accurate early diagnosis from images possible. In this paper, a deep-learning-based approach is proposed to apply semantic segmentation to the intima-media complex (IMC) and to calculate the carotid IMT (cIMT) measurement. To overcome the lack of large-scale datasets, an encoder-decoder-based model is proposed using multi-image inputs that can help the model learn from different features. The obtained results were evaluated using different image segmentation metrics, which demonstrate the effectiveness of the proposed architecture. In addition, the IMT is computed, and the experiments showed that the proposed model is robust and fully automated compared to the state-of-the-art work.
Affiliation(s)
- Aisha Al-Mohannadi
- Department of Computer Science and Engineering, Qatar University, Doha P.O. Box 2713, Qatar
- Somaya Al-Maadeed
- Department of Computer Science and Engineering, Qatar University, Doha P.O. Box 2713, Qatar
- Omar Elharrouss
- Department of Computer Science and Engineering, Qatar University, Doha P.O. Box 2713, Qatar
11. Czajkowska J, Badura P, Korzekwa S, Płatkowska-Szczerek A. Deep learning approach to skin layers segmentation in inflammatory dermatoses. Ultrasonics 2021; 114:106412. [PMID: 33784575] [DOI: 10.1016/j.ultras.2021.106412]
Abstract
Monitoring skin layers with medical imaging is critical to diagnosing and treating patients with chronic inflammatory skin diseases. High-frequency ultrasound (HFUS) makes it possible to monitor skin condition in different dermatoses. Accurate and reliable segmentation of skin layers in patients with atopic dermatitis or psoriasis enables assessment of the treatment effect through layer thickness measurements. The epidermis and the subepidermal low echogenic band (SLEB) are the most important for further diagnosis, since their appearance is an indicator of different skin problems. In medical practice, the analysis, including segmentation, is usually performed manually by the physician, with all the drawbacks of such an approach, e.g., extensive time consumption and lack of repeatability. HFUS has recently become common in dermatological practice, yet it is barely supported by automated analysis tools. To meet the need for skin layer segmentation and measurement, we developed an automated segmentation method for both the epidermis and SLEB layers. It consists of a fuzzy c-means clustering-based preprocessing step followed by a U-shaped convolutional neural network. The network employs batch normalization layers that adjust and scale the activations to make the segmentation more robust. The obtained segmentation results are verified and compared to the current state-of-the-art methods addressing skin layer segmentation. The obtained Dice coefficients of 0.87 and 0.83 for the epidermis and SLEB, respectively, prove the developed framework's efficiency, outperforming the other approaches.
Affiliation(s)
- Joanna Czajkowska
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800 Zabrze, Poland
- Pawel Badura
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800 Zabrze, Poland
- Szymon Korzekwa
- Department of Temporomandibular Disorders, Division of Prosthodontics, Poznan University of Medical Sciences, Poland
12. Adegun AA, Viriri S, Ogundokun RO. Deep Learning Approach for Medical Image Analysis. Computational Intelligence and Neuroscience 2021; 2021:1-9. [DOI: 10.1155/2021/6215281]
Abstract
Localization of the region of interest (ROI) is paramount in the analysis of medical images to assist in the identification and detection of diseases. In this research, we explore the application of a deep learning approach to the analysis of several kinds of medical images. Traditional methods have been restricted by the coarse and granulated appearance of most of these images. Recently, deep learning techniques have produced promising results in the segmentation of medical images for the diagnosis of diseases. This research experiments on medical images using a robust deep learning architecture based on the Fully Convolutional Network- (FCN-) UNET method for the segmentation of three types of medical images: skin lesion, retinal, and brain magnetic resonance imaging (MRI) images. The proposed method can efficiently identify the ROI in these images to assist in the diagnosis of diseases such as skin cancer, eye defects and diabetes, and brain tumors. The system was evaluated on publicly available databases, including the International Symposium on Biomedical Imaging (ISBI) skin lesion images, retina images, and brain tumor datasets, with over 90% accuracy and Dice coefficient.
Affiliation(s)
- Adekanmi Adeyinka Adegun
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa
- Serestina Viriri
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa
13. Hybrid Deep Learning Models with Sparse Enhancement Technique for Detection of Newly Grown Tree Leaves. Sensors 2021; 21:2077. [PMID: 33809537] [PMCID: PMC8001602] [DOI: 10.3390/s21062077]
Abstract
The life cycle of leaves, from sprout to senescence, is a phenomenon of regular changes such as budding, branching, leaf spreading, flowering, fruiting, leaf fall, and dormancy due to seasonal climate changes. Because temperature and moisture drive these physiological changes, the detection of newly grown leaves (NGL) is helpful for the estimation of tree growth and even climate change. This study focused on the detection of NGL based on deep learning convolutional neural network (CNN) models with sparse enhancement (SE). As the NGL areas found in forest images have similar sparse characteristics, we used a sparse image to enhance the signal of the NGL, so that the difference between the NGL and the background could be further improved. We then proposed hybrid CNN models that combine U-net and SegNet features to perform image segmentation. As the NGL in the images are relatively small and tiny targets, they also pose an imbalanced-data problem. Therefore, this paper further proposed 3-Layer SegNet, 3-Layer U-SegNet, 2-Layer U-SegNet, and 2-Layer Conv-U-SegNet architectures to reduce the pooling degree of traditional semantic segmentation models, and used a loss function to increase the weight of the NGL. According to the experimental results, our proposed algorithms were indeed helpful for the image segmentation of NGL and achieved a better kappa result of 0.743.
14. Road Extraction from Very-High-Resolution Remote Sensing Images via a Nested SE-Deeplab Model. Remote Sensing 2020. [DOI: 10.3390/rs12182985]
Abstract
Automatic road extraction from very-high-resolution remote sensing images has become a popular topic in a wide range of fields. Convolutional neural networks are often used for this purpose. However, many network models do not achieve satisfactory extraction results because of the elongated nature and varying sizes of roads in images. To improve the accuracy of road extraction, this paper proposes a deep learning model based on the structure of Deeplab v3. It incorporates squeeze-and-excitation (SE) module to apply weights to different feature channels, and performs multi-scale upsampling to preserve and fuse shallow and deep information. To solve the problems associated with unbalanced road samples in images, different loss functions and backbone network modules are tested in the model’s training process. Compared with cross entropy, dice loss can improve the performance of the model during training and prediction. The SE module is superior to ResNext and ResNet in improving the integrity of the extracted roads. Experimental results obtained using the Massachusetts Roads Dataset show that the proposed model (Nested SE-Deeplab) improves F1-Score by 2.4% and Intersection over Union by 2.0% compared with FC-DenseNet. The proposed model also achieves better segmentation accuracy in road extraction compared with other mainstream deep-learning models including Deeplab v3, SegNet, and UNet.
15. Automatic segmentation of brain MRI using a novel patch-wise U-net deep architecture. PLoS One 2020; 15:e0236493. [PMID: 32745102] [PMCID: PMC7398543] [DOI: 10.1371/journal.pone.0236493]
Abstract
Accurate segmentation of brain magnetic resonance imaging (MRI) is an essential step in quantifying changes in brain structure. Deep learning has in recent years been extensively used for brain image segmentation with highly promising performance. In particular, the U-net architecture has been widely used for segmentation in various biomedical fields. In this paper, we propose a patch-wise U-net architecture for the automatic segmentation of brain structures in structural MRI. In the proposed brain segmentation method, the non-overlapping patch-wise U-net is used to overcome the drawbacks of the conventional U-net by retaining more local information. The slices from an MRI scan are divided into non-overlapping patches that are fed into the U-net model along with their corresponding patches of ground truth so as to train the network. The experimental results show that the proposed patch-wise U-net model achieves a Dice similarity coefficient (DSC) score of 0.93 on average and outperforms the conventional U-net and SegNet-based methods by 3% and 10%, respectively, on the Open Access Series of Imaging Studies (OASIS) and Internet Brain Segmentation Repository (IBSR) datasets.
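The non-overlapping patch splitting described above can be sketched with NumPy reshapes. The patch and image sizes below are toy values (the function assumes dimensions divisible by the patch size); at inference time the inverse operation stitches per-patch predictions back into a full slice.

```python
import numpy as np

def to_patches(img, p):
    """Split an H x W slice into non-overlapping p x p patches.
    Assumes H and W are divisible by p."""
    h, w = img.shape
    return (img.reshape(h // p, p, w // p, p)
               .transpose(0, 2, 1, 3)
               .reshape(-1, p, p))

def from_patches(patches, h, w):
    """Reassemble patches produced by to_patches into an H x W image."""
    p = patches.shape[1]
    return (patches.reshape(h // p, w // p, p, p)
                   .transpose(0, 2, 1, 3)
                   .reshape(h, w))

# Toy 8 x 8 slice split into four 4 x 4 patches and reassembled.
img = np.arange(8 * 8).reshape(8, 8)
patches = to_patches(img, 4)
print(patches.shape)  # (4, 4, 4)
```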
16. Agnes SA, Anitha J. Appraisal of Deep-Learning Techniques on Computer-Aided Lung Cancer Diagnosis with Computed Tomography Screening. J Med Phys 2020; 45:98-106. [PMID: 32831492] [PMCID: PMC7416858] [DOI: 10.4103/jmp.jmp_101_19]
Abstract
Aims: Deep-learning methods are becoming versatile in the field of medical image analysis. The manual examination of smaller nodules in computed tomography scans is a challenging and time-consuming task due to the limitations of human vision. A standardized computer-aided diagnosis (CAD) framework is required for rapid and accurate lung cancer diagnosis. The National Lung Screening Trial recommends routine screening with low-dose computed tomography among high-risk patients to reduce the risk of dying from lung cancer through early detection. The development of a clinically acceptable CAD system for lung cancer diagnosis demands accurate models for segmenting the lung region and then identifying nodules with reduced false positives. Recently, deep-learning methods have been increasingly adopted in medical image diagnosis applications. Subjects and Methods: In this study, a deep-learning-based CAD framework for lung cancer diagnosis with chest computed tomography (CT) images is built using a dilated SegNet and convolutional neural networks (CNNs). A dilated SegNet model is employed to segment the lung from chest CT images, and a CNN model with batch normalization is developed to identify the true nodules among all candidate nodules. The dilated SegNet and CNN models were trained on sample cases taken from the LUNA16 dataset. The performance of the segmentation model is measured in terms of the Dice coefficient, and the nodule classifier is evaluated with sensitivity. The discriminant ability of the features learned by the CNN classifier is further confirmed with principal component analysis. Results: Experimental results confirm that the dilated SegNet model segments the lung with an average Dice coefficient of 0.89 ± 0.23 and the customized CNN model yields a sensitivity of 94.8% in categorizing cancerous and noncancerous nodules.
Conclusions: Thus, the proposed CNN models achieve efficient lung segmentation and two-dimensional nodule patch classification in a CAD system for lung cancer diagnosis with CT screening.
Affiliation(s)
- S Akila Agnes
- Department of CSE, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India
- J Anitha
- Department of CSE, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India