151
Ali R, Li H, Dillman JR, Altaye M, Wang H, Parikh NA, He L. A self-training deep neural network for early prediction of cognitive deficits in very preterm infants using brain functional connectome data. Pediatr Radiol 2022; 52:2227-2240. [PMID: 36131030] [PMCID: PMC9574648] [DOI: 10.1007/s00247-022-05510-8] [Received: 05/05/2022] [Revised: 08/09/2022] [Accepted: 09/01/2022]
Abstract
BACKGROUND Deep learning has been employed using brain functional connectome data for evaluating the risk of cognitive deficits in very preterm infants. Although promising, training these deep learning models typically requires a large amount of labeled data, and labeled medical data are often very difficult and expensive to obtain. OBJECTIVE This study aimed to develop a self-training deep neural network (DNN) model for early prediction of cognitive deficits at 2 years of corrected age in very preterm infants (gestational age ≤32 weeks) using both labeled and unlabeled brain functional connectome data. MATERIALS AND METHODS We collected brain functional connectome data from 343 very preterm infants at a mean (standard deviation) postmenstrual age of 42.7 (2.5) weeks, among whom 103 children had a cognitive assessment at 2 years (i.e. labeled data), and the remaining 240 children had not received 2-year assessments at the time this study was conducted (i.e. unlabeled data). To develop a self-training DNN model, we built an initial student model using labeled brain functional connectome data. Then, we applied the trained model as a teacher model to generate pseudo-labels for unlabeled brain functional connectome data. Next, we combined labeled and pseudo-labeled data to train a new student model. We iterated this procedure to obtain the best student model for the early prediction task in very preterm infants. RESULTS In our cross-validation experiments, the proposed self-training DNN model achieved an accuracy of 71.0%, a specificity of 71.5%, a sensitivity of 70.4% and an area under the curve of 0.75, significantly outperforming transfer learning models through pre-training approaches. CONCLUSION We report the first self-training prognostic study in very preterm infants, efficiently utilizing a small amount of labeled data with a larger share of unlabeled data to aid the model training. 
The proposed technique is expected to facilitate deep learning with insufficient training data.
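The teacher-student loop this abstract describes can be sketched in a few lines. The 1-D threshold classifier below is an illustrative stand-in for the paper's DNN, and the margin-based confidence filter is an assumption, not the authors' criterion:

```python
# Minimal sketch of the self-training (teacher/student) loop described above.
# The 1-D threshold "model" and the confidence margin are illustrative
# stand-ins; only the labeled / pseudo-labeled training flow is the point.

def fit_threshold(xs, ys):
    """Fit a trivial classifier: threshold halfway between class means."""
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(threshold, x):
    return 1 if x >= threshold else 0

def self_train(labeled_x, labeled_y, unlabeled_x, rounds=3, margin=0.5):
    # 1) initial student trained on labeled data only
    model = fit_threshold(labeled_x, labeled_y)
    for _ in range(rounds):
        # 2) current model acts as teacher: pseudo-label confident unlabeled points
        xs, ys = list(labeled_x), list(labeled_y)
        for x in unlabeled_x:
            if abs(x - model) >= margin:        # crude confidence filter
                xs.append(x)
                ys.append(predict(model, x))
        # 3) retrain a new student on labeled + pseudo-labeled data, iterate
        model = fit_threshold(xs, ys)
    return model

model = self_train([0.0, 1.0, 4.0, 5.0], [0, 0, 1, 1], [0.2, 0.4, 4.6, 4.8])
```

In the paper the student is a DNN and pseudo-labels come from its softmax outputs; the structure of the iteration is the same.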
152
Kha QH, Tran TO, Nguyen TTD, Nguyen VN, Than K, Le NQK. An interpretable deep learning model for classifying adaptor protein complexes from sequence information. Methods 2022; 207:90-96. [PMID: 36174933] [DOI: 10.1016/j.ymeth.2022.09.007] [Received: 05/08/2022] [Revised: 08/19/2022] [Accepted: 09/22/2022]
Abstract
Adaptor proteins (APs) are a family of proteins that aid intracellular membrane trafficking, and their impairment or defects are closely related to various disorders. Traditional methods to identify and classify APs are time-consuming and technically complex, so machine learning and computational approaches have been developed to facilitate AP recognition. However, most studies have focused either on recognizing individual members of the AP family or on separating APs in general from non-APs, and a comprehensive strategy for distinguishing the complexes of AP subtypes has been lacking. Herein, we propose a novel method for the new task of discriminating AP complexes within the AP family, using an interpretable deep neural network architecture on sequence-based encoding features. This work also introduces a benchmark data set of AP complexes originating from the UniProt and Gene Ontology databases. To assess the robustness of our proposed method, we compared its performance against various machine learning algorithms and feature extraction strategies. Furthermore, the model's predictions were interpreted using t-distributed stochastic neighbor embedding (t-SNE), uniform manifold approximation and projection (UMAP), and SHapley Additive exPlanations (SHAP) analysis to show the distribution of AP complexes over the optimal features. The promising performance of our architecture can assist scientists not only in distinguishing AP complexes but also in analyzing general protein sequences. Our work is publicly available on GitHub at https://github.com/khanhlee/adaptor-dnn.
153
Ma L, Wu J, Yang Q, Zhou Z, He H, Bao J, Bao L, Wang X, Zhang P, Zhong J, Cai C, Cai S, Chen Z. Single-shot multi-parametric mapping based on multiple overlapping-echo detachment (MOLED) imaging. Neuroimage 2022; 263:119645. [PMID: 36155244] [DOI: 10.1016/j.neuroimage.2022.119645] [Received: 06/09/2022] [Revised: 09/21/2022] [Accepted: 09/21/2022]
Abstract
Multi-parametric quantitative magnetic resonance imaging (mqMRI) allows the characterization of multiple tissue properties non-invasively and has shown great potential to enhance the sensitivity of MRI measurements. However, real-time mqMRI during dynamic physiological processes or general motions remains challenging. To overcome this bottleneck, we propose a novel mqMRI technique based on multiple overlapping-echo detachment (MOLED) imaging, termed MQMOLED, to enable mqMRI in a single shot. In the data acquisition of MQMOLED, multiple MR echo signals with different multi-parametric weightings and phase modulations are generated and acquired in the same k-space. The k-space data is Fourier transformed and fed into a well-trained neural network for the reconstruction of multi-parametric maps. We demonstrated the accuracy and repeatability of MQMOLED in simultaneously mapping apparent proton density (APD) and any two parameters among T2, T2*, and apparent diffusion coefficient (ADC) within 130-170 ms. The abundant information delivered by the multiple overlapping-echo signals in MQMOLED makes the technique potentially robust to system imperfections, such as inhomogeneity of the static magnetic field or radiofrequency field. Benefitting from the single-shot feature, MQMOLED exhibits strong motion tolerance to the continuous movements of subjects. For the first time, it captured the synchronous changes of ADC, T2, and T1-weighted APD in contrast-enhanced perfusion imaging in patients with brain tumors, providing additional information about vascular density to the hemodynamic parametric maps. We expect that MQMOLED will promote the development of mqMRI technology and greatly benefit the applications of mqMRI, including therapeutics and analysis of metabolic/functional processes.
154
A machine learning framework for predicting entrapment efficiency in niosomal particles. Int J Pharm 2022; 627:122203. [PMID: 36116690] [DOI: 10.1016/j.ijpharm.2022.122203] [Received: 06/21/2022] [Revised: 09/07/2022] [Accepted: 09/11/2022]
Abstract
Niosomes are vesicles formed mostly of nonionic surfactants, with cholesterol incorporated as an excipient. The drug entrapment efficiency of niosomal vesicles is particularly important and depends on many parameters, and varying these parameters in the laboratory to maximize entrapment efficiency is time-consuming and costly. In this study, a machine learning framework was proposed to address these problems. To find the parameter with the greatest effect on entrapment efficiency and its optimal value in a specific experiment, data were first extracted from articles of the last decade using the keywords 'niosome' and 'thin-film hydration method'. Then, deep neural network (DNN), linear regression, and polynomial regression models were trained with four cost functions. Afterward, the most influential parameter on entrapment efficiency was determined using a sensitivity experiment. Finally, the optimal point of the most influential parameter was found by keeping the other parameters constant and varying the most influential one. This test was validated against the entrapment efficiency results of seven niosomal formulations containing doxycycline hyclate prepared in the laboratory. The best model was the DNN, which yielded a root mean square error (RMSE) of 13.587 ± 2.61, a mean absolute error (MAE) of 10.17 ± 1.421, and an R-squared (R2) of 0.763 ± 0.1, evaluated by 5-fold cross-validation. The hydrophilic-lipophilic balance (HLB) was identified as the most influential parameter, and the entrapment efficiency change curve was plotted against the HLB value. This study uses machine learning methods to help synthesize niosomal systems with optimal entrapment efficiency at lower cost and in less time.
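The one-at-a-time sensitivity experiment described above can be sketched as follows. The quadratic surrogate `predict_ee`, the parameter names, and the grids are all hypothetical stand-ins for the trained DNN and the real formulation variables:

```python
# Sketch of one-at-a-time sensitivity analysis: vary one parameter over a grid
# while holding the others at baseline, rank parameters by the output range
# they induce, then locate the optimum of the most influential parameter.

def predict_ee(params):
    """Illustrative surrogate for entrapment-efficiency prediction (%)."""
    hlb, chol = params["hlb"], params["cholesterol"]
    return 80 - 2.0 * (hlb - 8.0) ** 2 - 0.05 * (chol - 30.0) ** 2

def most_influential(predict, baseline, grids):
    """Return the parameter whose variation changes the output the most."""
    ranges = {}
    for name, grid in grids.items():
        outputs = []
        for value in grid:
            point = dict(baseline)
            point[name] = value          # vary one parameter at a time
            outputs.append(predict(point))
        ranges[name] = max(outputs) - min(outputs)
    return max(ranges, key=ranges.get)

baseline = {"hlb": 10.0, "cholesterol": 30.0}
grids = {"hlb": [4, 6, 8, 10, 12], "cholesterol": [10, 20, 30, 40, 50]}
key = most_influential(predict_ee, baseline, grids)
best = max(grids[key], key=lambda v: predict_ee({**baseline, key: v}))
```

With these toy numbers the procedure singles out HLB and its grid optimum, mirroring the workflow (not the values) reported in the abstract.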
155
Jayaswal R, Dixit M. AI-based face mask detection system: a straightforward proposition to fight with Covid-19 situation. Multimedia Tools and Applications 2022; 82:13241-13273. [PMID: 36101885] [PMCID: PMC9454394] [DOI: 10.1007/s11042-022-13697-z] [Received: 05/30/2021] [Revised: 03/21/2022] [Accepted: 08/15/2022]
Abstract
The whole world is suffering from a novel coronavirus, which has become an epidemic. According to a World Health Organization report, it is a communicable disease, i.e., it transfers from an infected person to a healthy person; wearing a mask is therefore the most important precaution for protection from COVID-19. This paper presents a deep learning-based approach to a face mask detection framework that predicts whether a person is wearing a mask. The proposed method uses a Single Shot Multibox Detector as the face detector and a deep Inception V3 architecture (SSDIV3) to extract the pertinent features of images and discriminate between the masked and unmasked labels. Optimizing the SSDIV3 approach over different modeling parameters is a genuine contribution of this work. In addition, the system is tested and analyzed against VGG16, VGG19, Xception, and MobileNet V2 models at different modeling parameters. Furthermore, two synthesized novel face mask datasets are introduced containing diversified mask images (2D-printed, 3D-printed, handkerchief, transparent, and natural-looking masks) and unmasked images of humans, collected in outdoor and indoor environments such as parks, homes, and laboratories. The experimental outcomes demonstrate that the proposed system achieves an accuracy of 98% on the synthesized benchmark datasets, comparatively outperforming other state-of-the-art methods and datasets in a real-time environment.
156
Lin Y, Zhang N, Qu Y, Li T, Liu J, Song Y. The House-Tree-Person test is not valid for the prediction of mental health: An empirical study using deep neural networks. Acta Psychol (Amst) 2022; 230:103734. [PMID: 36058187] [DOI: 10.1016/j.actpsy.2022.103734] [Received: 03/03/2022] [Revised: 08/08/2022] [Accepted: 08/31/2022]
Abstract
As one of the projective drawing techniques, the House-Tree-Person test (HTP) has been widely used in psychological counseling. However, its validity for diagnosing mental health problems remains controversial. Here, we adopted two approaches to examine the validity of the HTP in diagnosing mental health problems objectively. First, we summarized the diagnostic features reported in previous HTP studies and found no reliable association between the existing HTP indicators and the mental health problems studied. Next, after obtaining HTP drawings and depression scores from 4196 Chinese children and adolescents (1890 females), we used deep neural networks (DNNs) to explore implicit features of entire HTP drawings that might have been missed in previous studies. We found that although the DNNs successfully learned to extract critical features of houses, trees, and persons in HTP drawings for object classification, they failed to distinguish the drawings of depressive individuals from those of non-depressive individuals. Taken together, our study casts doubt on the validity of the HTP for diagnosing mental health problems and provides a practical paradigm for examining the validity of projective tests with deep learning.
157
Sun G, Hu H, Su Y, Liu Q, Lu X. ApaNet: adversarial perturbations alleviation network for face verification. Multimedia Tools and Applications 2022; 82:7443-7461. [PMID: 36035322] [PMCID: PMC9395815] [DOI: 10.1007/s11042-022-13641-1] [Received: 03/01/2021] [Revised: 03/21/2022] [Accepted: 08/02/2022]
Abstract
Although deep neural networks (DNNs) are widely used in computer vision, natural language processing and speech recognition, they have been found to be fragile to adversarial attacks. Specifically, in computer vision, an attacker can easily deceive DNNs by contaminating an input image with perturbations imperceptible to humans. As one of the important vision tasks, face verification is also subject to adversarial attack. Thus, in this paper, we focus on defending against adversarial attacks on face verification to mitigate the potential risk. We learn a network built from stacked residual blocks, namely the adversarial perturbations alleviation network (ApaNet), to alleviate latent adversarial perturbations hidden in the input facial image. During the supervised learning of ApaNet, only the Labeled Faces in the Wild (LFW) dataset is used as the training set; legitimate examples serve as supervision, and the corresponding adversarial examples, produced by the projected gradient descent algorithm, serve as inputs. Leveraging the middle- and high-layer activations of FaceNet, the discrepancy between an image output by ApaNet and the supervision is calculated as the loss function to optimize ApaNet. Empirical results on LFW, YouTube Faces DB and CASIA-FaceV5 confirm the effectiveness of the proposed defender against some representative white-box and black-box adversarial attacks. Experimental results also show that ApaNet outperforms several currently available techniques.
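The feature-space loss described above, a discrepancy measured in a fixed network's activations rather than in pixel space, can be sketched minimally. The toy two-layer `phi` below is an arbitrary stand-in for FaceNet's middle- and high-layer activations:

```python
# Sketch of a perceptual (feature-space) reconstruction loss: compare the
# purified output and its clean supervision through a fixed feature extractor.
# phi is a hand-set stand-in, not FaceNet.

def phi(x):
    """Toy fixed feature extractor: a 'middle' and a pooled 'high' layer."""
    mid = [max(0.0, v * 0.5 + 0.1) for v in x]     # middle layer (ReLU-like)
    high = [sum(mid) / len(mid), max(mid)]         # high layer (pooled stats)
    return mid, high

def perceptual_loss(output_img, supervision_img):
    """Mean squared discrepancy, summed over the extractor's layers."""
    loss = 0.0
    for feats_out, feats_sup in zip(phi(output_img), phi(supervision_img)):
        loss += sum((a - b) ** 2 for a, b in zip(feats_out, feats_sup)) / len(feats_out)
    return loss

clean = [0.2, 0.8, 0.4]
residual_loss = perceptual_loss([0.2, 0.8, 0.5], clean)  # perturbed vs. clean
```

The defender is then trained to minimize this loss so that its output matches the clean image as seen by the recognition network.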
158
Anandhi V, Vinod P, Menon VG, Aditya KM. Performance evaluation of deep neural network on malware detection: visual feature approach. Cluster Computing 2022; 25:4601-4615. [PMID: 35999895] [PMCID: PMC9387895] [DOI: 10.1007/s10586-022-03702-3] [Received: 01/24/2022] [Revised: 07/02/2022] [Accepted: 07/26/2022]
Abstract
Nowadays, numerous malicious applications target computer and mobile users, so malware detection plays a vital role in keeping devices secure and preventing malicious activity from affecting or harvesting users' data. Research indicates that deep neural networks are especially vulnerable to adversarial attacks. Within a malware family, variants change little even as their signatures multiply, so a deep learning model, DenseNet, was used for detection. Adversarial samples were first created by adding noise, including Gaussian noise, to a subset of malware samples. For Malimg, the modified samples were still precisely identified by DenseNet, so the attack failed; for BIG2015, there was only a marginal decrease in classifier performance, showing that the model remains robust. Further experiments with the Fast Gradient Sign Method (FGSM) showed a significant decrease in classification accuracy on both datasets. We conclude that deep learning models for malware detection should be made robust to adversarial attacks.
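The FGSM attack used in these experiments perturbs an input along the sign of the input gradient of the loss, x_adv = x + eps * sign(grad). A sketch against a linear logistic model, a stand-in for DenseNet chosen so the gradient has a closed form:

```python
# Fast Gradient Sign Method against a logistic model p = sigmoid(w.x + b).
# The linear model is an illustrative stand-in for a deep classifier.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step: move each input feature by eps in the gradient's sign."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]            # d(cross-entropy)/dx
    sign = lambda g: (g > 0) - (g < 0)           # -1, 0, or +1
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0                          # toy model parameters
x_adv = fgsm([0.5, 0.2], 1, w, b, eps=0.1)       # perturb toward label 0
```

For a DNN the gradient is obtained by backpropagation instead of the closed form, but the single signed step is identical.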
159
Wei W, Shi F, Kolb JF. Analysis of microstructural parameters of trabecular bone based on electrical impedance spectroscopy and deep neural networks. Bioelectrochemistry 2022; 148:108232. [PMID: 35987060] [DOI: 10.1016/j.bioelechem.2022.108232] [Received: 05/04/2022] [Revised: 08/09/2022] [Accepted: 08/10/2022]
Abstract
The potential of electrical impedance spectroscopy (EIS) was demonstrated for the investigation of microstructural properties of osseous tissue. To this end, a deep neural network (DNN) was implemented for a sensitive assessment of different structural features derived on the basis of dielectric parameters, especially relative permittivities, recorded over a frequency range from 40 Hz to 5 MHz. The advantage of the developed method over conventional approaches, including equivalent circuit models (ECMs), linear regression and effective medium approximation (EMA), is the comprehensive quantification of bone morphologies by several microstructural parameters simultaneously, such as bone volume fraction (BV/TV), bone surface-to-volume ratio (BS/BV), structure model index (SMI), trabecular number (Tb.N) and trabecular thickness (Tb.Th). Comparing the predictions of the DNN with an analysis of µCT images confirmed a high accuracy for the different microstructural parameters, as indicated by the corresponding Pearson correlation coefficients, especially for Tb.Th (r = 0.89) and BS/BV (r = 0.80). Concurrently, the approach was able to unambiguously discriminate anatomically similar bone regions (femoral head, greater trochanter and femoral neck) and was therefore capable of determining the morphological status of osseous tissue in detail. The classification was more discriminative than one based on classical linear discriminant analysis (LDA), owing to the distinguishing features extracted by the DNN model. Accordingly, the method and model can serve as a potential tool for evaluating bone quality and bone status.
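The validation above rests on Pearson correlation between DNN predictions and µCT-derived references. A self-contained version of that check, with illustrative values rather than the paper's data:

```python
# Pearson correlation coefficient between predicted and reference values,
# as used to validate the DNN against µCT analysis. Data are illustrative.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

predicted = [0.10, 0.22, 0.29, 0.41]   # e.g. predicted Tb.Th values (made up)
measured = [0.12, 0.20, 0.31, 0.40]    # e.g. µCT reference values (made up)
r = pearson_r(predicted, measured)
```

A value of r near 1 (as reported for Tb.Th, r = 0.89) indicates the predictions track the reference linearly.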
160
Lee JRH, Pavlova M, Famouri M, Wong A. Cancer-Net SCa: tailored deep neural network designs for detection of skin cancer from dermoscopy images. BMC Med Imaging 2022; 22:143. [PMID: 35945505] [PMCID: PMC9364616] [DOI: 10.1186/s12880-022-00871-w] [Received: 05/04/2021] [Accepted: 07/26/2022]
Abstract
Background Skin cancer continues to be the most frequently diagnosed form of cancer in the U.S., with not only significant effects on health and well-being but also significant economic costs associated with treatment. A crucial step to the treatment and management of skin cancer is effective early detection with key screening approaches such as dermoscopy examinations, leading to stronger recovery prognoses. Motivated by the advances of deep learning and inspired by the open source initiatives in the research community, in this study we introduce Cancer-Net SCa, a suite of deep neural network designs tailored for the detection of skin cancer from dermoscopy images that is open source and available to the general public. To the best of the authors’ knowledge, Cancer-Net SCa comprises the first machine-driven design of deep neural network architectures tailored specifically for skin cancer detection, one of which leverages attention condensers for an efficient self-attention design. Results We investigate and audit the behaviour of Cancer-Net SCa in a responsible and transparent manner through explainability-driven performance validation. All the proposed designs achieved improved accuracy when compared to the ResNet-50 architecture while also achieving significantly reduced architectural and computational complexity. In addition, when evaluating the decision making process of the networks, it can be seen that diagnostically relevant critical factors are leveraged rather than irrelevant visual indicators and imaging artifacts. Conclusion The proposed Cancer-Net SCa designs achieve strong skin cancer detection performance on the International Skin Imaging Collaboration (ISIC) dataset, while providing a strong balance between computation and architectural efficiency and accuracy. 
While Cancer-Net SCa is not a production-ready screening solution, the hope is that the release of Cancer-Net SCa in open source, open access form will encourage researchers, clinicians, and citizen data scientists alike to leverage and build upon them.
161
Amjad A, Khan L, Chang HT. Data augmentation and deep neural networks for the classification of Pakistani racial speakers recognition. PeerJ Comput Sci 2022; 8:e1053. [PMID: 36091976] [PMCID: PMC9454772] [DOI: 10.7717/peerj-cs.1053] [Received: 02/03/2022] [Accepted: 07/06/2022]
Abstract
Speech emotion recognition (SER) systems have evolved into an important method for recognizing a person's emotional state in several applications, including e-commerce, everyday interactions, law enforcement, and forensics. The efficiency of an SER system depends on the length of the audio samples used for testing and training. Moreover, SER efficiency is not yet optimal because limited databases lead to overfitting and skewed samples. The proposed approach therefore presents a data augmentation method that shifts the pitch, uses multiple window sizes, stretches the time, and adds white noise to the original audio. In addition, a deep model is evaluated to generate a new paradigm for SER. In the proposed system, the data augmentation approach enlarged the limited Pakistani racial speaker speech dataset, adding more than 500 augmented data samples. A seven-layer framework, a depth used in existing works to achieve very high accuracy, provided the best performance in terms of accuracy compared with other multilayer approaches, achieving 97.32% accuracy with a 0.032% loss under a 75%:25% splitting ratio. These results show that deep neural networks with data augmentation can enhance SER performance on the Pakistani racial speech dataset.
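Two of the augmentations named above, additive white noise and time stretching, can be sketched on a raw sample list. The parameter values are illustrative, and pitch shifting and windowing are omitted:

```python
# Sketches of two audio augmentations: seeded additive white noise and a
# naive time stretch by linear interpolation. Parameters are illustrative.

import random

def add_white_noise(samples, scale=0.05, seed=0):
    rng = random.Random(seed)                  # seeded for reproducibility
    return [s + rng.gauss(0.0, scale) for s in samples]

def time_stretch(samples, rate):
    """Resample by linear interpolation; rate > 1 shortens the clip."""
    n_out = int(len(samples) / rate)
    out = []
    for i in range(n_out):
        pos = i * rate
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

clip = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
noisy = add_white_noise(clip)
slow = time_stretch(clip, rate=0.5)            # twice as long
```

Each augmented clip is added to the training set alongside the original, which is how a few hundred extra samples are obtained from a small corpus.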
162
Cuingnet R, Ladegaillerie Y, Jossent J, Maitrot A, Chedal-Anglay J, Richard W, Bernard M, Woolfenden J, Birot E, Chenu D. PortiK: A computer vision based solution for real-time automatic solid waste characterization - Application to an aluminium stream. Waste Management 2022; 150:267-279. [PMID: 35870362] [DOI: 10.1016/j.wasman.2022.05.021] [Received: 01/10/2022] [Revised: 05/10/2022] [Accepted: 05/25/2022]
Abstract
In Material Recovery Facilities (MRFs), recyclable municipal solid waste is turned into a precious commodity. However, effective recycling relies on effective waste sorting, which remains a challenge for the sustainable development of our society. To help operators improve and optimise their processes, this paper describes PortiK, a solution for automatic waste analysis. Based on image analysis and object recognition, it allows continuous, real-time, non-intrusive measurement of the mass composition of waste streams. The end-to-end solution is detailed with all the steps necessary for the system to operate, from hardware specifications and data collection to supervisory information obtained by deep learning and statistical analysis. The overall system was tested and validated in an operational environment in a material recovery facility, where PortiK monitored an aluminium can stream to estimate its purity. Aluminium cans were detected with 91.2% precision and 90.3% recall, resulting in an underestimation of the number of cans by less than 1%. For contaminants (i.e. other types of waste), precision and recall were 80.2% and 78.4%, respectively, giving a 2.2% underestimation. Based on five sample analyses in which pieces of waste were counted and weighed per batch, the detection results were used to estimate purity and its confidence level. The estimation error was calculated to be within ±7% after 5 minutes of monitoring and ±5% after 8 hours. These results demonstrate the feasibility and relevance of the proposed solution for online quality control of an aluminium can stream.
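One hedged reading of how the detector metrics feed the purity estimate is to rescale raw counts by precision and recall before forming the ratio. The function names and the example counts below are illustrative, not the paper's actual procedure:

```python
# Sketch: correct raw detector counts with precision/recall, then estimate
# stream purity as the fraction of (corrected) cans. Counts are made up;
# the precision/recall defaults are the values reported in the abstract.

def corrected_count(detections, precision, recall):
    """Estimate the true object count from raw detections.

    detections * precision removes false positives; dividing by recall
    adds back the objects the detector missed.
    """
    return detections * precision / recall

def purity(can_detections, contaminant_detections,
           can_precision=0.912, can_recall=0.903,
           cont_precision=0.802, cont_recall=0.784):
    cans = corrected_count(can_detections, can_precision, can_recall)
    cont = corrected_count(contaminant_detections, cont_precision, cont_recall)
    return cans / (cans + cont)

p = purity(950, 50)   # hypothetical raw counts over a monitoring window
```

Averaging such estimates over longer windows is what narrows the reported error band from ±7% at 5 minutes to ±5% at 8 hours.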
163
Karnati M, Seal A, Sahu G, Yazidi A, Krejcar O. A novel multi-scale based deep convolutional neural network for detecting COVID-19 from X-rays. Appl Soft Comput 2022; 125:109109. [PMID: 35693544] [PMCID: PMC9167691] [DOI: 10.1016/j.asoc.2022.109109] [Received: 10/23/2021] [Revised: 04/26/2022] [Accepted: 05/26/2022]
Abstract
The COVID-19 pandemic has posed an unprecedented threat to the global public health system, primarily infecting the airway epithelial cells in the respiratory tract. Chest X-ray (CXR) is widely available, faster, and less expensive; it is therefore preferred for monitoring the lungs in COVID-19 diagnosis over other techniques such as molecular tests, antigen tests, antibody tests, and chest computed tomography (CT). As the pandemic continues to reveal the limitations of our current ecosystems, researchers are coming together to share their knowledge and experience in order to develop new systems to tackle it. In this work, an end-to-end IoT infrastructure is designed and built to diagnose patients remotely in the case of a pandemic, limiting COVID-19 dissemination while also improving measurement science. The proposed framework comprises six steps, the last of which is a model designed to interpret CXR images and intelligently measure the severity of COVID-19 lung infections using a novel deep neural network (DNN). The proposed DNN employs multi-scale sampling filters to extract reliable and noise-invariant features from a variety of image patches. Experiments were conducted on five publicly available databases, including COVIDx, COVID-19 Radiography, COVID-XRay-5K, COVID-19-CXR, and COVIDchestxray, with classification accuracies of 96.01%, 99.62%, 99.22%, 98.83%, and 100%, and testing times of 0.541, 0.692, 1.28, 0.461, and 0.202 s, respectively. The obtained results show that the proposed model surpasses fourteen baseline techniques. As a result, the newly developed model could be utilized to evaluate treatment efficacy, particularly in remote locations.
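The multi-scale sampling idea can be sketched as parallel filters with different receptive-field sizes whose responses are concatenated. The 1-D moving-average filters below are simplified stand-ins for the paper's learned 2-D convolutions:

```python
# Sketch of multi-scale feature extraction: run the same input through
# filters of several widths and concatenate the responses. Mean filters on a
# 1-D signal stand in for learned 2-D convolutions on CXR patches.

def conv1d_same(signal, ksize):
    """'Same'-padded moving-average filter of odd width ksize."""
    half = ksize // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def multi_scale(signal, ksizes=(3, 5, 7)):
    """Concatenate feature maps computed at several receptive-field sizes."""
    features = []
    for k in ksizes:
        features.extend(conv1d_same(signal, k))
    return features

feats = multi_scale([1.0, 2.0, 3.0, 4.0, 5.0])
```

Small kernels respond to fine texture and large kernels to coarser structure; concatenating both is what makes the extracted features less sensitive to scale and noise.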
164
van de Leur RR, Bos MN, Taha K, Sammani A, Yeung MW, van Duijvenboden S, Lambiase PD, Hassink RJ, van der Harst P, Doevendans PA, Gupta DK, van Es R. Improving explainability of deep neural network-based electrocardiogram interpretation using variational auto-encoders. European Heart Journal - Digital Health 2022; 3:390-404. [PMID: 36712164] [PMCID: PMC9707974] [DOI: 10.1093/ehjdh/ztac038] [Received: 02/22/2022] [Revised: 06/16/2022]
Abstract
Aims Deep neural networks (DNNs) perform excellently in interpreting electrocardiograms (ECGs), both for conventional ECG interpretation and for novel applications such as detection of reduced ejection fraction (EF). Despite these promising developments, implementation is hampered by the lack of trustworthy techniques to explain the algorithms to clinicians; in particular, the currently employed heatmap-based methods have been shown to be inaccurate. Methods and results We present a novel pipeline consisting of a variational auto-encoder (VAE) to learn the underlying factors of variation of the median beat ECG morphology (the FactorECG), which are subsequently used in common and interpretable prediction models. As the ECG factors can be made explainable by generating and visualizing ECGs on both the model and individual level, the pipeline provides improved explainability over heatmap-based methods. By training on a database with 1.1 million ECGs, the VAE can compress the ECG into 21 generative ECG factors, most of which are associated with physiologically valid underlying processes. Performance of the explainable pipeline was similar to 'black box' DNNs in conventional ECG interpretation [area under the receiver operating curve (AUROC) 0.94 vs. 0.96], detection of reduced EF (AUROC 0.90 vs. 0.91), and prediction of 1-year mortality (AUROC 0.76 vs. 0.75). Contrary to the 'black box' DNNs, our pipeline provided explainability regarding which morphological ECG changes were important for prediction. Results were confirmed in a population-based external validation dataset. Conclusions Future studies on DNNs for ECGs should employ pipelines that are explainable, to facilitate clinical implementation by building confidence in artificial intelligence and making it possible to identify biased models.
165
Luo H, Xiang Y, Fang X, Lin W, Wang F, Wu H, Wang H. BatchDTA: implicit batch alignment enhances deep learning-based drug-target affinity estimation. Brief Bioinform 2022; 23:6632927. [PMID: 35794723] [DOI: 10.1093/bib/bbac260] [Received: 04/14/2022] [Revised: 05/23/2022] [Accepted: 06/03/2022]
Abstract
Candidate compounds with high binding affinities toward a target protein are likely to be developed as drugs. Deep neural networks (DNNs) have attracted increasing attention for drug-target affinity (DTA) estimation owing to their efficiency. However, the negative impact of batch effects caused by measurement metrics, system technologies and other assay information is seldom discussed when training a DNN model for DTA. Suffering from the data deviation caused by batch effects, DNN models can only be trained on a small amount of 'clean' data, and it is therefore challenging for them to provide precise and consistent estimations. We design a batch-sensitive training framework, namely BatchDTA, to train DNN models. BatchDTA implicitly aligns multiple batches toward the same protein by learning the orders of candidate compounds with respect to the batches, alleviating the impact of the batch effects on the DNN models. Extensive experiments demonstrate that BatchDTA enables four mainstream DNN models to enhance their ability and robustness on multiple DTA datasets (BindingDB, Davis and KIBA). The average concordance index of the DNN models achieves a relative improvement of 4.0%. A case study reveals that BatchDTA can successfully learn the ranking orders of the compounds from multiple batches. In addition, BatchDTA can also be applied to fused data collected from multiple sources to achieve further improvement.
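The batch-alignment idea, learning the order of compounds within each batch so that batch-level offsets cancel, can be sketched with a pairwise logistic ranking loss. The scores and affinities below are illustrative stand-ins for model outputs and assay measurements:

```python
# Sketch of within-batch pairwise ranking: the loss depends only on score
# *differences* inside a batch, so a constant per-batch offset cancels out.

import math

def pairwise_rank_loss(scores, affinities):
    """Logistic loss over all ordered pairs within one batch."""
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if affinities[i] > affinities[j]:            # i should outrank j
                loss += math.log(1.0 + math.exp(scores[j] - scores[i]))
                pairs += 1
    return loss / pairs

def batch_loss(batches):
    """Average the ranking loss over batches of (scores, affinities)."""
    return sum(pairwise_rank_loss(s, a) for s, a in batches) / len(batches)

# Two batches measuring the same compounds, differing by a constant offset:
batch_a = ([2.0, 1.0, 0.0], [9.0, 7.0, 5.0])
batch_b = ([12.0, 11.0, 10.0], [9.0, 7.0, 5.0])
loss = batch_loss([batch_a, batch_b])
```

Because the loss is shift-invariant per batch, training on rankings rather than absolute affinities is one way to neutralize batch effects, which is the intuition behind the framework.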
|
166
|
Alshayeji MH, ChandraBhasi Sindhu S, Abed S. CAD systems for COVID-19 diagnosis and disease stage classification by segmentation of infected regions from CT images. BMC Bioinformatics 2022; 23:264. [PMID: 35794537 PMCID: PMC9261058 DOI: 10.1186/s12859-022-04818-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Accepted: 06/30/2022] [Indexed: 11/19/2022] Open
Abstract
Background Here we propose a computer-aided diagnosis (CAD) system to differentiate COVID-19 (the coronavirus disease of 2019) patients from normal cases, as well as to perform infection region segmentation along with infection severity estimation using computed tomography (CT) images. The developed system facilitates timely administration of appropriate treatment by identifying the disease stage without reliance on medical professionals. To date, this model provides the most accurate, fully automatic real-time COVID-19 CAD framework. Results The CT image datasets of COVID-19 and non-COVID-19 individuals were subjected to conventional ML stages to perform binary classification. In the feature extraction stage, the SIFT, SURF and ORB image descriptors and the bag-of-features technique were implemented for the appropriate differentiation of chest CT regions affected with COVID-19 from normal cases. This is the first work introducing this concept for the COVID-19 diagnosis application. The preferred diverse database and selected features, which are invariant to scale, rotation, distortion, noise, etc., make this framework applicable in real time. Also, this fully automatic approach, which is faster than existing models, can readily be incorporated into CAD systems. The severity score was measured based on the infected regions along the lung field. Infected regions were segmented through a three-class semantic segmentation of the lung CT image. Using the severity score, the disease stages were classified as mild if the lesion area covers less than 25% of the lung area, moderate if 25–50%, and severe if greater than 50%. Our proposed model resulted in a classification accuracy of 99.7% with a PNN classifier, along with an area under the curve (AUC) of 0.9988, 99.6% sensitivity, 99.9% specificity and a misclassification rate of 0.0027. 
The developed infected region segmentation model gave 99.47% global accuracy, 94.04% mean accuracy, 0.8968 mean IoU (intersection over union), 0.9899 weighted IoU, and a mean Boundary F1 (BF) contour matching score of 0.9453, using DeepLabv3+ with weights initialized from ResNet-50. Conclusions The developed CAD system is able to perform fully automatic and accurate diagnosis of COVID-19 along with infected region extraction and disease stage identification. The ORB image descriptor with the bag-of-features technique and the PNN classifier achieved the superior classification performance.
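The staging rule in the abstract maps the infected fraction of the lung area to a disease stage. A minimal sketch of that rule; the handling of the exact 50% boundary (treated as moderate, consistent with "25-50% moderate, >50% severe") is our assumption:

```python
def stage_from_severity(infected_frac):
    """Map the fraction of lung area covered by lesions to a disease
    stage using the cutoffs from the abstract: <25% mild, 25-50%
    moderate, >50% severe.  Exactly 50% is treated as moderate
    (an assumption; the abstract says 'greater than 50%' is severe)."""
    if infected_frac < 0.25:
        return "mild"
    if infected_frac <= 0.50:
        return "moderate"
    return "severe"

print(stage_from_severity(0.30))  # -> moderate
```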
|
167
|
Hanczar B, Bourgeais V, Zehraoui F. Assessment of deep learning and transfer learning for cancer prediction based on gene expression data. BMC Bioinformatics 2022; 23:262. [PMID: 35786378 PMCID: PMC9250744 DOI: 10.1186/s12859-022-04807-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2022] [Accepted: 06/15/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Machine learning is now a standard tool for cancer prediction based on gene expression data. However, deep learning is still new for this task, and there is no clear consensus about its performance and utility. Few experimental works have evaluated deep neural networks and compared them with state-of-the-art machine learning. Moreover, their conclusions are not consistent. RESULTS We extensively evaluate the deep learning approach on 22 cancer prediction tasks based on gene expression data. We measure the impact of the main hyper-parameters and compare the performance of neural networks with the state-of-the-art. We also investigate the effectiveness of several transfer learning schemes in different experimental setups. CONCLUSION Based on our experiments, we provide several recommendations to optimize the construction and training of a neural network model. We show that neural networks outperform the state-of-the-art methods only for very large training set sizes. For a small training set, we show that transfer learning is possible and may strongly improve the model performance in some cases.
|
168
|
Sammani A, van de Leur RR, Henkens MTHM, Meine M, Loh P, Hassink RJ, Oberski DL, Heymans SRB, Doevendans PA, Asselbergs FW, Te Riele ASJM, van Es R. Life-threatening ventricular arrhythmia prediction in patients with dilated cardiomyopathy using explainable electrocardiogram-based deep neural networks. Europace 2022; 24:1645-1654. [PMID: 35762524 PMCID: PMC9559909 DOI: 10.1093/europace/euac054] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Accepted: 04/10/2022] [Indexed: 11/17/2022] Open
Abstract
Aims While electrocardiogram (ECG) characteristics have been associated with life-threatening ventricular arrhythmias (LTVA) in dilated cardiomyopathy (DCM), they typically rely on human-derived parameters. Deep neural networks (DNNs) can discover complex ECG patterns, but the interpretation is hampered by their ‘black-box’ characteristics. We aimed to detect DCM patients at risk of LTVA using an inherently explainable DNN. Methods and results In this two-phase study, we first developed a variational autoencoder DNN on more than 1 million 12-lead median beat ECGs, compressing the ECG into 21 different factors (F): FactorECG. Next, we used two cohorts with a combined total of 695 DCM patients and entered these factors in a Cox regression for the composite LTVA outcome, which was defined as sudden cardiac arrest, spontaneous sustained ventricular tachycardia, or implantable cardioverter-defibrillator-treated ventricular arrhythmia. Most patients were male (n = 442, 64%), with a median age of 54 years [interquartile range (IQR) 44–62] and a median left ventricular ejection fraction of 30% (IQR 23–39). A total of 115 patients (16.5%) reached the study outcome. Factors F8 (prolonged PR-interval and P-wave duration, P < 0.005), F15 (reduced P-wave height, P = 0.04), F25 (increased right bundle branch delay, P = 0.02), F27 (P-wave axis, P < 0.005), and F32 (reduced QRS-T voltages, P = 0.03) were significantly associated with LTVA. Conclusion Inherently explainable DNNs can detect patients at risk of LTVA, which is mainly driven by P-wave abnormalities.
|
169
|
Kayser H, Hermansky H, Meyer BT. Spatial speech detection for binaural hearing aids using deep phoneme classifiers. ACTA ACUSTICA. EUROPEAN ACOUSTICS ASSOCIATION 2022; 6:25. [PMID: 36159631 PMCID: PMC9502715 DOI: 10.1051/aacus/2022013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Current hearing aids are limited with respect to speech-specific optimization of spatial sound sources for speech enhancement. In this study, we therefore propose an approach for spatial detection of speech based on sound source localization and blind optimization of speech enhancement for binaural hearing aids. We have combined an estimator for the direction of arrival (DOA), featuring high spatial resolution but no specialization to speech, with a measure of speech quality with low spatial resolution obtained after directional filtering. The DOA estimator provides spatial sound source probability in the frontal horizontal plane. The measure of speech quality is based on phoneme representations obtained from a deep neural network, which is part of a hybrid automatic speech recognition (ASR) system. Three ASR-based speech quality measures (ASQM) are explored: entropy, mean temporal distance (M-Measure), and matched phoneme (MaP) filtering. We tested the approach in four acoustic scenes with one speaker and either a localized or a diffuse noise source at various signal-to-noise ratios (SNR) in anechoic or reverberant conditions. The effects of incorrect spatial filtering and noise were analyzed. We show that two of the three ASQMs (M-Measure, MaP filtering) are suited to reliably identify the speech target in different conditions. The system is not adapted to the environment and does not require a priori information about the acoustic scene or a reference signal to estimate the quality of the enhanced speech signal. Nevertheless, our approach performs well in all tested acoustic scenes and at varying SNRs, and reliably detects incorrect spatial filtering angles.
|
170
|
Shin H, Kim JK, Choo YJ, Choi GS, Chang MC. Prediction of Motor Outcome of Stroke Patients Using a Deep Learning Algorithm with Brain MRI as Input Data. Eur Neurol 2022; 85:460-466. [PMID: 35738236 DOI: 10.1159/000525222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Accepted: 05/22/2022] [Indexed: 11/19/2022]
Abstract
BACKGROUND Deep learning techniques can outperform traditional machine learning techniques and learn from unstructured and perceptual data, such as images and languages. We evaluated whether a convolutional neural network (CNN) model using whole axial brain T2-weighted magnetic resonance (MR) images as input data can help predict motor outcomes of the upper and lower limbs at the chronic stage in stroke patients. METHODS We collected MR images taken at the early stage of stroke in 1,233 consecutive stroke patients. We categorized modified Brunnstrom classification (MBC) scores of ≥5 and functional ambulatory category (FAC) scores of ≥4 at 6 months after stroke as favorable outcomes in the upper and lower limbs, respectively, and MBC scores of <5 and FAC scores of <4 as poor outcomes. We trained a CNN on the image data. Of the 1,233 patients, 70% (863 patients) were randomly selected for the training set and the remaining 30% (370 patients) were assigned to the validation set. RESULTS In the prediction of upper limb motor function on the validation dataset, the area under the curve (AUC) was 0.768, and for lower limb motor function, the AUC was 0.828. CONCLUSION We showed that a CNN model trained using whole-brain axial T2-weighted MR images of stroke patients can help predict upper and lower limb motor function at the chronic stage.
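The 70/30 patient-level split described above (863 training, 370 validation out of 1,233) can be sketched as below; the seed and helper name are illustrative, not from the paper:

```python
import random

def split_patients(patient_ids, train_frac=0.7, seed=0):
    """Randomly partition patient IDs into training and validation
    sets.  Splitting at the patient level (rather than the image
    level) prevents images of one patient leaking into both sets."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)  # local RNG keeps the split reproducible
    cut = int(len(ids) * train_frac)
    return ids[:cut], ids[cut:]

train, val = split_patients(range(1233))
print(len(train), len(val))  # -> 863 370
```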
|
171
|
Li DW, Hansen AL, Bruschweiler-Li L, Yuan C, Brüschweiler R. Fundamental and practical aspects of machine learning for the peak picking of biomolecular NMR spectra. JOURNAL OF BIOMOLECULAR NMR 2022; 76:49-57. [PMID: 35389128 PMCID: PMC9246764 DOI: 10.1007/s10858-022-00393-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Accepted: 02/28/2022] [Indexed: 06/14/2023]
Abstract
Rapid progress in machine learning offers new opportunities for the automated analysis of multidimensional NMR spectra ranging from protein NMR to metabolomics applications. Most recently, it has been demonstrated how deep neural networks (DNN) designed for spectral peak picking are capable of deconvoluting highly crowded NMR spectra, rivaling the abilities of human experts. Superior DNN-based peak picking is one of a series of critical steps during NMR spectral processing, analysis, and interpretation where machine learning is expected to have a major impact. In this perspective, we lay out some of the unique strengths as well as challenges of machine learning approaches in this new era of automated NMR spectral analysis. Such a discussion seems timely and should help define common goals for the NMR community, promote the sharing of software tools and the standardization of protocols, and calibrate expectations. It will also help prepare for an NMR future where machine learning and artificial intelligence tools will be commonplace.
|
172
|
Zhang K, Karanth S, Patel B, Murphy R, Jiang X. A multi-task Gaussian process self-attention neural network for real-time prediction of the need for mechanical ventilators in COVID-19 patients. J Biomed Inform 2022; 130:104079. [PMID: 35489596 PMCID: PMC9044651 DOI: 10.1016/j.jbi.2022.104079] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2021] [Revised: 04/06/2022] [Accepted: 04/18/2022] [Indexed: 02/04/2023]
Abstract
OBJECTIVE The Coronavirus Disease 2019 (COVID-19) pandemic has overwhelmed the capacity of healthcare resources and posed a challenge for worldwide hospitals. The ability to distinguish potentially deteriorating patients from the rest helps facilitate reasonable allocation of medical resources, such as ventilators, hospital beds, and human resources. Real-time, accurate prediction of a patient's risk scores could also help physicians to provide earlier respiratory support for the patient and reduce the risk of mortality. METHODS We propose a robust real-time prediction model for in-hospital COVID-19 patients' probability of requiring mechanical ventilation (MV). The end-to-end neural network model incorporates a Multi-task Gaussian Process to handle the irregular sampling rate in observational data together with a self-attention neural network for the prediction task. RESULTS We evaluate our model on a large database with 9,532 nationwide in-hospital patients with COVID-19. The model demonstrates significant robustness and consistency improvements compared to conventional machine learning models. The proposed prediction model also shows performance improvements in terms of area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC) compared to various deep learning models, especially at early times after a patient's hospital admission. CONCLUSION The availability of large volumes of real-time clinical data calls for new methods to make the best use of them for real-time patient risk prediction. It is not ideal to simplify the data to suit traditional methods or to make unrealistic assumptions that deviate from the observations' true dynamics. We demonstrate a pilot effort to harmonize cross-sectional and longitudinal information for predicting the need for mechanical ventilation.
|
173
|
Decoding the dopamine transporter imaging for the differential diagnosis of parkinsonism using deep learning. Eur J Nucl Med Mol Imaging 2022; 49:2798-2811. [PMID: 35588012 PMCID: PMC9206631 DOI: 10.1007/s00259-022-05804-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2021] [Accepted: 04/12/2022] [Indexed: 11/16/2022]
Abstract
Purpose
This work attempts to decode the discriminative information in dopamine transporter (DAT) imaging using deep learning for the differential diagnosis of parkinsonism. Methods This study involved 1017 subjects who underwent DAT PET imaging ([11C]CFT), including 43 healthy subjects and 974 parkinsonian patients with idiopathic Parkinson’s disease (IPD), multiple system atrophy (MSA) or progressive supranuclear palsy (PSP). We developed a 3D deep convolutional neural network to learn distinguishable DAT features for the differential diagnosis of parkinsonism. A full-gradient saliency map approach was employed to investigate the functional basis related to the decision mechanism of the network. Furthermore, deep-learning-guided radiomics features and quantitative analysis were compared with their conventional counterparts to further interpret the performance of deep learning. Results The proposed network achieved areas under the curve of 0.953 (sensitivity 87.7%, specificity 93.2%), 0.948 (sensitivity 93.7%, specificity 97.5%), and 0.900 (sensitivity 81.5%, specificity 93.7%) in the cross-validation, together with sensitivities of 90.7%, 84.1%, 78.6% and specificities of 88.4%, 97.5%, 93.3% in the blind test for the differential diagnosis of IPD, MSA and PSP, respectively. The saliency maps demonstrated that the areas contributing most to the diagnosis were located in parkinsonism-related regions, e.g., the putamen, caudate and midbrain. The deep-learning-guided binding ratios showed significant differences among the IPD, MSA and PSP groups (P < 0.001), while the conventional putamen and caudate binding ratios showed no significant difference between IPD and MSA (P = 0.24 and P = 0.30). Furthermore, compared with conventional radiomics features, on average over 78.1% more deep-learning-guided radiomics features showed significant differences among IPD, MSA and PSP. 
Conclusion This study suggested that the developed deep neural network can decode in-depth information from DAT imaging and showed potential to assist the differential diagnosis of parkinsonism. The functional regions supporting the diagnostic decision were generally consistent with known parkinsonian pathology but provided more specific guidance for feature selection and quantitative analysis. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-022-05804-x.
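The binding ratios compared above conventionally take the form (ROI − reference) / reference, with a reference region assumed to reflect non-specific uptake. A sketch of that standard definition only; the paper's exact quantification pipeline (ROI delineation, reference choice) is not specified in the abstract:

```python
def specific_binding_ratio(roi_uptake, reference_uptake):
    """Conventional specific binding ratio for DAT imaging:
    (ROI - reference) / reference, where the reference region
    (commonly the occipital cortex or cerebellum) approximates
    non-specific tracer uptake.  An illustrative sketch, not the
    authors' implementation."""
    return (roi_uptake - reference_uptake) / reference_uptake

print(specific_binding_ratio(3.0, 1.0))  # -> 2.0
```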
|
174
|
Xu Z, Guo Y, Zhao T, Zhao Y, Liu Z, Sun X, Xie G, Li Y. Abnormality classification from electrocardiograms with various lead combinations. Physiol Meas 2022; 43. [PMID: 35580597 DOI: 10.1088/1361-6579/ac70a4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Accepted: 05/17/2022] [Indexed: 11/12/2022]
Abstract
As cardiovascular diseases have been one of the leading causes of death, early and accurate diagnosis of cardiac abnormalities at lower cost becomes particularly important. Given ECG datasets from multiple sources, there exist many challenges in developing generalized models that can identify multiple types of cardiac abnormalities from both 12-lead electrocardiogram (ECG) signals and reduced-lead ECG signals. In this study, our objective is to build robust models which can accurately classify 30 types of abnormalities from various lead combinations of ECG signals. Given the challenges of this problem, we proposed a framework for building robust models for ECG signal classification. Firstly, a pre-processing workflow was adopted for each ECG dataset to mitigate the problem of data divergence. Secondly, to capture the lead-wise relations, we used a squeeze-and-excitation deep residual network (SE_ResNet) as our base model. Thirdly, we proposed a cross-relabeling strategy and applied a sign-augmented loss function to tackle the corrupted labels in the data. Furthermore, we utilized a pos-if-any-pos ensemble strategy and a dataset-wise cross-evaluation strategy to handle the uncertainty of the data distribution in the application. In the Physionet/Computing in Cardiology Challenge 2021, our approach achieved challenge metric scores of 0.57, 0.59, 0.59, 0.58, 0.57 on the 12-, 6-, 4-, 3-, 2-lead versions and an averaged challenge metric score of 0.58 over all the lead versions. Using the proposed framework, we developed the models from several large datasets with sufficiently labeled abnormalities. Our models could identify 30 ECG abnormalities accurately based on various lead combinations of ECG signals. The performance on hidden test data demonstrated the effectiveness of the proposed approaches.
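The squeeze-and-excitation mechanism named above reweights channels by squeezing each one to a scalar (global average pooling), passing the result through a small two-layer bottleneck, and gating each channel with a sigmoid. A plain-Python sketch of that mechanism under simplifying assumptions (biases omitted, weights passed in explicitly); it is not SE_ResNet itself:

```python
import math

def se_reweight(channel_maps, w1, w2):
    """Squeeze-and-excitation channel reweighting.
    channel_maps: list of per-channel feature maps (flat lists).
    w1, w2: weight matrices (lists of rows) of the reduce and expand
    fully connected layers.  Biases omitted for brevity."""
    # Squeeze: global average pooling, one scalar per channel
    z = [sum(m) / len(m) for m in channel_maps]
    # Excitation: FC reduce -> ReLU -> FC expand -> sigmoid gates
    h = [max(0.0, sum(w * x for w, x in zip(row, z))) for row in w1]
    s = [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(row, h))))
         for row in w2]
    # Scale: multiply each channel's map by its learned gate
    return [[gate * v for v in m] for gate, m in zip(s, channel_maps)]
```

With zero weights every gate is sigmoid(0) = 0.5, so each channel is simply halved; trained weights instead learn which leads (channels) to emphasize.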
|
175
|
Battalapalli D, Rao BVVSNP, Yogeeswari P, Kesavadas C, Rajagopalan V. An optimal brain tumor segmentation algorithm for clinical MRI dataset with low resolution and non-contiguous slices. BMC Med Imaging 2022; 22:89. [PMID: 35568820 PMCID: PMC9107172 DOI: 10.1186/s12880-022-00812-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Accepted: 04/20/2022] [Indexed: 11/27/2022] Open
Abstract
Background Segmenting a brain tumor and its constituent regions from magnetic resonance images (MRI) is important for planning diagnosis and treatment. In clinical routine, an experienced radiologist often delineates the tumor regions using multimodal MRI. But this manual segmentation is prone to poor reproducibility and is time-consuming. Also, routine clinical scans are usually of low resolution. To overcome these limitations, an automated and precise segmentation algorithm based on computer vision is needed. Methods We investigated the performance of three widely used segmentation methods, namely region growing, fuzzy C-means (FCM) and deep neural networks (DeepMedic). We evaluated these algorithms on the BRATS 2018 dataset by randomly choosing data from 48 patients (high grade, n = 24; low grade, n = 24) and on our routine clinical MRI brain tumor dataset (high grade, n = 15; low grade, n = 28). We measured their performance using the dice similarity coefficient, Hausdorff distance and volume measures. Results The region growing method performed very poorly when compared to FCM and the DeepMedic network. Dice similarity coefficient scores for the FCM and DeepMedic algorithms were close to each other for the BRATS and clinical datasets. The accuracy was below 70% for both these methods in general. Conclusion Even though the DeepMedic network showed very high accuracy in the BRATS challenge for brain tumor segmentation, it has to be custom trained for low-resolution routine clinical scans. It also requires large training data to be used as a stand-alone algorithm for clinical applications. Nevertheless, DeepMedic may be a better algorithm for brain tumor segmentation when compared to region growing or FCM.
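The dice similarity coefficient used for evaluation above measures overlap between two binary masks as 2|A∩B| / (|A| + |B|). A minimal sketch of that definition; the convention of returning 1.0 for two empty masks is our assumption:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    flat sequences of 0/1: 2*|intersection| / (|A| + |B|).
    Two empty masks are treated as a perfect match (an assumption)."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # -> 0.5
```

A score of 0.7, the rough ceiling reported for FCM and DeepMedic on these datasets, means the predicted and manual masks share about 70% of their combined foreground.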
|