1. Akinyelu AA, Zaccagna F, Grist JT, Castelli M, Rundo L. Brain Tumor Diagnosis Using Machine Learning, Convolutional Neural Networks, Capsule Neural Networks and Vision Transformers, Applied to MRI: A Survey. J Imaging 2022;8:205. PMID: 35893083; PMCID: PMC9331677; DOI: 10.3390/jimaging8080205. [Review]
Abstract
Management of brain tumors is based on clinical and radiological information with presumed grade dictating treatment. Hence, a non-invasive assessment of tumor grade is of paramount importance to choose the best treatment plan. Convolutional Neural Networks (CNNs) represent one of the effective Deep Learning (DL)-based techniques that have been used for brain tumor diagnosis. However, they are unable to handle input modifications effectively. Capsule neural networks (CapsNets) are a novel type of machine learning (ML) architecture that was recently developed to address the drawbacks of CNNs. CapsNets are resistant to rotations and affine translations, which is beneficial when processing medical imaging datasets. Moreover, Vision Transformers (ViT)-based solutions have been very recently proposed to address the issue of long-range dependency in CNNs. This survey provides a comprehensive overview of brain tumor classification and segmentation techniques, with a focus on ML-based, CNN-based, CapsNet-based, and ViT-based techniques. The survey highlights the fundamental contributions of recent studies and the performance of state-of-the-art techniques. Moreover, we present an in-depth discussion of crucial issues and open challenges. We also identify some key limitations and promising future research directions. We envisage that this survey shall serve as a good springboard for further study.
2. Hoang GM, Kim UH, Kim JG. Vision transformers for the prediction of mild cognitive impairment to Alzheimer's disease progression using mid-sagittal sMRI. Front Aging Neurosci 2023;15:1102869. PMID: 37122374; PMCID: PMC10133493; DOI: 10.3389/fnagi.2023.1102869. [Methods article]
Abstract
Background Alzheimer's disease (AD) is one of the most common neurodegenerative diseases, affecting over 50 million people worldwide. However, most AD diagnoses occur in the moderate to late stage, which means that the optimal time for treatment has already passed. Mild cognitive impairment (MCI) is an intermediate state between cognitively normal people and AD patients. Accurate prediction of the conversion from MCI to AD may therefore allow patients to start preventive interventions that slow the progression of the disease. Neuroimaging techniques have been developed to identify AD-related structural biomarkers, and deep learning has rapidly become a key methodology for finding such biomarkers. Methods In this study, we investigated an MCI-to-AD prediction method that applies Vision Transformers (ViT) to structural magnetic resonance images (sMRI). The Alzheimer's Disease Neuroimaging Initiative (ADNI) database, containing 598 MCI subjects, was used to predict progression from MCI to AD. The study had three main objectives: (i) to propose an MRI-based Vision Transformer approach for classifying MCI-to-AD progression, (ii) to evaluate different ViT architectures and identify the most suitable one, and (iii) to visualize the brain regions that most affect the deep learning model's prediction of MCI progression. Results Our method achieved state-of-the-art classification performance in terms of accuracy (83.27%), specificity (85.07%), and sensitivity (81.48%) compared with a set of conventional methods. We then visualized the brain regions that contribute most to the prediction of MCI progression, for interpretability of the proposed model. The discriminative pathological locations include the thalamus, medial frontal cortex, and occipital cortex, corroborating the reliability of our model. Conclusion Our method provides an effective and accurate technique for predicting MCI conversion to AD. The results outperform previous reports using the ADNI collection and suggest that sMRI-based ViT could be applied efficiently, with considerable potential benefit for the management of AD patients. The brain regions that contribute most to the prediction, together with the identified anatomical features, will support the building of robust solutions for other neurodegenerative diseases in the future.
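To make the slice-level workflow above concrete, the following is a minimal sketch of fine-tuning a pretrained ViT-B/16 as a binary progressive-vs-stable MCI classifier; the folder layout, torchvision ImageNet-1k weights, and hyperparameters are assumptions for illustration, not the authors' exact setup.

```python
# Sketch: fine-tune a pretrained ViT-B/16 for binary MCI-progression
# classification from 2D mid-sagittal slices (assumed data layout:
# slices/train/<pMCI|sMCI>/*.png). Not the authors' exact pipeline.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # sMRI slices are single-channel
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("slices/train", transform=tfm)   # hypothetical path
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)    # pMCI vs. sMCI
model = model.to(device)

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                                          # assumed epoch count
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```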
3. Yan R, Qu L, Wei Q, Huang SC, Shen L, Rubin DL, Xing L, Zhou Y. Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging. IEEE Trans Med Imaging 2023;42:1932-1943. PMID: 37018314; PMCID: PMC10880587; DOI: 10.1109/tmi.2022.3233574. [Research Support, N.I.H., Extramural]
Abstract
The collection and curation of large-scale medical datasets from multiple institutions is essential for training accurate deep learning models, but privacy concerns often hinder data sharing. Federated learning (FL) is a promising solution that enables privacy-preserving collaborative learning among different institutions, but it generally suffers from performance deterioration due to heterogeneous data distributions and a lack of quality labeled data. In this paper, we present a robust and label-efficient self-supervised FL framework for medical image analysis. Our method introduces a novel Transformer-based self-supervised pre-training paradigm that pre-trains models directly on decentralized target task datasets using masked image modeling, to facilitate more robust representation learning on heterogeneous data and effective knowledge transfer to downstream models. Extensive empirical results on simulated and real-world medical imaging non-IID federated datasets show that masked image modeling with Transformers significantly improves the robustness of models against various degrees of data heterogeneity. Notably, under severe data heterogeneity, our method, without relying on any additional pre-training data, achieves an improvement of 5.06%, 1.53% and 4.58% in test accuracy on retinal, dermatology and chest X-ray classification compared to the supervised baseline with ImageNet pre-training. In addition, we show that our federated self-supervised pre-training methods yield models that generalize better to out-of-distribution data and perform more effectively when fine-tuning with limited labeled data, compared to existing FL algorithms. The code is available at https://github.com/rui-yan/SSL-FL.
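The federated pre-training described above relies on repeated rounds of local training followed by server-side parameter averaging. Below is a minimal FedAvg-style sketch with the local masked-image-modeling objective abstracted into a placeholder; the paper's actual implementation is in the linked repository.

```python
# Sketch: one FedAvg-style communication round; the local self-supervised
# (masked-image-modeling) objective is abstracted into a placeholder.
import copy
import torch
import torch.nn as nn

def local_train(model, loader, epochs=1):
    """Placeholder for a client's local self-supervised training loop."""
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for batch in loader:
            loss = model(batch).mean()      # stand-in for the MIM reconstruction loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model.state_dict()

def fedavg(states, weights):
    """Weighted average of client state dicts (weights sum to 1)."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        if avg[key].dtype.is_floating_point:
            avg[key] = sum(w * s[key] for s, w in zip(states, weights))
    return avg

# One round over hypothetical clients (global_model and client_loaders assumed):
# states, sizes = [], []
# for loader in client_loaders:
#     local = copy.deepcopy(global_model)
#     states.append(local_train(local, loader))
#     sizes.append(len(loader.dataset))
# weights = [n / sum(sizes) for n in sizes]
# global_model.load_state_dict(fedavg(states, weights))
```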
4. Moutik O, Sekkat H, Tigani S, Chehri A, Saadane R, Tchakoucht TA, Paul A. Convolutional Neural Networks or Vision Transformers: Who Will Win the Race for Action Recognitions in Visual Data? Sensors (Basel) 2023;23:734. PMID: 36679530; PMCID: PMC9862752; DOI: 10.3390/s23020734. [Review]
Abstract
Understanding actions in videos remains a significant challenge in computer vision and has been the subject of extensive research over the last decades. Convolutional neural networks (CNNs) are a central component of this topic and have played a crucial role in the success of Deep Learning. Inspired by the human visual system, CNNs have been applied to visual data and have solved challenges in many computer vision tasks and in video/image analysis, including action recognition (AR). More recently, following the success of the transformer in natural language processing (NLP), transformers have begun to set new trends in vision tasks, which has created a debate over whether Vision Transformer models (ViT) will replace CNNs for action recognition in video clips. This paper examines this trending topic in detail: it studies CNNs and Transformers for action recognition separately and presents a comparative study of the accuracy-complexity trade-off. Finally, based on the outcome of the performance analysis, the question of whether CNNs or Vision Transformers will win the race is discussed.
5. Ghali R, Akhloufi MA, Mseddi WS. Deep Learning and Transformer Approaches for UAV-Based Wildfire Detection and Segmentation. Sensors (Basel) 2022;22:1977. PMID: 35271126; PMCID: PMC8914964; DOI: 10.3390/s22051977. [Research article]
Abstract
Wildfires are a worldwide natural disaster causing significant economic damage and loss of life. Experts predict that wildfires will increase in the coming years, mainly due to climate change. Early detection and prediction of fire spread can help reduce the affected areas and improve firefighting. Numerous systems have been developed to detect fire. Recently, Unmanned Aerial Vehicles have been employed to tackle this problem due to their high flexibility, low cost, and ability to cover wide areas during the day or night. However, they are still limited by challenging problems such as small fire size, background complexity, and image degradation. To deal with these limitations, we adapted and optimized Deep Learning methods to detect wildfire at an early stage. A novel deep ensemble learning method, which combines EfficientNet-B5 and DenseNet-201 models, is proposed to identify and classify wildfire using aerial images. In addition, two vision transformers (TransUNet and TransFire) and a deep convolutional model (EfficientSeg) were employed to segment wildfire regions and determine the precise fire areas. The obtained results are promising and show the efficiency of using Deep Learning and vision transformers for wildfire classification and segmentation. The proposed model for wildfire classification obtained an accuracy of 85.12% and outperformed many state-of-the-art works, proving its ability to classify wildfire even in small fire areas. The best semantic segmentation models achieved an F1-score of 99.9% for the TransUNet architecture and 99.82% for the TransFire architecture, superior to recently published models. More specifically, we demonstrated the ability of these models to extract the finer details of wildfire from aerial images, and they can further overcome current model limitations such as background complexity and small wildfire areas.
6. Parez S, Dilshad N, Alghamdi NS, Alanazi TM, Lee JW. Visual Intelligence in Precision Agriculture: Exploring Plant Disease Detection via Efficient Vision Transformers. Sensors (Basel) 2023;23:6949. PMID: 37571732; PMCID: PMC10422257; DOI: 10.3390/s23156949. [Research article]
Abstract
Agricultural development is essential for a country's economy to grow. Plant diseases, however, severely hamper crop growth rate and quality. In the absence of domain experts, and with low-contrast information, accurate identification of these diseases is very challenging and time-consuming. Agricultural management therefore needs a method for automatically detecting disease at an early stage. CNN-based models use pooling layers for dimensionality reduction, which results in the loss of vital information, including the precise location of the most prominent features. In response to these challenges, we propose a fine-tuned technique, GreenViT, for detecting plant infections and diseases based on Vision Transformers (ViTs). Similar to word embedding, we divide the input image into smaller blocks or patches and feed these to the ViT sequentially. Our approach leverages the strengths of ViTs in order to overcome the problems associated with CNN-based models. Experiments on widely used benchmark datasets were conducted to evaluate the performance of the proposed GreenViT. Based on the experimental outcomes, the proposed technique outperforms state-of-the-art (SOTA) CNN models for detecting plant diseases.
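A minimal sketch of the patch-splitting and embedding step that the abstract compares to word embedding is shown below; the 224x224 input, 16x16 patches, and 768-dimensional embedding are common ViT defaults assumed here, not necessarily GreenViT's configuration.

```python
# Sketch: turn an image into a sequence of linearly embedded patches,
# the ViT analogue of word embeddings (patch size and dims are assumptions).
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution extracts and projects all patches in one call.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, embed_dim))

    def forward(self, x):                     # x: (B, 3, 224, 224)
        x = self.proj(x)                      # (B, 768, 14, 14)
        x = x.flatten(2).transpose(1, 2)      # (B, 196, 768) -- one token per patch
        return x + self.pos

tokens = PatchEmbedding()(torch.randn(2, 3, 224, 224))
print(tokens.shape)                           # torch.Size([2, 196, 768])
```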
7. Cantone M, Marrocco C, Tortorella F, Bria A. Convolutional Networks and Transformers for Mammography Classification: An Experimental Study. Sensors (Basel) 2023;23:1229. PMID: 36772268; PMCID: PMC9921468; DOI: 10.3390/s23031229. [Research article]
Abstract
Convolutional Neural Networks (CNN) have received a large share of research in mammography image analysis due to their capability of extracting hierarchical features directly from raw data. Recently, Vision Transformers have emerged as a viable alternative to CNNs in medical imaging, in some cases performing on par with or better than their convolutional counterparts. In this work, we conduct an extensive experimental study to compare the most recent CNN and Vision Transformer architectures for whole-mammogram classification. We selected, trained, and tested 33 different models, 19 convolutional and 14 transformer-based, on the largest publicly available mammography image database, OMI-DB. We also analyzed performance at eight different image resolutions and considered each lesion category in isolation (masses, calcifications, focal asymmetries, architectural distortions). Our findings confirm the potential of vision transformers, which performed on par with traditional CNNs like ResNet, but at the same time show the superiority of modern convolutional networks like EfficientNet.
8. Katakis S, Barotsis N, Kakotaritis A, Tsiganos P, Economou G, Panagiotopoulos E, Panayiotakis G. Muscle Cross-Sectional Area Segmentation in Transverse Ultrasound Images Using Vision Transformers. Diagnostics (Basel) 2023;13:217. PMID: 36673026; PMCID: PMC9858099; DOI: 10.3390/diagnostics13020217. [Research article]
Abstract
Automatically measuring a muscle's cross-sectional area is an important application in clinical practice that has been studied extensively in recent years for its ability to assess muscle architecture. Additionally, an adequately segmented cross-sectional area can be used to estimate the echogenicity of the muscle, another valuable parameter correlated with muscle quality. This study assesses state-of-the-art convolutional neural networks and vision transformers for automating this task on a new, large, and diverse database. This database consists of 2005 transverse ultrasound images from four muscles informative for neuromuscular disorders, recorded from 210 subjects of different ages, pathological conditions, and sexes. All of the evaluated deep learning models achieved near-human-level performance. In particular, the manual and automatic measurements of the cross-sectional area exhibit an average discrepancy of less than 38.15 mm2, demonstrating the feasibility of automating this task. Moreover, the difference in muscle echogenicity estimated from these two readings is only 0.88, another indicator of the method's success. Furthermore, Bland-Altman analysis of the measurements exhibits no systematic errors, since most differences fall within the 95% limits of agreement, and the two readings have a 0.97 Pearson's correlation coefficient (p < 0.001, validation set) with ICC(2,1) surpassing 0.97, showing the reliability of this approach. Finally, as a supplementary analysis, the texture of the muscle's visible cross-sectional area was examined using deep learning to investigate whether a classification between healthy subjects and patients with pathological conditions is possible solely from the muscle texture. Our preliminary results indicate that such a task is feasible, but further and more extensive studies are required for more conclusive results.
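For readers unfamiliar with the agreement statistics quoted above, the following sketch computes the Bland-Altman bias, 95% limits of agreement, and Pearson's r for paired manual vs. automatic cross-sectional-area measurements; the numbers are placeholders.

```python
# Sketch: Bland-Altman bias / 95% limits of agreement and Pearson's r
# for paired manual vs. automatic CSA measurements (values are placeholders).
import numpy as np
from scipy import stats

manual    = np.array([512.3, 430.1, 610.8, 720.4, 388.9])   # mm^2, hypothetical
automatic = np.array([505.7, 441.0, 598.2, 731.1, 380.5])   # mm^2, hypothetical

diff = automatic - manual
bias = diff.mean()
loa  = 1.96 * diff.std(ddof=1)                 # half-width of the 95% limits

r, p = stats.pearsonr(manual, automatic)

print(f"bias = {bias:.2f} mm^2, LoA = [{bias - loa:.2f}, {bias + loa:.2f}] mm^2")
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
```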
9. Cirrincione G, Cannata S, Cicceri G, Prinzi F, Currieri T, Lovino M, Militello C, Pasero E, Vitabile S. Transformer-Based Approach to Melanoma Detection. Sensors (Basel) 2023;23:5677. PMID: 37420843; DOI: 10.3390/s23125677.
Abstract
Melanoma is a malignant cancer that develops when DNA damage occurs, mainly due to environmental factors such as ultraviolet rays. Melanoma often results in intense and aggressive cell growth that, if not caught in time, can be fatal. Thus, early identification at the initial stage is fundamental to stopping the spread of the cancer. In this paper, a ViT-based architecture able to classify melanoma versus non-cancerous lesions is presented. The proposed predictive model is trained and tested on public skin cancer data from the ISIC challenge, and the obtained results are highly promising. Different classifier configurations are considered and analyzed in order to find the most discriminating one. The best one reached an accuracy of 0.948, sensitivity of 0.928, specificity of 0.967, and AUROC of 0.948.
10. Carcagnì P, Leo M, Del Coco M, Distante C, De Salve A. Convolution Neural Networks and Self-Attention Learners for Alzheimer Dementia Diagnosis from Brain MRI. Sensors (Basel) 2023;23:1694. PMID: 36772733; PMCID: PMC9919436; DOI: 10.3390/s23031694. [Research article]
Abstract
Alzheimer's disease (AD) is the most common form of dementia. Computer-aided diagnosis (CAD) can help in the early detection of associated cognitive impairment. The aim of this work is to improve the automatic detection of dementia in MRI brain data. For this purpose, we used an established pipeline that includes the registration, slicing, and classification steps. The contribution of this research was to investigate for the first time, to our knowledge, three current and promising deep convolutional models (ResNet, DenseNet, and EfficientNet) and two transformer-based architectures (MAE and DeiT) for mapping input images to clinical diagnosis. To allow a fair comparison, the experiments were performed on two publicly available datasets (ADNI and OASIS) using multiple benchmarks obtained by changing the number of slices per subject extracted from the available 3D voxels. The experiments showed that very deep ResNet and DenseNet models performed better than the shallow ResNet and VGG versions tested in the literature. It was also found that transformer architectures, and DeiT in particular, produced the best classification results and were more robust to the noise added by increasing the number of slices. A significant improvement in accuracy (up to 7%) was achieved compared to the leading state-of-the-art approaches, paving the way for the use of CAD approaches in real-world applications.
11. Pasero E, Gaita F, Randazzo V, Meynet P, Cannata S, Maury P, Giustetto C. Artificial Intelligence ECG Analysis in Patients with Short QT Syndrome to Predict Life-Threatening Arrhythmic Events. Sensors (Basel) 2023;23:8900. PMID: 37960599; PMCID: PMC10649184; DOI: 10.3390/s23218900. [Research article]
Abstract
Short QT syndrome (SQTS) is an inherited cardiac ion-channel disease related to an increased risk of sudden cardiac death (SCD) in young and otherwise healthy individuals. SCD is often the first clinical presentation in patients with SQTS. However, arrhythmia risk stratification is presently unsatisfactory in asymptomatic patients. In this context, artificial intelligence-based electrocardiogram (ECG) analysis has never been applied to refine risk stratification in patients with SQTS. The purpose of this study was to analyze ECGs from SQTS patients with the aid of different AI algorithms to evaluate their ability to discriminate between subjects with and without documented life-threatening arrhythmic events. The study group included 104 SQTS patients, 37 of whom had a documented major arrhythmic event at presentation and/or during follow-up. Thirteen ECG features were measured independently by three expert cardiologists; then, the dataset was randomly divided into three subsets (training, validation, and testing). Five shallow neural networks were trained, validated, and tested to predict subject-specific class (non-event/event) using different subsets of ECG features. Additionally, several deep learning and machine learning algorithms, such as Vision Transformer, Swin Transformer, MobileNetV3, EfficientNetV2, ConvNextTiny, Capsule Networks, and logistic regression, were trained, validated, and tested directly on the scanned ECG images, without any manual feature extraction. Furthermore, a shallow neural network, a 1-D transformer classifier, and a 1-D CNN were trained, validated, and tested on ECG signals extracted from the aforementioned scanned images. Classification metrics were evaluated by means of sensitivity, specificity, positive and negative predictive values, accuracy, and area under the curve. The results show that artificial intelligence can help clinicians better stratify the risk of arrhythmia in patients with SQTS. In particular, shallow neural networks processing the measured ECG features showed the best performance in identifying patients who will not suffer a potentially lethal event. This could pave the way for refined ECG-based risk stratification in this group of patients, potentially helping to save the lives of young and otherwise healthy individuals.
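A minimal sketch of the shallow-network branch described above, i.e., a small fully connected classifier over 13 tabular ECG features, is given below; the feature matrix, network size, and split are placeholders rather than the study's actual data or architecture.

```python
# Sketch: shallow neural network on 13 manually measured ECG features
# for event vs. non-event classification (data and sizes are placeholders).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(104, 13))            # 104 patients x 13 ECG features (placeholder)
y = rng.integers(0, 2, size=104)          # 1 = documented arrhythmic event

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```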
12. Gholami S, Lim JI, Leng T, Ong SSY, Thompson AC, Alam MN. Federated learning for diagnosis of age-related macular degeneration. Front Med (Lausanne) 2023;10:1259017. PMID: 37901412; PMCID: PMC10613107; DOI: 10.3389/fmed.2023.1259017. [Research article]
Abstract
This paper presents a federated learning (FL) approach to train deep learning models for classifying age-related macular degeneration (AMD) using optical coherence tomography image data. We employ residual network and vision transformer encoders for the normal vs. AMD binary classification, integrating four unique domain adaptation techniques to address domain shift issues caused by heterogeneous data distributions across institutions. Experimental results indicate that FL strategies can achieve performance competitive with centralized models even though each local model has access to only a portion of the training data. Notably, the Adaptive Personalization FL strategy stood out in our FL evaluations, consistently delivering high performance across all tests thanks to its additional local model. Furthermore, for both encoders, the study provides valuable insights into the efficacy of simpler architectures in image classification tasks, particularly in scenarios where data privacy and decentralization are critical. It suggests future exploration of deeper models and other FL strategies for a more nuanced understanding of these models' performance. Data and code are available at https://github.com/QIAIUNCC/FL_UNCC_QIAI.
13. Abbas Q, Hussain A, Baig AR. Automatic Detection and Classification of Cardiovascular Disorders Using Phonocardiogram and Convolutional Vision Transformers. Diagnostics (Basel) 2022;12:3109. PMID: 36553116; PMCID: PMC9777096; DOI: 10.3390/diagnostics12123109. [Research article]
Abstract
Cardiovascular disorders (CVDs) are the major cause of death worldwide. For a proper diagnosis of CVD, an inexpensive solution based on phonocardiogram (PCG) signals is proposed. (1) Background: A few deep learning (DL)-based CVD systems have been developed to recognize different stages of CVD. However, the accuracy of these systems is not yet satisfactory, and the methods require high computational power and huge training datasets. (2) Methods: To address these issues, we developed a novel attention-based technique (CVT-Trans) on a convolutional vision transformer to recognize and categorize PCG signals into five classes. A continuous wavelet transform-based spectrogram (CWTS) strategy was used to extract representative features from PCG data. Following that, a new CVT-Trans architecture was created to categorize the CWTS signals into five groups. (3) Results: Based on 10-fold cross-validation, the CVT-Trans system had an overall average accuracy (ACC) of 100%, sensitivity (SE) of 99.00%, specificity (SP) of 99.5%, and F1-score of 98%. (4) Conclusions: The CVT-Trans technique outperformed many state-of-the-art methods. The robustness of the constructed model was confirmed by 10-fold cross-validation. Cardiologists can use this CVT-Trans system to help diagnose heart valve problems in patients.
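The CWTS step above can be sketched as follows with PyWavelets on a synthetic PCG-like signal; the wavelet, scale range, and sampling rate are assumptions, not the paper's exact settings.

```python
# Sketch: continuous wavelet transform "spectrogram" of a PCG-like signal,
# to be fed to an image classifier (wavelet, scales, and fs are assumptions).
import numpy as np
import pywt

fs = 2000                                         # Hz, assumed PCG sampling rate
t = np.arange(0, 2.0, 1 / fs)
pcg = np.sin(2 * np.pi * 40 * t) + 0.3 * np.random.randn(t.size)   # synthetic signal

scales = np.arange(1, 128)
coef, freqs = pywt.cwt(pcg, scales, "morl", sampling_period=1 / fs)

cwts = np.abs(coef)                               # scalogram: scales x time
cwts = (cwts - cwts.min()) / (cwts.max() - cwts.min())   # normalize to [0, 1]
print(cwts.shape, freqs[:3])                      # image-like array, e.g. (127, 4000)
```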
14. Abbas Q, Daadaa Y, Rashid U, Ibrahim MEA. Assist-Dermo: A Lightweight Separable Vision Transformer Model for Multiclass Skin Lesion Classification. Diagnostics (Basel) 2023;13:2531. PMID: 37568894; PMCID: PMC10417387; DOI: 10.3390/diagnostics13152531. [Research article]
Abstract
A dermatologist-like automatic classification system is developed in this paper to recognize nine different classes of pigmented skin lesions (PSLs), using a separable vision transformer (SVT) technique to assist clinical experts in early skin cancer detection. In the past, researchers have developed a few systems to recognize nine classes of PSLs. However, they often require enormous computations to achieve high performance, which is burdensome to deploy on resource-constrained devices. In this paper, a new approach to designing SVT architecture is developed based on SqueezeNet and depthwise separable CNN models. The primary goal is to find a deep learning architecture with few parameters that has comparable accuracy to state-of-the-art (SOTA) architectures. This paper modifies the SqueezeNet design for improved runtime performance by utilizing depthwise separable convolutions rather than simple conventional units. To develop this Assist-Dermo system, a data augmentation technique is applied to control the PSL imbalance problem. Next, a pre-processing step is integrated to select the most dominant region and then enhance the lesion patterns in a perceptual-oriented color space. Afterwards, the Assist-Dermo system is designed to improve efficacy and performance with several layers and multiple filter sizes but fewer filters and parameters. For the training and evaluation of Assist-Dermo models, a set of PSL images is collected from different online data sources such as Ph2, ISBI-2017, HAM10000, and ISIC to recognize nine classes of PSLs. On the chosen dataset, it achieves an accuracy (ACC) of 95.6%, a sensitivity (SE) of 96.7%, a specificity (SP) of 95%, and an area under the curve (AUC) of 0.95. The experimental results show that the suggested Assist-Dermo technique outperformed SOTA algorithms when recognizing nine classes of PSLs. The Assist-Dermo system performed better than other competitive systems and can support dermatologists in the diagnosis of a wide variety of PSLs through dermoscopy. The Assist-Dermo model code is freely available on GitHub for the scientific community.
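A minimal sketch of the depthwise separable convolution unit that Assist-Dermo substitutes for standard convolutions, together with a parameter-count comparison, is shown below; the channel sizes are arbitrary, not the paper's actual layer configuration.

```python
# Sketch: depthwise separable convolution (depthwise + pointwise) vs. a
# standard convolution, with parameter counts (channel sizes are arbitrary).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def n_params(m):
    return sum(p.numel() for p in m.parameters())

std = nn.Conv2d(64, 128, 3, padding=1)
sep = DepthwiseSeparableConv(64, 128)
x = torch.randn(1, 64, 56, 56)
assert std(x).shape == sep(x).shape
print(n_params(std), "vs", n_params(sep))   # ~73.9k vs ~9.0k parameters
```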
15. Pinčić D, Sušanj D, Lenac K. Gait Recognition with Self-Supervised Learning of Gait Features Based on Vision Transformers. Sensors (Basel) 2022;22:7140. PMID: 36236238; PMCID: PMC9571216; DOI: 10.3390/s22197140. [Research article]
Abstract
Gait is a unique biometric trait with several useful properties. It can be recognized remotely and without the cooperation of the individual, with low-resolution cameras, and it is difficult to obscure. Therefore, it is suitable for crime investigation, surveillance, and access control. Existing approaches for gait recognition generally belong to the supervised learning domain, where all samples in the dataset are annotated. In the real world, annotation is often expensive and time-consuming. Moreover, convolutional neural networks (CNNs) have dominated the field of gait recognition for many years and have been extensively researched, while other recent methods such as vision transformer (ViT) remain unexplored. In this manuscript, we propose a self-supervised learning (SSL) approach for pretraining the feature extractor using the DINO model to automatically learn useful gait features with the vision transformer architecture. The feature extractor is then used for extracting gait features on which the fully connected neural network classifier is trained using the supervised approach. Experiments on CASIA-B and OU-MVLP gait datasets show the effectiveness of the proposed approach.
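The two-stage setup described above can be sketched as a frozen DINO-pretrained ViT feature extractor followed by a small fully connected classifier; loading dino_vits16 from torch.hub follows the public DINO repository, while the gait inputs and classifier head here are placeholders.

```python
# Sketch: DINO-pretrained ViT-S/16 as a frozen feature extractor, with a
# small fully connected classifier on top (gait inputs here are placeholders).
import torch
import torch.nn as nn

backbone = torch.hub.load("facebookresearch/dino:main", "dino_vits16")  # public DINO hub entry
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

num_subjects = 124                       # e.g. CASIA-B subject count (assumption)
classifier = nn.Sequential(
    nn.Linear(384, 256), nn.ReLU(),      # ViT-S/16 outputs 384-dim features
    nn.Linear(256, num_subjects),
)

x = torch.randn(8, 3, 224, 224)          # batch of gait images (placeholder)
with torch.no_grad():
    feats = backbone(x)                  # (8, 384) class-token features
logits = classifier(feats)               # train with cross-entropy as usual
print(logits.shape)
```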
16. Pachetti E, Colantonio S. 3D-Vision-Transformer Stacking Ensemble for Assessing Prostate Cancer Aggressiveness from T2w Images. Bioengineering (Basel) 2023;10:1015. PMID: 37760117; PMCID: PMC10525095; DOI: 10.3390/bioengineering10091015. [Research article]
Abstract
Vision transformers represent the cutting edge in computer vision and are usually employed on two-dimensional data following a transfer learning approach. In this work, we propose a trained-from-scratch stacking ensemble of 3D-vision transformers to assess prostate cancer aggressiveness from T2-weighted images, to help radiologists diagnose this disease without performing a biopsy. We trained 18 3D-vision transformers on T2-weighted axial acquisitions and combined them into two- and three-model stacking ensembles. We defined two metrics for measuring model prediction confidence, and we trained all the ensemble combinations according to a five-fold cross-validation, evaluating their accuracy, confidence in predictions, and calibration. In addition, we optimized the 18 base ViTs and compared the best-performing base and ensemble models by re-training them on a 100-sample bootstrapped training set and evaluating each model on the hold-out test set. We compared the two distributions by calculating the median and the 95% confidence interval and performing a Wilcoxon signed-rank test. The best-performing 3D-vision-transformer stacking ensemble provided state-of-the-art results in terms of area under the receiver operating characteristic curve (0.89 [0.61-1]) and exceeded the area under the precision-recall curve of the base model by 22% (p < 0.001). However, it was less confident in classifying the positive class.
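A minimal sketch of the stacking idea described above, with out-of-fold base-model probabilities feeding a logistic-regression meta-learner, is shown below; ordinary scikit-learn classifiers stand in for the paper's 3D vision transformers, and the data are synthetic.

```python
# Sketch: a stacking ensemble -- base-model probabilities feed a meta-learner.
# Base models here stand in for the paper's 3D vision transformers.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)  # placeholder data

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),   # meta-learner combining base predictions
    stack_method="predict_proba",
    cv=5,                                   # out-of-fold predictions, as in k-fold stacking
)
print(cross_val_score(stack, X, y, cv=5, scoring="roc_auc").mean())
```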
17. Fernandes GJ, Zheng J, Pedram M, Romano C, Shahabi F, Rothrock B, Cohen T, Zhu H, Butani TS, Hester J, Katsaggelos AK, Alshurafa N. HabitSense: A Privacy-Aware, AI-Enhanced Multimodal Wearable Platform for mHealth Applications. Proc ACM Interact Mob Wearable Ubiquitous Technol 2024;8:101. PMID: 40041122; PMCID: PMC11879279; DOI: 10.1145/3678591. [Research article]
Abstract
Wearable cameras provide an objective method to visually confirm and automate the detection of health-risk behaviors such as smoking and overeating, which is critical for developing and testing adaptive treatment interventions. Despite the potential of wearable camera systems, adoption is hindered by inadequate clinician input in the design, user privacy concerns, and user burden. To address these barriers, we introduced HabitSense, an open-source, multi-modal, neck-worn platform developed with input from focus groups with clinicians (N=36) and user feedback from in-the-wild studies involving 105 participants over 35 days. Optimized for monitoring health-risk behaviors, the platform utilizes RGB, thermal, and inertial measurement unit sensors to detect eating and smoking events in real time. In a 7-day study involving 15 participants, HabitSense recorded 768 hours of footage, capturing 420.91 minutes of hand-to-mouth gestures associated with eating and smoking, data crucial for training machine learning models, and achieved a 92% F1-score in gesture recognition. To address privacy concerns, the platform records only during likely health-risk behavior events using SECURE, a smart activation algorithm. Additionally, HabitSense employs on-device obfuscation algorithms that selectively obfuscate the background during recording, maintaining individual privacy while leaving gestures related to health-risk behaviors unobfuscated. Our implementation of SECURE has resulted in a 48% reduction in storage needs and a 30% increase in battery life. This paper highlights the critical roles of clinician feedback, extensive field testing, and privacy-enhancing algorithms in developing an unobtrusive, lightweight, and reproducible wearable system that is both feasible and acceptable for monitoring health-risk behaviors in real-world settings.
18. Chetoui M, Akhloufi MA. Explainable Vision Transformers and Radiomics for COVID-19 Detection in Chest X-rays. J Clin Med 2022;11:3013. PMID: 35683400; PMCID: PMC9181325; DOI: 10.3390/jcm11113013. [Research article]
Abstract
The rapid spread of COVID-19 across the globe since its emergence has pushed many countries' healthcare systems to the verge of collapse. To restrict the spread of the disease and lessen the ongoing cost to the healthcare system, it is critical to appropriately identify COVID-19-positive individuals and isolate them as soon as possible. The primary COVID-19 screening test, RT-PCR, although accurate and reliable, has a long turnaround time. More recently, various researchers have demonstrated the use of deep learning approaches on chest X-rays (CXR) for COVID-19 detection. However, existing deep convolutional neural network (CNN) methods fail to capture the global context due to their inherent image-specific inductive bias. In this article, we investigated the use of vision transformers (ViT) for detecting COVID-19 in chest X-ray (CXR) images. Several ViT models were fine-tuned for the multiclass classification problem (COVID-19, Pneumonia, and Normal cases). A dataset consisting of 7598 COVID-19 CXR images, 8552 CXR images of healthy patients, and 5674 Pneumonia CXR images was used. The obtained results achieved high performance, with an Area Under the Curve (AUC) of 0.99 for multi-class classification (COVID-19 vs. other Pneumonia vs. normal). The sensitivity of the COVID-19 class reached 0.99. We demonstrated that the obtained results outperformed comparable state-of-the-art models for detecting COVID-19 in CXR images using CNN architectures. The attention map of the proposed model showed that it is able to efficiently identify the signs of COVID-19.
19. Ibrahem H, Salem A, Kang HS. RT-ViT: Real-Time Monocular Depth Estimation Using Lightweight Vision Transformers. Sensors (Basel) 2022;22:3849. PMID: 35632271; PMCID: PMC9143167; DOI: 10.3390/s22103849. [Research article]
Abstract
The latest research in computer vision has highlighted the effectiveness of vision transformers (ViT) in performing several computer vision tasks; they can efficiently understand and process images globally, unlike convolution, which processes images locally. ViTs outperform convolutional neural networks in terms of accuracy in many computer vision tasks, but their speed is still an issue due to the extensive use of transformer layers, which include many fully connected layers. Therefore, we propose a real-time ViT-based monocular depth estimation method (depth estimation from a single RGB image) with encoder-decoder architectures for indoor and outdoor scenes. The main architecture of the proposed method consists of a vision transformer encoder and a convolutional neural network decoder. We started by training the base vision transformer (ViT-b16) with 12 transformer layers and then reduced the transformer layers to six (ViT-s16, the Small ViT) and four (ViT-t16, the Tiny ViT) to obtain real-time processing. We also tried four different configurations of the CNN decoder network. The proposed architectures can learn the task of depth estimation efficiently and produce more accurate depth predictions than fully convolutional methods by taking advantage of the multi-head self-attention module. We train the proposed encoder-decoder architectures end-to-end on the challenging NYU-depthV2 and CITYSCAPES benchmarks, then evaluate the trained models on the validation and test sets of the same benchmarks, showing that they outperform many state-of-the-art methods on depth estimation while performing the task in real time (∼20 fps). We also present a fast 3D reconstruction experiment (∼17 fps) based on the depth estimated by our method, which is a real-world application of the method.
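The encoder-decoder pattern described above can be sketched as a small transformer encoder over patch tokens whose output grid is upsampled by a convolutional decoder into a dense depth map; the layer counts and dimensions below are illustrative, not RT-ViT's actual configuration.

```python
# Sketch: ViT-style encoder + convolutional decoder for dense depth
# prediction (dimensions and layer counts are illustrative only).
import torch
import torch.nn as nn

class TinyViTDepth(nn.Module):
    def __init__(self, img=224, patch=16, dim=256, layers=4, heads=8):
        super().__init__()
        self.grid = img // patch                              # 14 x 14 token grid
        self.embed = nn.Conv2d(3, dim, patch, stride=patch)   # patch embedding
        self.pos = nn.Parameter(torch.zeros(1, self.grid ** 2, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.decoder = nn.Sequential(                         # 14 -> 224 via 4 upsamplings
            nn.ConvTranspose2d(dim, 128, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos
        tokens = self.encoder(tokens)                         # global self-attention
        fmap = tokens.transpose(1, 2).reshape(-1, tokens.size(-1), self.grid, self.grid)
        return self.decoder(fmap)                             # (B, 1, 224, 224) depth map

depth = TinyViTDepth()(torch.randn(2, 3, 224, 224))
print(depth.shape)
```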
20. Khan S, Ali H, Shah Z. Identifying the role of vision transformer for skin cancer-A scoping review. Front Artif Intell 2023;6:1202990. PMID: 37529760; PMCID: PMC10388102; DOI: 10.3389/frai.2023.1202990. [Scoping review]
Abstract
Introduction Detecting and accurately diagnosing early melanocytic lesions is challenging due to extensive intra- and inter-observer variability. Dermoscopy images are widely used to identify and study skin cancer, but the blurred boundaries between lesions and surrounding tissues can lead to incorrect identification. Artificial Intelligence (AI) models, including vision transformers, have been proposed as a solution, but variations in symptoms and underlying effects hinder their performance. Objective This scoping review synthesizes and analyzes the literature that uses vision transformers for skin lesion detection. Methods The review follows the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. The review searched online repositories such as IEEE Xplore, Scopus, Google Scholar, and PubMed to retrieve relevant articles. After screening and pre-processing, 28 studies that fulfilled the inclusion criteria were included. Results and discussions The review found that the use of vision transformers for skin cancer detection increased rapidly from 2020 to 2022 and showed outstanding performance for skin cancer detection using dermoscopy images. Along with highlighting intrinsic visual ambiguities, irregular skin lesion shapes, and many other unwanted challenges, the review also discusses the key problems that obfuscate the trustworthiness of vision transformers in skin cancer diagnosis. This review provides new insights for practitioners and researchers to understand the current state of knowledge in this specialized research domain and outlines the best segmentation techniques to identify accurate lesion boundaries and perform melanoma diagnosis. These findings will ultimately assist practitioners and researchers in making more authentic decisions promptly.
21. Katar O, Yildirim O. An Explainable Vision Transformer Model Based White Blood Cells Classification and Localization. Diagnostics (Basel) 2023;13:2459. PMID: 37510202; PMCID: PMC10378025; DOI: 10.3390/diagnostics13142459.
Abstract
White blood cells (WBCs) are crucial components of the immune system that play a vital role in defending the body against infections and diseases. Identifying WBC subtypes is useful in the detection of various diseases, such as infections, leukemia, and other hematological malignancies. The manual screening of blood films is time-consuming and subjective, leading to inconsistencies and errors. Convolutional neural network (CNN)-based models can automate such classification processes but are incapable of capturing long-range dependencies and global context. This paper proposes an explainable Vision Transformer (ViT) model for automatic WBC detection from blood films. The proposed model uses a self-attention mechanism to extract features from input images. Our proposed model was trained and validated on a public dataset of 16,633 samples containing five different types of WBCs. In experiments on the classification of these five WBC types, our model achieved an accuracy of 99.40%. Moreover, examination of misclassified test samples revealed a correlation between incorrect predictions and the presence or absence of granules in the cell samples. To validate this observation, we divided the dataset into two classes, Granulocytes and Agranulocytes, and conducted a secondary training process. The resulting ViT model, trained for binary classification, achieved impressive performance metrics during the test phase, including an accuracy of 99.70%, recall of 99.54%, precision of 99.32%, and F1-score of 99.43%. To ensure the reliability of the ViT model's predictions, we employed the Score-CAM algorithm to visualize the pixel areas on which the model focuses during its predictions. Our proposed method is suitable for clinical use due to its explainable structure as well as its superior performance compared to similar studies in the literature. The classification and localization of WBCs with this model can facilitate the detection and reporting process for the pathologist.
22. Chattopadhyay T, Ozarkar SS, Buwa K, Joshy NA, Komandur D, Naik J, Thomopoulos SI, Ver Steeg G, Ambite JL, Thompson PM. Comparison of deep learning architectures for predicting amyloid positivity in Alzheimer's disease, mild cognitive impairment, and healthy aging, from T1-weighted brain structural MRI. Front Neurosci 2024;18:1387196. PMID: 39015378; PMCID: PMC11250587; DOI: 10.3389/fnins.2024.1387196. [Research article]
Abstract
Abnormal β-amyloid (Aβ) accumulation in the brain is an early indicator of Alzheimer's disease (AD) and is typically assessed through invasive procedures such as PET (positron emission tomography) or CSF (cerebrospinal fluid) assays. As new anti-Alzheimer's treatments can now successfully target amyloid pathology, there is a growing interest in predicting Aβ positivity (Aβ+) from less invasive, more widely available types of brain scans, such as T1-weighted (T1w) MRI. Here we compare multiple approaches to infer Aβ+ from standard anatomical MRI: (1) classical machine learning algorithms, including logistic regression, XGBoost, and shallow artificial neural networks, (2) deep learning models based on 2D and 3D convolutional neural networks (CNNs), (3) a hybrid ANN-CNN, combining the strengths of shallow and deep neural networks, (4) transfer learning models based on CNNs, and (5) 3D Vision Transformers. All models were trained on paired MRI/PET data from 1,847 elderly participants (mean age: 75.1 ± 7.6 years; 863 females/984 males; 661 healthy controls, 889 with mild cognitive impairment (MCI), and 297 with Dementia), scanned as part of the Alzheimer's Disease Neuroimaging Initiative. We evaluated each model's balanced accuracy and F1 scores. While further tests on more diverse data are warranted, deep learning models trained on standard MRI showed promise for estimating Aβ+ status, at least in people with MCI. This may offer a potential screening option before resorting to more invasive procedures.
23. Mask Usage Recognition using Vision Transformer with Transfer Learning and Data Augmentation. Intelligent Systems with Applications 2023:200186. PMCID: PMC9851995; DOI: 10.1016/j.iswa.2023.200186. [Research article]
Abstract
The COVID-19 pandemic has disrupted various levels of society. The use of masks is essential in preventing the spread of COVID-19, which motivates automatically identifying from an image whether a person is wearing a mask correctly. Since only 23.1% of people use masks correctly, Artificial Neural Networks (ANNs) can help classify correct mask usage and thereby help slow the spread of the COVID-19 virus. However, training an ANN that can classify mask usage correctly requires a large dataset. MaskedFace-Net is a suitable dataset consisting of 137016 digital images with 4 class labels, namely Mask, Mask Chin, Mask Mouth Chin, and Mask Nose Mouth. Mask classification training utilizes the Vision Transformer (ViT) architecture with transfer learning from weights pre-trained on ImageNet-21k and random augmentation. In addition, training hyper-parameters of 20 epochs, a Stochastic Gradient Descent (SGD) optimizer with a learning rate of 0.03, a batch size of 64, a Gaussian Error Linear Unit (GELU) activation function, and a Cross-Entropy loss function are applied to three ViT architectures, namely Base-16, Large-16, and Huge-14. Furthermore, comparisons with and without augmentation and transfer learning are conducted. This study found that the best classification is obtained with transfer learning and augmentation using ViT Huge-14. Using this method on the MaskedFace-Net dataset, the research reaches an accuracy of 0.9601 on training data, 0.9412 on validation data, and 0.9534 on test data. This research shows that training the ViT model with data augmentation and transfer learning improves classification of mask usage, even better than the convolution-based Residual Network (ResNet).
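A minimal sketch reproducing the reported training configuration (SGD, learning rate 0.03, batch size 64, cross-entropy, 20 epochs, random augmentation) on a pretrained ViT-B/16 is shown below; the dataset path is hypothetical, and torchvision's ImageNet-1k weights stand in for the paper's ImageNet-21k checkpoints of ViT Base/Large/Huge.

```python
# Sketch: fine-tune a pretrained ViT with the reported hyperparameters
# (SGD, lr 0.03, batch 64, cross-entropy, 20 epochs, random augmentation).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

train_tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandAugment(),                 # "random augmentation"
    transforms.ToTensor(),
])
# Hypothetical folder layout: masks/train/<Mask|Mask_Chin|Mask_Mouth_Chin|Mask_Nose_Mouth>/
train_ds = datasets.ImageFolder("masks/train", transform=train_tfm)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=64, shuffle=True)

model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 4)   # four mask-usage classes

opt = torch.optim.SGD(model.parameters(), lr=0.03)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    for x, y in train_dl:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```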
24. Abidin ZU, Naqvi RA, Haider A, Kim HS, Jeong D, Lee SW. Recent deep learning-based brain tumor segmentation models using multi-modality magnetic resonance imaging: a prospective survey. Front Bioeng Biotechnol 2024;12:1392807. PMID: 39104626; PMCID: PMC11298476; DOI: 10.3389/fbioe.2024.1392807. [Review]
Abstract
Radiologists encounter significant challenges when segmenting and characterizing brain tumors in patients, and this information assists in treatment planning. The utilization of artificial intelligence (AI), especially deep learning (DL), has emerged as a useful tool in healthcare, aiding radiologists in their diagnostic processes. This empowers radiologists to better understand the biology of tumors and provide personalized care to patients with brain tumors. The segmentation of brain tumors using multi-modal magnetic resonance imaging (MRI) images has received considerable attention. In this survey, we first discuss multi-modal imaging and the available magnetic resonance imaging modalities and their properties. Subsequently, we discuss the most recent DL-based models for brain tumor segmentation using multi-modal MRI. We divide this section into three parts based on the architecture: the first covers models that use a convolutional neural network (CNN) backbone, the second covers vision transformer-based models, and the third covers hybrid models that use both convolutional neural networks and transformers in the architecture. In addition, an in-depth statistical analysis is performed of recent publications, frequently used datasets, and evaluation metrics for segmentation tasks. Finally, open research challenges are identified and promising future directions are suggested for brain tumor segmentation, to improve diagnostic accuracy and treatment outcomes for patients with brain tumors. This aligns with public health goals of using health technologies for better healthcare delivery and population health management.
25. Islam MS, Suryavanshi P, Baule SM, Glykys J, Baek S. A Deep Learning Approach for Neuronal Cell Body Segmentation in Neurons Expressing GCaMP Using a Swin Transformer. eNeuro 2023;10:ENEURO.0148-23.2023. PMID: 37704367; PMCID: PMC10523838; DOI: 10.1523/eneuro.0148-23.2023. [Research Support, N.I.H., Extramural]
Abstract
Neuronal cell body analysis is crucial for quantifying changes in neuronal sizes under different physiological and pathologic conditions. Neuronal cell body detection and segmentation mainly rely on manual or pseudo-manual annotations. Manual annotation of neuronal boundaries is time-consuming, requires human expertise, and has intra/interobserver variance. Also, determining where the neuron's cell body ends and where the axons and dendrites begin is taxing. We developed a deep-learning-based approach that uses a state-of-the-art shifted windows (Swin) transformer for automated, reproducible, fast, and unbiased 2D detection and segmentation of neuronal somas imaged in mouse acute brain slices by multiphoton microscopy. We tested our Swin algorithm under different experimental conditions of low and high signal fluorescence. Our algorithm achieved a mean Dice score of 0.91, a precision of 0.83, and a recall of 0.86. Compared with two different convolutional neural networks, the Swin transformer outperformed them in detecting the cell boundaries of GCaMP6s-expressing neurons. Thus, our Swin transformer algorithm can assist in the fast and accurate segmentation of fluorescently labeled neuronal cell bodies in thick acute brain slices. Using our flexible algorithm, researchers can better study the fluctuations in neuronal soma size during physiological and pathologic conditions.
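A minimal sketch of the overlap metrics reported above (Dice, precision, recall) for a predicted vs. ground-truth binary soma mask follows; the masks here are random placeholders.

```python
# Sketch: Dice score, precision, and recall for binary segmentation masks
# (predicted vs. ground-truth somas); masks here are random placeholders.
import numpy as np

def dice_precision_recall(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, precision, recall

rng = np.random.default_rng(0)
gt   = rng.random((256, 256)) > 0.7          # placeholder ground-truth mask
pred = rng.random((256, 256)) > 0.7          # placeholder predicted mask
print(dice_precision_recall(pred, gt))
```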