1
Zhao D, Ji L, Yang F. Land Cover Classification Based on Airborne Lidar Point Cloud with Possibility Method and Multi-Classifier. Sensors (Basel) 2023; 23:8841. PMID: 37960542; PMCID: PMC10648668; DOI: 10.3390/s23218841.
Abstract
As important geospatial data, point clouds collected by an airborne laser scanner (ALS) provide three-dimensional (3D) information for studying the distribution of typical urban land cover, which is critical to the construction of a "digital city". However, existing point cloud classification methods usually rely on a single machine learning classifier, which is uncertain when deciding fuzzy samples in confusing areas; this limits the improvement of classification accuracy. To take full advantage of different classifiers and reduce uncertainty, we propose a classification method based on possibility theory and multi-classifier fusion. First, feature importance was measured with the XGBoost algorithm to construct a feature space, and two commonly used support vector machines (SVMs) were chosen as base classifiers. Then, the classification results of the two base classifiers were quantitatively evaluated to delineate the confusing areas. Finally, the confidence degree of each classifier for each category was computed from its confusion matrix and normalized to obtain fusion weights, and the classifiers were combined under possibility theory to achieve more accurate classification in the confusing areas. The DALES dataset was used to assess the proposed method. The results reveal that the proposed method significantly improves classification accuracy in confusing areas.
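As a rough illustration of the fusion idea in this abstract (not the authors' code), the sketch below derives per-class confidence weights for two SVMs from their training confusion matrices and uses them to fuse probability outputs. The synthetic data, the use of per-class recall as the confidence degree, and all hyperparameters are assumptions for the demo, not details from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the point-cloud feature space (assumption).
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two base SVMs, as in the abstract; kernels are illustrative choices.
clfs = [SVC(kernel="rbf", probability=True, random_state=0),
        SVC(kernel="linear", probability=True, random_state=0)]

weights = []
for clf in clfs:
    clf.fit(X_tr, y_tr)
    cm = confusion_matrix(y_tr, clf.predict(X_tr))
    # Per-class recall as a confidence degree for this classifier.
    weights.append(np.diag(cm) / cm.sum(axis=1))
weights = np.array(weights)
weights /= weights.sum(axis=0)   # normalize across classifiers per class

# Confidence-weighted fusion of the two probability outputs.
proba = sum(w * clf.predict_proba(X_te) for w, clf in zip(weights, clfs))
fused = proba.argmax(axis=1)
```

In the paper the weighted outputs are combined under possibility theory and only applied inside the delineated confusing areas; the simple weighted sum above is a stand-in for that step.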
Affiliation(s)
- Linna Ji
- School of Information and Communication Engineering, North University of China, Taiyuan 030051, China
2
Velpula VK, Sharma LD. Multi-stage glaucoma classification using pre-trained convolutional neural networks and voting-based classifier fusion. Front Physiol 2023; 14:1175881. PMID: 37383146; PMCID: PMC10293617; DOI: 10.3389/fphys.2023.1175881.
Abstract
Aim: To design an automated system for the early detection of glaucoma from fundus images. Background: Glaucoma is a serious eye disease that can cause vision loss and even permanent blindness. Early detection and prevention are crucial for effective treatment. Traditional diagnostic approaches are time-consuming, manual, and often inaccurate, making automated glaucoma diagnosis necessary. Objective: To propose an automated glaucoma stage classification model using pre-trained deep convolutional neural network (CNN) models and classifier fusion. Methods: The proposed model utilized five pre-trained CNN models: ResNet50, AlexNet, VGG19, DenseNet-201, and Inception-ResNet-v2. The model was tested on four public datasets: ACRIMA, RIM-ONE, Harvard Dataverse (HVD), and Drishti. Classifier fusion merged the decisions of all CNN models using a maximum-voting approach. Results: The proposed model achieved an area under the curve of 1 and an accuracy of 99.57% on the ACRIMA dataset. The HVD dataset yielded an area under the curve of 0.97 and an accuracy of 85.43%. The accuracy rates for Drishti and RIM-ONE were 90.55% and 94.95%, respectively. The experimental results showed that the proposed model outperformed state-of-the-art methods in classifying glaucoma in its early stages. Model predictions were interpreted with both attribution-based methods, such as activations and gradient class activation maps, and perturbation-based methods, such as locally interpretable model-agnostic explanations and occlusion sensitivity, which generate heatmaps highlighting the image regions that drive the model's prediction. Conclusion: The proposed automated glaucoma stage classification model using pre-trained CNN models and classifier fusion is an effective method for the early detection of glaucoma, achieving high accuracy and superior performance compared to existing methods.
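The maximum-voting fusion described above can be sketched in a few lines: each model casts one vote per sample, and the most frequent label wins. This is a generic illustration, not the authors' implementation; the example labels below are invented.

```python
import numpy as np

def majority_vote(predictions):
    """Fuse per-model class predictions of shape (n_models, n_samples)
    by maximum voting: the most frequent label per sample wins."""
    preds = np.asarray(predictions)
    n_classes = preds.max() + 1
    # Count votes for each class in every column (sample).
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, preds)
    return votes.argmax(axis=0)

# e.g. five CNNs voting on four fundus images (0 = normal, 1 = glaucoma)
preds = [[0, 1, 1, 0],
         [0, 1, 0, 0],
         [1, 1, 1, 0],
         [0, 0, 1, 0],
         [0, 1, 1, 1]]
print(majority_vote(preds))  # → [0 1 1 0]
```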
3
Hudson AL, Wattiez N, Navarro-Sune X, Chavez M, Similowski T. Combined head accelerometry and EEG improves the detection of respiratory-related cortical activity during inspiratory loading in healthy participants. Physiol Rep 2022; 10:e15383. PMID: 35818313; PMCID: PMC9273870; DOI: 10.14814/phy2.15383.
Abstract
Mechanical ventilation is a highly utilized life-saving tool, particularly in the current era. The use of EEG in a brain-ventilator interface (BVI) to detect respiratory discomfort (due to sub-optimal ventilator settings) would improve treatment in mechanically ventilated patients. This concept has been realized through an EEG covariance-based classifier that detects respiratory-related cortical activity associated with respiratory discomfort. The aim of this study was to determine whether head movement, measured by an accelerometer, can detect, and/or improve the detection of, respiratory-related cortical activity compared to EEG alone. In 25 healthy participants, EEG and head acceleration were recorded during loaded and quiet breathing in seated and lying postures. Detection of respiratory-related cortical activity using the EEG covariance-based classifier was improved by including data from an accelerometer-based classifier, i.e. classifier 'Fusion'. In addition, 'smoothing' the data over 50 s, rather than using a single 5 s window of EEG/accelerometer signals, improved detection. Waveform averages of EEG and head acceleration showed that the incidence of pre-inspiratory potentials did not differ between loaded and quiet breathing, but head movement was greater during loaded breathing. This study confirms that, compared to event-related analysis requiring >5 min of signal acquisition, an EEG-based classifier is a clinically valuable tool with rapid processing and detection times and good accuracy. Data smoothing introduces a small delay (<1 min) but improves detection results. Since head acceleration improved detection compared to EEG alone, the number of EEG signals required to detect respiratory discomfort with future BVIs could be reduced if head acceleration is included.
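A hypothetical sketch of the two ingredients highlighted in this abstract, late fusion of the EEG and accelerometer classifier scores and smoothing over consecutive 5 s windows. The fusion weights, threshold, and window length here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse(eeg_scores, accel_scores, w_eeg=0.6, w_accel=0.4):
    """Hypothetical late fusion of per-window detection scores from an
    EEG classifier and an accelerometer classifier (weights assumed)."""
    return w_eeg * np.asarray(eeg_scores, float) + \
           w_accel * np.asarray(accel_scores, float)

def smooth_decisions(scores, win=10, thresh=0.5):
    """Smooth per-window scores (one per 5 s window; win=10 spans ~50 s)
    with a moving average, then threshold into binary detections."""
    kernel = np.ones(win) / win
    smoothed = np.convolve(scores, kernel, mode="same")
    return smoothed > thresh

detections = smooth_decisions(fuse([1, 1, 0, 0], [1, 0, 0, 0]), win=2)
```

The moving average is what introduces the small (<1 min) detection delay mentioned in the abstract: a window's decision depends on its neighbours.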
Affiliation(s)
- Anna L Hudson
- College of Medicine and Public Health, Flinders University, Adelaide, Australia; Neuroscience Research Australia and University of New South Wales, Sydney, Australia; Sorbonne Université, INSERM UMRS1158 Neurophysiologie Respiratoire Expérimentale et Clinique, Paris, France
- Nicolas Wattiez
- Sorbonne Université, INSERM UMRS1158 Neurophysiologie Respiratoire Expérimentale et Clinique, Paris, France
- Xavier Navarro-Sune
- Sorbonne Université, INSERM UMR 1127, CNRS UMR 7225, Institut du Cerveau et de la Moelle Épinière, Paris, France; myBrain Technologies, Paris, France
- Mario Chavez
- Sorbonne Université, INSERM UMR 1127, CNRS UMR 7225, Institut du Cerveau et de la Moelle Épinière, Paris, France
- Thomas Similowski
- Sorbonne Université, INSERM UMRS1158 Neurophysiologie Respiratoire Expérimentale et Clinique, Paris, France; AP-HP, Groupe Hospitalier APHP-Sorbonne Université, Hôpital Pitié-Salpêtrière, Département R3S, Paris, France
4
Mishra S, Shaw K, Mishra D, Patil S, Kotecha K, Kumar S, Bajaj S. Improving the Accuracy of Ensemble Machine Learning Classification Models Using a Novel Bit-Fusion Algorithm for Healthcare AI Systems. Front Public Health 2022; 10:858282. PMID: 35602150; PMCID: PMC9114677; DOI: 10.3389/fpubh.2022.858282.
Abstract
Healthcare AI systems rely heavily on classification models for disease detection. However, recent research has observed that single classification models achieve limited accuracy in some cases, whereas fusing the outputs of multiple classifiers into a single classification framework has been instrumental in achieving greater accuracy and automating big-data analysis. This article proposes a bit-fusion ensemble algorithm that minimizes the classification error rate; it has been tested on various datasets. Five diversified base classifiers are used in the implementation: k-nearest neighbor (KNN), support vector machine (SVM), multi-layer perceptron (MLP), decision tree (DT), and naïve Bayes classifier (NB). The bit-fusion algorithm operates on the individual outputs of these classifiers: the output of each base classifier is treated as a soft class vector (CV), which is weighted and transformed into binary bits by comparison with a high-reliability threshold, initialized to δ = 0.9. Binary patterns are extracted, and the model is trained and tested again. The standard fusion approach and the proposed bit-fusion algorithm were compared by average error rate. The error rates of the bit-fusion algorithm were 5.97, 12.6, 4.64, 0, 0, and 27.28 for the Leukemia, Breast cancer, Lung cancer, Hepatitis, Lymphoma, and Embryonal tumors datasets, respectively. The model was also trained and tested on datasets from the UCI, UEA, and UCR repositories, which likewise showed reduced error rates.
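The binarization step at the heart of the bit-fusion idea, turning soft class vectors into reliability bits at the δ = 0.9 threshold, can be sketched as follows. This is a minimal illustration of that one step, not the full algorithm; the sample probabilities are invented.

```python
import numpy as np

DELTA = 0.9  # high-reliability threshold from the abstract

def to_bits(class_vectors, delta=DELTA):
    """Binarize a classifier's soft class vectors: a bit is set only
    when the class probability meets the reliability threshold, so
    low-confidence samples yield an all-zero row."""
    return (np.asarray(class_vectors) >= delta).astype(int)

# Soft outputs of one base classifier for three samples, two classes.
cv = [[0.95, 0.05],   # confident class 0  -> bits [1, 0]
      [0.60, 0.40],   # unreliable        -> bits [0, 0]
      [0.08, 0.92]]   # confident class 1 -> bits [0, 1]
bits = to_bits(cv)
```

In the full method these binary patterns, collected from all five base classifiers, form the input on which the ensemble is retrained.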
Affiliation(s)
- Sashikala Mishra
- Symbiosis Institute of Technology, Symbiosis International University, Pune, India
- Kailash Shaw
- Symbiosis Institute of Technology, Symbiosis International University, Pune, India
- Debahuti Mishra
- Department of Computer Science and Engineering, Siksha O Anusandhan Deemed to be University, Bhubaneshwar, India
- Shruti Patil
- Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, India
- Ketan Kotecha
- Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, India
- Satish Kumar
- Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, India
- Simi Bajaj
- School of Computer Data and Mathematical Sciences, University of Western Sydney, Sydney, NSW, Australia
5
Zhang D, Ding W, Zhang B, Xie C, Li H, Liu C, Han J. Automatic Modulation Classification Based on Deep Learning for Unmanned Aerial Vehicles. Sensors (Basel) 2018; 18:E924. PMID: 29558434; PMCID: PMC5876703; DOI: 10.3390/s18030924.
Abstract
Deep learning has recently attracted much attention due to its excellent performance in processing audio, image, and video data. However, few studies are devoted to the field of automatic modulation classification (AMC). It is one of the most well-known research topics in communication signal recognition and remains challenging for traditional methods due to complex disturbance from other sources. This paper proposes a heterogeneous deep model fusion (HDMF) method to solve the problem in a unified framework. The contributions include the following: (1) a convolutional neural network (CNN) and long short-term memory (LSTM) are combined in two different ways without prior knowledge involved; (2) a large database, including eleven types of single-carrier modulation signals with various noises as well as a fading channel, is collected with various signal-to-noise ratios (SNRs) based on a real geographical environment; and (3) experimental results demonstrate that HDMF is very capable of coping with the AMC problem and achieves much better performance than the independent networks.
Affiliation(s)
- Duona Zhang
- School of Beihang University, Beijing 100083, China.
- Wenrui Ding
- School of Beihang University, Beijing 100083, China.
- Chunyu Xie
- School of Beihang University, Beijing 100083, China.
- Hongguang Li
- School of Beihang University, Beijing 100083, China.
- Chunhui Liu
- School of Beihang University, Beijing 100083, China.
- Jungong Han
- School of Computing & Communications, Lancaster University, Lancaster LA1 4WA, UK.
6
Abstract
A novel technique for automatically selecting the best pairs of features and sampling techniques to predict the stage of prostate cancer is proposed in this study. The problem of class imbalance, which is prominent in most medical datasets, is also addressed. Three feature subsets, obtained using principal component analysis (PCA), a genetic algorithm (GA), and rough sets (RS), were used in the study. The performance of under-sampling, the synthetic minority over-sampling technique (SMOTE), and a combination of the two was also investigated, and the resulting models were compared. To combine the classifier outputs, we used Dempster-Shafer (DS) theory, whereas the actual choice of combined models was made using a GA. We found that the best performance for the overall system resulted from under-sampled data combined with rough-sets-based features modeled as a support vector machine (SVM).
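The Dempster-Shafer combination used here to merge classifier outputs has a compact general form, Dempster's rule: multiply the two mass functions, keep mass on non-empty intersections, and renormalize by the non-conflicting mass. The sketch below is a generic implementation of that rule (not this study's code); the example masses over two hypothetical cancer stages are invented.

```python
from itertools import product

def ds_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose
    focal elements are frozensets of class labels."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass on empty intersections
    k = 1.0 - conflict                   # normalizing constant
    return {s: m / k for s, m in combined.items()}

# Two classifiers' beliefs over hypothetical stages {early, advanced};
# mass on the union A | B expresses each classifier's ignorance.
A, B = frozenset({"early"}), frozenset({"advanced"})
m1 = {A: 0.7, B: 0.2, A | B: 0.1}
m2 = {A: 0.6, B: 0.3, A | B: 0.1}
fused = ds_combine(m1, m2)
```

Note the rule's characteristic behaviour: agreement between the two sources sharpens the fused belief in "early" well beyond either individual mass.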
Affiliation(s)
- Sandeep Chandana
- Department of Electrical and Computer Engineering, University of Calgary, Calgary, Alberta, Canada
7
Warfield SK, Zou KH, Wells WM. Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation. IEEE Trans Med Imaging 2004; 23:903-21. PMID: 15250643; PMCID: PMC1283110; DOI: 10.1109/tmi.2004.828354.
Abstract
Characterizing the performance of image segmentation approaches has been a persistent challenge. Performance analysis is important since segmentation algorithms often have limited accuracy and precision. Interactive drawing of the desired segmentation by human raters has often been the only acceptable approach, and yet suffers from intra-rater and inter-rater variability. Automated algorithms have been sought in order to remove the variability introduced by raters, but such algorithms must be assessed to ensure they are suitable for the task. The performance of raters (human or algorithmic) generating segmentations of medical images has been difficult to quantify because of the difficulty of obtaining or estimating a known true segmentation for clinical data. Although physical and digital phantoms can be constructed for which ground truth is known or readily estimated, such phantoms do not fully reflect clinical images due to the difficulty of constructing phantoms which reproduce the full range of imaging characteristics and normal and pathological anatomical variability observed in clinical data. Comparison to a collection of segmentations by raters is an attractive alternative since it can be carried out directly on the relevant clinical imaging data. However, the most appropriate measure or set of measures with which to compare such segmentations has not been clarified and several measures are used in practice. We present here an expectation-maximization algorithm for simultaneous truth and performance level estimation (STAPLE). The algorithm considers a collection of segmentations and computes a probabilistic estimate of the true segmentation and a measure of the performance level represented by each segmentation. The source of each segmentation in the collection may be an appropriately trained human rater or raters, or may be an automated segmentation algorithm. 
The probabilistic estimate of the true segmentation is formed by estimating an optimal combination of the segmentations, weighting each segmentation depending upon the estimated performance level, and incorporating a prior model for the spatial distribution of structures being segmented as well as spatial homogeneity constraints. STAPLE is straightforward to apply to clinical imaging data, it readily enables assessment of the performance of an automated image segmentation algorithm, and enables direct comparison of human rater and algorithm performance.
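The EM iteration described above can be sketched for the binary case: the E-step estimates the posterior probability that each voxel is truly foreground given current rater performance, and the M-step re-estimates each rater's sensitivity and specificity from that posterior. This minimal sketch omits the spatial prior and homogeneity constraints from the paper and uses a scalar prior; it is an illustration of the idea, not the published algorithm.

```python
import numpy as np

def staple(segs, n_iter=30):
    """Minimal binary STAPLE-style EM sketch. `segs` is an array of
    shape (n_raters, n_voxels) holding 0/1 decisions. Returns the
    posterior W = P(T=1 | D) per voxel and each rater's estimated
    sensitivity p and specificity q. Spatial priors are omitted."""
    D = np.asarray(segs, dtype=float)
    p = np.full(D.shape[0], 0.9)      # initial sensitivities
    q = np.full(D.shape[0], 0.9)      # initial specificities
    prior = D.mean()                  # scalar prior P(T=1) (assumption)
    for _ in range(n_iter):
        # E-step: per-voxel posterior that the true label is foreground.
        a = prior * np.prod(
            np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(
            np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate each rater's performance parameters.
        p = (W * D).sum(axis=1) / (W.sum() + 1e-12)
        q = ((1 - W) * (1 - D)).sum(axis=1) / ((1 - W).sum() + 1e-12)
    return W, p, q
```

With three near-identical raters the posterior W converges to the consensus segmentation, and p and q approach 1 for the raters that agree with it, which is the weighting-by-estimated-performance behaviour the abstract describes.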
Affiliation(s)
- Simon K Warfield
- Harvard Medical School and the Department of Radiology of Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115, USA.