2
Özbeyaz A. EEG-Based classification of branded and unbranded stimuli associating with smartphone products: comparison of several machine learning algorithms. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05779-0]
3
Ragala R, Bharadwaja Kumar G. Recursive Block LU Decomposition based ELM in Apache Spark. Journal of Intelligent & Fuzzy Systems 2020. [DOI: 10.3233/jifs-189141]
Abstract
Due to the massive memory and computational resources required to build complex machine learning models on large datasets, many researchers employ distributed environments for training. Parallel implementations of the Extreme Learning Machine (ELM), in many variants, have been developed on the MapReduce and Spark frameworks in recent years. However, these approaches have severe limitations in terms of input-output (I/O) cost, memory, and related overheads. From the literature, it is known that the complexity of ELM is directly proportional to the cost of computing the Moore-Penrose pseudoinverse of the hidden layer matrix. Most ELM variants developed on the Spark framework employ Singular Value Decomposition (SVD) to compute the Moore-Penrose pseudoinverse, but SVD has severe memory limitations when experimenting with large datasets. In this paper, a method that uses recursive block LU decomposition to compute the Moore-Penrose generalized inverse over a Spark cluster is proposed to reduce the computational complexity. This makes the ELM algorithm scalable and faster to execute. The experimental results show that the proposed method is more efficient than existing algorithms in the literature.
Affiliation(s)
- Ramesh Ragala
- School of Computer Science and Engineering, Vellore Institute of Technology, VIT Chennai, Tamilnadu, India
- G Bharadwaja Kumar
- School of Computer Science and Engineering, Vellore Institute of Technology, VIT Chennai, Tamilnadu, India
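The Spark-based recursive block LU implementation described in the abstract above is not reproduced here; as a minimal single-machine sketch of the ELM training step whose bottleneck the paper targets, the following NumPy code computes the output weights via the Moore-Penrose pseudoinverse (`np.linalg.pinv`, which is SVD-based — the very step the authors replace with recursive block LU). Function names and the toy setup are illustrative, not taken from the paper.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, rng=None):
    """Basic single-hidden-layer ELM: random input weights,
    output weights solved via the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden layer output matrix
    beta = np.linalg.pinv(H) @ T                     # output weights; pinv is SVD-based here
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass through the fixed hidden layer and learned output weights."""
    return np.tanh(X @ W + b) @ beta
```

Regularized ELM variants add a ridge term before inversion; the sketch keeps the plain pseudoinverse to mirror the formulation in the abstract.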
4
A derived least square fast learning network model. Appl Intell 2020. [DOI: 10.1007/s10489-020-01773-6]
5
A Three-Class Classification of Cognitive Workload Based on EEG Spectral Data. Applied Sciences (Basel) 2019. [DOI: 10.3390/app9245340]
Abstract
Evaluation of cognitive workload finds application in many areas, from educational program assessment, through professional driver health examination, to monitoring the mental state of people in jobs of high responsibility, such as pilots or air traffic dispatchers. Estimation of multilevel cognitive workload is usually realized in a subject-dependent way, while the present research focuses on developing a procedure for subject-independent evaluation of cognitive workload level. The aim of the paper is to estimate cognitive workload level with a subject-independent approach, applying classical machine learning methods combined with feature selection techniques. Data acquisition was based on registering the EEG signal of a person performing arithmetical tasks divided into six intervals of advancement. The analysis included preprocessing, feature extraction, and feature selection, while the final step covered multiclass classification performed with several models. The results show high maximal accuracies: ~91% both on the validation dataset and under cross-validation for the kNN model.
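The full pipeline in the abstract above (EEG preprocessing, spectral feature extraction, and feature selection) is not reproduced here; as a sketch of only the final classification stage, the following is a minimal NumPy k-nearest-neighbour classifier of the kind behind the reported ~91% kNN accuracy. The synthetic three-class data stands in for the three workload levels; all names and parameters are illustrative.

```python
import numpy as np

def knn_classify(train_X, train_y, test_X, k=5):
    """Plain k-nearest-neighbour majority vote with Euclidean distance.
    train_y must hold non-negative integer class labels."""
    # pairwise distances, shape (n_test, n_train)
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]   # indices of the k closest training samples
    votes = train_y[nearest]                 # their class labels
    # majority vote per test sample
    return np.array([np.bincount(v).argmax() for v in votes])
```

In practice the distance computation would be delegated to an optimized library, and k chosen by cross-validation; the explicit broadcasting version above just makes the algorithm visible.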