1
Ji J, Zhao J, Lin Q, Tan KC. Competitive Decomposition-Based Multiobjective Architecture Search for the Dendritic Neural Model. IEEE Transactions on Cybernetics 2023; 53:6829-6842. [PMID: 35476557] [DOI: 10.1109/tcyb.2022.3165374] [Citation(s) in RCA: 1]
Abstract
The dendritic neural model (DNM) is computationally faster than other machine-learning techniques because its architecture can be implemented with logic circuits and its calculations can be performed entirely in binary form. To further improve the computational speed, a straightforward approach is to generate a more concise architecture for the DNM. In fact, this architecture search is a large-scale multiobjective optimization problem (LSMOP), in which a large number of parameters need to be set with the aim of optimizing accuracy and structural complexity simultaneously. However, the issues of an irregular Pareto front, objective discontinuity, and population degeneration strongly limit the performance of conventional multiobjective evolutionary algorithms (MOEAs) on this specific problem. Therefore, a novel competitive decomposition-based MOEA is proposed in this study, which decomposes the original problem into several constrained subproblems, with neighboring subproblems sharing overlapping regions in the objective space. The solutions in the overlapping regions participate in environmental selection for the neighboring subproblems and then propagate the selection pressure throughout the entire population. Experimental results demonstrate that the proposed algorithm possesses a more powerful optimization ability than state-of-the-art MOEAs. Furthermore, both the DNM itself and its hardware implementation achieve very competitive classification performance when trained by the proposed algorithm, compared with numerous widely used machine-learning approaches.
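For readers unfamiliar with decomposition-based MOEAs, the core idea can be illustrated with the standard Tchebycheff scalarization, which turns a multiobjective problem into scalar subproblems, one per weight vector. This is a generic sketch, not the competitive constrained decomposition of Ji et al.; the weight vectors and population values are invented for illustration.

```python
# Tchebycheff scalarization: each weight vector defines one scalar
# subproblem over the objective vectors. Generic sketch only; this is
# not the competitive decomposition scheme of the cited paper.

def tchebycheff(objs, weights, ideal):
    # g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|
    return max(w * abs(f - z) for f, w, z in zip(objs, weights, ideal))

def best_for_subproblem(population_objs, weights, ideal):
    # The solution with the smallest scalarized value wins the subproblem.
    return min(range(len(population_objs)),
               key=lambda i: tchebycheff(population_objs[i], weights, ideal))

# Toy population of (error, complexity) objective vectors:
pop = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
ideal = (0.0, 0.0)
print(best_for_subproblem(pop, (0.9, 0.1), ideal))  # weight favors low f1 -> 0
print(best_for_subproblem(pop, (0.1, 0.9), ideal))  # weight favors low f2 -> 2
```

Different weight vectors select different solutions, which is how decomposition spreads selection pressure across the Pareto front.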
2
Gul HH, Egrioglu E, Bas E. Statistical learning algorithms for dendritic neuron model artificial neural network based on sine cosine algorithm. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2023.02.008] [Citation(s) in RCA: 0]
3
Yuan Z, Gao S, Wang Y, Li J, Hou C, Guo L. Prediction of PM2.5 time series by seasonal trend decomposition-based dendritic neuron model. Neural Comput Appl 2023; 35:15397-15413. [PMID: 37273913] [PMCID: PMC10107594] [DOI: 10.1007/s00521-023-08513-0] [Citation(s) in RCA: 0]
Abstract
The rapid industrial development of human society has brought about air pollution, which seriously affects human health. PM2.5 concentration is one of the main factors causing air pollution. To accurately predict PM2.5 concentration, we propose a dendritic neuron model (DNM) trained by an improved state-of-matter heuristic algorithm (DSMS) based on STL-LOESS, namely DS-DNM. Firstly, DS-DNM adopts STL-LOESS for data preprocessing to obtain three characteristic quantities from the original data: seasonal, trend, and residual components. Then, the DNM trained by DSMS predicts the residual values. Finally, the three sets of feature quantities are summed to obtain the predicted values. In the performance test experiments, five real-world PM2.5 concentration datasets are used to test DS-DNM. In addition, four training algorithms and seven prediction models were selected for comparison to verify the rationality of the training algorithm and the accuracy of the prediction model, respectively. The experimental results show that DS-DNM has competitive performance on the PM2.5 concentration prediction problem.
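The decompose-predict-recombine pipeline described in the abstract can be sketched in outline. The code below substitutes a simple moving-average trend and per-season means for the paper's STL-LOESS, and the residual predictor (the trained DNM in the paper) is left out, so the decomposition details here are illustrative assumptions only.

```python
# Additive decompose-predict-recombine, sketched with a moving-average
# trend and seasonal averages instead of the paper's STL-LOESS. The
# residual series would be fed to the trained predictor (the DNM).

def decompose(series, period):
    n = len(series)
    half = period // 2
    # Centered moving average as the trend (windows shrink at the edges).
    trend = [sum(series[max(0, i - half):i + half + 1]) /
             len(series[max(0, i - half):i + half + 1]) for i in range(n)]
    detrended = [x - t for x, t in zip(series, trend)]
    # Average of each seasonal position gives the seasonal component.
    seasonal_means = [sum(detrended[i::period]) / len(detrended[i::period])
                      for i in range(period)]
    seasonal = [seasonal_means[i % period] for i in range(n)]
    residual = [d - s for d, s in zip(detrended, seasonal)]
    return trend, seasonal, residual

def recombine(trend, seasonal, residual_pred):
    # Final forecast = trend + seasonal + model-predicted residual.
    return [t + s + r for t, s, r in zip(trend, seasonal, residual_pred)]

series = [1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0, 1.0, 2.0, 3.0, 4.0]
trend, seasonal, residual = decompose(series, period=4)
forecast = recombine(trend, seasonal, residual)  # exact reconstruction here
```

Feeding the true residual back in reconstructs the series exactly; in practice the residual would be replaced by the predictor's output.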
Affiliation(s)
- Zijing Yuan
- Faculty of Engineering, University of Toyama, Toyama-shi, 930-8555 Japan
- Shangce Gao
- Faculty of Engineering, University of Toyama, Toyama-shi, 930-8555 Japan
- Yirui Wang
- Engineering and Computer Science, Ningbo University, Zhejiang, 315221 China
- Jiayi Li
- Faculty of Engineering, University of Toyama, Toyama-shi, 930-8555 Japan
- Chunzhi Hou
- Faculty of Engineering, University of Toyama, Toyama-shi, 930-8555 Japan
- Lijun Guo
- Engineering and Computer Science, Ningbo University, Zhejiang, 315221 China
4
Gao S, Zhou M, Wang Z, Sugiyama D, Cheng J, Wang J, Todo Y. Fully Complex-Valued Dendritic Neuron Model. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:2105-2118. [PMID: 34487498] [DOI: 10.1109/tnnls.2021.3105901] [Citation(s) in RCA: 3]
Abstract
A single dendritic neuron model (DNM) that owns the nonlinear information processing ability of dendrites has been widely used for classification and prediction. Complex-valued neural networks consisting of multiple-/deep-layer McCulloch-Pitts neurons have achieved great success since neural computing was first utilized for signal processing, yet no complex-valued representations have appeared in single-neuron architectures. In this article, we first extend the DNM from the real-valued domain to a complex-valued one. The performance of the complex-valued DNM (CDNM) is evaluated on a complex XOR problem, a non-minimum-phase equalization problem, and a real-world wind prediction task. A comparative analysis of a set of elementary transcendental functions as activation functions is also implemented, and preparatory experiments are carried out to determine the hyperparameters. The experimental results indicate that the proposed CDNM significantly outperforms the real-valued DNM, a complex-valued multilayer perceptron, and other complex-valued neuron models.
5
Song Z, Tang C, Song S, Tang Y, Li J, Ji J. A complex network-based firefly algorithm for numerical optimization and time series forecasting. Appl Soft Comput 2023. [DOI: 10.1016/j.asoc.2023.110158] [Citation(s) in RCA: 0]
6
Cao J, Zhao D, Tian C, Jin T, Song F. Adopting improved Adam optimizer to train dendritic neuron model for water quality prediction. Mathematical Biosciences and Engineering 2023; 20:9489-9510. [PMID: 37161253] [DOI: 10.3934/mbe.2023417] [Citation(s) in RCA: 2]
Abstract
As a continuous concern all over the world, the problem of water quality may cause diseases and poisoning and even endanger people's lives. Therefore, the prediction of water quality is of great significance for the efficient management of water resources. However, existing prediction algorithms not only require more operation time but also have low accuracy. In recent years, neural networks have been widely used to predict water quality, and the computational power of individual neurons has attracted more and more attention. The main content of this research is the use of a novel dendritic neuron model (DNM) to predict water quality. In the DNM, dendrites combine synapses of different states instead of using simple linear weighting, which gives it a better fitting ability than traditional neural networks. In addition, a recent optimization algorithm called AMSGrad has been introduced to improve the performance of the Adam dendritic neuron model (ADNM). The performance of the ADNM is compared with that of traditional neural networks, and the simulation results show that the ADNM is better than traditional neural networks in mean square error, root mean square error and other indicators. Furthermore, the stability and accuracy of the ADNM are better than those of other conventional models. Using the trained network, policymakers and managers can predict water quality: the real-time water quality level at a monitoring site can be presented so that measures can be taken to avoid diseases caused by water quality problems.
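The AMSGrad update the abstract refers to is a well-documented variant of Adam that keeps a non-decreasing second-moment estimate. A one-parameter sketch follows; the hyperparameter values are illustrative defaults, not the paper's settings, and bias correction is omitted for brevity.

```python
import math

# One-parameter AMSGrad update: like Adam, but v_hat never decreases,
# so the effective step size cannot grow back after large gradients.
# Hyperparameters are illustrative, not the cited paper's settings.

def amsgrad_step(theta, grad, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    m, v, v_hat = state
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad * grad   # second-moment estimate
    v_hat = max(v_hat, v)                 # AMSGrad: non-decreasing v_hat
    theta -= lr * m / (math.sqrt(v_hat) + eps)
    return theta, (m, v, v_hat)

# Minimizing f(x) = x^2 (gradient 2x) drives x toward 0:
x, state = 3.0, (0.0, 0.0, 0.0)
for _ in range(2000):
    x, state = amsgrad_step(x, 2 * x, state)
```

In the paper this kind of update would be applied to every synaptic weight and threshold of the DNM rather than to a single scalar.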
Affiliation(s)
- Jing Cao
- College of Science, Nanjing Forestry University, Nanjing 210037, Jiangsu, China
- Dong Zhao
- Wuxi Guotong Environmental Testing Technology, Co., Ltd, 214191, Jiangsu, China
- Chenlei Tian
- College of Science, Nanjing Forestry University, Nanjing 210037, Jiangsu, China
- Ting Jin
- College of Science, Nanjing Forestry University, Nanjing 210037, Jiangsu, China
- Fei Song
- College of Science, Nanjing Forestry University, Nanjing 210037, Jiangsu, China
7
Kumar S, Singh RK, Chaudhary A. A novel non-linear neuron model based on multiplicative aggregation in quaternionic domain. Complex Intell Syst 2022. [DOI: 10.1007/s40747-022-00911-6] [Citation(s) in RCA: 0]
Abstract
The learning algorithm for a three-layered neural structure with novel non-linear quaternionic-valued multiplicative (QVM) neurons is proposed in this paper. The computing capability of non-linear aggregation in the cell body of biological neurons inspired the development of non-linear neuron models. However, unlike linear neuron models, most non-linear neuron models are built on higher-order aggregation, which is more mathematically complex and difficult to train. As a result, building non-linear neuron models with a simple structure is a difficult and time-consuming endeavor in the neurocomputing field. The QVM neuron model was influenced by non-linear neuron models that combine a simple structure with great computational ability. The suggested neuron's linearity is determined by the weight and bias associated with each quaternionic-valued input, while non-commutative multiplication of all linearly connected quaternionic input-weight terms accommodates the non-linearity. To train three-layered networks with QVM neurons, the standard quaternionic-gradient-based backpropagation (QBP) algorithm is utilized. The computational and generalization capabilities of the QVM neuron are assessed through training and testing in the quaternionic domain utilizing benchmark problems such as 3D and 4D chaotic time-series prediction, 3D geometrical transformations, and 3D face recognition. The training and testing outcomes are compared to conventional and root-power mean (RPM) neurons in the quaternionic domain using training-testing MSEs, network topology (parameters), variance, and AIC as statistical measures. According to these findings, networks with QVM neurons have greater computational and generalization capabilities than networks with conventional and RPM neurons in the quaternionic domain.
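The non-commutative quaternion multiplication that supplies the QVM neuron's non-linearity can be sketched directly. The `qvm_neuron` aggregation below is an illustrative guess at the multiplicative structure (product of linearly weighted quaternionic terms), not the paper's exact formulation.

```python
# Hamilton product of quaternions represented as (w, x, y, z) tuples.
# Quaternion multiplication is non-commutative: i*j = k but j*i = -k.

def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

# A QVM-style aggregation (illustrative): multiply all (weight * input
# + bias) terms together instead of summing them.
def qvm_neuron(inputs, weights, biases):
    out = (1.0, 0.0, 0.0, 0.0)           # multiplicative identity
    for q, w, b in zip(inputs, weights, biases):
        term = tuple(t + bi for t, bi in zip(qmul(w, q), b))
        out = qmul(out, term)
    return out

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j))   # i*j = k  -> (0, 0, 0, 1)
print(qmul(j, i))   # j*i = -k -> (0, 0, 0, -1)
```

Because the factors do not commute, the order in which inputs reach the neuron changes its output, which is part of what distinguishes this model from summation-based neurons.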
8
A new hybrid recurrent artificial neural network for time series forecasting. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07753-w] [Citation(s) in RCA: 0]
9
Winsorized dendritic neuron model artificial neural network and a robust training algorithm with Tukey's biweight loss function based on particle swarm optimization. Granular Computing 2022. [DOI: 10.1007/s41066-022-00345-y] [Citation(s) in RCA: 0]
11
A survey on dendritic neuron model: Mechanisms, algorithms and practical applications. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.153] [Citation(s) in RCA: 0]
12
Artificial Visual System for Orientation Detection. Electronics 2022. [DOI: 10.3390/electronics11040568] [Citation(s) in RCA: 0]
Abstract
The human visual system is one of the most important components of the nervous system, responsible for visual perception. Research on orientation detection, in which neurons of the visual cortex respond only to a line stimulus in a particular orientation, is an important driving force of computer vision and biological vision, but the principle underlying orientation detection remains a mystery. In order to solve this mystery, we propose a completely new mechanism that explains planar orientation detection in a quantitative manner. First, we assume that there are planar orientation-detective neurons which respond only to a particular planar orientation locally, and that these neurons detect local planar orientation information based on nonlinear interactions that take place on the dendrites. We then propose an implementation of these local planar orientation-detective neurons based on their dendritic computations, use them to extract the local planar orientation information, and infer the global planar orientation information from the local information. Furthermore, based on this mechanism, we propose an artificial visual system (AVS) for planar orientation detection and other visual information processing. To prove the effectiveness of the mechanism and the AVS, we conducted a series of experiments on images containing rectangles of various sizes, shapes and positions. Computer simulations show that the mechanism can perform planar orientation detection perfectly, regardless of size, shape and position, in all experiments. Furthermore, we compared the performance of the AVS and a traditional convolutional neural network (CNN) on planar orientation detection and found that the AVS completely outperformed the CNN in terms of identification accuracy, noise resistance, computation and learning cost, hardware implementation and reasonability.
13
Abstract
In recent years, the dendritic neural model has been widely employed in various fields because of its simple structure and inexpensive cost. Traditional numerical optimization is ineffective for the parameter optimization problem of the dendritic neural model: it easily falls into local optima during the optimization process, resulting in poor performance of the model. This paper first proposes an intelligent dendritic neural model, which uses an intelligent optimization algorithm to optimize the model instead of the backpropagation algorithm used in the traditional dendritic neural model. The experiments compare the performance of ten representative intelligent optimization algorithms on six classification datasets. The optimal combination of user-defined parameters for the model is evaluated systematically by using Taguchi's method. The results show that the performance of the intelligent dendritic neural model is significantly better than that of the traditional dendritic neural model. The intelligent dendritic neural model has small classification errors and high accuracy, which provides an effective approach for applying the dendritic neural model to engineering classification problems. In addition, among the ten intelligent optimization algorithms, an evolutionary algorithm called the biogeography-based optimization algorithm shows excellent performance, quickly obtaining high-quality solutions with an excellent convergence speed.
15
Dendritic neuron model trained by information feedback-enhanced differential evolution algorithm for classification. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107536] [Citation(s) in RCA: 8]
16
Ji J, Tang Y, Ma L, Li J, Lin Q, Tang Z, Todo Y. Accuracy Versus Simplification in an Approximate Logic Neural Model. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:5194-5207. [PMID: 33156795] [DOI: 10.1109/tnnls.2020.3027298] [Citation(s) in RCA: 3]
Abstract
An approximate logic neural model (ALNM) is a novel single-neuron model with plastic dendritic morphology. During the training process, the model can eliminate unnecessary synapses and useless branches of dendrites, producing a specific dendritic structure for a particular task. The simplified structure of the ALNM can be substituted by a logic circuit classifier (LCC) without losing any essential information. The LCC merely consists of comparators and logic NOT, AND, and OR gates, so it can be easily implemented in hardware. However, the architecture of the ALNM affects the learning capacity, generalization capability, computing time and approximation of the LCC. Thus, a Pareto-based multiobjective differential evolution (MODE) algorithm is proposed to simultaneously optimize the ALNM's topology and weights. MODE can generate a concise and accurate LCC for every specific task from the ALNM. To verify the effectiveness of MODE, extensive experiments are performed on eight benchmark classification problems. The statistical results demonstrate that MODE is superior to conventional learning methods, such as the backpropagation algorithm and single-objective evolutionary algorithms. In addition, compared against several commonly used classifiers, both the ALNM and the LCC are capable of obtaining promising and competitive classification performance on the benchmark problems. The experimental results also verify that the LCC achieves a faster classification speed than the other classifiers.
17
A seasonal-trend decomposition-based dendritic neuron model for financial time series prediction. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107488] [Citation(s) in RCA: 15]
18
A Simple Dendritic Neural Network Model-Based Approach for Daily PM2.5 Concentration Prediction. Electronics 2021. [DOI: 10.3390/electronics10040373] [Citation(s) in RCA: 6]
Abstract
Air pollution in cities has a massive impact on human health, and an increase in fine particulate matter (PM2.5) concentrations is the main cause of air pollution. Due to the chaotic and intrinsic complexities of PM2.5 concentration time series, it is difficult to utilize traditional approaches to extract useful information from these data. Therefore, a neural model with a dendritic mechanism trained via the states-of-matter search algorithm (SDNN) is employed to conduct daily PM2.5 concentration forecasting. First, the time delay and embedding dimension are calculated via the mutual information-based method and the false nearest neighbours approach, respectively. Then, phase space reconstruction is performed to map the PM2.5 concentration time series into a high-dimensional space based on the obtained time delay and embedding dimension. Finally, the SDNN is employed to forecast the PM2.5 concentration. The effectiveness of this approach is verified through extensive experimental evaluations on six real-world datasets collected in recent years. To the best of our knowledge, this study is the first attempt to utilize a dendritic neural model to perform real-world air quality forecasting. The extensive experimental results demonstrate that the SDNN offers very competitive performance relative to the latest prediction techniques.
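Time-delay embedding, the phase-space reconstruction step described above, can be sketched in a few lines. Here `tau` and `m` are fixed by hand for illustration, whereas the paper derives them from mutual information and false nearest neighbours.

```python
# Phase-space reconstruction by time-delay embedding: each sample
# becomes a vector [x_t, x_{t-tau}, ..., x_{t-(m-1)tau}]. In the cited
# work tau comes from mutual information and m from false nearest
# neighbours; here they are chosen by hand.

def delay_embed(series, tau, m):
    start = (m - 1) * tau          # first index with a full history
    return [[series[t - k * tau] for k in range(m)]
            for t in range(start, len(series))]

x = list(range(10))                # toy series 0..9
print(delay_embed(x, tau=2, m=3))
# first vector is [4, 2, 0]: x_4, x_2, x_0
```

Each embedded vector then serves as one input pattern for the forecasting model, with the next series value as the target.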
19
A Dendritic Neuron Model with Adaptive Synapses Trained by Differential Evolution Algorithm. Computational Intelligence and Neuroscience 2020; 2020:2710561. [PMID: 32405292] [PMCID: PMC7201754] [DOI: 10.1155/2020/2710561] [Citation(s) in RCA: 14]
Abstract
A dendritic neuron model with adaptive synapses (DMAS) trained by the differential evolution (DE) algorithm is proposed. According to the signal transmission order, a DNM can be divided into four parts: the synaptic layer, dendritic layer, membrane layer, and somatic cell layer. It can be converted into a logic circuit that is easily implemented in hardware by removing useless synapses and dendrites after training. This logic circuit can be designed to solve complex nonlinear problems using only four basic logical devices: comparators and AND (conjunction), OR (disjunction), and NOT (negation) gates. To obtain a faster and better solution, we adopt the popular DE algorithm for DMAS training. We chose five classification datasets from the UCI Machine Learning Repository for the experiments. We analyze and discuss the experimental results in terms of correct rate, convergence rate, ROC curve, and cross-validation, and then compare the results with a dendritic neuron model trained by the backpropagation algorithm (BP-DNM) and a neural network trained by the backpropagation algorithm (BPNN). The analysis shows that DE-DMAS performs better in all aspects.
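The four-layer signal flow named in the abstract can be sketched as a forward pass: a per-connection sigmoid in the synaptic layer, multiplication within each dendrite branch, summation at the membrane, and a final sigmoid at the soma. The sigmoid form and all weights below are illustrative assumptions, not a trained model.

```python
import math

# Forward pass of a DNM-style neuron with the four layers the abstract
# names: synaptic (per-connection sigmoid), dendritic (product within a
# branch), membrane (sum over branches), somatic (output sigmoid).
# Weights and thresholds here are illustrative, not trained values.

def sigmoid(z, k=5.0):
    return 1.0 / (1.0 + math.exp(-k * z))

def dnm_forward(x, W, Q, soma_theta=0.5):
    # W[j][i], Q[j][i]: weight and threshold of the synapse from input i
    # to dendrite branch j.
    branches = []
    for wj, qj in zip(W, Q):
        syn = [sigmoid(w * xi - q) for xi, w, q in zip(x, wj, qj)]
        branches.append(math.prod(syn))        # dendritic layer: product
    membrane = sum(branches)                   # membrane layer: summation
    return sigmoid(membrane - soma_theta)      # somatic layer: output

y = dnm_forward([0.2, 0.8],
                W=[[1.0, 1.0], [-1.0, 1.0]],
                Q=[[0.5, 0.5], [0.5, 0.5]])
```

After training, synapses whose sigmoids saturate at constant 0 or 1 can be pruned or replaced by fixed wires, which is what makes the logic-circuit conversion described above possible.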
20
Validation of Large-Scale Classification Problem in Dendritic Neuron Model Using Particle Antagonism Mechanism. Electronics 2020. [DOI: 10.3390/electronics9050792] [Citation(s) in RCA: 8]
Abstract
With the characteristics of a simple structure and low cost, the dendritic neuron model (DNM) is used as a neuron model to solve complex problems, such as nonlinear problems, while achieving high-precision models. Although the DNM obtains higher accuracy and effectiveness than the middle layer of the multilayer perceptron on small-scale classification problems, there are no examples of applying it to large-scale classification problems. To achieve better performance in solving practical problems, this experiment uses a neural network with random weights trained by an approximate Newton-type method for comparison, and three learning algorithms for the DNM, including back-propagation (BP), biogeography-based optimization (BBO), and a competitive swarm optimizer (CSO). Moreover, three classification problems are solved using the above learning algorithms to verify their precision and effectiveness on large-scale classification problems. In consequence, in terms of execution time, DNM + BP is optimal; DNM + CSO is best in terms of both accuracy stability and execution time; and considering the stability of comprehensive performance and the convergence rate, DNM + BBO is a wise choice.
21
Todo Y, Tang Z, Todo H, Ji J, Yamashita K. Neurons with Multiplicative Interactions of Nonlinear Synapses. Int J Neural Syst 2019; 29:1950012. [DOI: 10.1142/s0129065719500126] [Citation(s) in RCA: 27]
Abstract
Neurons are the fundamental units of the brain and nervous system. Developing a good model of human neurons is very important not only to neurobiology but also to computer science and many other fields. The McCulloch-Pitts neuron model is the most widely used neuron model, but it has long been criticized as oversimplified in view of the properties of real neurons and the computations they perform. On the other hand, it has become widely accepted that dendrites play a key role in the overall computation performed by a neuron. However, the modeling of dendritic computations and the assignment of the right synapses to the right dendrite remain open problems in the field. Here, we propose a novel dendritic neural model (DNM) that mimics the essence of the known nonlinear interactions among inputs to the dendrites. In the model, each input is connected to branches through a distance-dependent nonlinear synapse, and each branch performs a simple multiplication on its inputs. The soma then sums the weighted products from all branches and produces the neuron's output signal. We show that the rich nonlinear dendritic response and the powerful nonlinear neural computational capability, as well as many known neurobiological phenomena of neurons and dendrites, may be understood and explained by the DNM. Furthermore, we show that the model is capable of learning and developing an internal structure, such as the location and type of synapses in the dendritic branches, that is appropriate for a particular task, for example, the linearly nonseparable problem, a real-world benchmark problem (Glass classification), and the directional selectivity problem.
Affiliation(s)
- Yuki Todo
- Faculty of Electrical and Computer Engineering, Kanazawa University, Kakuma-Machi, Kanazawa 920-1192, Japan
- Zheng Tang
- Department of Intelligence Information Systems, University of Toyama, 3190, Gofuku, Toyama 930-8555, Japan
- Hiroyoshi Todo
- Department of Pharmaceutical Technology, University of Toyama, 2630, Sugitani, Toyama 930-0194, Japan
- Junkai Ji
- Department of Intelligence Information Systems, University of Toyama, 3190, Gofuku, Toyama 930-8555, Japan
- Kazuya Yamashita
- Information Technology Center, University of Toyama, 3190, Gofuku, Toyama 930-8555, Japan
22
Mr2DNM: A Novel Mutual Information-Based Dendritic Neuron Model. Computational Intelligence and Neuroscience 2019; 2019:7362931. [PMID: 31485216] [PMCID: PMC6702826] [DOI: 10.1155/2019/7362931] [Citation(s) in RCA: 9]
Abstract
By employing a neuron plasticity mechanism, the original dendritic neuron model (DNM) has succeeded in classification tasks with not only encouraging accuracy but also a simple learning rule. However, data collected in the real world contain a lot of redundancy, which makes analyzing data with the DNM complicated and time-consuming. This paper proposes a reliable hybrid model that combines a maximum relevance minimum redundancy (Mr2) feature selection technique with the DNM (namely, Mr2DNM) for practical classification problems. The mutual information-based Mr2 is applied to evaluate and rank the most informative and discriminative features of the given dataset. The obtained optimal feature subset is used to train and test the DNM on five different problems arising from medical, physical, and social scenarios. Experimental results suggest that the proposed Mr2DNM outperforms the DNM and six other classification algorithms in terms of accuracy and computational efficiency.
23
Gao S, Zhou M, Wang Y, Cheng J, Yachi H, Wang J. Dendritic Neuron Model With Effective Learning Algorithms for Classification, Approximation, and Prediction. IEEE Transactions on Neural Networks and Learning Systems 2019; 30:601-614. [PMID: 30004892] [DOI: 10.1109/tnnls.2018.2846646] [Citation(s) in RCA: 133]
Abstract
An artificial neural network (ANN) that mimics the information processing mechanisms and procedures of neurons in human brains has achieved great success in many fields, e.g., classification, prediction, and control. However, traditional ANNs suffer from many problems, such as being hard to understand, slow and difficult to train, and difficult to scale up. These problems motivate us to develop a new dendritic neuron model (DNM) that considers the nonlinearity of synapses, not only for a better understanding of biological neuronal systems, but also for providing a more useful method for solving practical problems. To achieve better performance, six learning algorithms, including biogeography-based optimization, particle swarm optimization, the genetic algorithm, ant colony optimization, evolutionary strategy, and population-based incremental learning, are for the first time used to train it. The best combination of its user-defined parameters has been systematically investigated using Taguchi's experimental design method. Experiments on 14 different problems involving classification, approximation, and prediction are conducted using a multilayer perceptron and the proposed DNM. The results suggest that the proposed learning algorithms are effective and promising for training the DNM and thus make it more powerful in solving classification, approximation, and prediction problems.
25
Zhou T, Gao S, Wang J, Chu C, Todo Y, Tang Z. Financial time series prediction using a dendritic neuron model. Knowl Based Syst 2016. [DOI: 10.1016/j.knosys.2016.05.031] [Citation(s) in RCA: 118]