1. Zheng L, Yu W, Xu Z, Zhang Z, Deng F. Design, Analysis, and Application of a Discrete Error Redefinition Neural Network for Time-Varying Quadratic Programming. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:13646-13657. [PMID: 37224359] [DOI: 10.1109/tnnls.2023.3270381]
Abstract
Time-varying quadratic programming (TV-QP) is widely used in artificial intelligence, robotics, and many other fields. To solve this important problem, a novel discrete error redefinition neural network (D-ERNN) is proposed. By redefining the error monitoring function and discretizing the design, the proposed network outperforms some traditional neural networks in convergence speed, robustness, and overshoot. Compared with the continuous ERNN, the proposed discrete network is better suited to computer implementation. In contrast to work on continuous neural networks, this article also analyzes and proves how to select the parameters and step size of the proposed network to ensure its reliability. Moreover, how the discretization of the ERNN is achieved is presented and discussed. The convergence of the proposed network in the absence of disturbance is proven, and bounded time-varying disturbances can be resisted in theory. Furthermore, comparisons with other related neural networks show that the proposed D-ERNN has a faster convergence speed, better anti-disturbance ability, and lower overshoot.
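The continuous-to-discrete step described in this abstract can be illustrated with a generic zeroing design (a minimal sketch of the general principle only, not the authors' D-ERNN; the scalar problem, gain, and step size below are invented for the demo):

```python
import math

def discrete_zeroing_solve(a, a_dot, b, b_dot, x0, t0, t1, h, gamma=10.0):
    """Euler-discretized zeroing dynamics for the scalar problem a(t)*x = b(t).

    Continuous design: e(t) = a(t)*x(t) - b(t) with de/dt = -gamma*e, i.e.
    a(t)*dx/dt = -a_dot(t)*x + b_dot(t) - gamma*e.  The forward-Euler rule
    x_{k+1} = x_k + h*dx/dt turns this into a computer-implementable update;
    the step size h must be small enough (roughly gamma*h < 1) for stability.
    """
    t, x = t0, x0
    while t < t1:
        e = a(t) * x - b(t)
        x += h * (-a_dot(t) * x + b_dot(t) - gamma * e) / a(t)
        t += h
    return x

# Track x(t) = b(t)/a(t) for a(t) = 2 + sin(t), b(t) = cos(t)
a, a_dot = lambda t: 2.0 + math.sin(t), lambda t: math.cos(t)
b, b_dot = lambda t: math.cos(t), lambda t: -math.sin(t)
x_end = discrete_zeroing_solve(a, a_dot, b, b_dot, 0.0, 0.0, 5.0, h=1e-3)
```

The residual tracking error at the final time is dominated by the O(h) Euler discretization error, which is the trade-off the step-size analysis in the paper addresses.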
2. Xiao L, Li X, Cao P, He Y, Tang W, Li J, Wang Y. A Dynamic-Varying Parameter Enhanced ZNN Model for Solving Time-Varying Complex-Valued Tensor Inversion With Its Application to Image Encryption. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:13681-13690. [PMID: 37224356] [DOI: 10.1109/tnnls.2023.3270563]
Abstract
The time-varying complex-valued tensor inverse (TVCTI) is an open problem worth studying, and existing numerical solutions for it are not effective enough. This work aims to find an accurate solution to the TVCTI using the zeroing neural network (ZNN), an effective tool for solving time-varying problems, which is improved in this article to solve the TVCTI problem for the first time. Based on the ZNN design idea, an error-adaptive dynamic parameter and a new enhanced segmented signum exponential activation function (ESS-EAF) are first designed and applied to the ZNN. A dynamic-varying parameter-enhanced ZNN (DVPEZNN) model is then proposed to solve the TVCTI problem. The convergence and robustness of the DVPEZNN model are theoretically analyzed and discussed. To highlight these properties, the DVPEZNN model is compared with four varying-parameter ZNN models in an illustrative example; the results show that it converges faster and is more robust than the other four models in different situations. In addition, the state solution sequence generated by the DVPEZNN model while solving the TVCTI is combined with a chaotic system and deoxyribonucleic acid (DNA) coding rules to obtain the chaotic-ZNN-DNA (CZD) image encryption algorithm, which encrypts and decrypts images with good performance.
3. Li W, Pan Y. A Dini-Derivative-Aided Zeroing Neural Network for Time-Variant Quadratic Programming Involving Multi-Type Constraints With Robotic Applications. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:12482-12493. [PMID: 37027273] [DOI: 10.1109/tnnls.2023.3263263]
Abstract
Time-variant quadratic programming (QP) with multi-type constraints, including equality, inequality, and bound constraints, is ubiquitous in practice. In the literature, a few zeroing neural networks (ZNNs) are applicable to time-variant QPs with multi-type constraints. These ZNN solvers involve continuous and differentiable elements for handling inequality and/or bound constraints, and each has drawbacks, such as failure to solve certain problems, merely approximate optimal solutions, and a tedious and sometimes difficult parameter-tuning process. Differing from the existing ZNN solvers, this article proposes a novel ZNN solver for time-variant QPs with multi-type constraints based on a continuous but not differentiable projection operator, which is deemed unsuitable for designing ZNN solvers in the community due to the lack of the required time-derivative information. To achieve this aim, the upper right-hand Dini derivative of the projection operator with respect to its input is introduced to serve as a mode switcher, leading to a novel ZNN solver, termed Dini-derivative-aided ZNN (Dini-ZNN). In theory, the convergent optimal solution of the Dini-ZNN solver is rigorously analyzed and proved. Comparative validations verify the effectiveness of the Dini-ZNN solver, which has merits such as guaranteed problem-solving capability, high solution accuracy, and no extra hyperparameter to be tuned. To illustrate potential applications, the Dini-ZNN solver is successfully applied to the kinematic control of a joint-constrained robot, with both simulation and experimentation conducted.
4. Li H, Liao B, Li J, Li S. A Survey on Biomimetic and Intelligent Algorithms with Applications. Biomimetics (Basel) 2024; 9:453. [PMID: 39194432] [DOI: 10.3390/biomimetics9080453]
Abstract
The question "How does it work?" has motivated many scientists. Through the study of natural phenomena and behaviors, many intelligent algorithms have been proposed to solve various optimization problems. This paper aims to offer an informative guide for researchers interested in tackling optimization problems with such algorithms. First, a special neural network, the zeroing neural network (ZNN), is comprehensively discussed; it is especially intended for solving time-varying optimization problems, and its origin, basic principles, operating mechanism, model variants, and applications are covered. This paper presents a new classification method based on the performance index of ZNNs. Then, two classic bio-inspired algorithms, the genetic algorithm and the particle swarm algorithm, are outlined as representatives, including their origin, design process, basic principles, and applications. Finally, to emphasize the applicability of intelligent algorithms, three practical domains are introduced: gene feature extraction, intelligent communication, and image processing.
Affiliation(s)
- Hao Li: College of Computer Science and Engineering, Jishou University, Jishou 416000, China; School of Communication and Electronic Engineering, Jishou University, Jishou 416000, China
- Bolin Liao: College of Computer Science and Engineering, Jishou University, Jishou 416000, China
- Jianfeng Li: College of Computer Science and Engineering, Jishou University, Jishou 416000, China
- Shuai Li: College of Computer Science and Engineering, Jishou University, Jishou 416000, China
5. Zhang Z, Song Y, Zheng L, Luo Y. A Jump-Gain Integral Recurrent Neural Network for Solving Noise-Disturbed Time-Variant Nonlinear Inequality Problems. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:5793-5806. [PMID: 37022813] [DOI: 10.1109/tnnls.2023.3241207]
Abstract
Nonlinear inequalities are widely used in science and engineering, attracting the attention of many researchers. In this article, a novel jump-gain integral recurrent (JGIR) neural network is proposed to solve noise-disturbed time-variant nonlinear inequality problems. To do so, an integral error function is first designed. Then, a neural dynamic method is adopted and the corresponding dynamic differential equation is obtained. Third, a jump gain is exploited and applied to the dynamic differential equation. Fourth, the derivatives of the errors are substituted into the jump-gain dynamic differential equation, and the corresponding JGIR neural network is set up. Global convergence and robustness theorems are stated and proved theoretically. Computer simulations verify that the proposed JGIR neural network can solve noise-disturbed time-variant nonlinear inequality problems effectively. Compared with some advanced methods, such as the modified zeroing neural network (ZNN), the noise-tolerant ZNN, and the varying-parameter convergent-differential neural network, the proposed JGIR method has smaller computational errors, faster convergence, and no overshoot when disturbance exists. In addition, physical experiments on manipulator control have verified the effectiveness and superiority of the proposed JGIR neural network.
6. Xiao L, Cao P, Wang Z, Liu S. A novel fixed-time error-monitoring neural network for solving dynamic quaternion-valued Sylvester equations. Neural Netw 2024; 170:494-505. [PMID: 38039686] [DOI: 10.1016/j.neunet.2023.11.058]
Abstract
This paper addresses the dynamic quaternion-valued Sylvester equation (DQSE) using the quaternion real representation and the neural network method. To transform the Sylvester equation in the quaternion field into an equivalent equation in the real field, three different real representation modes for the quaternion are adopted by considering the non-commutativity of quaternion multiplication. Based on the equivalent Sylvester equation in the real field, a novel recurrent neural network model with an integral design formula is proposed to solve the DQSE. The proposed model, referred to as the fixed-time error-monitoring neural network (FTEMNN), achieves fixed-time convergence through the action of a state-of-the-art nonlinear activation function. The fixed-time convergence of the FTEMNN model is theoretically analyzed. Two examples are presented to verify the performance of the FTEMNN model with a specific focus on fixed-time convergence. Furthermore, the chattering phenomenon of the FTEMNN model is discussed, and a saturation function scheme is designed. Finally, the practical value of the FTEMNN model is demonstrated through its application to image fusion denoising.
Affiliation(s)
- Lin Xiao: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan 410081, China
- Penglin Cao: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan 410081, China
- Zidong Wang: Department of Computer Science, Brunel University London, Uxbridge, Middlesex, UB8 3PH, United Kingdom
- Sai Liu: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan 410081, China
7. Zhang Y, Zhang J, Weng J. Dynamic Moore-Penrose Inversion With Unknown Derivatives: Gradient Neural Network Approach. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:10919-10929. [PMID: 35536807] [DOI: 10.1109/tnnls.2022.3171715]
Abstract
Finding dynamic Moore-Penrose inverses (DMPIs) in real time is a challenging problem due to the time-varying nature of the inverse. Traditional numerical methods for the static Moore-Penrose inverse are not efficient for calculating DMPIs and are restricted by serial processing. The current state-of-the-art method for finding DMPIs, the zeroing neural network (ZNN) method, requires that the time derivative of the associated matrix be available throughout the solution process. In practice, however, this time derivative may not be available in real time, or it may be corrupted by noise introduced by differentiators. In this article, we propose a novel gradient-based neural network (GNN) method for computing DMPIs that does not need the time derivative of the associated dynamic matrix. In particular, the neural state matrix of the proposed GNN converges to the theoretical DMPI in finite time. Finite-time convergence is maintained by simply setting a large design parameter when additive noises are present in the implementation of the GNN model. Simulation results demonstrate the efficacy and superiority of the proposed GNN method.
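The derivative-free idea can be sketched as a gradient flow (an illustrative toy for the square invertible case only; the article treats general Moore-Penrose inverses and finite-time activation, and the matrix and gains here are invented for the demo):

```python
import numpy as np

def gnn_inverse(A_of_t, t0, t1, h, gamma=30.0):
    """Gradient neural network (GNN) sketch that tracks X(t) ~ A(t)^{-1}.

    Follows the negative gradient of the energy (1/2)*||A(t)X - I||_F^2,
    i.e. dX/dt = -gamma * A(t)^T (A(t)X - I).  Unlike a ZNN design, the
    time derivative of A(t) appears nowhere in the update.
    """
    n = A_of_t(t0).shape[0]
    t, X = t0, np.eye(n)              # start from the identity, not the inverse
    while t < t1:
        A = A_of_t(t)
        X += h * (-gamma) * (A.T @ (A @ X - np.eye(n)))
        t += h
    return X

# Illustrative well-conditioned time-varying matrix (invented)
A_of_t = lambda t: np.array([[3.0 + np.sin(t), 0.5],
                             [0.5, 3.0 + np.cos(t)]])
X_end = gnn_inverse(A_of_t, 0.0, 5.0, h=1e-3)
residual = np.linalg.norm(A_of_t(5.0) @ X_end - np.eye(2))  # small lag error
```

With a finite gain the GNN carries a small lagging error proportional to how fast A(t) drifts; the paper's finite-time activation is what removes that lag.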
8. Wang D, Liu XW. A varying-parameter fixed-time gradient-based dynamic network for convex optimization. Neural Netw 2023; 167:798-809. [PMID: 37738715] [DOI: 10.1016/j.neunet.2023.08.047]
Abstract
We focus on the fixed-time convergence and robustness of gradient-based dynamic networks for solving convex optimization. Most existing gradient-based dynamic networks with fixed-time convergence have limited ability to resist noise interference. To improve convergence, we design a new activation function and propose a gradient-based dynamic network with fixed-time convergence. The proposed dynamic network has a smaller upper bound on the convergence time than existing dynamic networks with fixed-time convergence. A time-varying scaling parameter is employed to speed up convergence. Our gradient-based dynamic network is proved to be robust against bounded noises and is able to resist the interference of unbounded noises. Numerical tests illustrate the effectiveness and superiority of the proposed network.
Affiliation(s)
- Dan Wang: School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
- Xin-Wei Liu: Institute of Mathematics, Hebei University of Technology, Tianjin 300401, China
9. Qian Y. Stabilization for a class of delay systems via Z-type control. ISA Transactions 2023; 135:138-149. [PMID: 36182610] [DOI: 10.1016/j.isatra.2022.09.017]
Abstract
This paper investigates the Z-type control method to stabilize species populations and make them converge to desired values in a two-species predator-prey model with time delay in the prey species' growth rate. Direct controllers that assume both species' growth rates can be altered through human intervention and indirect controllers that assume only one species' growth can be changed at will are constructed. The Z-type method of construction is theoretically justified, and the potential for certain non-linear activation functions to speed up the control process is highlighted. A general procedure for discretizing state-space systems using the Euler forward formula is described. The discrete versions of the directly and indirectly controlled systems are provided. All Z-type controllers constructed are validated through numerical simulations.
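The Z-type construction and the Euler forward discretization mentioned above can be illustrated on a toy system (a single-species logistic model with a direct controller, not the paper's delayed two-species model; all rates and targets below are invented):

```python
import math

# Z-type design on e = x - xd: impose de/dt = -lam*e and solve for the
# control u in x' = f(x) + u, giving u = xd' - f(x) - lam*(x - xd).
# The closed loop is then discretized with the Euler forward formula.
r, K, lam, h = 1.0, 10.0, 2.0, 1e-3
xd = lambda t: 4.0 + math.sin(t)       # desired population trajectory
xd_dot = lambda t: math.cos(t)
f = lambda x: r * x * (1.0 - x / K)    # uncontrolled logistic growth rate

t, x = 0.0, 1.0                        # population starts far from xd(0) = 4
while t < 8.0:
    u = xd_dot(t) - f(x) - lam * (x - xd(t))   # Z-type (zeroing) controller
    x += h * (f(x) + u)                        # Euler forward step
    t += h
```

Under this design the tracking error obeys e' = -lam*e in continuous time, so the population converges exponentially to the desired trajectory; a nonlinear activation applied to the -lam*e term is what speeds up the control process in the paper.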
Affiliation(s)
- Ying Qian: Vanderbilt University, Electrical Engineering, 221 Kirkland Hall, Nashville, TN 37235, USA
10. Liao B, Han L, Cao X, Li S, Li J. Double integral-enhanced zeroing neural network with linear noise rejection for time-varying matrix inverse. CAAI Transactions on Intelligence Technology 2023. [DOI: 10.1049/cit2.12161]
Affiliation(s)
- Bolin Liao: College of Computer Science and Engineering, Jishou University, Jishou, China
- Luyang Han: College of Computer Science and Engineering, Jishou University, Jishou, China
- Xinwei Cao: School of Management, Shanghai University, Shanghai, China
- Shuai Li: School of Engineering, Swansea University, Swansea, UK
- Jianfeng Li: College of Computer Science and Engineering, Jishou University, Jishou, China
11. Jin J, Zhao L, Chen L, Chen W. A robust zeroing neural network and its applications to dynamic complex matrix equation solving and robotic manipulator trajectory tracking. Front Neurorobot 2022; 16:1065256. [PMID: 36457416] [PMCID: PMC9705728] [DOI: 10.3389/fnbot.2022.1065256]
Abstract
The dynamic complex matrix equation (DCME) is frequently encountered in mathematics and industry, and numerous recurrent neural network (RNN) models have been reported to effectively find the solution of the DCME in a noise-free environment. However, noises are unavoidable in reality, and dynamic systems are inevitably affected by them, so the invention of anti-noise neural network models becomes increasingly important. By introducing a new activation function (NAF), a robust zeroing neural network (RZNN) model for solving the DCME in a noise-polluted environment is proposed and investigated in this paper. The robustness and convergence of the proposed RZNN model are established by strict mathematical proof and verified by comparative numerical simulation results. Furthermore, the proposed RZNN model is applied to manipulator trajectory-tracking control, where it completes the tracking task successfully, further validating its prospects for practical application.
Affiliation(s)
- Jie Jin: School of Information Engineering, Changsha Medical University, Changsha, China; School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan, China
- Lv Zhao: School of Information Engineering, Changsha Medical University, Changsha, China; School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan, China
- Lei Chen: School of Information Engineering, Changsha Medical University, Changsha, China; School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan, China
- Weijie Chen: School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan, China
12. Sun M, Li X, Zhong G. Semi-global fixed/predefined-time RNN models with comprehensive comparisons for time-variant neural computing. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07820-2]
13. Wang D, Liu XW. A gradient-type noise-tolerant finite-time neural network for convex optimization. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.01.018]
14. Continuous and discrete zeroing neural network for a class of multilayer dynamic system. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.04.056]
15. Double Features Zeroing Neural Network Model for Solving the Pseudoninverse of a Complex-Valued Time-Varying Matrix. Mathematics 2022. [DOI: 10.3390/math10122122]
Abstract
The solution of a complex-valued matrix pseudoinverse is a key step in various science and engineering fields. Owing to its important role, researchers have put forward many related algorithms, and the time-varying matrix pseudoinverse has received more attention than the time-invariant one. The zeroing neural network (ZNN) is an efficient method to calculate the pseudoinverse of a complex-valued time-varying matrix, but the initial ZNN (IZNN) and its extensions lack a mechanism to address convergence and robustness simultaneously; most existing research on ZNN models studies these two features separately. To simultaneously improve both features (i.e., convergence and robustness) of the ZNN in solving a complex-valued time-varying pseudoinverse, this paper puts forward a double-features ZNN (DFZNN) model by adopting a specially designed time-varying parameter and a novel nonlinear activation function. Moreover, two nonlinear activation types for complex numbers are investigated. Global convergence, predefined-time convergence, and robustness are proven in theory, and the upper bound of the predefined convergence time is formulated exactly. The numerical simulation results verify the theoretical proof: in contrast to existing complex-valued ZNN models, the DFZNN model has a shorter predefined convergence time in the zero-noise state and enhanced robustness in different noise states. Both the theoretical and the empirical results show that the DFZNN model is better at solving the time-varying complex-valued matrix pseudoinverse. Finally, the proposed DFZNN model is used to track the trajectory of a manipulator, which further verifies the reliability of the model.
16. Prescribed-Time Convergent Adaptive ZNN for Time-Varying Matrix Inversion under Harmonic Noise. Electronics 2022. [DOI: 10.3390/electronics11101636]
Abstract
Harmonic noises widely exist in industrial fields and always affect the computational accuracy of neural network models. The existing original adaptive zeroing neural network (OAZNN) model can effectively suppress harmonic noises. Nevertheless, the OAZNN model only achieves exponential convergence; that is, its convergence speed is usually greatly affected by the initial state. Consequently, to tackle this issue, this work combines the dynamic characteristics of harmonic signals with a prescribed-time-convergent activation function and proposes a prescribed-time convergent adaptive ZNN (PTCAZNN) for solving the time-varying matrix inverse problem (TVMIP) under harmonic noises. Because the nonlinear activation function used can itself reject noises and the adaptive term can also compensate for the influence of noises, the PTCAZNN model realizes double noise suppression. More importantly, theoretical analysis of the PTCAZNN model's prescribed-time convergence and robustness is provided. Finally, by varying a series of conditions, such as the frequency of single-harmonic noise, the frequency of multi-harmonic noise, and the initial value and dimension of the matrix, comparative simulation results further confirm the effectiveness and superiority of the PTCAZNN model.
17. Xiao L, He Y, Dai J, Liu X, Liao B, Tan H. A Variable-Parameter Noise-Tolerant Zeroing Neural Network for Time-Variant Matrix Inversion With Guaranteed Robustness. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:1535-1545. [PMID: 33361003] [DOI: 10.1109/tnnls.2020.3042761]
Abstract
Matrix inversion frequently occurs in science, engineering, and related fields. Numerous matrix inversion schemes are based on the premise that the solution procedure is ideal and noise-free. However, external interference is generally ubiquitous and unavoidable in practice. An integrated-enhanced zeroing neural network (IEZNN) model has therefore been proposed to handle the time-variant matrix inversion issue under noise interference. However, the IEZNN model can only deal with small time-variant noise; with slightly larger noise interference, it may not converge exactly to the theoretical solution. Therefore, a variable-parameter noise-tolerant zeroing neural network (VPNTZNN) model is proposed to overcome these shortcomings. Moreover, the excellent convergence and robustness of the VPNTZNN model are rigorously analyzed and proven. Finally, compared with the original zeroing neural network (OZNN) model and the IEZNN model for matrix inversion, numerical simulations and a practical application reveal that the proposed VPNTZNN model is the most robust under the same external noise interference.
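For reference, the baseline OZNN dynamics for time-variant inversion can be sketched as follows (an illustrative sketch of the standard zeroing design only; the matrix, gain, and initial offset are invented, and the paper's VPNTZNN adds a variable parameter and integral enhancement on top of this for noise tolerance):

```python
import numpy as np

def oznn_inverse(A, A_dot, t0, t1, h, gamma=20.0):
    """Original ZNN (OZNN) sketch for time-variant matrix inversion.

    Zeroing design on E(t) = A(t)X(t) - I with dE/dt = -gamma*E gives
    A*dX/dt = -A_dot*X - gamma*E; premultiplying by X (which approximates
    A^{-1}) yields the implementable form dX/dt = -X*A_dot*X - gamma*X*E.
    """
    n = A(t0).shape[0]
    t = t0
    X = np.linalg.inv(A(t0)) + 0.5 * np.eye(n)   # deliberately biased start
    while t < t1:
        E = A(t) @ X - np.eye(n)
        X += h * (-X @ A_dot(t) @ X - gamma * X @ E)
        t += h
    return X

# Illustrative time-varying matrix and its analytic derivative (invented)
A = lambda t: np.array([[2.0 + np.cos(t), 0.0], [np.sin(t), 2.0]])
A_dot = lambda t: np.array([[-np.sin(t), 0.0], [np.cos(t), 0.0]])
X_end = oznn_inverse(A, A_dot, 0.0, 5.0, h=1e-3)
err = np.linalg.norm(A(5.0) @ X_end - np.eye(2))  # residual of the inverse
```

Adding a constant disturbance inside the update is exactly the situation where this baseline drifts and the noise-tolerant variants in the paper keep converging.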
18. A fuzzy adaptive zeroing neural network with superior finite-time convergence for solving time-variant linear matrix equations. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108405]
19. Design and Analysis of Anti-Noise Parameter-Variable Zeroing Neural Network for Dynamic Complex Matrix Inversion and Manipulator Trajectory Tracking. Electronics 2022. [DOI: 10.3390/electronics11050824]
Abstract
Dynamic complex matrix inversion (DCMI) problems frequently arise in mathematics and engineering, and various recurrent neural network (RNN) models have been reported to effectively find the solutions of DCMI problems. However, most of the reported works concentrate on solving DCMI problems in an ideal noise-free environment, without considering the noises that are inevitable in reality. To enhance the robustness of the existing models, an anti-noise parameter-variable zeroing neural network (ANPVZNN) is proposed by introducing a novel activation function (NAF). Both mathematical analysis and numerical simulation results demonstrate that the proposed ANPVZNN model possesses fixed-time convergence and robustness for solving DCMI problems. Besides, a successful ANPVZNN-based manipulator trajectory-tracking example further verifies its robustness and effectiveness in practical applications.
20. Zhang Z, Zheng L, Qiu T. A gain-adjustment neural network based time-varying underdetermined linear equation solving method. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.05.096]
21. Liu B, Fu D, Qi Y, Huang H, Jin L. Noise-tolerant gradient-oriented neurodynamic model for solving the Sylvester equation. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107514]
22. Zhang Z, Zheng L, Yang H, Qu X. Design and Analysis of a Novel Integral Recurrent Neural Network for Solving Time-Varying Sylvester Equation. IEEE Transactions on Cybernetics 2021; 51:4312-4326. [PMID: 31545759] [DOI: 10.1109/tcyb.2019.2939350]
Abstract
To solve a general time-varying Sylvester equation, a novel integral recurrent neural network (IRNN) is designed and analyzed. This kind of recurrent neural network is based on an error-integral design equation and does not need training in advance. The IRNN achieves global convergence and strong robustness if odd monotonically increasing activation functions [i.e., the linear, bipolar-sigmoid, power, or sigmoid-power activation functions (SP-AFs)] are applied. Specifically, with linear or bipolar-sigmoid activation functions, the IRNN possesses exponential convergence; with the power activation function, it has the finite-time convergence property. To obtain both faster convergence and finite-time convergence, an SP-AF is designed. Furthermore, using the discretization method, the discrete IRNN model and its convergence analysis are also presented. Practical application to a robot manipulator and computer simulation results using different activation functions and design parameters have verified the effectiveness, stability, and reliability of the proposed IRNN.
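The benefit of an error-integral design equation can be seen in a scalar sketch (a hypothetical toy of the design principle under a constant disturbance d, not the matrix-valued IRNN itself; all gains are invented):

```python
# Scalar sketch of an error-integral design: instead of de/dt = -g1*e,
# impose de/dt = -g1*e - g2*integral(e).  Under a constant additive
# disturbance d, the integral term absorbs d and e still goes to 0,
# whereas the plain zeroing design would settle at the biased value d/g1.
g1, g2, d, h = 10.0, 25.0, 3.0, 1e-4
e, ei, t = 1.0, 0.0, 0.0          # error, its running integral, time
while t < 5.0:
    e += h * (-g1 * e - g2 * ei + d)   # disturbed error dynamics (Euler step)
    ei += h * e                        # accumulate the error integral
    t += h
```

After differentiating once, the closed loop is e'' + g1*e' + g2*e = 0, so e decays to zero while the integral state settles at d/g2, exactly cancelling the disturbance; this is the same mechanism that gives integral-design networks their robustness.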
23. Li H, Shao S, Qin S, Yang Y. Neural networks with finite-time convergence for solving time-varying linear complementarity problem. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.01.015]
24. Kong Y, Jiang Y, Han R, Wu H. A generalized varying-parameter recurrent neural network for super solution of quadratic programming problem. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.01.084]
25. Tan N, Huang M, Yu P, Wang T. Neural-dynamics-enabled Jacobian inversion for model-based kinematic control of multi-section continuum manipulators. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107114]
26. Zhang X, Chen L, Li S, Stanimirović P, Zhang J, Jin L. Design and analysis of recurrent neural network models with non-linear activation functions for solving time-varying quadratic programming problems. CAAI Transactions on Intelligence Technology 2021. [DOI: 10.1049/cit2.12019]
Affiliation(s)
- Xiaoyan Zhang, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Liangming Chen, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Shuai Li, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Jiliang Zhang, Department of Electronic and Electrical Engineering, The University of Sheffield, Sheffield, UK
- Long Jin, School of Information Science and Engineering, Lanzhou University, Lanzhou, China
Collapse
|
27
|
A Vary-Parameter Convergence-Accelerated Recurrent Neural Network for Online Solving Dynamic Matrix Pseudoinverse and its Robot Application. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10440-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
28
|
Xiao L, Dai J, Lu R, Li S, Li J, Wang S. Design and Comprehensive Analysis of a Noise-Tolerant ZNN Model With Limited-Time Convergence for Time-Dependent Nonlinear Minimization. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:5339-5348. [PMID: 32031952 DOI: 10.1109/tnnls.2020.2966294] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Zeroing neural network (ZNN) is a powerful tool for the mathematical and optimization problems that arise broadly in science and engineering. Convergence and robustness are always co-pursued in ZNN research. However, no previous work on the ZNN for time-dependent nonlinear minimization simultaneously achieves limited-time convergence and inherent noise suppression. In this article, to satisfy both requirements, a limited-time robust neural network (LTRNN) is devised and presented to solve time-dependent nonlinear minimization under various external disturbances. Different from previous ZNN models for this problem, which offer either limited-time convergence or noise suppression, the proposed LTRNN model possesses both characteristics simultaneously. Besides, rigorous theoretical analyses are given to prove the superior performance of the LTRNN model when adopted to solve time-dependent nonlinear minimization under external disturbances. Comparative results also substantiate the effectiveness and advantages of the LTRNN via solving a time-dependent nonlinear minimization problem.
Collapse
|
29
|
Hu Z, Li K, Li K, Li J, Xiao L. Zeroing neural network with comprehensive performance and its applications to time-varying Lyapunov equation and perturbed robotic tracking. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.08.037] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
|
30
|
Xiao L, Jia L, Dai J, Tan Z. Design and Application of A Robust Zeroing Neural Network to Kinematical Resolution of Redundant Manipulators Under Various External Disturbances. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.07.040] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
31
|
Prescribed-time convergent and noise-tolerant Z-type neural dynamics for calculating time-dependent quadratic programming. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05356-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
32
|
Zhang Z, Deng X, Kong L, Li S. A Circadian Rhythms Learning Network for Resisting Cognitive Periodic Noises of Time-Varying Dynamic System and Applications to Robots. IEEE Trans Cogn Dev Syst 2020. [DOI: 10.1109/tcds.2019.2948066] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
33
|
Zhang Z, Chen T, Wang M, Zheng L. An Exponential-Type Anti-Noise Varying-Gain Network for Solving Disturbed Time-Varying Inversion Systems. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:3414-3427. [PMID: 31675344 DOI: 10.1109/tnnls.2019.2944485] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
To solve the disturbed time-varying inversion problem, an exponential-type anti-noise varying-gain network (EAVGN) is proposed and analyzed. To this end, a vector-based error function is first defined. Using the varying-gain neural dynamic design method, an EAVGN model is then formulated. Furthermore, the differentiation error and the model-implementation error are incorporated into the model, yielding the perturbed EAVGN model. Comparisons between the EAVGN and the conventional fixed-parameter recurrent neural network (FP-RNN) are conducted to illustrate the advantages of the proposed EAVGN. Mathematical proof demonstrates that the proposed EAVGN has much better anti-noise properties than the FP-RNN. On the one hand, the residual error of the EAVGN can be reduced to zero in any case, whereas that of the FP-RNN is large and does not converge, in particular when the bound of the Frobenius norm of the exact solution or the noise is large. On the other hand, the bound of the residual error of the EAVGN is always smaller than that of the FP-RNN. Simulation results verify that, when different types of noises exist, the proposed EAVGN exhibits better anti-noise properties than state-of-the-art methods. In addition, a practical application is presented to illustrate the implementation process and practical benefits of the EAVGN.
Collapse
|
34
|
Tan Z, Li W, Xiao L, Hu Y. New Varying-Parameter ZNN Models With Finite-Time Convergence and Noise Suppression for Time-Varying Matrix Moore-Penrose Inversion. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:2980-2992. [PMID: 31536017 DOI: 10.1109/tnnls.2019.2934734] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
This article aims to compute the Moore-Penrose inverse of time-varying full-rank matrices in real time in the presence of various noises. For this purpose, two varying-parameter zeroing neural networks (VPZNNs) are proposed. Specifically, VPZNN-R and VPZNN-L models, which are based on a new design formula, are designed to solve the right and left Moore-Penrose inversion problems of time-varying full-rank matrices, respectively. The two VPZNN models are activated by two novel varying-parameter nonlinear activation functions. Detailed theoretical derivations are presented to show the desired finite-time convergence and outstanding robustness of the proposed VPZNN models under various kinds of noises. In addition, existing neural models, such as the original ZNN (OZNN) and the integration-enhanced ZNN (IEZNN), are compared with the VPZNN models. Simulation observations verify the advantages of the VPZNN models over the OZNN and IEZNN models in terms of convergence and robustness. The potential of the VPZNN models for robotic applications is then illustrated by an example of robot path tracking.
Collapse
|
35
|
Jin J. A robust zeroing neural network for solving dynamic nonlinear equations and its application to kinematic control of mobile manipulator. COMPLEX INTELL SYST 2020. [DOI: 10.1007/s40747-020-00178-9] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
Nonlinear phenomena are often encountered in practical systems, and most nonlinear problems in science and engineering can be described by a nonlinear equation (NE); effectively solving NEs has therefore aroused great interest in the academic and industrial communities. In this paper, a robust zeroing neural network (RZNN) activated by a new power versatile activation function (PVAF) is proposed and analyzed for finding solutions of dynamic nonlinear equations (DNEs) within fixed time in noise-polluted environments. Compared with previous ZNN models activated by other commonly used activation functions (AFs), the main improvement of the presented RZNN model is fixed-time convergence even in the presence of noises. In addition, the convergence time of the proposed RZNN model is irrelevant to its initial states and can be computed directly. Both rigorous mathematical analysis and numerical simulation results are provided to verify the effectiveness and robustness of the proposed RZNN model. Moreover, a successful robotic manipulator path-tracking example in a noise-polluted environment further demonstrates the practical application prospects of the proposed RZNN model.
Collapse
|
36
|
Zhang H, Wan L. Zeroing neural network methods for solving the Yang-Baxter-like matrix equation. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.11.101] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
37
|
Li J, Sun Y, Sun Z, Li F, Jin L. Noise-tolerant Z-type neural dynamics for online solving time-varying inverse square root problems: A control-based approach. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.11.035] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
38
|
Xiao L, Li K, Duan M. Computing Time-Varying Quadratic Optimization With Finite-Time Convergence and Noise Tolerance: A Unified Framework for Zeroing Neural Network. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2019; 30:3360-3369. [PMID: 30716052 DOI: 10.1109/tnnls.2019.2891252] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Zeroing neural network (ZNN), as a powerful calculating tool, is extensively applied in various computation and optimization fields. Convergence and noise-tolerance performance are always pursued and investigated in the ZNN field. Up to now, there has been no unified ZNN model that simultaneously achieves finite-time convergence and inherent noise tolerance for computing time-varying quadratic optimization problems, although this superior property is highly demanded in practical applications. In this paper, a new framework for ZNN is designed to fill this gap in a unified manner, computing time-varying quadratic optimization with finite-time convergence in the presence of various additive noises. Specifically, different from previous design formulas possessing either finite-time convergence or noise tolerance, a new design formula with both properties is proposed in a unified framework (and thus called the unified design formula). On the basis of the unified design formula, a unified ZNN (UZNN) is then proposed and investigated for computing time-varying quadratic optimization problems in the presence of various additive noises. In addition, theoretical analyses of the unified design formula and the UZNN model are given to guarantee finite-time convergence and inherent noise tolerance. Computer simulation results verify the superior performance of the UZNN model for computing time-varying quadratic optimization problems, as compared with previously proposed ZNN models.
Collapse
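The core idea behind combining convergence with noise tolerance can be seen in a scalar toy model: adding an integral feedback term to the zeroing dynamics rejects a constant additive noise that a plain proportional design cannot. This is a generic integration-enhanced sketch under assumed gains, not the paper's exact unified design formula.

```python
# Scalar demo of noise-tolerant zeroing dynamics. The plain design
# e_dot = -g1*e + noise settles at noise/g1; the integration-enhanced
# design e_dot = -g1*e - g2*int(e) + noise drives e to zero despite the
# constant noise. Gains g1, g2 and the linear activation are assumptions.

def simulate(integrated, noise=0.5, g1=10.0, g2=25.0, dt=1e-3, T=3.0):
    e, s = 1.0, 0.0                  # error and its running integral
    for _ in range(int(T / dt)):
        s += dt * e                  # accumulate integral of the error
        de = -g1 * e + noise         # proportional zeroing term + noise
        if integrated:
            de -= g2 * s             # integral term rejects constant bias
        e += dt * de                 # Euler step
    return abs(e)

plain = simulate(integrated=False)    # settles near noise/g1 = 0.05
enhanced = simulate(integrated=True)  # integral term drives error to ~0
print(plain, enhanced)
```

With `g1=10`, `g2=25` the enhanced dynamics are critically damped (characteristic roots at -5), so the error decays quickly to zero while the plain design stalls at a nonzero residual.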
|
39
|
Xiao L, Yi Q, Dai J, Li K, Hu Z. Design and analysis of new complex zeroing neural network for a set of dynamic complex linear equations. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.07.044] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
40
|
A Repeatable Motion Scheme for Kinematic Control of Redundant Manipulators. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2019; 2019:5426986. [PMID: 31641347 PMCID: PMC6769351 DOI: 10.1155/2019/5426986] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/29/2019] [Accepted: 07/28/2019] [Indexed: 11/18/2022]
Abstract
To achieve closed-trajectory motion planning of redundant manipulators, each joint angle has to return to its initial position. Most existing repeatable motion schemes solve the kinematic problem by considering only the initially desired position of each joint. In practice, however, it is very difficult to position the joint angles of a robot arm on the expected trajectory before the motion begins. To construct an effective kinematic model, a novel optimal programming index based on a recurrent neural network is designed and analyzed in this paper, and its repetitiveness and timeliness are presented and analyzed. Combined with the kinematic equation constraint of manipulators, a repeatable motion scheme is formulated. In addition, the Lagrange multiplier theorem is introduced to prove that such a repeatable motion scheme can be converted into a time-varying linear equation. A finite-time neural network solver is constructed for the solution of the motion scheme. Simulation results for two different trajectories illustrate the accuracy and timeliness of the proposed motion scheme. Finally, two different repetitive schemes are compared, verifying the time optimality of the proposed kinematic scheme.
Collapse
|
41
|
Miao P, Wu D, Shen Y, Zhang Z. Discrete-time neural network with two classes of bias noises for solving time-variant matrix inversion and application to robot tracking. Neural Comput Appl 2019. [DOI: 10.1007/s00521-018-03986-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
42
|
|
43
|
Jin L, Huang Z, Chen L, Liu M, Li Y, Chou Y, Yi C. Modified single-output Chebyshev-polynomial feedforward neural network aided with subset method for classification of breast cancer. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.03.046] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
|
44
|
Yu F, Liu L, Xiao L, Li K, Cai S. A robust and fixed-time zeroing neural dynamics for computing time-variant nonlinear equation using a novel nonlinear activation function. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.03.053] [Citation(s) in RCA: 100] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
45
|
A new noise-tolerant and predefined-time ZNN model for time-dependent matrix inversion. Neural Netw 2019; 117:124-134. [PMID: 31158644 DOI: 10.1016/j.neunet.2019.05.005] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2018] [Revised: 03/08/2019] [Accepted: 05/08/2019] [Indexed: 11/23/2022]
Abstract
In this work, a new zeroing neural network (ZNN) using a versatile activation function (VAF) is presented for solving time-dependent matrix inversion. Unlike existing ZNN models, the proposed model not only converges to zero within a predefined finite time but also tolerates several kinds of noise when solving the time-dependent matrix inversion, and is thus called the new noise-tolerant ZNN (NNTZNN) model. In addition, the convergence and robustness of this model are mathematically analyzed in detail. Two comparative numerical simulations with different dimensions are used to test the efficiency and superiority of the NNTZNN model over previous ZNN models using other activation functions. In addition, two practical application examples (i.e., a mobile manipulator and a real Kinova JACO2 robot manipulator) are presented to validate the applicability and physical feasibility of the NNTZNN model in a noisy environment. Both simulative and experimental results demonstrate the effectiveness and noise-tolerance ability of the NNTZNN model.
Collapse
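The general ZNN recipe that the model above builds on is simple to state: define a zeroing error E(t) = A(t)X(t) - I, impose decay dynamics on E, and integrate. The sketch below uses a plain linear activation and an Euler step on an illustrative 2x2 matrix; the paper's VAF and predefined-time gains are not reproduced, so this is a baseline sketch only.

```python
import numpy as np

# Baseline ZNN for time-dependent matrix inversion, Euler-discretized.
# The activation is linear with gain gamma (an assumption); the NNTZNN
# paper's versatile activation function is not reproduced here.

def A(t):
    # Example time-varying, nonsingular 2x2 matrix (illustrative choice)
    return np.array([[2 + np.sin(t), 0.5],
                     [0.5, 2 + np.cos(t)]])

def A_dot(t):
    # Analytic time derivative of A(t)
    return np.array([[np.cos(t), 0.0],
                     [0.0, -np.sin(t)]])

def znn_inverse(gamma=50.0, dt=1e-3, T=2.0):
    t = 0.0
    X = np.eye(2)                     # initial guess for A(t)^-1
    while t < T:
        E = A(t) @ X - np.eye(2)      # zeroing error E(t) = A X - I
        # Imposing E_dot = -gamma*E gives A X_dot = -(A_dot X + gamma E);
        # solve the linear system rather than forming A^-1 explicitly.
        Xdot = np.linalg.solve(A(t), -(A_dot(t) @ X + gamma * E))
        X = X + dt * Xdot             # Euler step
        t += dt
    return X, t

X, t = znn_inverse()
print(np.linalg.norm(A(t) @ X - np.eye(2)))  # residual tracking error
```

Swapping the linear activation for a nonlinear one (or a varying gain) is exactly where models like the NNTZNN differ, trading this baseline's exponential decay for finite- or predefined-time convergence.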
|
46
|
Zhang Z, Zheng L, Wang M. An exponential-enhanced-type varying-parameter RNN for solving time-varying matrix inversion. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.01.058] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
47
|
Terminal computing for Sylvester equations solving with application to intelligent control of redundant manipulators. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.01.024] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
48
|
|
49
|
Integration enhanced and noise tolerant ZNN for computing various expressions involving outer inverses. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.10.054] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
50
|
Bounded Z-type neurodynamics with limited-time convergence and noise tolerance for calculating time-dependent Lyapunov equation. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.10.031] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|