1. Xiao L, Zhang Y, Huang W, Jia L, Gao X. A Dynamic Parameter Noise-Tolerant Zeroing Neural Network for Time-Varying Quaternion Matrix Equation With Applications. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:8205-8214. PMID: 37015615. DOI: 10.1109/tnnls.2022.3225309.
Abstract
As a common and significant problem in industrial information processing, the time-varying quaternion matrix equation (TV-QME) is considered in this article and addressed by an improved zeroing neural network (ZNN) method based on the real representation of the quaternion. Building on an improved dynamic parameter (IDP) and an innovative activation function (IAF), a dynamic parameter noise-tolerant ZNN (DPNTZNN) model is put forward for solving the TV-QME. The IDP, which varies with the residual error, and the IAF together substantially enhance the convergence and robustness of the DPNTZNN model. Consequently, the DPNTZNN model achieves fast predefined-time convergence and superior robustness under different noise environments, both of which are analyzed theoretically in detail. The provided simulative experiments verify these advantages for solving the TV-QME, especially in comparison with other ZNN models. Finally, the DPNTZNN model is applied to image restoration, which further illustrates its practicality.
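The abstract states the ZNN design only in words, so the following is a minimal sketch of the classical zeroing-neural-network recipe it builds on, for a real (not quaternion) time-varying matrix equation A(t)X(t) = B(t), with a constant gain and tanh activation standing in for the paper's IDP and IAF; all names and problem data are illustrative:

```python
import numpy as np

def znn_step(A, A_dot, B, B_dot, X, gamma=10.0, dt=1e-3, phi=np.tanh):
    """One Euler step of a classical ZNN for A(t) X(t) = B(t).

    Define the error E(t) = A(t) X(t) - B(t) and impose dE/dt = -gamma*phi(E);
    solving for X_dot gives A X_dot = B_dot - A_dot X - gamma*phi(E).
    """
    E = A @ X - B
    X_dot = np.linalg.solve(A, B_dot - A_dot @ X - gamma * phi(E))
    return X + dt * X_dot

# Illustrative time-varying data (invertible for all t).
def A(t): return np.diag([2.0 + np.sin(t), 2.0 + np.cos(t)])
def A_dot(t): return np.diag([np.cos(t), -np.sin(t)])
B = np.eye(2)
B_dot = np.zeros((2, 2))

X = np.zeros((2, 2))
t, dt = 0.0, 1e-3
for _ in range(5000):
    X = znn_step(A(t), A_dot(t), B, B_dot, X, dt=dt)
    t += dt
residual = np.linalg.norm(A(t) @ X - B)  # small: the design formula zeros E(t)
```

The dynamic-parameter variants replace the constant gamma with a residual-dependent gain, which is what yields the predefined-time bounds claimed above.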
2. Guo L, Shi X, Cao J, Wang Z. Exponential Convergence of Primal-Dual Dynamics Under General Conditions and Its Application to Distributed Optimization. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:5551-5565. PMID: 36178998. DOI: 10.1109/tnnls.2022.3208086.
Abstract
In this article, we establish the local and global exponential convergence of a primal-dual dynamics (PDD) for solving equality-constrained optimization problems without assuming strong convexity or full row rank of the equality constraint matrix. Under the metric subregularity of the Karush-Kuhn-Tucker (KKT) mapping, we prove the local exponential convergence of the dynamics. Moreover, we establish its global exponential convergence in an invariant subspace under a technically designed condition that is weaker than strong convexity. As an application, the obtained theoretical results are used to show the exponential convergence of several existing state-of-the-art primal-dual algorithms for distributed optimization without strong convexity. Finally, we provide experiments to demonstrate the effectiveness of our results.
3. Xiao L, Cao P, Wang Z, Liu S. A novel fixed-time error-monitoring neural network for solving dynamic quaternion-valued Sylvester equations. Neural Netw 2024; 170:494-505. PMID: 38039686. DOI: 10.1016/j.neunet.2023.11.058.
Abstract
This paper addresses the dynamic quaternion-valued Sylvester equation (DQSE) using the quaternion real representation and the neural network method. To transform the Sylvester equation in the quaternion field into an equivalent equation in the real field, three different real representation modes for the quaternion are adopted by considering the non-commutativity of quaternion multiplication. Based on the equivalent Sylvester equation in the real field, a novel recurrent neural network model with an integral design formula is proposed to solve the DQSE. The proposed model, referred to as the fixed-time error-monitoring neural network (FTEMNN), achieves fixed-time convergence through the action of a state-of-the-art nonlinear activation function. The fixed-time convergence of the FTEMNN model is theoretically analyzed. Two examples are presented to verify the performance of the FTEMNN model with a specific focus on fixed-time convergence. Furthermore, the chattering phenomenon of the FTEMNN model is discussed, and a saturation function scheme is designed. Finally, the practical value of the FTEMNN model is demonstrated through its application to image fusion denoising.
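The real-representation step above can be made concrete. A sketch, assuming the standard 4x4 left-multiplication representation of a quaternion (the paper adopts three distinct representation modes, none of which is reproduced here):

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of quaternions stored as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def real_rep(q):
    """4x4 real matrix M(q) with M(p) @ vec(q) = vec(p*q), hence M(pq) = M(p) M(q)."""
    a, b, c, d = q
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]])

p = np.array([1.0, 2.0, -1.0, 0.5])
q = np.array([0.3, -0.7, 1.2, 2.0])
# The map is an algebra homomorphism, which is what lets a quaternion
# matrix equation be rewritten as an equivalent real matrix equation.
homomorphism_error = np.max(np.abs(real_rep(p) @ real_rep(q) - real_rep(hamilton(p, q))))
```

Non-commutativity of the Hamilton product is why left and right representations differ, motivating the multiple modes the paper considers.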
Affiliation(s)
- Lin Xiao: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan 410081, China.
- Penglin Cao: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan 410081, China.
- Zidong Wang: Department of Computer Science, Brunel University London, Uxbridge, Middlesex, UB8 3PH, United Kingdom.
- Sai Liu: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan 410081, China.
4. Xia Z, Liu Y, Kou KI, Wang J. Clifford-Valued Distributed Optimization Based on Recurrent Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:7248-7259. PMID: 35030085. DOI: 10.1109/tnnls.2021.3139865.
Abstract
In this paper, we address Clifford-valued distributed optimization subject to linear equality and inequality constraints. The objective function of the optimization problems is the sum of convex functions defined in the Clifford domain. Based on the generalized Clifford gradient, a system of multiple Clifford-valued recurrent neural networks (RNNs) is proposed for solving the distributed optimization problems, with each Clifford-valued RNN minimizing a local objective function individually while interacting locally with the others. The convergence of the neural system is rigorously proved based on Lyapunov theory. Two illustrative examples are presented to demonstrate the viability of the results.
5. Zhang Z, Yang S, Xu W. Decentralized ADMM with compressed and event-triggered communication. Neural Netw 2023; 165:472-482. PMID: 37336032. DOI: 10.1016/j.neunet.2023.06.001.
Abstract
This paper considers the decentralized optimization problem, where agents in a network cooperate to minimize the sum of their local objective functions through communication and local computation. We propose a decentralized second-order communication-efficient algorithm, the communication-censored and communication-compressed quadratically approximated alternating direction method of multipliers (CC-DQM), which combines event-triggered communication with compressed communication. In CC-DQM, an agent transmits a compressed message only when its current primal variables have changed sufficiently compared with their last transmitted estimate. Moreover, to reduce the computation cost, the Hessian update is also scheduled by the trigger condition. Theoretical analysis shows that the proposed algorithm maintains exact linear convergence, despite the compression error and intermittent communication, provided the local objective functions are strongly convex and smooth. Finally, numerical experiments demonstrate its satisfactory communication efficiency.
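The trigger rule described above (transmit a compressed message only when the primal variable has drifted far from the last transmitted copy) can be sketched as follows; the compressor, threshold, and class names are our own illustrative choices, not CC-DQM's actual components:

```python
import numpy as np

def quantize(x, step=0.1):
    """A simple deterministic compressor (uniform quantization);
    purely illustrative, not the paper's compressor."""
    return step * np.round(x / step)

class TriggeredLink:
    """Event-triggered, compressed transmission of a local primal variable.
    An agent broadcasts only when its state has drifted beyond a threshold
    tau (a hypothetical parameter) from the copy its neighbors last received."""
    def __init__(self, x0, tau=0.5):
        self.last_sent = quantize(x0)
        self.tau = tau
        self.transmissions = 0

    def maybe_send(self, x):
        if np.linalg.norm(x - self.last_sent) > self.tau:  # trigger condition
            self.last_sent = quantize(x)                   # compress, then send
            self.transmissions += 1
        return self.last_sent  # neighbors always use the latest received copy

# A toy trajectory contracting toward 0: communication stops once the
# iterates settle inside the trigger threshold.
link = TriggeredLink(np.array([4.0, -4.0]))
x = np.array([4.0, -4.0])
for _ in range(100):
    x = 0.9 * x                 # stand-in for a local ADMM update
    link.maybe_send(x)
```

Once the iterates converge, the trigger stops firing, which is the mechanism behind the communication savings the paper reports.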
Affiliation(s)
- Zhen Zhang: School of Computer Science and Engineering, Southeast University, 211189, Nanjing, PR China.
- Shaofu Yang: School of Computer Science and Engineering, Southeast University, 211189, Nanjing, PR China.
- Wenying Xu: School of Mathematics, Southeast University, 211189, Nanjing, PR China.
6. Qin S, Zhang X, Xu H, Xu Y. Fast Quaternion Product Units for Learning Disentangled Representations in SO(3). IEEE Transactions on Pattern Analysis and Machine Intelligence 2023; 45:4504-4520. PMID: 36037459. DOI: 10.1109/tpami.2022.3202217.
Abstract
Real-world 3D structured data like point clouds and skeletons can often be represented as data in the 3D rotation group SO(3). However, most existing neural networks are tailored for data in Euclidean space, which leaves 3D rotation data not closed under their algebraic operations and leads to sub-optimal performance in 3D-related learning tasks. To resolve this mismatch between data and model, we propose a novel non-real neuron model called the quaternion product unit (QPU) to represent data on 3D rotation groups. The proposed QPU leverages quaternion algebra and the law of the 3D rotation group, representing 3D rotation data as quaternions and merging them via a weighted chain of Hamilton products. We demonstrate that the QPU mathematically maintains the SO(3) structure of the 3D rotation data during inference and disentangles the 3D representations into "rotation-invariant" and "rotation-equivariant" features, respectively. Moreover, we design a fast QPU to accelerate the computation. The fast QPU applies a tree-structured data indexing process and thereby leverages the power of parallel computing, reducing the single-thread computational complexity of the QPU from O(N) to O(log N). Taking the fast QPU as a basic module, we develop a series of quaternion neural networks (QNNs), including the quaternion multi-layer perceptron (QMLP), quaternion message passing (QMP), and so on. In addition, we make the QNNs compatible with conventional real-valued neural networks and applicable to both skeletons and point clouds. Experiments on synthetic and real-world 3D tasks show that the QNNs based on our fast QPUs are superior to state-of-the-art real-valued models, especially in scenarios requiring robustness to random rotations. The code of this work is available at https://github.com/SuferQin/Fast-QPU.
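One plausible reading of the "weighted chain of Hamilton products" is sketched below: each unit quaternion is raised to a real-valued weight (scaling its rotation angle) and the results are chained by Hamilton products, so the output remains a unit quaternion. The function names and weighting scheme are illustrative assumptions, not the paper's exact QPU:

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of quaternions (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def qpow(q, w):
    """Power of a unit quaternion: scale its rotation angle by w."""
    theta = np.arccos(np.clip(q[0], -1.0, 1.0))
    n = np.linalg.norm(q[1:])
    if n < 1e-12:                       # identity rotation
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = q[1:] / n
    return np.concatenate([[np.cos(w * theta)], np.sin(w * theta) * axis])

def qpu(quats, weights):
    """Weighted chain of Hamilton products; the result is again a unit
    quaternion, so the SO(3) structure is preserved by construction."""
    out = np.array([1.0, 0.0, 0.0, 0.0])
    for q, w in zip(quats, weights):
        out = hamilton(out, qpow(q, w))
    return out

# Two 90-degree z-rotations, each weighted 0.5, compose to one 90-degree z-rotation.
qz90 = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
out = qpu([qz90, qz90], [0.5, 0.5])
```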
7. Wei R, Cao J, Alsaadi FE. Fixed/Prescribed-Time Bipartite Synchronization of Coupled Quaternion-Valued Neural Networks with Competitive Interactions. Neural Process Lett 2023. DOI: 10.1007/s11063-023-11225-0.
8. Xia Z, Liu Y, Qiu J, Ruan Q, Cao J. An RNN-Based Algorithm for Decentralized-Partial-Consensus Constrained Optimization. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:534-542. PMID: 34464262. DOI: 10.1109/tnnls.2021.3098668.
Abstract
This technical note proposes a decentralized-partial-consensus optimization (DPCO) problem with inequality constraints. A partial-consensus matrix originating from the Laplacian matrix is constructed to tackle the partial-consensus constraints. A continuous-time algorithm based on multiple interconnected recurrent neural networks (RNNs) is derived to solve the optimization problem. In addition, based on nonsmooth analysis and Lyapunov theory, the convergence of the continuous-time algorithm is proved. Finally, several examples demonstrate the effectiveness of the main results.
9. Cariow A, Cariowa G. Fast Algorithms for Deep Octonion Networks. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:543-548. PMID: 34739385. DOI: 10.1109/tnnls.2021.3124131.
Abstract
This brief studies how to reduce the arithmetic complexity of the basic operations in octonion-valued neural networks and proposes new algorithmic solutions for performing these operations efficiently: the multiplication of two octonions, the dot product of two octonion-valued vectors, and the multiplication of one octonion by several others. To reduce the computational complexity of these operations, the fast Walsh-Hadamard transform, well known in digital signal processing, is employed. Using this transform reduces the number of real multiplications and additions required, so the proposed algorithms speed up computations in octonion-valued neural networks.
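For reference, the plain octonion product (the operation whose cost the fast algorithms reduce) can be written via the Cayley-Dickson construction over quaternion pairs. This sketch implements only the naive product, not the Walsh-Hadamard-based fast algorithms; the composition-algebra identity |xy| = |x||y| serves as a correctness check:

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of quaternions (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def qconj(q):
    """Quaternion conjugate."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def oct_mul(x, y):
    """Naive octonion product via Cayley-Dickson pairs of quaternions:
    (a, b)(c, d) = (ac - d*b, da + bc*), where * is quaternion conjugation."""
    a, b = x[:4], x[4:]
    c, d = y[:4], y[4:]
    return np.concatenate([hamilton(a, c) - hamilton(qconj(d), b),
                           hamilton(d, a) + hamilton(b, qconj(c))])

rng = np.random.default_rng(0)
x, y = rng.normal(size=8), rng.normal(size=8)
# Octonions form a composition algebra, so |xy| = |x||y| holds exactly.
norm_gap = abs(np.linalg.norm(oct_mul(x, y)) - np.linalg.norm(x) * np.linalg.norm(y))
```

Written out in real arithmetic, this product costs 64 real multiplications, which is the baseline the transform-based algorithms improve on.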
10. Multistability of Quaternion-Valued Recurrent Neural Networks with Discontinuous Nonmonotonic Piecewise Nonlinear Activation Functions. Neural Process Lett 2022. DOI: 10.1007/s11063-022-11116-w.
11. Tao J, Xiao Z, Li Z, Wu J, Lu R, Shi P, Wang X. Dynamic Event-Triggered State Estimation for Markov Jump Neural Networks With Partially Unknown Probabilities. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:7438-7447. PMID: 34111013. DOI: 10.1109/tnnls.2021.3085001.
Abstract
This article investigates finite-time dissipative state estimation for Markov jump neural networks. First, in view of the practical phenomenon that the state estimator cannot capture the system modes synchronously, a hidden Markov model with partly unknown probabilities is introduced to describe this asynchronization constraint. Given the limits on network bandwidth and computing resources, a novel dynamic event-triggered transmission mechanism, whose threshold parameter is constructed as an adjustable diagonal matrix, is placed between the estimator and the original system to avoid data collision and save energy. Then, with the assistance of Lyapunov techniques, an event-based asynchronous state estimator is designed to ensure that the resulting system is finite-time bounded with a prescribed dissipation performance index. Ultimately, the effectiveness of the proposed estimator design approach, combined with the dynamic event-triggered transmission mechanism, is demonstrated by a numerical example.
12. Peng T, Qiu J, Lu J, Tu Z, Cao J. Finite-Time and Fixed-Time Synchronization of Quaternion-Valued Neural Networks With/Without Mixed Delays: An Improved One-Norm Method. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:7475-7487. PMID: 34115597. DOI: 10.1109/tnnls.2021.3085253.
Abstract
In this article, the finite-time synchronization (FTSYN) of a class of quaternion-valued neural networks (QVNNs) with discrete and distributed time delays is studied, together with the FTSYN and fixed-time synchronization (FIXSYN) of QVNNs without time delay. Different from existing results based on decomposition techniques, an improved one-norm is introduced and a direct analytical method is used to study the synchronization problems. Several properties of the one-norm of a quaternion are analyzed, and three effective controllers are then proposed to synchronize the drive and response QVNNs within a finite or fixed time. Moreover, criteria are proposed to guarantee that the synchronization of QVNNs with or without mixed time delays can be realized within a finite and fixed time interval, respectively, and the settling times are estimated. Compared with existing work, the advantages mainly lie in a simpler Lyapunov analysis and a more general activation function. Finally, the validity and practicability of the conclusions are illustrated via four numerical examples.
13. Xia Z, Liu Y, Lu J, Qiu J, Cao J. A Distributed Optimization Problem Subject to Partial-Impact Cost Functions. IEEE Transactions on Cybernetics 2022; 52:12612-12617. PMID: 34236974. DOI: 10.1109/tcyb.2021.3086183.
Abstract
This article focuses on a distributed optimization problem subject to partial-impact cost functions involving two decision variable vectors. To this end, two algorithms are presented, which solve the considered optimization problem in a structured fashion and in a gradient fashion, respectively. Furthermore, a connection between the equilibrium of the induced algorithm and the involved optimization problem is established with the aid of tools from nonsmooth analysis and the change-of-coordinates theorem. Two numerical examples with practical significance are given to demonstrate the efficiency of the designed algorithms.
14. Fixed-time passivity of coupled quaternion-valued neural networks with multiple delayed couplings. Soft Comput 2022. DOI: 10.1007/s00500-022-07500-2.
15. Fixed/Preassigned-time synchronization of high-dimension-valued fuzzy neural networks with time-varying delays via nonseparation approach. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.109774.
16. Zhang S, Xia Y, Xia Y, Wang J. Matrix-Form Neural Networks for Complex-Variable Basis Pursuit Problem With Application to Sparse Signal Reconstruction. IEEE Transactions on Cybernetics 2022; 52:7049-7059. PMID: 33471773. DOI: 10.1109/tcyb.2020.3042519.
Abstract
In this article, a continuous-time complex-valued projection neural network (CCPNN) in a matrix state space is first proposed for a general complex-variable basis pursuit problem. The proposed CCPNN is proved to be stable in the sense of Lyapunov and to be globally convergent to the optimal solution under the condition that the sensing matrix is not of full row rank. Furthermore, an improved discrete-time complex projection neural network (IDCPNN) is proposed by discretizing the CCPNN model. The IDCPNN incorporates a two-step stopping strategy to reduce the computational cost and is theoretically guaranteed to be globally convergent to the optimal solution. Finally, the IDCPNN is applied to the reconstruction of sparse signals based on compressed sensing. Computed results show that the proposed IDCPNN is superior to related complex-valued neural networks and conventional basis pursuit algorithms in terms of solution quality and computation time.
17. Leung MF, Wang J, Che H. Cardinality-constrained portfolio selection based on two-timescale duplex neurodynamic optimization. Neural Netw 2022; 153:399-410. DOI: 10.1016/j.neunet.2022.06.023.
18. Qi Y, Jin L, Luo X, Zhou M. Recurrent Neural Dynamics Models for Perturbed Nonstationary Quadratic Programs: A Control-Theoretical Perspective. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:1216-1227. PMID: 33449881. DOI: 10.1109/tnnls.2020.3041364.
Abstract
Recent decades have witnessed a trend in which control-theoretical techniques are leveraged in various areas, e.g., the design and analysis of computational models. A computational method can be modeled as a controller, and searching for the equilibrium point of a dynamical system is identical to solving an algebraic equation. Thus, absorbing mature technologies from control theory and integrating them with neural dynamics models can lead to new achievements. This work makes progress along this direction by applying control-theoretical techniques to construct new recurrent neural dynamics for a perturbed nonstationary quadratic program (QP) with time-varying parameters. Specifically, to overcome the limitations of existing continuous-time models in handling nonstationary problems, a discrete recurrent neural dynamics model is proposed to deal robustly with noise. This work shows how iterative computational methods for solving nonstationary QPs can be revisited, designed, and analyzed in a control framework. A modified Newton iteration model and an improved gradient-based neural dynamics are established by borrowing the structural technology of the presented recurrent neural dynamics, whose chief advantage is excellent convergence and robustness compared with traditional models. Numerical experiments demonstrate the merits of the proposed models in solving perturbed nonstationary QPs.
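The control-style view of an iterative solver can be sketched with a Newton-type discrete iteration tracking a toy nonstationary QP: at each sampling instant the step drives the time-varying gradient to zero while the problem data drift underneath it. The problem data and step scheme below are illustrative, not the paper's models:

```python
import numpy as np

# Time-varying QP data: minimize 0.5 x^T Q(t) x + c(t)^T x  (Q(t) stays
# positive definite, so the unique minimizer solves Q(t) x + c(t) = 0).
def Q(t): return np.array([[3.0 + np.sin(t), 0.5],
                           [0.5, 3.0 + np.cos(t)]])
def c(t): return np.array([np.cos(t), np.sin(2.0 * t)])

x = np.zeros(2)
t, dt = 0.0, 1e-2
for _ in range(2000):
    grad = Q(t) @ x + c(t)
    x = x - np.linalg.solve(Q(t), grad)  # Newton-type step: drives grad to zero
    t += dt                              # ...while the problem itself drifts
tracking_error = np.linalg.norm(Q(t) @ x + c(t))  # O(dt) lag behind the optimum
```

The residual plays the role of a feedback error signal; the Newton step is the controller that regulates it to zero at each sampling instant.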
19. Wei W, Yu J, Wang L, Hu C, Jiang H. Fixed/Preassigned-time synchronization of quaternion-valued neural networks via pure power-law control. Neural Netw 2021; 146:341-349. PMID: 34929417. DOI: 10.1016/j.neunet.2021.11.023.
Abstract
The fixed-time synchronization and preassigned-time synchronization of quaternion-valued neural networks are investigated in this article. By developing a fixed-time stability result and proposing a pure power-law control scheme, simple conditions are obtained to realize fixed-time synchronization of quaternion-valued neural networks, and an upper bound on the synchronization time is provided. Furthermore, preassigned-time synchronization is investigated based on the pure power-law control design, where the synchronization time is preassigned in advance and the control gains are finite. Note that the controllers designed in this paper take pure power-law forms, which are simpler and more effective than the traditional design composed of a linear part and a power-law part. Eventually, an example is given to illustrate the feasibility and validity of the obtained results.
Affiliation(s)
- Wanlu Wei: College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046, China.
- Juan Yu: College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046, China.
- Leimin Wang: School of Automation, China University of Geosciences, Wuhan 430074, China.
- Cheng Hu: College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046, China.
- Haijun Jiang: College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046, China.
20. Xia Z, Liu Y, Lu J, Cao J, Rutkowski L. Penalty Method for Constrained Distributed Quaternion-Variable Optimization. IEEE Transactions on Cybernetics 2021; 51:5631-5636. PMID: 33206622. DOI: 10.1109/tcyb.2020.3031687.
Abstract
This article studies constrained optimization problems in the quaternion regime in a distributed fashion. We begin by presenting some differences in the generalized gradient between the real and quaternion domains. Then, an algorithm for the considered optimization problem is given, by which the desired optimization problem is transformed into an unconstrained setup. Using tools from the Lyapunov-based technique and nonsmooth analysis, the convergence of the devised algorithm is guaranteed. In addition, the designed algorithm has the potential for solving distributed neurodynamic optimization problems as a recurrent neural network. Finally, a numerical example involving machine learning is given to illustrate the efficiency of the obtained results.
21. Singh S, Kumar U, Das S, Alsaadi F, Cao J. Synchronization of Quaternion Valued Neural Networks with Mixed Time Delays Using Lyapunov Function Method. Neural Process Lett 2021. DOI: 10.1007/s11063-021-10657-w.
22. Stability analysis for delayed neural networks via an improved negative-definiteness lemma. Inf Sci (N Y) 2021. DOI: 10.1016/j.ins.2021.08.055.
23. Khan AT, Cao X, Li Z, Li S. Enhanced Beetle Antennae Search with Zeroing Neural Network for online solution of constrained optimization. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.03.027.
24. Ren J, Song Q, Gao Y, Zhao M, Lu G. Leader-following consensus of delayed neural networks under multi-layer signed graphs. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.03.009.
25. Li L, Sun Y, Wang M, Huang W. Synchronization of Coupled Memristor Neural Networks with Time Delay: Positive Effects of Stochastic Delayed Impulses. Neural Process Lett 2021. DOI: 10.1007/s11063-021-10600-z.
26. Bao Y, Zhang Y, Zhang B, Guo Y. Prescribed-Time Synchronization of Coupled Memristive Neural Networks with Heterogeneous Impulsive Effects. Neural Process Lett 2021. DOI: 10.1007/s11063-021-10469-y.
27. Huang C, Liu H, Shi X, Chen X, Xiao M, Wang Z, Cao J. Bifurcations in a fractional-order neural network with multiple leakage delays. Neural Netw 2020; 131:115-126. DOI: 10.1016/j.neunet.2020.07.015.
28. Song Q, Long L, Zhao Z, Liu Y, Alsaadi FE. Stability criteria of quaternion-valued neutral-type delayed neural networks. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.06.086.
29. Mu G, Li L, Li X. Quasi-bipartite synchronization of signed delayed neural networks under impulsive effects. Neural Netw 2020; 129:31-42. DOI: 10.1016/j.neunet.2020.05.012.
30. Ouyang D, Shao J, Jiang H, Nguang SK, Shen HT. Impulsive synchronization of coupled delayed neural networks with actuator saturation and its application to image encryption. Neural Netw 2020; 128:158-171. DOI: 10.1016/j.neunet.2020.05.016.
31. State Estimation of Quaternion-Valued Neural Networks with Leakage Time Delay and Mixed Two Additive Time-Varying Delays. Neural Process Lett 2020. DOI: 10.1007/s11063-019-10178-7.
32. Abdelaziz M, Chérif F. Exponential Lag Synchronization and Global Dissipativity for Delayed Fuzzy Cohen–Grossberg Neural Networks with Discontinuous Activations. Neural Process Lett 2020. DOI: 10.1007/s11063-019-10169-8.
33. Finite-Time Mittag-Leffler Stability of Fractional-Order Quaternion-Valued Memristive Neural Networks with Impulses. Neural Process Lett 2019. DOI: 10.1007/s11063-019-10154-1.
34. Li Y, Meng X. Almost Automorphic Solutions in Distribution Sense of Quaternion-Valued Stochastic Recurrent Neural Networks with Mixed Time-Varying Delays. Neural Process Lett 2019. DOI: 10.1007/s11063-019-10151-4.