1. Lee Y. Three-Dimensional Dense Reconstruction: A Review of Algorithms and Datasets. Sensors (Basel) 2024; 24:5861. [PMID: 39338606] [PMCID: PMC11435907] [DOI: 10.3390/s24185861]
Abstract
Three-dimensional dense reconstruction involves extracting the full shape and texture details of three-dimensional objects from two-dimensional images. Although 3D reconstruction is a crucial and well-researched area, it remains an unsolved challenge in dynamic or complex environments. This work provides a comprehensive overview of classical 3D dense reconstruction techniques, including those based on geometric and optical models, as well as approaches leveraging deep learning. It also discusses the datasets used for deep learning and evaluates the performance and the strengths and limitations of deep learning methods on these datasets.
Affiliation(s)
- Yangming Lee: RoCAL Lab, Rochester Institute of Technology, Rochester, NY 14623, USA
2. Yan J, Jin L, Luo X, Li S. Modified RNN for Solving Comprehensive Sylvester Equation With TDOA Application. IEEE Trans Neural Netw Learn Syst 2024; 35:12553-12563. [PMID: 37037242] [DOI: 10.1109/tnnls.2023.3263565]
Abstract
The augmented Sylvester equation, as a comprehensive equation, is of great significance and its special cases (e.g., Lyapunov equation, Sylvester equation, Stein equation) are frequently encountered in various fields. It is worth pointing out that the current research on simultaneously eliminating the lagging error and handling noises in the nonstationary complex-valued field is rather rare. Therefore, this article focuses on solving a nonstationary complex-valued augmented Sylvester equation (NCASE) in real time and proposes two modified recurrent neural network (RNN) models. The first proposed modified RNN model possesses gradient search and velocity compensation, termed as RNN-GV model. The superiority of the proposed RNN-GV model to traditional algorithms including the complex-valued gradient-based RNN (GRNN) model lies in completely eliminating the lagging error when employed in the nonstationary problem. The second model named complex-valued integration enhanced RNN-GV with the nonlinear acceleration (IERNN-GVN) model is proposed to adapt to a noisy environment and accelerate the convergence process. Besides, the convergence and robustness of these two proposed models are proved via theoretical analysis. Simulative results on an illustrative example and an application to the moving source localization coincide with the theoretical analysis and illustrate the excellent performance of the proposed models.
3. Li H, Liao B, Li J, Li S. A Survey on Biomimetic and Intelligent Algorithms with Applications. Biomimetics (Basel) 2024; 9:453. [PMID: 39194432] [DOI: 10.3390/biomimetics9080453]
Abstract
The question "How does it work?" has motivated many scientists. Through the study of natural phenomena and behaviors, many intelligence algorithms have been proposed to solve various optimization problems. This paper aims to offer an informative guide for researchers interested in tackling optimization problems with intelligence algorithms. First, a special neural network, the zeroing neural network (ZNN), is comprehensively discussed; intended especially for solving time-varying optimization problems, it is covered in terms of its origin, basic principles, operation mechanism, model variants, and applications, and a new classification method based on the performance index of ZNNs is presented. Then, two classic bio-inspired algorithms, the genetic algorithm and the particle swarm optimization algorithm, are outlined as representatives, including their origin, design process, basic principles, and applications. Finally, to emphasize the applicability of intelligence algorithms, three practical domains are introduced: gene feature extraction, intelligent communication, and image processing.
Affiliation(s)
- Hao Li: College of Computer Science and Engineering, Jishou University, Jishou 416000, China; School of Communication and Electronic Engineering, Jishou University, Jishou 416000, China
- Bolin Liao: College of Computer Science and Engineering, Jishou University, Jishou 416000, China
- Jianfeng Li: College of Computer Science and Engineering, Jishou University, Jishou 416000, China
- Shuai Li: College of Computer Science and Engineering, Jishou University, Jishou 416000, China
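The particle swarm optimization outlined in the survey abstract above can be made concrete with a short, generic sketch. The following Python/NumPy loop is only an illustration of the basic velocity/position update on a toy sphere objective; the objective, swarm size, and coefficient values are assumptions for demonstration, not taken from the surveyed papers.

```python
import numpy as np

def pso(objective, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization: inertia w, cognitive c1, social c2."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))       # random cognitive/social weights
        v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso(lambda p: np.sum(p**2))           # sphere function as a toy objective
print(best_val)                                          # should be close to 0
```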
4. Yang M, Zhang Y, Hu H. Inverse-Free DZNN Models for Solving Time-Dependent Linear System via High-Precision Linear Six-Step Method. IEEE Trans Neural Netw Learn Syst 2024; 35:8597-8608. [PMID: 37015638] [DOI: 10.1109/tnnls.2022.3230898]
Abstract
The time-dependent linear system (TDLS), the mathematical formulation of many practical applications, is frequently encountered in scientific research. Different from conventional models that require matrix inversion, an inverse-free continuous ZNN (CZNN) model is developed for solving the TDLS by applying the zeroing neural network (ZNN) method twice. For practical use, a discrete model is naturally desired. Superior to conventional discretization methods, a general linear six-step (LSS) method with seventh-order precision and five variable parameters is proposed for the first time. Constraints on the five variable parameters are theoretically analyzed to guarantee the efficacy of the general LSS method, and 12 specific LSS methods are developed within these constraints. Aided by the general LSS method, an inverse-free discrete ZNN (DZNN) model, termed the DZNN-LSS model, is proposed, and its precision is greatly improved compared with conventional discrete models. For comparison, three conventional discretization methods are also used to generate DZNN models, and detailed theoretical analyses prove the efficacy of the relevant models. A specific TDLS example shows the effectiveness and superiority of the DZNN-LSS model, and applications to manipulator control and sound source localization illustrate its applicability.
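To make the continuous-to-discrete ZNN idea above tangible, here is a minimal sketch for a time-dependent linear system A(t)x(t) = b(t). It uses the basic zeroing design formula with a simple Euler-forward discretization rather than the paper's inverse-free linear six-step method, and it calls a linear solver at each step; the coefficient functions, gain, and step size are illustrative assumptions.

```python
import numpy as np

# Illustrative time-dependent coefficients (assumed for the demo)
def A(t): return np.array([[3.0 + np.sin(t), 1.0],
                           [1.0, 3.0 + np.cos(t)]])
def b(t): return np.array([np.sin(2*t), np.cos(2*t)])

def ddt(f, t, h=1e-6):
    """Numerical time derivative by forward difference."""
    return (f(t + h) - f(t)) / h

gamma, tau, T = 20.0, 1e-3, 5.0     # ZNN gain, sampling gap, time horizon
x = np.zeros(2)                     # arbitrary initial state
for k in range(int(T / tau)):
    t = k * tau
    e = A(t) @ x - b(t)                                          # error E(t) = A(t)x - b(t)
    xdot = np.linalg.solve(A(t), ddt(b, t) - ddt(A, t) @ x - gamma * e)
    x = x + tau * xdot                                           # Euler-forward discretization
print("final residual:", np.linalg.norm(A(T) @ x - b(T)))
```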
5. Wu W, Zhang Y. Zeroing Neural Network With Coefficient Functions and Adjustable Parameters for Solving Time-Variant Sylvester Equation. IEEE Trans Neural Netw Learn Syst 2024; 35:6757-6766. [PMID: 36256719] [DOI: 10.1109/tnnls.2022.3212869]
Abstract
To solve the time-variant Sylvester equation, Li et al. proposed the zeroing neural network with sign-bi-power function (ZNN-SBPF) model in 2013 by constructing a nonlinear activation function. In this article, to further improve the convergence rate, the zeroing neural network with coefficient functions and adjustable parameters (ZNN-CFAP) model is proposed as a variation of the zeroing neural network (ZNN) model. On the basis of the introduced coefficient functions, an appropriate ZNN-CFAP model can be chosen according to the error function, and a high convergence rate can be achieved by choosing appropriate adjustable parameters. Moreover, the finite-time convergence property and the upper bound of the convergence time of the ZNN-CFAP model are proved in theory. Computer simulations and numerical experiments illustrate the efficacy and validity of the ZNN-CFAP model in time-variant Sylvester equation solving. Comparative experiments among the ZNN-CFAP, ZNN-SBPF, and ZNN with linear function (ZNN-LF) models further substantiate the superiority of the ZNN-CFAP model in terms of convergence rate. Finally, the proposed ZNN-CFAP model is successfully applied to the tracking control of a robot manipulator to verify its practicability.
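As context for the ZNN-SBPF baseline mentioned above, the sketch below simulates a continuous ZNN for a small time-variant Sylvester equation A(t)X + XB(t) = C(t), using one common form of the sign-bi-power activation and Kronecker-product vectorization to solve the implicit dynamics. The coefficient functions, gain, and sign convention are assumptions for illustration, not the ZNN-CFAP design of the paper.

```python
import numpy as np

# Illustrative time-varying coefficients (assumed); convention: A(t)X + XB(t) = C(t)
def A(t): return np.array([[2 + np.sin(t), 0.5*np.cos(t)],
                           [0.5*np.cos(t), 2 - np.sin(t)]])
def B(t): return np.array([[1 + 0.5*np.cos(t), 0.2*np.sin(t)],
                           [0.2*np.sin(t), 1 - 0.5*np.cos(t)]])
def C(t): return np.array([[np.sin(t), np.cos(t)],
                           [np.cos(t), -np.sin(t)]])

def ddt(F, t, h=1e-6):
    return (F(t + h) - F(t)) / h            # numerical time derivative

def sbp(E, r=0.5):
    """One common sign-bi-power activation, applied element-wise."""
    return 0.5 * np.sign(E) * (np.abs(E)**r + np.abs(E)**(1.0/r))

m = n = 2
gamma, tau, T = 10.0, 1e-3, 8.0
X = np.zeros((m, n))
Im, In = np.eye(m), np.eye(n)
for k in range(int(T / tau)):
    t = k * tau
    At, Bt = A(t), B(t)
    E = At @ X + X @ Bt - C(t)                                   # error function
    R = ddt(C, t) - ddt(A, t) @ X - X @ ddt(B, t) - gamma * sbp(E)
    M = np.kron(In, At) + np.kron(Bt.T, Im)                      # vec(A Xdot + Xdot B) = M vec(Xdot)
    Xdot = np.linalg.solve(M, R.flatten(order='F')).reshape((m, n), order='F')
    X = X + tau * Xdot                                           # Euler integration of the ZNN dynamics
print("final residual:", np.linalg.norm(A(T) @ X + X @ B(T) - C(T)))
```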
6. Xiao L, Cao P, Wang Z, Liu S. A novel fixed-time error-monitoring neural network for solving dynamic quaternion-valued Sylvester equations. Neural Netw 2024; 170:494-505. [PMID: 38039686] [DOI: 10.1016/j.neunet.2023.11.058]
Abstract
This paper addresses the dynamic quaternion-valued Sylvester equation (DQSE) using the quaternion real representation and the neural network method. To transform the Sylvester equation in the quaternion field into an equivalent equation in the real field, three different real representation modes for the quaternion are adopted by considering the non-commutativity of quaternion multiplication. Based on the equivalent Sylvester equation in the real field, a novel recurrent neural network model with an integral design formula is proposed to solve the DQSE. The proposed model, referred to as the fixed-time error-monitoring neural network (FTEMNN), achieves fixed-time convergence through the action of a state-of-the-art nonlinear activation function. The fixed-time convergence of the FTEMNN model is theoretically analyzed. Two examples are presented to verify the performance of the FTEMNN model with a specific focus on fixed-time convergence. Furthermore, the chattering phenomenon of the FTEMNN model is discussed, and a saturation function scheme is designed. Finally, the practical value of the FTEMNN model is demonstrated through its application to image fusion denoising.
Affiliation(s)
- Lin Xiao: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan 410081, China
- Penglin Cao: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan 410081, China
- Zidong Wang: Department of Computer Science, Brunel University London, Uxbridge, Middlesex, UB8 3PH, United Kingdom
- Sai Liu: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan 410081, China
7. Dai J, Yang X, Xiao L, Jia L, Liu X, Wang Y. Design and Analysis of a Self-Adaptive Zeroing Neural Network for Solving Time-Varying Quadratic Programming. IEEE Trans Neural Netw Learn Syst 2023; 34:7135-7144. [PMID: 35015652] [DOI: 10.1109/tnnls.2021.3138900]
Abstract
In order to solve the time-varying quadratic programming (TVQP) problem more effectively, a new self-adaptive zeroing neural network (ZNN) is designed and analyzed in this article by using the Takagi-Sugeno fuzzy logic system (TSFLS) and thus called the Takagi-Sugeno (T-S) fuzzy ZNN (TSFZNN). Specifically, a multiple-input-single-output TSFLS is designed to generate a self-adaptive convergence factor to construct the TSFZNN model. In order to obtain finite- or predefined-time convergence, four novel activation functions (AFs) [namely, power-bi-sign AF (PBSAF), tanh-bi-sign AF (TBSAF), exp-bi-sign AF (EBSAF), and sinh-bi-sign AF (SBSAF)] are developed and applied in the TSFZNN model for solving the TVQP problem. Both theoretical proofs and experimental simulations show that the TSFZNN model using PBSAF or TBSAF has the property of converging in a finite time, and the TSFZNN model using EBSAF or SBSAF has the property of converging in a predefined time, which have superior convergence performance compared to the traditional ZNN model.
8. Xiao L, He Y, Wang Y, Dai J, Wang R, Tang W. A Segmented Variable-Parameter ZNN for Dynamic Quadratic Minimization With Improved Convergence and Robustness. IEEE Trans Neural Netw Learn Syst 2023; 34:2413-2424. [PMID: 34464280] [DOI: 10.1109/tnnls.2021.3106640]
Abstract
As a category of the recurrent neural network (RNN), zeroing neural network (ZNN) can effectively handle time-variant optimization issues. Compared with the fixed-parameter ZNN that needs to be adjusted frequently to achieve good performance, the conventional variable-parameter ZNN (VPZNN) does not require frequent adjustment, but its variable parameter will tend to infinity as time grows. Besides, the existing noise-tolerant ZNN model is not good enough to deal with time-varying noise. Therefore, a new-type segmented VPZNN (SVPZNN) for handling the dynamic quadratic minimization issue (DQMI) is presented in this work. Unlike the previous ZNNs, the SVPZNN includes an integral term and a nonlinear activation function, in addition to two specially constructed time-varying piecewise parameters. This structure keeps the time-varying parameters stable and makes the model have strong noise tolerance capability. Besides, theoretical analysis on SVPZNN is proposed to determine the upper bound of convergence time in the absence or presence of noise interference. Numerical simulations verify that SVPZNN has shorter convergence time and better robustness than existing ZNN models when handling DQMI.
9. Chen W, Jin J, Gerontitis D, Qiu L, Zhu J. Improved Recurrent Neural Networks for Text Classification and Dynamic Sylvester Equation Solving. Neural Process Lett 2023. [DOI: 10.1007/s11063-023-11176-6]
10. Yang M, Zhang Y, Tan N, Hu H. Explicit Linear Left-and-Right 5-Step Formulas With Zeroing Neural Network for Time-Varying Applications. IEEE Trans Cybern 2023; 53:1133-1143. [PMID: 34464284] [DOI: 10.1109/tcyb.2021.3104138]
Abstract
In this article, being different from conventional time-discretization (simply called discretization) formulas, explicit linear left-and-right 5-step (ELLR5S) formulas with sixth-order precision are proposed. The general sixth-order ELLR5S formula with four variable parameters is developed first, and constraints of these four parameters are displayed to guarantee the zero stability, consistence, and convergence of the formula. Then, by choosing specific parameter values within constraints, eight specific sixth-order ELLR5S formulas are developed. The general sixth-order ELLR5S formula is further utilized to generate discrete zeroing neural network (DZNN) models for solving time-varying linear and nonlinear systems. For comparison, three conventional discretization formulas are also utilized. Theoretical analyses are presented to show the performance of ELLR5S formulas and DZNN models. Furthermore, abundant experiments, including three practical applications, that is, angle-of-arrival (AoA) localization and two redundant manipulators (PUMA560 manipulator and Kinova manipulator) control, are conducted. The synthesized results substantiate the efficacy and superiority of sixth-order ELLR5S formulas as well as the corresponding DZNN models.
11. Xiao X, Jiang C, Mei Q, Zhang Y. Noise-tolerate and adaptive coefficient zeroing neural network for solving dynamic matrix square root. CAAI Trans Intell Technol 2023. [DOI: 10.1049/cit2.12183]
Affiliation(s)
- Xiuchun Xiao: School of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang, China
- Chengze Jiang: School of Cyber Science and Engineering, Southeast University, Nanjing, China
- Qixiang Mei: School of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang, China
- Yudong Zhang: School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
12. A predefined-time and anti-noise varying-parameter ZNN model for solving time-varying complex Stein equations. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.008]
13. Hua C, Cao X, Liao B, Li S. Advances on intelligent algorithms for scientific computing: an overview. Front Neurorobot 2023; 17:1190977. [PMID: 37152414] [PMCID: PMC10161734] [DOI: 10.3389/fnbot.2023.1190977]
Abstract
The field of computer science has undergone rapid expansion due to the increasing interest in improving system performance. This has resulted in the emergence of advanced techniques, such as neural networks, intelligent systems, optimization algorithms, and optimization strategies. These innovations have created novel opportunities and challenges in various domains. This paper presents a thorough examination of three intelligent methods: neural networks, intelligent systems, and optimization algorithms and strategies. It discusses the fundamental principles and techniques employed in these fields, as well as the recent advancements and future prospects. Additionally, this paper analyzes the advantages and limitations of these intelligent approaches. Ultimately, it serves as a comprehensive summary and overview of these critical and rapidly evolving fields, offering an informative guide for novices and researchers interested in these areas.
Affiliation(s)
- Cheng Hua: College of Computer Science and Engineering, Jishou University, Jishou, China
- Xinwei Cao: School of Business, Jiangnan University, Wuxi, China
- Bolin Liao: College of Computer Science and Engineering, Jishou University, Jishou, China (corresponding author)
- Shuai Li: Faculty of Information Technology and Electrical Engineering, University of Oulu, Oulu, Finland; VTT Technical Research Centre of Finland, Oulu, Finland
14. Zuo Q, Li K, Xiao L, Li K. Robust Finite-Time Zeroing Neural Networks With Fixed and Varying Parameters for Solving Dynamic Generalized Lyapunov Equation. IEEE Trans Neural Netw Learn Syst 2022; 33:7695-7705. [PMID: 34143744] [DOI: 10.1109/tnnls.2021.3086500]
Abstract
For solving the dynamic generalized Lyapunov equation, two robust finite-time zeroing neural network (RFTZNN) models with stationary and nonstationary parameters are generated using an improved sign-bi-power (SBP) activation function (AF). Taking differentiation errors and model-implementation errors into account, two corresponding perturbed RFTZNN models are derived to facilitate the robustness analyses of the two RFTZNN models. Theoretical analysis gives quantitatively estimated upper bounds for the convergence time (UBs-CT) of the two derived models, implying that the varying-parameter RFTZNN (VP-RFTZNN) converges faster than the fixed-parameter RFTZNN (FP-RFTZNN). When the coefficient matrices and perturbation matrices are uniformly bounded, the residual error of the FP-RFTZNN is bounded, whereas that of the VP-RFTZNN monotonically decreases at a super-exponential rate after a finite time and eventually converges to 0. When these matrices are bounded but not uniformly bounded, the residual error of the FP-RFTZNN is no longer bounded, but that of the VP-RFTZNN still converges. These superiorities of the VP-RFTZNN are illustrated by abundant comparative experiments, and its application value is further demonstrated by an application to a robot.
15. Gerontitis D, Behera R, Shi Y, Stanimirović PS. A robust noise tolerant zeroing neural network for solving time-varying linear matrix equations. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.08.036]
16. Katsikis VN, Mourtas SD, Stanimirovic PS, Zhang Y. Solving Complex-Valued Time-Varying Linear Matrix Equations via QR Decomposition With Applications to Robotic Motion Tracking and on Angle-of-Arrival Localization. IEEE Trans Neural Netw Learn Syst 2022; 33:3415-3424. [PMID: 33513117] [DOI: 10.1109/tnnls.2021.3052896]
Abstract
The problem of solving linear equations is considered one of the fundamental problems commonly encountered in science and engineering. In this article, the complex-valued time-varying linear matrix equation (CVTV-LME) problem is investigated. By employing a complex-valued time-varying QR (CVTVQR) decomposition, the zeroing neural network (ZNN) method, equivalent transformations, the Kronecker product, and vectorization techniques, we propose and study a CVTVQR decomposition-based linear matrix equation (CVTVQR-LME) model. Beyond the use of the QR decomposition, a further advantage of the CVTVQR-LME model is that it can handle linear systems with square or rectangular coefficient matrices in both the matrix and vector cases. Its efficacy in solving CVTV-LME problems has been tested in a variety of numerical simulations as well as in two applications, one in robotic motion tracking and the other in angle-of-arrival localization.
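The vectorization trick mentioned in this abstract is easy to demonstrate: a matrix equation such as AXB = C becomes an ordinary linear system (B^T ⊗ A) vec(X) = vec(C), which can then be solved through a QR factorization instead of an explicit inverse. The static, real-valued example below is only a sketch of that reduction, not the paper's time-varying complex-valued ZNN model.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
X_true = rng.standard_normal((m, n))
C = A @ X_true @ B

# vec(A X B) = (B^T kron A) vec(X), with column-major (Fortran-order) vec
M = np.kron(B.T, A)
c = C.flatten(order='F')

# Solve M vec(X) = c through QR: M = QR, then back-substitute R x = Q^T c
Q, R = np.linalg.qr(M)
x = np.linalg.solve(R, Q.T @ c)
X = x.reshape((m, n), order='F')
print(np.allclose(X, X_true))   # True: the vectorized solve recovers X
```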
17. Yang M, Zhang Y, Tan N, Mao M, Hu H. 7-Instant Discrete-Time Synthesis Model Solving Future Different-Level Linear Matrix System via Equivalency of Zeroing Neural Network. IEEE Trans Cybern 2022; 52:8366-8375. [PMID: 33544686] [DOI: 10.1109/tcyb.2021.3051035]
Abstract
Differing from the common linear matrix equation, the future different-level linear matrix system is considered, which is much more interesting and challenging. Because of its complicated structure and future-computation characteristic, traditional methods for static and same-level systems may not be effective on this occasion. For solving this difficult future different-level linear matrix system, the continuous different-level linear matrix system is first considered. On the basis of the zeroing neural network (ZNN), the physical mathematical equivalency is thus proposed, which is called ZNN equivalency (ZE), and it is compared with the traditional concept of mathematical equivalence. Then, on the basis of ZE, the continuous-time synthesis (CTS) model is further developed. To satisfy the future-computation requirement of the future different-level linear matrix system, the 7-instant discrete-time synthesis (DTS) model is further attained by utilizing the high-precision 7-instant Zhang et al. discretization (ZeaD) formula. For a comparison, three different DTS models using three conventional ZeaD formulas are also presented. Meanwhile, the efficacy of the 7-instant DTS model is testified by the theoretical analyses. Finally, experimental results verify the brilliant performance of the 7-instant DTS model in solving the future different-level linear matrix system.
18. Improved ZND model for solving dynamic linear complex matrix equation and its application. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07581-y]
19. Double Features Zeroing Neural Network Model for Solving the Pseudoinverse of a Complex-Valued Time-Varying Matrix. Mathematics 2022. [DOI: 10.3390/math10122122]
Abstract
The solution of a complex-valued matrix pseudoinverse is one of the key steps in various science and engineering fields, and researchers have put forward many related algorithms. As research has developed, the time-varying matrix pseudoinverse has received more attention than the time-invariant one, and the zeroing neural network (ZNN) is an efficient method for calculating the pseudoinverse of a complex-valued time-varying matrix. However, the initial ZNN (IZNN) and its extensions lack a mechanism for handling convergence and robustness together; most existing ZNN studies treat convergence and robustness separately. In order to improve both features (i.e., convergence and robustness) of the ZNN in solving a complex-valued time-varying pseudoinverse, this paper puts forward a double-features ZNN (DFZNN) model by adopting a specially designed time-varying parameter and a novel nonlinear activation function. Moreover, two types of nonlinear activation for complex numbers are investigated. The global convergence, predefined-time convergence, and robustness are proven in theory, and the upper bound of the predefined convergence time is formulated exactly. The numerical simulation results verify the theoretical proof: in contrast to existing complex-valued ZNN models, the DFZNN model has a shorter predefined convergence time in the zero-noise state and enhanced robustness in different noise states. Both the theoretical and the empirical results show that the DFZNN model is better at solving the time-varying complex-valued matrix pseudoinverse. Finally, the proposed DFZNN model is used to track the trajectory of a manipulator, which further verifies the reliability of the model.
20. A review on varying-parameter convergence differential neural network. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.03.026]
21. Qiu B, Guo J, Li X, Zhang Z, Zhang Y. Discrete-Time Advanced Zeroing Neurodynamic Algorithm Applied to Future Equality-Constrained Nonlinear Optimization With Various Noises. IEEE Trans Cybern 2022; 52:3539-3552. [PMID: 32759087] [DOI: 10.1109/tcyb.2020.3009110]
Abstract
This research first proposes the general expression of Zhang et al. discretization (ZeaD) formulas to provide an effective general framework for finding various ZeaD formulas by the idea of high-order derivative simultaneous elimination. Then, to solve the problem of future equality-constrained nonlinear optimization (ECNO) with various noises, a specific ZeaD formula originating from the general ZeaD formula is further studied for the discretization of a noise-perturbed continuous-time advanced zeroing neurodynamic model. Subsequently, the resulting noise-perturbed discrete-time advanced zeroing neurodynamic (NP-DTAZN) algorithm is proposed for the real-time solution to the future ECNO problem with various noises suppressed simultaneously. Moreover, theoretical and numerical results are presented to show the convergence and precision of the proposed NP-DTAZN algorithm in the perturbation of various noises. Finally, comparative numerical and physical experiments based on a Kinova JACO2 robot manipulator are conducted to further substantiate the efficacy, superiority, and practicability of the proposed NP-DTAZN algorithm for solving the future ECNO problem with various noises.
22. Xiao L, Huang W, Jia L, Li X. Two discrete ZNN models for solving time-varying augmented complex Sylvester equation. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.11.012]
23. Cheng X, Zhang W, Wenzel A, Chen J. Stacked ResNet-LSTM and CORAL model for multi-site air quality prediction. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07175-8]
Abstract
As the global economy booms and industrialization and urbanization accelerate, particulate matter 2.5 (PM2.5) has become a major air pollutant jeopardizing public health. Numerous researchers are committed to employing various methods to address the nonlinear correlation between PM2.5 concentration and several factors to achieve more effective forecasting. However, considerable room remains for improving forecasting accuracy, and the problem of missing air pollution data for certain target areas also needs to be solved. Our research work is divided into two parts. First, this study presents a novel stacked ResNet-LSTM model to enhance prediction accuracy for PM2.5 concentration level forecasting. As revealed by the experimental results, the proposed model outperforms other models such as boosting algorithms or general recurrent neural networks, and the advantage of feature extraction through a residual network (ResNet) combined with a model stacking strategy is shown. Second, to address insufficient air quality and meteorological data in some research areas, this study proposes the use of a correlation alignment (CORAL) method to carry out prediction on a target area by aligning the second-order statistics between the source area and the target area. As indicated by the results, this model exhibits considerable accuracy even in the absence of historical PM2.5 data in the target forecast area.
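For readers unfamiliar with CORAL, the classic correlation-alignment recipe whitens the source features with their own covariance and re-colors them with the target covariance. The NumPy sketch below implements that generic recipe on toy feature matrices; the data, regularization value, and feature dimensionality are assumptions, and the paper's exact integration with the stacked ResNet-LSTM is not reproduced here.

```python
import numpy as np

def coral_align(Xs, Xt, eps=1e-3):
    """Align second-order statistics of source features Xs to target features Xt (CORAL)."""
    def sqrtm_and_invsqrtm(C):
        w, V = np.linalg.eigh(C)                     # symmetric PSD covariance
        w = np.clip(w, 1e-12, None)
        return V @ np.diag(np.sqrt(w)) @ V.T, V @ np.diag(1.0/np.sqrt(w)) @ V.T

    Cs = np.cov(Xs, rowvar=False) + eps*np.eye(Xs.shape[1])   # small ridge for stability
    Ct = np.cov(Xt, rowvar=False) + eps*np.eye(Xt.shape[1])
    _, Cs_inv_sqrt = sqrtm_and_invsqrtm(Cs)
    Ct_sqrt, _ = sqrtm_and_invsqrtm(Ct)
    return Xs @ Cs_inv_sqrt @ Ct_sqrt                # whiten with source, re-color with target

# Toy usage: the two matrices stand in for features of a source and a target monitoring area
rng = np.random.default_rng(0)
Xs = rng.standard_normal((200, 8)) * 2.0 + 1.0
Xt = rng.standard_normal((150, 8)) * 0.5
Xs_aligned = coral_align(Xs, Xt)
print(Xs_aligned.shape)                              # same shape as Xs, target-like covariance
```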
24. A fuzzy adaptive zeroing neural network with superior finite-time convergence for solving time-variant linear matrix equations. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108405]
25. Wang X, Mo C, Qiao S, Wei Y. Predefined-time convergent neural networks for solving the time-varying nonsingular multi-linear tensor equations. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.11.108]
27. Ma M, Yang J. A novel finite-time q-power recurrent neural network and its application to uncertain portfolio model. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.07.036]
28. Liu M, Ma D, Li S. Neural dynamics for adaptive attitude tracking control of a flapping wing micro aerial vehicle. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.05.088]
29. Gong J, Jin J. A faster and better robustness zeroing neural network for solving dynamic Sylvester equation. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10516-8]
30. Dai J, Li Y, Xiao L, Jia L, Liao Q, Li J. Comprehensive study on complex-valued ZNN models activated by novel nonlinear functions for dynamic complex linear equations. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2020.12.078]
32. Tan Z, Li W, Xiao L, Hu Y. New Varying-Parameter ZNN Models With Finite-Time Convergence and Noise Suppression for Time-Varying Matrix Moore-Penrose Inversion. IEEE Trans Neural Netw Learn Syst 2020; 31:2980-2992. [PMID: 31536017] [DOI: 10.1109/tnnls.2019.2934734]
Abstract
This article aims to solve the Moore-Penrose inverse of time-varying full-rank matrices in the presence of various noises in real time. For this purpose, two varying-parameter zeroing neural networks (VPZNNs) are proposed. Specifically, VPZNN-R and VPZNN-L models, which are based on a new design formula, are designed to solve the right and left Moore-Penrose inversion problems of time-varying full-rank matrices, respectively. The two VPZNN models are activated by two novel varying-parameter nonlinear activation functions. Detailed theoretical derivations are presented to show the desired finite-time convergence and outstanding robustness of the proposed VPZNN models under various kinds of noises. In addition, existing neural models, such as the original ZNN (OZNN) and the integration-enhanced ZNN (IEZNN), are compared with the VPZNN models. Simulation observations verify the advantages of the VPZNN models over the OZNN and IEZNN models in terms of convergence and robustness. The potential of the VPZNN models for robotic applications is then illustrated by an example of robot path tracking.
33. Li W, Xiao L, Liao B. A Finite-Time Convergent and Noise-Rejection Recurrent Neural Network and Its Discretization for Dynamic Nonlinear Equations Solving. IEEE Trans Cybern 2020; 50:3195-3207. [PMID: 31021811] [DOI: 10.1109/tcyb.2019.2906263]
Abstract
The so-called zeroing neural network (ZNN) is an effective recurrent neural network for solving dynamic problems, including dynamic nonlinear equations. Numerous unperturbed ZNN models can converge to the theoretical solution of solvable nonlinear equations in infinitely long or finite time. However, when these ZNN models are perturbed by external disturbances, their convergence performance deteriorates dramatically. To overcome this issue, this paper for the first time proposes a finite-time convergent ZNN with noise-rejection capability to endure disturbances and solve dynamic nonlinear equations in finite time. In theory, the finite-time convergence and noise-rejection properties of the finite-time convergent and noise-rejection ZNN (FTNRZNN) are rigorously proved. For potential digital hardware realization, the discrete form of the FTNRZNN model is established based on a recently developed five-step finite difference rule to guarantee high computational accuracy. The numerical results demonstrate that the discrete-time FTNRZNN can reject constant external noises. When perturbed by dynamic bounded or unbounded linear noises, the discrete-time FTNRZNN achieves the smallest steady-state errors in comparison with those generated by other discrete-time ZNN models that have no or limited ability to handle these noises. Discrete models of the FTNRZNN and the other ZNNs are comparatively applied to the redundancy resolution of a robotic arm, and the superior positioning accuracy of the FTNRZNN is verified.
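The continuous ZNN idea for a dynamic nonlinear equation f(x, t) = 0 can be sketched in a few lines: define the error e = f(x, t), force de/dt = -gamma*phi(e), and obtain xdot = -(df/dx)^(-1) * (df/dt + gamma*phi(e)). The toy scalar example below uses a plain linear activation and Euler integration, without the paper's noise-rejection term or five-step discretization; the equation and gain are illustrative assumptions.

```python
import numpy as np

def f(x, t):            # illustrative dynamic nonlinear equation f(x, t) = 0
    return x**3 + x - np.sin(2*t) - 2.0

def df_dx(x, t):        # partial derivative with respect to x (never zero here)
    return 3*x**2 + 1.0

def df_dt(x, t, h=1e-6):
    return (f(x, t + h) - f(x, t)) / h

gamma, tau, T = 30.0, 1e-3, 6.0
x = 0.0                                               # arbitrary initial guess
for k in range(int(T / tau)):
    t = k * tau
    e = f(x, t)
    xdot = -(df_dt(x, t) + gamma * e) / df_dx(x, t)   # zeroing dynamics drive e toward 0
    x = x + tau * xdot                                # Euler-forward step
print("final |f(x, T)|:", abs(f(x, T)))
```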
34. Xiao X, Jiang C, Lu H, Jin L, Liu D, Huang H, Pan Y. A parallel computing method based on zeroing neural networks for time-varying complex-valued matrix Moore-Penrose inversion. Inf Sci (N Y) 2020. [DOI: 10.1016/j.ins.2020.03.043]
35. Yang M, Zhang Y, Hu H. Discrete ZNN models of Adams-Bashforth (AB) type solving various future problems with motion control of mobile manipulator. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.11.039]
36. Liao S, Liu J, Xiao X, Fu D, Wang G, Jin L. Modified gradient neural networks for solving the time-varying Sylvester equation with adaptive coefficients and elimination of matrix inversion. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.080]
37. Zhou M, Chen J, Stanimirović PS, Katsikis VN, Ma H. Complex Varying-Parameter Zhang Neural Networks for Computing Core and Core-EP Inverse. Neural Process Lett 2019. [DOI: 10.1007/s11063-019-10141-6]
38. Zhang Z, Zheng L. A Complex Varying-Parameter Convergent-Differential Neural-Network for Solving Online Time-Varying Complex Sylvester Equation. IEEE Trans Cybern 2019; 49:3627-3639. [PMID: 29994668] [DOI: 10.1109/tcyb.2018.2841970]
Abstract
A novel recurrent neural network, named the complex varying-parameter convergent-differential neural network (CVP-CDNN), is proposed in this paper for solving the time-varying complex Sylvester equation. Two kinds of CVP-CDNNs (i.e., CVP-CDNN Type I and Type II) are illustrated and proved to be effective. The proposed CVP-CDNNs can achieve super-exponential performance if the linear activation function is used. Several activation functions are considered in search of better performance of the CVP-CDNN, and the finite-time convergence property of the CVP-CDNN with the sign-bi-power activation function is verified. The convergence time of the CVP-CDNN with the sign-bi-power activation function is shorter than that of the complex fixed-parameter convergent-differential neural network (CFP-CDNN). Moreover, compared with the traditional CFP-CDNN, the better convergence performance of the novel CVP-CDNN is verified by computer simulation comparisons.
39. Xiao L, Yi Q, Dai J, Li K, Hu Z. Design and analysis of new complex zeroing neural network for a set of dynamic complex linear equations. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.07.044]
41. Zhang Z, Kong LD, Zheng L. Power-Type Varying-Parameter RNN for Solving TVQP Problems: Design, Analysis, and Applications. IEEE Trans Neural Netw Learn Syst 2019; 30:2419-2433. [PMID: 30596590] [DOI: 10.1109/tnnls.2018.2885042]
Abstract
Many practical problems can be solved by formulating them as time-varying quadratic programming (TVQP) problems. In this paper, a novel power-type varying-parameter recurrent neural network (VPNN) is proposed and analyzed to effectively solve the resulting TVQP problems, as well as the original practical problems. For a clear understanding, we introduce this model from three aspects: design, analysis, and applications. Specifically, we describe in detail why and how this neural network model is designed for solving online TVQP problems subject to time-varying linear equality/inequality constraints. The theoretical analysis confirms that when activated by six commonly used activation functions, the VPNN achieves a superexponential convergence rate. In contrast to the traditional zeroing neural network with fixed design parameters, the proposed VPNN has better convergence performance. Comparative simulations with state-of-the-art methods confirm the advantages of the VPNN. Furthermore, the application of the VPNN to a robot motion planning problem verifies the feasibility, applicability, and efficiency of the proposed method.
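A common way to cast an equality-constrained TVQP for such a neurodynamic solver is through its time-varying KKT system. The sketch below builds the KKT matrix for min ½xᵀW(t)x + q(t)ᵀx subject to A(t)x = b(t), applies zeroing dynamics with a simple power-type increasing gain γ(t) = γ₀ + tᵖ, and integrates with Euler. The problem data, exponent, and gain schedule are assumptions for illustration rather than the VPNN design of the paper, and inequality constraints are omitted.

```python
import numpy as np

def W(t): return np.array([[4 + np.sin(t), 1.0],
                           [1.0, 4 + np.cos(t)]])          # time-varying Hessian (positive definite)
def q(t): return np.array([np.sin(t), np.cos(t)])
def Aeq(t): return np.array([[1.0, 1.0 + 0.3*np.sin(t)]])  # single equality constraint
def beq(t): return np.array([np.cos(2*t)])

def kkt(t):
    """KKT matrix and right-hand side of the equality-constrained TVQP at time t."""
    Wt, At = W(t), Aeq(t)
    M = np.block([[Wt, At.T], [At, np.zeros((1, 1))]])
    p = np.concatenate([-q(t), beq(t)])
    return M, p

def ddt(F, t, h=1e-6):
    a, b = F(t), F(t + h)
    return (b[0] - a[0]) / h, (b[1] - a[1]) / h

gamma0, power, tau, T = 5.0, 2.0, 1e-3, 6.0
y = np.zeros(3)                              # stacked [x; lambda]
for k in range(int(T / tau)):
    t = k * tau
    M, p = kkt(t)
    Mdot, pdot = ddt(kkt, t)
    gam = gamma0 + t**power                  # power-type increasing gain
    ydot = np.linalg.solve(M, pdot - Mdot @ y - gam * (M @ y - p))
    y = y + tau * ydot
x_opt = y[:2]
print("constraint residual:", Aeq(T) @ x_opt - beq(T))
```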
42. Improved Zhang neural network with finite-time convergence for time-varying linear system of equations solving. Inform Process Lett 2019. [DOI: 10.1016/j.ipl.2019.03.012]
43. Qiu B, Zhang Y. Two New Discrete-Time Neurodynamic Algorithms Applied to Online Future Matrix Inversion With Nonsingular or Sometimes-Singular Coefficient. IEEE Trans Cybern 2019; 49:2032-2045. [PMID: 29993939] [DOI: 10.1109/tcyb.2018.2818747]
Abstract
In this paper, a high-precision general discretization formula using six time instants is first proposed to approximate the first-order derivative. Then, such a formula is studied to discretize two continuous-time neurodynamic models, both of which are derived by applying the neurodynamic approaches based on neural networks (i.e., zeroing neurodynamics and gradient neurodynamics). Originating from the general six-instant discretization (6ID) formula, a specific 6ID formula is further presented. Subsequently, two new discrete-time neurodynamic algorithms, i.e., 6ID-type discrete-time zeroing neurodynamic (DTZN) algorithm and 6ID-type discrete-time gradient neurodynamic (DTGN) algorithm, are proposed and investigated for online future matrix inversion (OFMI). In addition to analyzing the usual nonsingular situation of the coefficient, this paper investigates the sometimes-singular situation of the coefficient for OFMI. Finally, two illustrative numerical examples, including an application to the inverse-kinematic control of a PUMA560 robot manipulator, are provided to show respective characteristics and advantages of the proposed 6ID-type DTZN and DTGN algorithms for OFMI in different situations, where the coefficient matrix to be inverted is always-nonsingular or sometimes-singular during time evolution.
44. A new noise-tolerant and predefined-time ZNN model for time-dependent matrix inversion. Neural Netw 2019; 117:124-134. [PMID: 31158644] [DOI: 10.1016/j.neunet.2019.05.005]
Abstract
In this work, a new zeroing neural network (ZNN) using a versatile activation function (VAF) is presented and introduced for solving time-dependent matrix inversion. Unlike existing ZNN models, the proposed model not only converges to zero within a predefined finite time but also tolerates several kinds of noise in solving the time-dependent matrix inversion, and it is thus called the new noise-tolerant ZNN (NNTZNN) model. In addition, the convergence and robustness of this model are mathematically analyzed in detail. Two comparative numerical simulations with different dimensions are used to test the efficiency and superiority of the NNTZNN model over previous ZNN models using other activation functions. In addition, two practical application examples (i.e., a mobile manipulator and a real Kinova JACO2 robot manipulator) are presented to validate the applicability and physical feasibility of the NNTZNN model in a noisy environment. Both simulation and experimental results demonstrate the effectiveness and noise-tolerance ability of the NNTZNN model.
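A standard way to obtain noise tolerance in this setting is to add an integral feedback term to the zeroing dynamics, so that the error obeys dE/dt = -g1*E - g2*∫E dτ and a constant disturbance is absorbed by the integral state. The sketch below applies this integration-enhanced design, with a plain linear activation rather than the paper's versatile activation function, to time-dependent matrix inversion under a constant disturbance entering the error dynamics; the coefficient matrix, gains, and noise level are illustrative assumptions.

```python
import numpy as np

def A(t):                         # illustrative time-dependent, nonsingular coefficient matrix
    return np.array([[4 + np.sin(t), 1.0],
                     [np.cos(t), 4 - np.sin(t)]])

def Adot(t, h=1e-6):
    return (A(t + h) - A(t)) / h

g1, g2, tau, T = 10.0, 50.0, 1e-3, 8.0
n = 2
X = np.eye(n)                     # initial guess for A(t)^{-1}
S = np.zeros((n, n))              # running integral of the error
N = 0.5 * np.ones((n, n))         # constant disturbance added to the design formula
for k in range(int(T / tau)):
    t = k * tau
    E = A(t) @ X - np.eye(n)      # error function E(t) = A(t)X(t) - I
    S += tau * E                  # integral state that absorbs the constant disturbance
    Xdot = np.linalg.solve(A(t), -Adot(t) @ X - g1 * E - g2 * S + N)
    X = X + tau * Xdot            # Euler-forward integration
print("final residual ||A X - I||_F:", np.linalg.norm(A(T) @ X - np.eye(n)))
```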
45. Zhang Z, Zheng L, Wang M. An exponential-enhanced-type varying-parameter RNN for solving time-varying matrix inversion. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.01.058]
46. Li Y, Li S, Hannaford B. A Model-Based Recurrent Neural Network With Randomness for Efficient Control With Applications. IEEE Trans Ind Inform 2019; 15:2054-2063. [PMID: 31885525] [PMCID: PMC6934362] [DOI: 10.1109/tii.2018.2869588]
Abstract
Recently, Recurrent Neural Network (RNN) control schemes for redundant manipulators have been extensively studied. These control schemes demonstrate superior computational efficiency, control precision, and control robustness. However, they lack planning completeness. This paper explains why RNN control schemes suffer from the problem. Based on the analysis, this work presents a new random RNN control scheme, which 1) introduces randomness into RNN to address the planning completeness problem, 2) improves control precision with a new optimization target, 3) improves planning efficiency through learning from exploration. Theoretical analyses are used to prove the global stability, the planning completeness, and the computational complexity of the proposed method. Software simulation is provided to demonstrate the improved robustness against noise, the planning completeness and the improved planning efficiency of the proposed method over benchmark RNN control schemes. Real-world experiments are presented to demonstrate the application of the proposed method.
Affiliation(s)
- Yangming Li: College of Engineering Technology, Rochester Institute of Technology, Rochester, NY 14623, USA. The major part of this work was done when he was with the BioRobotics Lab at the University of Washington, Seattle, WA 98195, USA
- Shuai Li: Department of Computing, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Blake Hannaford: Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA
47. Li J, Zhang Y, Mao M. Five-instant type discrete-time ZND solving discrete time-varying linear system, division and quadratic programming. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.11.064]
49. Improved Gradient Neural Networks for Solving Moore–Penrose Inverse of Full-Rank Matrix. Neural Process Lett 2019. [DOI: 10.1007/s11063-019-09983-x]
50. Rejecting Chaotic Disturbances Using a Super-Exponential-Zeroing Neurodynamic Approach for Synchronization of Chaotic Sensor Systems. Sensors (Basel) 2018; 19:74. [PMID: 30585244] [PMCID: PMC6339062] [DOI: 10.3390/s19010074]
Abstract
Due to the existence of time-varying chaotic disturbances in complex applications, the chaotic synchronization of sensor systems has become a tough issue in the industrial electronics field. To accelerate the synchronization process of chaotic sensor systems, this paper proposes a super-exponential-zeroing neurodynamic (SEZN) approach and its associated controller. Unlike the conventional zeroing neurodynamic (CZN) approach with an exponential convergence property, the controller designed by the proposed SEZN approach inherently possesses the advantage of super-exponential convergence, which makes the synchronization process faster and more accurate. Theoretical analyses of the stability and of the convergence advantages, in terms of both faster convergence speed and a lower error bound within the task duration, are rigorously presented. Moreover, three synchronization examples substantiate the validity of the SEZN approach and the related controller for the synchronization of chaotic sensor systems. Comparisons with other approaches, such as the CZN approach, show the convergence superiority of the proposed SEZN approach. Finally, extensive tests further investigate the impact of different values of the design parameter and initial state on convergence performance.