1
Mukherjee A, Bhattacharyya D. Hybrid Series/Parallel All-Nonlinear Dynamic-Static Neural Networks: Development, Training, and Application to Chemical Processes. Ind Eng Chem Res 2023. DOI: 10.1021/acs.iecr.2c03339.
Affiliation(s)
- Angan Mukherjee, Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, West Virginia 26506, United States
- Debangsu Bhattacharyya, Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, West Virginia 26506, United States

2
Time series signal forecasting using artificial neural networks: An application on ECG signal. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103705.

3
High temporal resolution rainfall–runoff modeling using long-short-term-memory (LSTM) networks. Neural Comput Appl 2020. DOI: 10.1007/s00521-020-05010-6.

5
Second Order Training of a Smoothed Piecewise Linear Network. Neural Process Lett 2017. DOI: 10.1007/s11063-017-9618-2.

7
Rivals I, Personnaz L. Nonlinear internal model control using neural networks: application to processes with delay and design issues. IEEE Trans Neural Netw 2000; 11:80-90. PMID: 18249741. DOI: 10.1109/72.822512.
Abstract
We propose a design procedure for neural internal model control systems for stable processes with delay. We show that the design of such nonadaptive indirect control systems requires only the training of the inverse of the model deprived of its delay, and that the presence of the delay therefore does not increase the order of the inverse. The controller is obtained by cascading this inverse with a rallying model, which imposes the regulation dynamics and ensures robust stability. A change in the desired regulation dynamics, or an improvement in stability robustness, can be obtained simply by tuning the rallying model, without retraining the whole model-reference controller. Since the robustness properties of internal model control systems hold only when the inverse is perfect, we detail the precautions that must be taken in training the inverse so that it is accurate over the whole region visited during operation with the process. In the same spirit, we emphasize neural models that are affine in the control input, whose exact inverse is obtained without training. The control of simulated processes illustrates the proposed design procedure and the properties of the neural internal model control system for processes with and without delay.
Affiliation(s)
- I Rivals, École Supérieure de Physique et de Chimie Industrielles, Laboratoire d'Électronique, 75231 Paris Cedex 05, France.

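The internal model control structure described in entry 7 can be sketched in a few lines. This is a minimal illustration, not the paper's method: the trained neural model and its inverse are replaced by a known first-order map so the loop structure stays visible, and the plant gains `a`, `b`, delay `d`, and rallying pole `alpha` are invented for the example.

```python
# Internal model control (IMC) loop for a stable plant with delay. The inverse
# of the delay-free model is exact here, mirroring the "affine in the control
# input" case where the inverse needs no training.
a, b, d = 0.8, 0.5, 3        # assumed plant: y[k+1] = a*y[k] + b*u[k-d]
alpha = 0.6                  # rallying-model pole: sets the regulation dynamics

r = 1.0                      # step setpoint
y = ym = ymf = 0.0           # plant, internal model (with delay), delay-free copy
u_hist = [0.0] * (d + 1)     # past controls; u_hist[-d-1] is u[k-d]
out = []
for k in range(60):
    e = y - ym                                    # IMC feedback: model mismatch only
    target = alpha * ymf + (1 - alpha) * (r - e)  # rallying model output
    u = (target - a * ymf) / b                    # exact inverse of the delay-free model
    u_hist.append(u)
    y = a * y + b * u_hist[-d - 1]                # plant sees the delayed control
    ym = a * ym + b * u_hist[-d - 1]              # internal model includes the delay
    ymf = a * ymf + b * u                         # delay-free model driven by u[k]
    out.append(y)
```

Retuning `alpha` changes the closed-loop dynamics without touching the inverse, which is the design convenience the abstract emphasizes.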
8
Zhao H, Zeng X, Zhang J, Li T, Liu Y, Ruan D. Pipelined functional link artificial recurrent neural network with the decision feedback structure for nonlinear channel equalization. Inf Sci 2011. DOI: 10.1016/j.ins.2011.04.033.

9
Saad Saoud L, Khellaf A. Nonlinear dynamic systems identification based on dynamic wavelet neural units. Neural Comput Appl 2010. DOI: 10.1007/s00521-010-0438-9.

10
Sun GZ, Giles CL, Chen HH. The neural network pushdown automaton: architecture, dynamics and training. Lect Notes Comput Sci 1998. DOI: 10.1007/bfb0054003.

11
Recurrent neural network architectures: an overview. Lect Notes Comput Sci 1998. DOI: 10.1007/bfb0053993.

12
Samuelides M. Closed-Loop Control Learning. Neural Netw 2005. DOI: 10.1007/3-540-28847-3_5.

13
Leung CS, Tsoi AC. Combined learning and pruning for recurrent radial basis function networks based on recursive least square algorithms. Neural Comput Appl 2005. DOI: 10.1007/s00521-005-0009-7.

15
Abstract
A complex-valued real-time recurrent learning (CRTRL) algorithm for the class of nonlinear adaptive filters realized as fully connected recurrent neural networks is introduced. The proposed CRTRL is derived for a general complex activation function of a neuron, which makes it suitable for nonlinear adaptive filtering of complex-valued nonlinear and nonstationary signals and complex signals with strong component correlations. In addition, this algorithm is generic and represents a natural extension of the real-valued RTRL. Simulations on benchmark and real-world complex-valued signals support the approach.
Affiliation(s)
- Su Lee Goh, Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK.

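The core of any RTRL-family algorithm, real or complex, is the sensitivity recursion. The sketch below propagates complex-valued sensitivities `Pi[n, j, l] = dy[n]/dW[j, l]` through a small fully connected recurrent network with an analytic complex `tanh` activation, then checks one sensitivity against a complex central difference. The network size, weights, and input samples are arbitrary illustrations, not values from entry 15.

```python
import numpy as np

N = 3                      # neurons; each sees N feedbacks + 1 input + bias

def step(W, y, x):
    z = np.concatenate([y, [x, 1.0 + 0.0j]])   # feedback, external input, bias
    return np.tanh(W @ z), z

def rtrl_step(W, y, x, Pi):
    y_new, z = step(W, y, x)
    dnet = np.einsum('nm,mjl->njl', W[:, :N], Pi)  # sensitivity through feedback
    for j in range(N):
        dnet[j, j, :] += z                         # direct dependence on row j
    Pi_new = (1.0 - y_new**2)[:, None, None] * dnet  # tanh'(net) = 1 - tanh(net)^2
    return y_new, Pi_new

rng = np.random.default_rng(0)
W = 0.3 * (rng.standard_normal((N, N + 2)) + 1j * rng.standard_normal((N, N + 2)))
xs = [0.5 + 0.2j, -0.1 + 0.4j]

def run(W):
    y = np.zeros(N, dtype=complex)
    for x in xs:
        y, _ = step(W, y, x)
    return y

y = np.zeros(N, dtype=complex)
Pi = np.zeros((N, N, N + 2), dtype=complex)
for x in xs:
    y, Pi = rtrl_step(W, y, x, Pi)

# tanh is analytic, so the complex derivative is well defined and a complex
# central difference can verify the recursion.
h = 1e-6
Wp, Wm = W.copy(), W.copy()
Wp[0, 1] += h
Wm[0, 1] -= h
fd = (run(Wp) - run(Wm)) / (2 * h)
```

A gradient-descent weight update would then combine these sensitivities with the conjugated output error, which is where the fully complex derivation of the paper matters.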
16
Abstract
A class of data-reusing learning algorithms for real-time recurrent neural networks (RNNs) is analyzed. The analysis is undertaken for a general sigmoid nonlinear activation function of a neuron trained with the real-time recurrent learning algorithm. Error bounds and convergence conditions for such data-reusing algorithms are provided for both contractive and expansive activation functions. The analysis covers various configurations that are generalizations of a linear-structure infinite impulse response adaptive filter.
Affiliation(s)
- Danilo P. Mandic, School of Information Systems, University of East Anglia, Norwich, NR4 7TJ, UK

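The data-reusing idea reduces to something very small: refine the weights several times on the same sample before the next one arrives. The sketch below does this for a single `tanh` neuron rather than the full recurrent filter of entry 16; the step size, input, and target are invented. With a contractive activation (`|tanh(a) - tanh(b)| <= |a - b|`) and `mu * ||x||^2 < 1`, each reuse shrinks the a priori error, which is the regime the paper's bounds describe.

```python
import numpy as np

x = np.array([0.5, -0.3, 0.8])   # one input vector (illustrative)
d = 0.5                          # its target
w = np.zeros(3)
mu = 0.3                         # mu * ||x||^2 < 1 keeps the reuse iteration contractive

errors = []
for i in range(5):               # L = 5 reuses of the same (x, d) pair
    e = d - np.tanh(w @ x)
    errors.append(abs(e))
    w = w + mu * e * x           # gradient-style update reusing the same data point
```

Each pass through the loop is one "reuse"; the recorded error magnitudes decrease monotonically.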
17
Mastorocostas P, Theocharis J. A recurrent fuzzy-neural model for dynamic system identification. IEEE Trans Syst Man Cybern B 2002; 32:176-90. DOI: 10.1109/3477.990874.

18
Abstract
A general methodology for gray-box, or semi-physical, modeling is presented. This technique is intended to combine the best of two worlds: knowledge-based modeling, whereby mathematical equations are derived in order to describe a process, based on a physical (or chemical, biological, etc.) analysis, and black-box modeling, whereby a parameterized model is designed, whose parameters are estimated solely from measurements made on the process. The gray-box modeling technique is very valuable whenever a knowledge-based model exists but is not fully satisfactory and cannot be improved by further analysis (or can only be improved at a very large computational cost). We describe the design methodology of a gray-box model and illustrate it on a didactic example. We emphasize the importance of the choice of the discretization scheme used for transforming the differential equations of the knowledge-based model into a set of discrete-time recurrent equations. Finally, an application to a real, complex industrial process is presented.
Affiliation(s)
- Y Oussar, École Supérieure de Physique et de Chimie Industrielles de la Ville de Paris, Laboratoire d'Électronique, France.

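The discretization step that entry 18 stresses can be shown in miniature. A knowledge-based ODE `dy/dt = -k*y + u` is turned into a discrete-time recurrent equation by explicit Euler, and the unknown parameter `k` is then estimated from data. Everything here is an invented toy: `dt`, `k_true`, and the constant input `u`, and the "measurements" come from the same scheme, so the least-squares estimate recovers `k` exactly; a real gray-box model would put a neural network in place of the unknown term.

```python
import numpy as np

# Knowledge-based ODE dy/dt = -k*y + u, Euler-discretized into the recurrent
# equation y[t+1] = y[t] + dt * (-k * y[t] + u).
dt, k_true, u = 0.1, 0.7, 1.0
y = [0.0]
for t in range(50):
    y.append(y[-1] + dt * (-k_true * y[-1] + u))
y = np.array(y)

# Pretend k is unknown. Rearranging the recurrence gives
#     y[t] + dt*u - y[t+1] = k * dt * y[t],
# a linear regression in k with a closed-form least-squares solution.
a_col = dt * y[:-1]
b_col = y[:-1] + dt * u - y[1:]
k_hat = (a_col @ b_col) / (a_col @ a_col)
```

The choice of scheme matters exactly as the abstract says: an implicit or higher-order discretization would yield a different recurrent equation, and hence a different network structure to train.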
19
Abstract
This article reviews connectionist network architectures and training algorithms that are capable of dealing with patterns distributed across both space and time, that is, spatiotemporal patterns. It provides common mathematical, algorithmic, and illustrative frameworks for describing spatiotemporal networks, making it easier to compare and contrast their representational and operational characteristics. Computational power, representational issues, and learning are discussed. In addition, references to the relevant source publications are provided. This article can serve as a guide for prospective users of spatiotemporal networks by providing an overview of the operational and representational alternatives available.
Affiliation(s)
- Stefan C. Kremer, Guelph Natural Computation Group, Department of Computing and Information Science, University of Guelph, Guelph, Ontario, N1G 2W1, Canada

20
Abstract
A large class of nonlinear dynamic adaptive systems, such as dynamic recurrent neural networks, can be effectively represented by signal flow graphs (SFGs). By this method, complex systems are described as a general connection of many simple components, each implementing a simple one-input, one-output transformation, as in an electrical circuit. Although graph representations are popular in the neural network community, they are often used for qualitative description rather than for rigorous representation and computation. In this article, a method for both on-line and batch backward gradient computation of a system output, or of a cost function, with respect to system parameters is derived from SFG representation theory and its known properties. The system can be any causal, in general nonlinear and time-variant, dynamic system represented by an SFG, in particular any feedforward, time-delay, or recurrent neural network. We use discrete-time notation, but the same theory holds for the continuous-time case. The gradient is obtained in a straightforward way by the analysis of two SFGs, the original one and its adjoint (obtained from the first by simple transformations), without the complex chain rule expansions of derivatives usually employed. This method can be used for sensitivity analysis and for learning both off-line and on-line. On-line learning is particularly important since it is required by many real applications, such as digital signal processing, system identification and control, channel equalization, and predistortion.
Affiliation(s)
- P Campolucci, Dipartimento di Elettronica ed Automatica, Università di Ancona, 60121 Ancona, Italy.

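The adjoint-graph recipe can be demonstrated on the smallest useful case: a first-order IIR section `y[t] = a*y[t-1] + b*x[t]` with a squared-error cost. The adjoint SFG is the transposed graph run backward in time, and the gradients are read off the branches carrying `a` and `b`. The coefficients and signals below are invented, and the result is checked against finite differences.

```python
import numpy as np

a, b = 0.6, 1.2
xs = np.array([1.0, -0.5, 0.3, 0.8, -0.2])
ds = np.array([0.5, 0.1, 0.4, 0.2, 0.3])

def forward(a, b):
    ys, y = [], 0.0
    for x in xs:
        y = a * y + b * x            # original SFG, forward in time
        ys.append(y)
    ys = np.array(ys)
    return ys, 0.5 * np.sum((ys - ds) ** 2)

ys, J = forward(a, b)

# Adjoint SFG: reversed branches, transposed gains, reverse time.
lam, ga, gb = 0.0, 0.0, 0.0
for t in reversed(range(len(xs))):
    lam = (ys[t] - ds[t]) + a * lam  # adjoint state lam[t] = dJ/dy[t]
    ga += lam * (ys[t - 1] if t > 0 else 0.0)   # branch with gain a
    gb += lam * xs[t]                           # branch with gain b

# finite-difference checks
h = 1e-6
fd_a = (forward(a + h, b)[1] - forward(a - h, b)[1]) / (2 * h)
fd_b = (forward(a, b + h)[1] - forward(a, b - h)[1]) / (2 * h)
```

No chain-rule bookkeeping appears anywhere: the backward loop is just the original recursion with its single feedback branch transposed, which is the point of the SFG derivation.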
21
Campolucci P, Uncini A, Piazza F, Rao B. On-line learning algorithms for locally recurrent neural networks. IEEE Trans Neural Netw 1999; 10:253-71. DOI: 10.1109/72.750549.

23
Oussar Y, Rivals I, Personnaz L, Dreyfus G. Training wavelet networks for nonlinear dynamic input–output modeling. Neurocomputing 1998. DOI: 10.1016/s0925-2312(98)00010-1.

24
Acuña G, Latrille E, Béal C, Corrieu G. Static and dynamic neural network models for estimating biomass concentration during thermophilic lactic acid bacteria batch cultures. J Ferment Bioeng 1998. DOI: 10.1016/s0922-338x(98)80015-9.

25
Abstract
Discrete-time models of complex nonlinear processes, whether physical, biological, or economic, usually take the form of systems of coupled difference equations. In analyzing such systems, one of the first tasks is to find a state-space description of the process, that is, a set of state variables and the associated state equations. We present a methodology for finding a set of state variables and a canonical representation for a class of systems described by a set of recurrent discrete-time, time-invariant equations. In the field of neural networks, this is of special importance, since the application of standard training algorithms requires the network to be in a canonical form. Several illustrative examples are presented.
Affiliation(s)
- Yizhak Idan, ESPCI, Laboratoire d'Électronique, 75005 Paris, France

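The canonical-form idea of entry 25 is easy to illustrate on a second-order input-output recursion. Choosing the states `x1[t] = y[t]` and `x2[t] = y[t-1]` turns the recursion into a first-order state-space system; the coefficients and input sequence below are invented, and the two simulations agree step for step.

```python
import numpy as np

# Input-output model: y[t] = tanh(c1*y[t-1] + c2*y[t-2] + b*u[t-1])
c1, c2, b = 0.5, -0.2, 1.0
T = 20
u = np.sin(0.3 * np.arange(T))   # illustrative input sequence

# direct simulation of the input-output recursion
y = np.zeros(T)
for t in range(2, T):
    y[t] = np.tanh(c1 * y[t - 1] + c2 * y[t - 2] + b * u[t - 1])

# canonical state-space form with x1[t] = y[t], x2[t] = y[t-1]:
#     x1[t+1] = tanh(c1*x1[t] + c2*x2[t] + b*u[t])
#     x2[t+1] = x1[t]
x = np.array([0.0, 0.0])         # (y[1], y[0]) = (0, 0)
ys = [0.0, 0.0]
for t in range(1, T - 1):
    x = np.array([np.tanh(c1 * x[0] + c2 * x[1] + b * u[t]), x[0]])
    ys.append(x[0])
```

A recurrent network trained by standard algorithms needs exactly this single-feedback-loop form, which is why the canonical representation matters.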
26
Fredman TP, Saxén H. On a recurrent neural network producing oscillations. Int J Neural Syst 1997; 8:499-508. PMID: 10065832. DOI: 10.1142/s0129065797000483.
Abstract
A recurrent two-node neural network producing oscillations is analyzed. The network has no true inputs, and the outputs from the network exhibit a circular phase portrait. The weight configuration of the network is investigated, resulting in analytical weight expressions, which are compared with numerical weight estimates obtained by training the network on the desired trajectories. The values predicted by the analytical expressions agree well with the findings from the numerical study, and can also explain the asymptotic properties of the networks studied.
Affiliation(s)
- T P Fredman, Heat Engineering Laboratory, Åbo Akademi University, Finland.

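The circular phase portrait of entry 26 appears already in the simplest weight configuration: a linear two-node network whose weight matrix is a rotation, so the state's radius is preserved at every step. The rotation angle and initial state below are invented; the paper analyzes which (trained, nonlinear) weight configurations approximate this behavior.

```python
import numpy as np

theta = 0.2                                   # rotation per step (illustrative)
W = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

state = np.array([1.0, 0.0])                  # start on the unit circle
radii = []
for _ in range(200):
    state = W @ state                         # autonomous update: no true inputs
    radii.append(np.linalg.norm(state))       # radius is invariant under rotation
```

Any deviation of the trained weights from a rotation shows up as a spiral in the phase portrait, which is how the analytical expressions in the paper explain the asymptotic behavior.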
27
A hybrid recurrent neural network model for yeast production monitoring and control in a wine base medium. J Biotechnol 1997. DOI: 10.1016/s0168-1656(97)00065-5.

30
Lin T, Horne BG, Tino P, Giles CL. Learning long-term dependencies in NARX recurrent neural networks. IEEE Trans Neural Netw 1996; 7:1329-38. PMID: 18263528. DOI: 10.1109/72.548162.
Affiliation(s)
- T Lin, NEC Res. Inst., Princeton, NJ

32
Abstract
Deriving gradient algorithms for time-dependent neural network structures typically requires numerous chain rule expansions, diligent bookkeeping, and careful manipulation of terms. In this paper, we show how to derive such algorithms via a set of simple block diagram manipulation rules. The approach provides a common framework to derive popular algorithms, including backpropagation and backpropagation-through-time, without a single chain rule expansion. Additional examples are provided for a variety of complicated architectures to illustrate both the generality and the simplicity of the approach.
Affiliation(s)
- Eric A. Wan, Department of Electrical Engineering and Applied Physics, Oregon Graduate Institute of Science & Technology, P.O. Box 91000, Portland, OR 97291 USA
- Françoise Beaufays, Department of Electrical Engineering, Stanford University, Stanford, CA 94305-4055 USA

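The algorithm that the block-diagram rules of entry 32 produce for a recurrent node is ordinary backpropagation-through-time: reverse every branch, transpose every gain, run time backward. The sketch below does this for a scalar node `y[t] = tanh(w*y[t-1] + v*x[t])` and checks the result against finite differences; the signals and weights are invented.

```python
import numpy as np

xs = np.array([0.5, -1.0, 0.3, 0.8])   # input sequence (illustrative)
ds = np.array([0.2, -0.4, 0.1, 0.5])   # targets, loss L = sum (y[t] - d[t])^2

def loss_and_grads(w, v):
    ys, y = [], 0.0
    for x in xs:                       # forward pass through the block diagram
        y = np.tanh(w * y + v * x)
        ys.append(y)
    L = sum((yt - dt) ** 2 for yt, dt in zip(ys, ds))
    gw = gv = 0.0
    delta = 0.0                        # dL/dy[t] arriving from the future
    for t in reversed(range(len(xs))):         # reversed diagram, time backward
        delta += 2 * (ys[t] - ds[t])           # direct loss term at time t
        dnet = delta * (1 - ys[t] ** 2)        # through the reversed tanh block
        gw += dnet * (ys[t - 1] if t > 0 else 0.0)
        gv += dnet * xs[t]
        delta = dnet * w                       # through the transposed gain w
    return L, gw, gv

w, v = 0.4, 0.9
L, gw, gv = loss_and_grads(w, v)
h = 1e-6                               # finite-difference verification
fd_w = (loss_and_grads(w + h, v)[0] - loss_and_grads(w - h, v)[0]) / (2 * h)
fd_v = (loss_and_grads(w, v + h)[0] - loss_and_grads(w, v - h)[0]) / (2 * h)
```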
33
Giles CL, Chen D, Sun GZ, Chen HH, Lee YC, Goudreau MW. Constructive learning of recurrent neural networks: limitations of recurrent cascade correlation and a simple solution. IEEE Trans Neural Netw 1995; 6:829-36. DOI: 10.1109/72.392247.

35
Pearlmutter BA. Gradient calculations for dynamic recurrent neural networks: a survey. IEEE Trans Neural Netw 1995; 6:1212-28. PMID: 18263409. DOI: 10.1109/72.410363.
Affiliation(s)
- B A Pearlmutter, Learning Syst. Dept., Siemens Corp. Res. Inc., Princeton, NJ

37
Nerrand O, Roussel-Ragot P, Urbani D, Personnaz L, Dreyfus G. Training recurrent neural networks: why and how? An illustration in dynamical process modeling. IEEE Trans Neural Netw 1994; 5:178-84. DOI: 10.1109/72.279183.

38
|
Ah Chung Tsoi, Back A. Locally recurrent globally feedforward networks: a critical review of architectures. ACTA ACUST UNITED AC 1994; 5:229-39. [DOI: 10.1109/72.279187] [Citation(s) in RCA: 198] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
39
Memory neuron networks for identification and control of dynamical systems. IEEE Trans Neural Netw 1994; 5:306-19. DOI: 10.1109/72.279193.

40
Bersini H, Saerens M, Sotelino L. Hopfield net generation, encoding and classification of temporal trajectories. IEEE Trans Neural Netw 1994; 5:945-53. DOI: 10.1109/72.329692.