1. Simulation of Biochemical Reactions with ANN-Dependent Kinetic Parameter Extraction Method. Electronics 2022. [DOI: 10.3390/electronics11020216]
Abstract
The measurement of thermodynamic properties of chemical or biological reactions has often been confined to experimental means, which produce overall measurements of the properties being investigated but are usually susceptible to the pitfall of being too general. Among the thermodynamic properties of interest, reaction rates hold the greatest significance, as they play a critical role in reaction processes where speed is of the essence, especially when fast association may enhance the binding affinity of the reacting molecules. Association reactions with high affinities often involve the formation of an intermediate state, which can be demonstrated by a hyperbolic reaction curve but whose low abundance in the reaction mixture often precludes experimental measurement. We therefore resorted to computational methods using predefined reaction models that track the intermediate state as the reaction progresses. Here, we present a novel method called AKPE (ANN-Dependent Kinetic Parameter Extraction), whose goal is to investigate the association/dissociation rate constants and the concentration dynamics of lowly populated (intermediate) states in the reaction landscape. To reach this goal, we simulated the chemical or biological reactions as systems of differential equations, employed artificial neural networks (ANNs) to model the experimentally measured data, and utilized the Particle Swarm Optimization (PSO) algorithm to obtain globally optimal parameters in both the simulation and the data fitting. In the Results section, we successfully model a protein association reaction using AKPE, obtain the kinetic rate constants of the reaction, and construct a full concentration-versus-reaction-time curve of the intermediate state. Furthermore, several validation methods indicate that the proposed method has strong robustness and accuracy.
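To make the pipeline concrete, a minimal Python sketch of the simulation-plus-PSO stage is given below. The two-step reaction scheme (A + B <-> I -> AB), the initial concentrations, the parameter bounds, and the PSO hyperparameters are all illustrative assumptions rather than the paper's model, and the ANN stage that AKPE uses to model the measured data is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-step association scheme (illustrative, not the paper's):
#   A + B <-> I -> AB, with rate constants k1 (on), k_1 (off), k2 (forward).
def reaction_odes(t, y, k1, k_1, k2):
    a, b, i, ab = y
    v_on = k1 * a * b       # association into the intermediate
    v_off = k_1 * i         # dissociation of the intermediate
    v_fwd = k2 * i          # conversion of the intermediate to product
    return [-v_on + v_off, -v_on + v_off, v_on - v_off - v_fwd, v_fwd]

def simulate(params, t_obs, y0=(1.0, 1.0, 0.0, 0.0)):
    sol = solve_ivp(reaction_odes, (t_obs[0], t_obs[-1]), y0,
                    t_eval=t_obs, args=tuple(params), method="LSODA")
    return sol.y[3]  # product concentration [AB](t); sol.y[2] is [I](t)

def pso_fit(t_obs, c_obs, n_particles=30, n_iters=200, seed=0):
    """Minimal PSO over (k1, k_1, k2); loss = squared residual to data."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.01, 10.0, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    loss = lambda p: np.sum((simulate(p, t_obs) - c_obs) ** 2)
    pbest = pos.copy()
    pbest_f = np.array([loss(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 3))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1e-4, 50.0)   # keep rate constants positive
        f = np.array([loss(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest  # fitted (k1, k_1, k2); re-simulating yields the [I](t) curve
```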
2. Saab SS, Shen D. Multidimensional Gains for Stochastic Approximation. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:1602-1615. [PMID: 31265420] [DOI: 10.1109/tnnls.2019.2920930]
Abstract
This paper deals with an iterative Jacobian-based recursion technique for the root-finding problem of a vector-valued function whose evaluations are contaminated by noise. Instead of a scalar step size, we use an iterate-dependent matrix gain to effectively weigh the different elements associated with the noisy observations. The analytical development of the matrix gain is built on an iteration-dependent linear function perturbed by additive zero-mean white noise, where the dimension of the function is M ≥ 1 and the dimension of the unknown variable is N ≥ 1. Necessary and sufficient conditions for M ≥ N algorithms are presented pertaining to algorithm stability and convergence of the estimate error covariance matrix. Two algorithms are proposed: one for the case where M ≥ N and a second for the antithesis (M < N). Both algorithms assume full knowledge of the Jacobian, and recursive procedures are proposed for generating the optimal iterate-dependent matrix gain, aiming for per-iteration minimization of the mean-square estimate error. We show that the proposed algorithm satisfies the presented conditions for stability and convergence of the covariance. In addition, the convergence rate of the estimation error covariance is shown to be inversely proportional to the number of iterations. For the antithesis (M < N), contraction of the error covariance is guaranteed; such underdetermined systems of equations can be helpful in training neural networks. Numerical examples are presented to illustrate the performance capabilities of the proposed multidimensional gain while considering nonlinear functions.
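The core recursion is easy to illustrate. The sketch below contrasts a classical scalar step size with a simple Jacobian-weighted matrix gain on a noisy linear root-finding problem with M = N; the gain (1/k) A^{-1} is a standard Newton-type choice used here only for illustration, not the paper's optimal per-iteration gain recursion.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])           # Jacobian of f(x) = A @ x - b (M = N)
b = np.array([1.0, -2.0, 0.5])
x_star = np.linalg.solve(A, b)            # true root of f

def noisy_f(x):
    # Function evaluation contaminated by additive zero-mean white noise
    return A @ x - b + 0.1 * rng.standard_normal(N)

def run(gain, iters=5000):
    x = np.zeros(N)
    for k in range(1, iters + 1):
        x = x - gain(k) @ noisy_f(x)      # iterate-dependent (matrix) gain
    return np.linalg.norm(x - x_star)

scalar_gain = lambda k: (1.0 / k) * np.eye(N)          # classical scalar step
matrix_gain = lambda k: (1.0 / k) * np.linalg.inv(A)   # Jacobian-weighted gain
print("scalar gain final error:", run(scalar_gain))
print("matrix gain final error:", run(matrix_gain))
```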
3. Lyapunov stability-Dynamic Back Propagation-based comparative study of different types of functional link neural networks for the identification of nonlinear systems. Soft Comput 2020. [DOI: 10.1007/s00500-019-04496-0]
4. Chen D, Hu F, Nian G, Yang T. Deep Residual Learning for Nonlinear Regression. Entropy 2020; 22:e22020193. [PMID: 33285968] [PMCID: PMC7516619] [DOI: 10.3390/e22020193]
Abstract
Deep learning plays a key role in the recent developments of machine learning. This paper develops a deep residual neural network (ResNet) for the regression of nonlinear functions, in which convolutional and pooling layers are replaced by fully connected layers in the residual block. To evaluate the new regression model, we train and test neural networks with different depths and widths on simulated data and find the optimal parameters. We perform multiple numerical tests of the optimal regression model on multiple simulated datasets, and the results show that the new regression model behaves well. Comparisons are also made between the optimal residual regression and other linear as well as nonlinear approximation techniques, such as lasso regression, decision trees, and support vector machines; the optimal residual regression model has better approximation capacity than the other models. Finally, the residual regression is applied to the prediction of a real-world relative humidity series. Our study indicates that the residual regression model is stable and applicable in practice.
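A minimal PyTorch sketch of such a fully connected residual regressor follows; the width, depth, and block layout here are illustrative guesses rather than the architecture tuned in the paper.

```python
import torch
import torch.nn as nn

class FCResidualBlock(nn.Module):
    """Residual block with fully connected layers in place of conv/pooling."""
    def __init__(self, width):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)   # identity skip connection

class ResNetRegressor(nn.Module):
    def __init__(self, in_dim=1, width=64, depth=4, out_dim=1):
        super().__init__()
        self.stem = nn.Linear(in_dim, width)
        self.blocks = nn.Sequential(*[FCResidualBlock(width) for _ in range(depth)])
        self.head = nn.Linear(width, out_dim)

    def forward(self, x):
        return self.head(self.blocks(torch.relu(self.stem(x))))

# Fit a nonlinear 1-D function on simulated data.
x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(2 * x) + 0.1 * torch.randn_like(x)
model = ResNetRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```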
Affiliation(s)
- Dongwei Chen: School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC 29641, USA
- Fei Hu (corresponding author): State Key Laboratory of Atmospheric Boundary Layer Physics and Atmospheric Chemistry, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, China; College of Earth Science, University of Chinese Academy of Sciences, Beijing 100049, China
- Guokui Nian (corresponding author): College of Earth Science, University of Chinese Academy of Sciences, Beijing 100049, China; State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, China; Forecast Weather (Suzhou) Technology Co., Ltd., Suzhou 215000, China
- Tiantian Yang: School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC 29641, USA
5. Majumdar A, Gupta M. Recurrent transform learning. Neural Netw 2019; 118:271-279. [PMID: 31326661] [DOI: 10.1016/j.neunet.2019.07.003]
Abstract
Recurrent neural networks (RNNs) model time series by feeding the representation from the previous time instant back as an input for the current instant, along with exogenous inputs. RNNs have two main shortcomings: 1. the problem of vanishing gradients when backpropagating through time, and 2. the inability to learn in an unsupervised manner. Variants like the long short-term memory (LSTM) network and gated recurrent units (GRUs) have partially circumvented the first issue; the second issue still remains. In this work we propose a new variant of the RNN based on the transform learning model, named recurrent transform learning (RTL). It can learn in an unsupervised, supervised, or semi-supervised fashion; it does not require backpropagation and hence does not suffer from the pitfalls of vanishing gradients. The proposed model is applied to a real-life example of short-term load forecasting, where we show that RTL improves over existing variants of the RNN as well as over a state-of-the-art sparse-coding-based technique in load forecasting.
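A rough unsupervised sketch of the idea follows: the sparse representation z_t is computed from the current input and the fed-back z_{t-1}, and the transform is refreshed in closed form instead of by backpropagation. The thresholding rule, the ridge-regularized update, and the omission of transform learning's log-determinant regularizer (which normally rules out degenerate transforms) are simplifying assumptions, not the authors' exact formulation.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def recurrent_transform_learning(X, n_hidden, lam=0.1, tau=0.05,
                                 n_iter=30, seed=0):
    """Sketch of an RTL-style model: z_t = soft_threshold(T @ [x_t; z_{t-1}]),
    alternating a sparse-coding pass through time with a closed-form ridge
    refresh of the transform T (no backpropagation through time)."""
    rng = np.random.default_rng(seed)
    d, T_len = X.shape
    T = rng.standard_normal((n_hidden, d + n_hidden)) / np.sqrt(d + n_hidden)
    Z = np.zeros((n_hidden, T_len))
    for _ in range(n_iter):
        z_prev = np.zeros(n_hidden)
        for t in range(T_len):                      # forward pass with feedback
            inp = np.concatenate([X[:, t], z_prev])
            Z[:, t] = soft_threshold(T @ inp, tau)
            z_prev = Z[:, t]
        Zprev = np.hstack([np.zeros((n_hidden, 1)), Z[:, :-1]])
        inputs = np.vstack([X, Zprev])              # columns are [x_t; z_{t-1}]
        # Ridge-regularized closed-form refresh of T: min ||T.inputs - Z||^2
        T = Z @ inputs.T @ np.linalg.inv(inputs @ inputs.T
                                         + lam * np.eye(d + n_hidden))
    return T, Z
```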
Affiliation(s)
- Angshul Majumdar: A 606, New Academic Building, Indraprastha Institute of Information Technology Delhi, Okhla Phase 3, New Delhi 110020, India
- Megha Gupta: A 606, New Academic Building, Indraprastha Institute of Information Technology Delhi, Okhla Phase 3, New Delhi 110020, India
6. Delay-dependent H∞ and generalized H2 filtering for stochastic neural networks with time-varying delay and noise disturbance. Neural Comput Appl 2013. [DOI: 10.1007/s00521-013-1531-7]
7. Chen Y, Zheng WX. Stability analysis of time-delay neural networks subject to stochastic perturbations. IEEE Transactions on Cybernetics 2013; 43:2122-2134. [PMID: 23757521] [DOI: 10.1109/tcyb.2013.2240451]
Abstract
This paper is concerned with the mean-square exponential stability of uncertain neural networks with time-varying delay and stochastic perturbation. Both linear and nonlinear stochastic perturbations are considered. The main features of this paper are twofold: 1) based on the generalized Finsler lemma, some improved delay-dependent stability criteria are established, which are more efficient than the existing ones in terms of less conservatism and lower computational complexity; and 2) when the nonlinear stochastic perturbation acting on the system satisfies a class of Lipschitz linear growth conditions, the restrictive condition P < δI (or similar ones) in the existing results can be relaxed under some assumptions. The usefulness of the proposed method is demonstrated by illustrative examples.
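Criteria of this kind are typically expressed as linear matrix inequalities (LMIs) and checked with a semidefinite solver. The sketch below verifies only a plain quadratic Lyapunov condition for a toy delay-free linear system; the paper's delay-dependent stochastic criteria involve considerably richer LMIs, so this illustrates the verification mechanics, not the criteria themselves.

```python
import numpy as np
import cvxpy as cp

# Toy linearized system dx/dt = A x: search for P > 0 with A^T P + P A < 0,
# i.e., a quadratic Lyapunov certificate of exponential stability.
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]     # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)          # pure feasibility SDP
prob.solve(solver=cp.SCS)
print("certificate found:", prob.status == cp.OPTIMAL)
```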
8. Chen Y, Zheng WX. Stability and L2 performance analysis of stochastic delayed neural networks. IEEE Transactions on Neural Networks 2011; 22:1662-1668. [PMID: 21843984] [DOI: 10.1109/tnn.2011.2163319]
Abstract
This brief focuses on the robust mean-square exponential stability and L2 performance analysis of a class of uncertain time-delay neural networks perturbed by both additive and multiplicative stochastic noises. New mean-square exponential stability and L2 performance criteria are developed based on the delay-partition Lyapunov-Krasovskii functional method and a generalized Finsler lemma that is applicable to stochastic systems. The analytical results are established without involving any model transformation, estimation of cross terms, additional free-weighting matrices, or tuning parameters. Numerical examples are presented to verify that the proposed approach is both less conservative and less computationally complex than the existing ones.
Affiliation(s)
- Yun Chen: School of Computing and Mathematics, University of Western Sydney, Penrith, NSW 2751, Australia
9. Dehuri S, Cho SB. A comprehensive survey on functional link neural networks and an adaptive PSO-BP learning for CFLNN. Neural Comput Appl 2009. [DOI: 10.1007/s00521-009-0288-5]
10. Turchetti C, Crippa P, Pirani M, Biagetti G. Representation of nonlinear random transformations by non-Gaussian stochastic neural networks. IEEE Transactions on Neural Networks 2008; 19:1033-1060. [PMID: 18541503] [DOI: 10.1109/tnn.2007.2000055]
Abstract
The learning capability of neural networks is equivalent to modeling physical events that occur in the real environment. Several early works demonstrated that neural networks belonging to some classes are universal approximators of deterministic input-output functions. Recent works extend the ability of neural networks to approximating random functions, using a class of networks named stochastic neural networks (SNNs). In the language of system theory, the approximation of both deterministic and stochastic functions falls within the identification of nonlinear memoryless systems. However, all the results presented so far are restricted to Gaussian stochastic processes (SPs), or to linear transformations that guarantee this property. This paper investigates the ability of stochastic neural networks to approximate nonlinear input-output random transformations, thus widening the range of applicability of these networks to nonlinear systems with memory. In particular, this study shows that networks belonging to a class named non-Gaussian stochastic approximate identity neural networks (SAINNs) are capable of approximating the solutions of large classes of nonlinear random ordinary differential transformations. The effectiveness of this approach is demonstrated and discussed through some application examples.
Affiliation(s)
- Claudio Turchetti: DEIT-Dipartimento di Elettronica, Intelligenza Artificiale e Telecomunicazioni, Università Politecnica delle Marche, I-60131 Ancona, Italy
11. Liu Y, Wang Z, Liu X. On global exponential stability of generalized stochastic neural networks with mixed time-delays. Neurocomputing 2006. [DOI: 10.1016/j.neucom.2006.01.031]