751
|
Brown M, Lightbody G, Irwin G. Nonlinear internal model control using local model networks. IEE Proc Control Theory Appl 1997. [DOI: 10.1049/ip-cta:19971541] [Citation(s) in RCA: 52] [Impact Index Per Article: 1.9]
|
752
|
Roy A, Govil S, Miranda R. A neural-network learning theory and a polynomial time RBF algorithm. IEEE Trans Neural Netw 1997; 8:1301-13. [DOI: 10.1109/72.641453] [Citation(s) in RCA: 59] [Impact Index Per Article: 2.1]
|
753
|
Stiles B, Sandberg I, Ghosh J. Complete memory structures for approximating nonlinear discrete-time mappings. IEEE Trans Neural Netw 1997; 8:1397-409. [DOI: 10.1109/72.641463] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3]
|
754
|
Karayiannis N, Mi G. Growing radial basis neural networks: merging supervised and unsupervised learning with network growth techniques. IEEE Trans Neural Netw 1997; 8:1492-506. [DOI: 10.1109/72.641471] [Citation(s) in RCA: 220] [Impact Index Per Article: 7.9]
|
755
|
Ikonomopoulos A, van der Hagen T. A novel signal validation method applied to a stochastic process. Ann Nucl Energy 1997. [DOI: 10.1016/s0306-4549(97)00023-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.4]
|
756
|
Kwok TY, Yeung DY. Objective functions for training new hidden units in constructive neural networks. IEEE Trans Neural Netw 1997; 8:1131-48. [DOI: 10.1109/72.623214] [Citation(s) in RCA: 152] [Impact Index Per Article: 5.4]
|
757
|
|
758
|
|
759
|
Abstract
We construct generalized translation networks to approximate uniformly a class of nonlinear, continuous functionals defined on L^p([-1,1]^s) for integer s ≥ 1 and 1 ≤ p < ∞, or on C([-1,1]^s). We obtain lower bounds on the possible order of approximation for such functionals in terms of any approximation process depending continuously on a given number of parameters. Our networks almost achieve this order of approximation in terms of the number of parameters (neurons) involved in the network. The training is simple and noniterative; in particular, we avoid any optimization such as that involved in the usual backpropagation.
Affiliation(s)
- H N Mhaskar
- Department of Mathematics, California State University, Los Angeles 90032, USA
|
760
|
Feng G. A new stable tracking control scheme for robotic manipulators. IEEE Trans Syst Man Cybern B Cybern 1997; 27:510-6. [PMID: 18255889 DOI: 10.1109/3477.584957] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.5]
Abstract
The paper considers tracking control of robots in joint space. A new control algorithm is proposed, based on the well-known computed-torque method together with a compensating controller realized by a switch-type structure and an RBF neural network. Stability of the closed-loop system and improved tracking performance are established using Lyapunov theory. Simulation results are provided to support the analysis.
Affiliation(s)
- G Feng
- Sch. of Electr. Eng., New South Wales Univ., Kensington, NSW
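As a rough illustration of the control structure summarized above (a computed-torque term from a nominal model plus a learned compensator), here is a sketch for a hypothetical one-link arm. The plant parameters, PD gains, reference trajectory, and the simple gradient adaptation law are all illustrative assumptions; the paper's actual compensator uses a switch-type structure with a Lyapunov-derived update.

```python
import numpy as np

# Hypothetical 1-link arm: I*qdd + b*qd + m*g*l*sin(q) = tau.
# The computed-torque term uses a nominal model; an RBF network
# (fixed centers, adapted output weights) absorbs the model mismatch.

I_hat, b_hat, mgl_hat = 1.0, 0.1, 2.0          # nominal model
I_tru, b_tru, mgl_tru = 1.2, 0.25, 2.4         # true plant (unknown)

centers = np.linspace(-np.pi, np.pi, 15)        # RBF centers over q
width = 0.5
w = np.zeros(15)                                # output weights, adapted online

def phi(q):
    return np.exp(-((q - centers) / width) ** 2)

dt, q, qd = 1e-3, 0.5, 0.0
kp, kv, gamma = 25.0, 10.0, 5.0                 # PD gains, adaptation rate
for k in range(20000):
    t = k * dt
    qr, qrd, qrdd = np.sin(t), np.cos(t), -np.sin(t)   # reference
    e, ed = qr - q, qrd - qd
    # computed torque with nominal model + RBF compensation
    tau = I_hat * (qrdd + kv * ed + kp * e) + b_hat * qd \
          + mgl_hat * np.sin(q) + w @ phi(q)
    # adaptation driven by a filtered tracking error (illustrative law)
    s = ed + 5.0 * e
    w += dt * gamma * s * phi(q)
    # integrate the true plant
    qdd = (tau - b_tru * qd - mgl_tru * np.sin(q)) / I_tru
    qd += dt * qdd
    q += dt * qd
print("final tracking error:", abs(np.sin(20000 * dt) - q))
```

The RBF term soaks up the mismatch between the nominal and true dynamics that fixed PD gains alone would leave as a persistent tracking error.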
|
761
|
Meir R, Zeevi AJ. Density estimation through convex combinations of densities: approximation and estimation bounds. Neural Netw 1997; 10:99-109. [PMID: 12662890 DOI: 10.1016/s0893-6080(96)00037-8] [Citation(s) in RCA: 36] [Impact Index Per Article: 1.3]
Abstract
We consider the problem of estimating a density function from a sequence of identically distributed observations x(i) taking values in X ⊂ R^d. The estimation procedure constructs a convex mixture of "basis" densities and estimates the parameters using the maximum likelihood method. Viewing the error as a combination of two terms, the approximation error, measuring the adequacy of the model, and the estimation error, resulting from the finiteness of the sample size, we derive upper bounds on the expected total error and thus obtain bounds on the rate of convergence. These results then allow us to derive explicit expressions relating the sample complexity and the model complexity.
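As a concrete instance of the estimation procedure described above, the following sketch fits a convex combination of Gaussian basis densities by maximum likelihood, using EM as the optimizer. The data, the number of components, and the Gaussian family are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# Data from an "unknown" density: a mixture of two Gaussians.
x = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(1, 1.0, 600)])

# Model: convex combination sum_k a_k * N(x; mu_k, s_k^2), fit by EM
# (a standard way to carry out the maximum-likelihood step).
K = 3
mu = rng.choice(x, K); s = np.ones(K); a = np.ones(K) / K

def normal(x, mu, s):
    return np.exp(-0.5 * ((x[:, None] - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(100):
    r = a * normal(x, mu, s)                 # E-step: responsibilities
    r /= r.sum(axis=1, keepdims=True)
    n = r.sum(axis=0)                        # M-step
    a = n / len(x)                           # mixture weights stay convex
    mu = (r * x[:, None]).sum(axis=0) / n
    s = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6

loglik = np.log((a * normal(x, mu, s)).sum(axis=1)).mean()
print("avg log-likelihood:", loglik, "mixture weights:", a)
```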
|
762
|
Downs J, Harrison RF, Kennedy RL, Cross SS. Application of the fuzzy ARTMAP neural network model to medical pattern classification tasks. Artif Intell Med 1996; 8:403-28. [PMID: 8870968 DOI: 10.1016/0933-3657(95)00044-5] [Citation(s) in RCA: 45] [Impact Index Per Article: 1.6]
Abstract
This paper presents research into the application of the fuzzy ARTMAP neural network model to medical pattern classification tasks. A number of domains, both diagnostic and prognostic, are considered, each highlighting a particularly useful aspect of the model. The first domain, coronary care patient prognosis, demonstrates the ARTMAP voting strategy, which involves 'pooled' decision-making by a number of networks, each of which has learned a slightly different mapping of input features to pattern classes. The second domain, breast cancer diagnosis, demonstrates the model's symbolic rule extraction capabilities, which support the validation and explanation of a network's predictions. The final domain, diagnosis of acute myocardial infarction, demonstrates a novel category pruning technique that allows the performance of a trained network to be altered so as to favour predictions of one class over another (e.g., trading sensitivity for specificity or vice versa). It also introduces a 'cascaded' variant of the voting strategy intended to identify a subset of cases that the network has a very high certainty of classifying correctly.
Affiliation(s)
- J Downs
- Department of Automatic Control and Systems Engineering, University of Sheffield, UK
|
763
|
Abstract
N-tuple neural networks (NTNNs) have been successfully applied to both pattern recognition and function approximation tasks. Their main advantages include a single-layer structure, the capability of realizing highly non-linear mappings, and simplicity of operation. In this work a modification of the basic network architecture is presented which allows it to operate as a non-parametric kernel regression estimator. This type of network is inherently capable of approximating complex probability density functions (pdfs) and, in the limiting sense, arbitrary deterministic function mappings. At the same time, the regression network features a powerful one-pass training procedure, and its learning is statistically consistent. The major advantage of utilizing the N-tuple architecture as a regression estimator is that the training set points are stored by the network implicitly rather than explicitly, so the operation speed remains constant, independent of the training set size. The network's performance can therefore be guaranteed in practical implementations.
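A minimal sketch of the idea of using an N-tuple architecture as a regression estimator: inputs are binarized, each tuple addresses a table cell, and training merely accumulates per-cell sums and counts, so training points are stored implicitly and prediction cost is independent of the training-set size. The thermometer code, tuple sizes, and table layout here are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)

BITS, T, N = 64, 20, 8                       # input bits, tuples, bits per tuple
taps = [rng.choice(BITS, N, replace=False) for _ in range(T)]
num = [dict() for _ in range(T)]             # cell -> running sum of y
den = [dict() for _ in range(T)]             # cell -> count

def code(x):                                  # thermometer code of x in [0, 1]
    return (np.arange(BITS) < x * BITS).astype(int)

def addr(bits, tap):
    return tuple(bits[tap])                   # tuple's view of the input

def train(x, y):
    b = code(x)
    for t in range(T):
        a = addr(b, taps[t])
        num[t][a] = num[t].get(a, 0.0) + y    # implicit storage: sums + counts
        den[t][a] = den[t].get(a, 0) + 1

def predict(x):                               # O(T) lookup, independent of data size
    b = code(x)
    s = c = 0.0
    for t in range(T):
        a = addr(b, taps[t])
        s += num[t].get(a, 0.0); c += den[t].get(a, 0)
    return s / c if c else 0.0

for xi in rng.random(2000):
    train(xi, np.sin(2 * np.pi * xi) + 0.1 * rng.normal())
print(predict(0.25), "vs true", np.sin(2 * np.pi * 0.25))
```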
|
764
|
|
765
|
|
766
|
Lewis F, Yesildirek A, Liu K. Multilayer neural-net robot controller with guaranteed tracking performance. IEEE Trans Neural Netw 1996; 7:388-99. [DOI: 10.1109/72.485674] [Citation(s) in RCA: 789] [Impact Index Per Article: 27.2]
|
767
|
Krzyzak A, Linder T, Lugosi G. Nonparametric estimation and classification using radial basis function nets and empirical risk minimization. IEEE Trans Neural Netw 1996; 7:475-87. [DOI: 10.1109/72.485681] [Citation(s) in RCA: 59] [Impact Index Per Article: 2.0]
|
768
|
Jagannathan S, Lewis F. Multilayer discrete-time neural-net controller with guaranteed performance. IEEE Trans Neural Netw 1996; 7:107-30. [DOI: 10.1109/72.478396] [Citation(s) in RCA: 139] [Impact Index Per Article: 4.8]
|
769
|
Gradient radial basis function networks for nonlinear and nonstationary time series prediction. IEEE Trans Neural Netw 1996; 7:190-4. [DOI: 10.1109/72.478403] [Citation(s) in RCA: 96] [Impact Index Per Article: 3.3]
|
770
|
Abstract
We prove that neural networks with a single hidden layer are capable of providing an optimal order of approximation for functions assumed to possess a given number of derivatives, if the activation function evaluated by each principal element satisfies certain technical conditions. Under these conditions, it is also possible to construct networks that provide a geometric order of approximation for analytic target functions. The permissible activation functions include the squashing function (1 + e^(-x))^(-1) as well as a variety of radial basis functions. Our proofs are constructive. The weights and thresholds of our networks are chosen independently of the target function; we give explicit formulas for the coefficients as simple, continuous, linear functionals of the target function.
Affiliation(s)
- H. N. Mhaskar
- Department of Mathematics, California State University, Los Angeles, CA 90032, USA
|
771
|
Abstract
A radial basis function (RBF) network is a two-layer neural network in which each hidden unit implements a kernel function. Each kernel is associated with an activation region of the input space, and its output is fed to an output unit. To find the parameters of a network with this structure, we consider two different statistical approaches. The first uses classical estimation in the learning stage and is based on the learning vector quantization algorithm and its second-order statistics extension. After presenting this approach, we introduce the median radial basis function (MRBF) algorithm, based on robust estimation of the hidden unit parameters: the marginal median is employed for kernel location estimation and the median of the absolute deviations for scale parameter estimation. A fast histogram-based implementation of the MRBF algorithm is provided. The theoretical performance of the two training algorithms is comparatively evaluated when estimating the network weights. The network is applied to pattern classification problems and to optical flow segmentation.
Affiliation(s)
- A G Bors
- Dept. of Inf., Thessaloniki Univ
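The robust estimators at the heart of the MRBF algorithm are easy to state in isolation. The sketch below contrasts the classical mean/standard-deviation estimates of a kernel's location and scale with the marginal median and the (rescaled) median of absolute deviations on contaminated data; the data and the contamination level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Samples assigned to one hidden unit's activation region,
# with 10% gross outliers mixed in.
clean = rng.normal(loc=[2.0, -1.0], scale=0.5, size=(180, 2))
outliers = rng.uniform(-10, 10, size=(20, 2))
x = np.vstack([clean, outliers])

mean_c = x.mean(axis=0)                       # classical location estimate
std_c = x.std(axis=0)                         # classical scale estimate

med = np.median(x, axis=0)                    # marginal median (location)
mad = np.median(np.abs(x - med), axis=0)      # median of absolute deviations
scale = mad / 0.6745                          # rescaled to match sigma for Gaussians

print("classical:", mean_c, std_c)            # dragged off by the outliers
print("robust   :", med, scale)               # close to (2, -1) and 0.5
```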
|
772
|
Annaswamy AM, Yu SH. θ-adaptive neural networks: a new approach to parameter estimation. IEEE Trans Neural Netw 1996; 7:907-18. [PMID: 18263486 DOI: 10.1109/72.508934] [Citation(s) in RCA: 24] [Impact Index Per Article: 0.8]
Abstract
A novel use of neural networks for parameter estimation in nonlinear systems is proposed. The approximating ability of the neural network is used to identify the relation between system variables and parameters of a dynamic system. Two algorithms are proposed: a block estimation method and a recursive estimation method. The block estimation method trains a neural network to approximate the mapping from the system response to the system parameters, which in turn is used to identify the parameters of the nonlinear system. In the second method, the neural network is used to determine a recursive algorithm for updating the parameter estimate. Both methods are useful for parameter estimation in systems where the structure of the nonlinearities present is unknown or where the parameters occur nonlinearly. Analytical conditions under which successful estimation can be carried out are presented, together with several illustrative examples verifying the behavior of the algorithms through simulations.
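A sketch of the block estimation idea: simulate the system for many known parameter values, then fit a map from the response back to the parameter that generated it. For brevity the "network" here is a random-feature expansion fitted by linear least squares rather than a trained neural network, and the plant, excitation, and parameter range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed plant for illustration: x_{k+1} = x_k + dt*(-theta * x_k^3 + u_k),
# where theta is the unknown (nonlinearly entering) parameter.
def simulate(theta, n=30, dt=0.1):
    x, out = 1.0, []
    for k in range(n):
        u = np.sin(0.3 * k)
        x = x + dt * (-theta * x ** 3 + u)
        out.append(x)
    return np.array(out)

# Block training set: (response, parameter) pairs for sampled theta values.
thetas = rng.uniform(0.5, 3.0, 500)
R = np.array([simulate(t) for t in thetas])      # responses, shape (500, 30)

# One-hidden-layer approximator: random tanh features + least squares.
W = rng.normal(size=(30, 100)); b = rng.normal(size=100)
H = np.tanh(R @ W + b)
coef, *_ = np.linalg.lstsq(H, thetas, rcond=None)

# Inference: run the real system once, read off the parameter estimate.
theta_true = 1.7
est = np.tanh(simulate(theta_true) @ W + b) @ coef
print("true", theta_true, "estimated", est)
```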
|
773
|
Choi JY, Van Landingham HF, Bingulac S. A constructive approach for nonlinear system identification using multilayer perceptrons. IEEE Trans Syst Man Cybern B Cybern 1996; 26:307-12. [PMID: 18263032 DOI: 10.1109/3477.485881] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.3]
Abstract
This paper combines a conventional method of multivariable system identification with a dynamic multi-layer perceptron (MLP) to achieve a constructive method of nonlinear system identification. The class of nonlinear systems is assumed to operate nominally around an equilibrium point in the neighborhood of which a linearized model exists to represent the system, although normal operation is not limited to the linear region. The result is an accurate discrete-time nonlinear model, extended from a MIMO linear model, which captures the nonlinear behavior of the system.
Affiliation(s)
- J Y Choi
- Bradley Dept. of Electr. Eng., Virginia Polytech. Inst. & State Univ., Blacksburg, VA
|
774
|
Cha I, Kassam SA. RBFN restoration of nonlinearly degraded images. IEEE Trans Image Process 1996; 5:964-75. [PMID: 18285184 DOI: 10.1109/83.503912] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.2]
Abstract
We investigate a technique for image restoration using nonlinear networks based on radial basis functions, built on the concept of training, or learning by examples. When trained properly, these networks are used as spatially invariant feedforward nonlinear filters that can restore images degraded by nonlinear degradation mechanisms. We examine a number of network structures, including the Gaussian radial basis function network (RBFN) and some extensions of it, as well as a number of training algorithms, including the stochastic gradient (SG) algorithm we have proposed earlier. We also propose a modified structure based on the Gaussian-mixture model and a learning algorithm for the modified network. Experimental results indicate that the radial basis function network and its extensions can be very useful in restoring images degraded by nonlinear distortion and noise.
Affiliation(s)
- I Cha
- Dept. of Electr. Eng., Pennsylvania Univ., Philadelphia, PA
|
775
|
Kwok TY, Yeung DY. Use of bias term in projection pursuit learning improves approximation and convergence properties. IEEE Trans Neural Netw 1996; 7:1168-83. [PMID: 18263512 DOI: 10.1109/72.536312] [Citation(s) in RCA: 20] [Impact Index Per Article: 0.7]
Abstract
In a regression problem, one is given a multidimensional random vector X, whose components are called predictor variables, and a random variable Y, called the response. A regression surface describes the general relationship between X and Y. A nonparametric regression technique that has been successfully applied to high-dimensional data is projection pursuit regression (PPR), in which the regression surface is approximated by a sum of empirically determined univariate functions of linear combinations of the predictors. Projection pursuit learning (PPL) formulates PPR as a two-layer feedforward neural network. The smoothers in PPR are nonparametric, whereas those in PPL are based on Hermite functions of some predefined highest order R. We demonstrate that PPL networks in their original form do not have the universal approximation property for any finite R, and thus cannot converge to the desired function even with an arbitrarily large number of hidden units. By including a bias term in each linear projection of the predictor variables, however, PPL networks regain this capability, independent of the exact choice of R. It is shown experimentally that this modification increases the rate of convergence with respect to the number of hidden units, improves the generalization performance, and makes the networks less sensitive to the setting of R. Finally, we apply PPL to chaotic time series prediction and obtain superior results compared with the cascade-correlation architecture.
Affiliation(s)
- T Y Kwok
- Dept. of Comput. Sci., Hong Kong Univ. of Sci. and Technol., Kowloon
|
776
|
Heiss M. Error-minimizing dead zone for basis function networks. IEEE Trans Neural Netw 1996; 7:1503-6. [PMID: 18263544 DOI: 10.1109/72.548178] [Citation(s) in RCA: 18] [Impact Index Per Article: 0.6]
Abstract
Incorporating a dead zone in the error signal of basis function networks prevents overtraining and guarantees convergence of the normalized least mean square (LMS) algorithm and related algorithms. A new, error-minimizing dead zone is presented that provides the smallest a posteriori error among all convergence-assuring dead zones. A general convergence proof is developed for LMS algorithms with dead zones, and the error-minimizing dead zone is derived from the resulting convergence condition. Its performance is compared with that of classical dead zones.
Affiliation(s)
- M Heiss
- Inst. fur Allgemeine Elektrotechnik Automobilelektronik, Tech. Univ. of Vienna
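A sketch of a normalized LMS update on RBF features with a classical fixed dead zone (not the paper's optimized, error-minimizing variant): the weights are left untouched whenever the error magnitude falls below the dead-zone radius, chosen at least as large as the noise bound, so bounded noise cannot drive the weights. All constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

centers = np.linspace(0, 1, 10); width = 0.15
def phi(x):                                   # RBF feature vector
    return np.exp(-((x - centers) / width) ** 2)

w = np.zeros(10)
delta = 0.1            # dead-zone radius, >= bound on the measurement noise
mu = 0.5               # normalized LMS step size
for _ in range(5000):
    x = rng.random()
    y = np.sin(2 * np.pi * x) + rng.uniform(-0.1, 0.1)   # bounded noise
    p = phi(x)
    e = y - w @ p
    if abs(e) > delta:                        # update only outside the dead zone
        w += mu * (e - np.sign(e) * delta) * p / (1e-8 + p @ p)

print("test error at x=0.3:", np.sin(2 * np.pi * 0.3) - w @ phi(0.3))
```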
|
777
|
Fast evolutionary learning of minimal radial basis function neural networks using a genetic algorithm. Evolutionary Computing (Lecture Notes in Computer Science). Springer, 1996. [DOI: 10.1007/bfb0032769] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.5]
|
778
|
|
779
|
|
780
|
Tan S, Hao J, Vandewalle J. Efficient identification of RBF neural net models for nonlinear discrete-time multivariable dynamical systems. Neurocomputing 1995. [DOI: 10.1016/0925-2312(95)00042-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.2]
|
781
|
Abstract
Several of the major classes of artificial neural network output functions are linear combinations of elements of approximately flat sets. This gives a tool for understanding the precision problem as well as providing a rationale for mixing types of networks. Approximate flatness also helps explain the power of artificial neural network techniques relative to series regressions: series regressions take linear combinations of flat sets, while neural networks take linear combinations of the much larger class of approximately flat sets.
|
782
|
Improving the approximation and convergence capabilities of projection pursuit learning. Neural Process Lett 1995. [DOI: 10.1007/bf02311575] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
|
783
|
Abstract
Subset selection and regularization are two well-known techniques that can improve the generalization performance of nonparametric linear regression estimators, such as radial basis function networks. This paper examines regularized forward selection (RFS), a combination of forward subset selection and zero-order regularization. An efficient implementation of RFS, into which either delete-1 or generalized cross-validation can be incorporated, and a reestimation formula for the regularization parameter are also discussed. Simulation studies demonstrate improved generalization performance due to regularization in the forward selection of radial basis function centers.
Affiliation(s)
- Mark J. L. Orr
- Centre for Cognitive Science, University of Edinburgh, 2, Buccleuch Place, Edinburgh EH8 9LW, UK
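A simplified sketch of regularized forward selection: candidate RBF centers are added greedily, each candidate scored by refitting ridge-regularized weights and evaluating generalized cross-validation (GCV). The fixed number of selection steps, kernel width, and regularization constant are illustrative assumptions; an efficient implementation would update matrix factorizations incrementally rather than refit from scratch.

```python
import numpy as np

rng = np.random.default_rng(5)

x = rng.random(100)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=100)
width, lam = 0.2, 1e-3                       # kernel width, ridge penalty

def col(c):                                   # basis column for center c
    return np.exp(-((x - c) / width) ** 2)

selected, Phi = [], np.empty((100, 0))
for _ in range(15):                           # fixed step count for brevity
    best = None
    for c in x:                               # candidate centers = data points
        if c in selected:
            continue
        P = np.column_stack([Phi, col(c)])
        A = P.T @ P + lam * np.eye(P.shape[1])     # zero-order regularization
        w = np.linalg.solve(A, P.T @ y)
        tr_H = np.trace(P @ np.linalg.solve(A, P.T))
        r = y - P @ w
        gcv = 100 * (r @ r) / (100 - tr_H) ** 2    # GCV score, n = 100
        if best is None or gcv < best[0]:
            best = (gcv, c, P)
    _, c, Phi = best                          # commit the best candidate
    selected.append(c)
print("chose", len(selected), "centers; final GCV:", best[0])
```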
|
784
|
Bianchini M, Frasconi P, Gori M. Learning without local minima in radial basis function networks. IEEE Trans Neural Netw 1995; 6:749-56. [DOI: 10.1109/72.377979] [Citation(s) in RCA: 146] [Impact Index Per Article: 4.9]
|
785
|
|
786
|
Lewis F, Liu K, Yesildirek A. Neural net robot controller with guaranteed tracking performance. IEEE Trans Neural Netw 1995; 6:703-15. [DOI: 10.1109/72.377975] [Citation(s) in RCA: 469] [Impact Index Per Article: 15.6]
|
787
|
Graña M, D'Anjou A, Gonzalez A, Albizuri F, Cottrell M. Competitive stochastic neural networks for vector quantization of images. Neurocomputing 1995. [DOI: 10.1016/0925-2312(94)00072-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.0]
|
788
|
|
789
|
|
790
|
Jenison RL, Fissell K. A comparison of the von Mises and Gaussian basis functions for approximating spherical acoustic scatter. IEEE Trans Neural Netw 1995; 6:1284-7. [PMID: 18263419 DOI: 10.1109/72.410375] [Citation(s) in RCA: 17] [Impact Index Per Article: 0.6]
Abstract
This paper compares the approximation accuracy of two basis functions that share a common radial basis function (RBF) network architecture used to approximate a known function on the unit sphere. The basis functions considered are a new spherical basis function, the von Mises function, and the now well-known Gaussian basis function. Gradient descent learning rules were applied to optimize (learn) the solution for both approximating basis functions. A benchmark approximation problem, the mathematical expression for the scattering of an acoustic wave striking a rigid sphere, was used to compare the performance of the two types of basis functions.
Affiliation(s)
- R L Jenison
- Dept. of Psychol., Wisconsin Univ., Madison, WI
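The contrast between the two basis functions can be shown on the circle, a one-dimensional stand-in for the sphere used in the paper's benchmark: the von Mises kernel exp(κ(cos(θ − c) − 1)) is periodic and so respects the geometry of the domain, while the Gaussian kernel does not. The data, widths, and target function are illustrative assumptions, and the weights are fitted by least squares rather than the paper's gradient descent.

```python
import numpy as np

rng = np.random.default_rng(6)

theta = rng.uniform(0, 2 * np.pi, 200)                 # samples on the circle
y = np.cos(3 * theta) + 0.05 * rng.normal(size=200)    # noisy periodic target
centers = np.linspace(0, 2 * np.pi, 12, endpoint=False)

def fit_residual(design):
    Phi = design(theta)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # least-squares weights
    t = np.linspace(0, 2 * np.pi, 400)
    return design(t) @ w - np.cos(3 * t)               # error on a test grid

vm = lambda t: np.exp(4.0 * (np.cos(t[:, None] - centers) - 1))  # von Mises
ga = lambda t: np.exp(-((t[:, None] - centers) / 0.6) ** 2)      # Gaussian

print("von Mises RMSE:", np.sqrt((fit_residual(vm) ** 2).mean()))
print("Gaussian  RMSE:", np.sqrt((fit_residual(ga) ** 2).mean()))
```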
|
791
|
Bahrami M. Issues on representational capabilities of artificial neural networks and their implementation. Int J Intell Syst 1995. [DOI: 10.1002/int.4550100604] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
|
792
|
Kavli T, Weyer E. On ASMOD — An Algorithm for Empirical Modelling using Spline Functions. In: Neural Network Engineering in Dynamic Control Systems. Springer, 1995. [DOI: 10.1007/978-1-4471-3066-6_5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.0]
|
793
|
|
794
|
Gorinevsky D. On the persistency of excitation in radial basis function network identification of nonlinear systems. IEEE Trans Neural Netw 1995; 6:1237-44. [DOI: 10.1109/72.410365] [Citation(s) in RCA: 133] [Impact Index Per Article: 4.4]
|
795
|
Quality assurance and increased efficiency in medical projects with neural networks by using a structured development method for feedforward neural networks (SENN). Artif Intell Med 1995. [DOI: 10.1007/3-540-60025-6_150] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
|
796
|
A neural network-based proportional integral derivative controller. Neural Comput Appl 1994. [DOI: 10.1007/bf01415009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
|
797
|
Abstract
In the mid-1980s, widespread interest in research into artificial neural networks re-emerged following a period of reduced research funding. The much wider availability and increased power of computing systems, together with new areas of research, are expanding the range of potential applications, largely because this methodology is credited with the potential to describe the characteristics of extremely complex systems accurately. This article examines the contribution of various network methodologies to bioprocess modelling, control and pattern recognition. Industrial processes can benefit from the application of feedforward networks with sigmoidal activation functions, radial basis function networks and autoassociative networks. The contribution that neural networks can make to biochemical and microbiological scientific research is also reviewed briefly.
Affiliation(s)
- G Montague
- Department of Chemical and Process Engineering, University of Newcastle, Newcastle upon Tyne, UK
|
798
|
Truyen B, Langloh N, Cornelis J. An adiabatic neural network for RBF approximation. Neural Comput Appl 1994. [DOI: 10.1007/bf01414351] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.0]
|
799
|
Gorinevsky D, Connolly TH. Comparison of some neural network and scattered data approximations: the inverse manipulator kinematics example. Neural Comput 1994. [DOI: 10.1162/neco.1994.6.3.521] [Citation(s) in RCA: 40] [Impact Index Per Article: 1.3]
Abstract
This paper compares the application of five different methods for approximating the inverse kinematics of a manipulator arm from a number of joint angle/Cartesian coordinate training pairs. The first method is a standard feedforward neural network with error backpropagation learning. The next two methods are derived from an extended Kohonen map algorithm, which we combine with Shepard interpolation for the forward computation; we compare the method of Ritter et al. for learning the extended Kohonen map with our own scheme based on gradient descent optimization. We also study three scattered data approximation algorithms, including two variants of the radial basis function (RBF) method: Hardy's multiquadrics and Gaussian RBFs. We further develop our own local polynomial fit method, which can be considered a modification of McLain's method. We propose extensions to the considered scattered data approximation algorithms to make them suitable for vector-valued multivariable functions, such as the mapping of Cartesian coordinates into joint angle coordinates.
Affiliation(s)
- Dimitry Gorinevsky
- Lehrstuhl B für Mechanik, Technische Universität München, D-80333 Munich 2, Germany
- Thomas H. Connolly
- Lehrstuhl B für Mechanik, Technische Universität München, D-80333 Munich 2, Germany
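Two of the compared scattered-data approximators, Hardy's multiquadrics and Gaussian RBF interpolation, in minimal form on a toy two-input map standing in for one component of the inverse kinematics; the target function, shape parameter, and diagonal jitter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

X = rng.uniform(-1, 1, size=(60, 2))                 # scattered training sites
f = lambda p: np.arctan2(p[:, 1], p[:, 0] + 1.5)     # toy target map
y = f(X)

def kernel_matrix(A, B, kind, c=0.4):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    if kind == "multiquadric":                        # Hardy's multiquadric
        return np.sqrt(d2 + c * c)
    return np.exp(-d2 / (2 * c * c))                  # Gaussian RBF

Xt = rng.uniform(-1, 1, size=(500, 2))                # test sites
for kind in ("multiquadric", "gaussian"):
    # interpolation weights; tiny jitter keeps the solve well conditioned
    K = kernel_matrix(X, X, kind) + 1e-8 * np.eye(len(X))
    w = np.linalg.solve(K, y)
    err = kernel_matrix(Xt, X, kind) @ w - f(Xt)
    print(kind, "RMSE:", np.sqrt((err ** 2).mean()))
```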
|
800
|
Abstract
Feedforward neural networks with a single hidden layer of normalized Gaussian units are studied. It is proved that such neural networks are capable of universal approximation in a satisfactory sense. A hybrid learning rule in the style of Moody and Darken, combining unsupervised learning of the hidden units with supervised learning of the output units, is then considered. Using the method of ordinary differential equations for adaptive algorithms (the ODE method), it is shown that the asymptotic properties of the learning rule may be studied in terms of an autonomous cascade of dynamical systems. Recent results of Hirsch on cascades are used to show the asymptotic stability of the learning rule.
Affiliation(s)
- Michel Benaim
- Department of Mathematics, University of California at Berkeley, Berkeley, CA 94720 USA
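A sketch of the hybrid rule's two stages with normalized Gaussian units: unsupervised placement of the centers (here batch k-means, standing in for online competitive learning) followed by supervised fitting of the output weights (here batch least squares, standing in for the LMS updates of Moody and Darken). Data, widths, and unit count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

x = rng.random((300, 1))
y = np.sin(2 * np.pi * x[:, 0]) + 0.1 * rng.normal(size=300)

K, width = 10, 0.1
centers = x[rng.choice(len(x), K, replace=False)]
for _ in range(20):                                   # stage 1: k-means centers
    k = ((x[:, None, :] - centers) ** 2).sum(-1).argmin(1)
    centers = np.array([x[k == j].mean(0) if (k == j).any() else centers[j]
                        for j in range(K)])

def units(x):
    g = np.exp(-((x[:, None, :] - centers) ** 2).sum(-1) / (2 * width ** 2))
    return g / g.sum(axis=1, keepdims=True)           # normalized Gaussian units

w, *_ = np.linalg.lstsq(units(x), y, rcond=None)      # stage 2: output weights
xt = np.linspace(0, 1, 200)[:, None]
rmse = np.sqrt(((units(xt) @ w - np.sin(2 * np.pi * xt[:, 0])) ** 2).mean())
print("test RMSE:", rmse)
```

Normalization makes the hidden layer a partition of unity, so the output is a smooth blend of local constants, which is what makes the two-stage decomposition work well in practice.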
|