• Reference Citation Analysis
For: Sum J, Leung CS, Ho K. Convergence analyses on on-line weight noise injection-based training algorithms for MLPs. IEEE Trans Neural Netw Learn Syst 2012;23:1827-1840. [PMID: 24808076] [DOI: 10.1109/tnnls.2012.2210243] [Cited in RCA: 10]
Cited by the following articles:
1. Sum J, Leung CS. Regularization Effect of Random Node Fault/Noise on Gradient Descent Learning Algorithm. IEEE Trans Neural Netw Learn Syst 2023;34:2619-2632. [PMID: 34487503] [DOI: 10.1109/tnnls.2021.3107051] [Cited in RCA: 0]
2. Wong HT, Leung CS, Kwong S. Convergence analysis on the deterministic mini-batch learning algorithm for noise resilient radial basis function networks. Int J Mach Learn Cyb 2022. [DOI: 10.1007/s13042-022-01550-6] [Cited in RCA: 0]
3. Wang X, Wang J, Zhang K, Lin F, Chang Q. Convergence and objective functions of noise-injected multilayer perceptrons with hidden multipliers. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.03.119] [Cited in RCA: 6]
4. Zhang H, Zhang Y, Zhu S, Xu D. Deterministic convergence of complex mini-batch gradient learning algorithm for fully complex-valued neural networks. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.04.114] [Cited in RCA: 3]
5. Sum J, Leung CS, Ho K. A Limitation of Gradient Descent Learning. IEEE Trans Neural Netw Learn Syst 2020;31:2227-2232. [PMID: 31398136] [DOI: 10.1109/tnnls.2019.2927689] [Cited in RCA: 4]
6. Xiao S, Zhang Y, Zhang B. ℓ1-gain filter design of discrete-time positive neural networks with mixed delays. Neural Netw 2020;122:152-162. [PMID: 31683143] [DOI: 10.1016/j.neunet.2019.10.004] [Cited in RCA: 3]
7. Wang J, Chang Q, Chang Q, Liu Y, Pal NR. Weight Noise Injection-Based MLPs With Group Lasso Penalty: Asymptotic Convergence and Application to Node Pruning. IEEE Trans Cybern 2019;49:4346-4364. [PMID: 30530381] [DOI: 10.1109/tcyb.2018.2864142] [Cited in RCA: 6]
8. Sum J, Leung CS. Learning Algorithm for Boltzmann Machines With Additive Weight and Bias Noise. IEEE Trans Neural Netw Learn Syst 2019;30:3200-3204. [PMID: 30668482] [DOI: 10.1109/tnnls.2018.2889072] [Cited in RCA: 2]
9. Wang J, Xu C, Yang X, Zurada JM. A Novel Pruning Algorithm for Smoothing Feedforward Neural Networks Based on Group Lasso Method. IEEE Trans Neural Netw Learn Syst 2018;29:2012-2024. [PMID: 28961129] [DOI: 10.1109/tnnls.2017.2748585] [Cited in RCA: 21]
10. Convergence analysis of BP neural networks via sparse response regularization. Appl Soft Comput 2017. [DOI: 10.1016/j.asoc.2017.07.059] [Cited in RCA: 19]
11. Wang J, Cai Q, Chang Q, Zurada JM. Convergence analyses on sparse feedforward neural networks via group lasso regularization. Inf Sci (N Y) 2017. [DOI: 10.1016/j.ins.2016.11.020] [Cited in RCA: 21]
12. Yeung DS, Li JC, Ng WWY, Chan PPK. MLPNN Training via a Multiobjective Optimization of Training Error and Stochastic Sensitivity. IEEE Trans Neural Netw Learn Syst 2016;27:978-992. [PMID: 26054075] [DOI: 10.1109/tnnls.2015.2431251] [Cited in RCA: 19]
13. Zhang H, Zhang Y, Xu D, Liu X. Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks. Cogn Neurodyn 2015;9:331-340. [PMID: 25972981] [DOI: 10.1007/s11571-014-9323-z] [Cited in RCA: 4]
14. Zhang H, Tang Y, Liu X. Batch gradient training method with smoothing ℓ0 regularization for feedforward neural networks. Neural Comput Appl 2014. [DOI: 10.1007/s00521-014-1730-x] [Cited in RCA: 13]
15. Hierarchical extreme learning machine for feedforward neural network. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2013.01.057] [Cited in RCA: 24]
© 2004-2024 Baishideng Publishing Group Inc. All rights reserved. 7041 Koll Center Parkway, Suite 160, Pleasanton, CA 94566, USA