1. Chen J, Pan Y, Zhang Y, Li S, Tan N. Inverse-free zeroing neural network for time-variant nonlinear optimization with manipulator applications. Neural Netw 2024; 178:106462. PMID: 38901094. DOI: 10.1016/j.neunet.2024.106462.
Abstract
In this paper, the problem of time-variant optimization subject to a nonlinear equation constraint is studied. To solve this challenging problem, methods based on neural networks, such as the zeroing neural network and the gradient neural network, are commonly adopted owing to their strength in handling nonlinear problems. However, the traditional zeroing neural network algorithm requires computing a matrix inverse during the solving process, which is a complicated and time-consuming operation. Although the gradient neural network algorithm does not require computing the matrix inverse, its accuracy is not high enough. Therefore, a novel inverse-free zeroing neural network algorithm is proposed in this paper. The proposed algorithm avoids not only matrix inversion but also matrix multiplication, greatly reducing the computational complexity. In addition, a detailed theoretical analysis of the convergence of the proposed algorithm is provided to guarantee its capability in solving time-variant optimization problems. Numerical simulations and comparative experiments with traditional zeroing neural network and gradient neural network algorithms substantiate the accuracy and superiority of the novel inverse-free zeroing neural network algorithm. To further validate its performance in practical applications, path-tracking tasks are conducted on three manipulators (i.e., Universal Robot 5, Franka Emika Panda, and Kinova JACO2 manipulators), and the results verify the applicability of the proposed algorithm.
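As context for the inverse dependence criticized in this abstract, the classic zeroing-neural-dynamics (ZND) design for a time-variant linear system A(t)x(t) = b(t) defines the error e(t) = A(t)x(t) - b(t) and imposes de/dt = -gamma*e, which forces A(t)^{-1} into the state update. The sketch below (a toy system and names of our choosing, not the paper's inverse-free algorithm) makes that dependence explicit:

```python
import numpy as np

# Classic ZND for A(t) x(t) = b(t): impose de/dt = -gamma * e with
# e = A x - b. Expanding gives A x' = -A' x + b' - gamma * (A x - b),
# so every step needs the action of A(t)^{-1} -- the cost the paper removes.

gamma, h, T = 10.0, 1e-3, 2.0

def A(t):  # a simple invertible time-varying coefficient matrix
    return np.array([[3 + np.sin(t), 0.5], [0.5, 3 + np.cos(t)]])

def b(t):
    return np.array([np.sin(2 * t), np.cos(2 * t)])

def dot(f, t, eps=1e-6):  # numerical time derivative of a matrix/vector map
    return (f(t + eps) - f(t - eps)) / (2 * eps)

x = np.zeros(2)
for k in range(int(T / h)):
    t = k * h
    e = A(t) @ x - b(t)
    # the linear solve below plays the role of the matrix inverse
    x_dot = np.linalg.solve(A(t), -dot(A, t) @ x + dot(b, t) - gamma * e)
    x = x + h * x_dot  # Euler integration of the continuous dynamics
print("residual norm:", np.linalg.norm(A(T) @ x - b(T)))
```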
Affiliation(s)
- Jielong Chen: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China.
- Yan Pan: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China.
- Yunong Zhang: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China.
- Shuai Li: Faculty of Information Technology and Electrical Engineering, University of Oulu, Oulu 90570, Finland; VTT Technical Research Centre of Finland, Oulu, Finland.
- Ning Tan: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China.
2. Li H, Wang J, Zhang N, Zhang W. Binary matrix factorization via collaborative neurodynamic optimization. Neural Netw 2024; 176:106348. PMID: 38735099. DOI: 10.1016/j.neunet.2024.106348.
Abstract
Binary matrix factorization is an important tool for dimensionality reduction of high-dimensional datasets with binary attributes and has been successfully applied in numerous areas. This paper presents a collaborative neurodynamic optimization approach to binary matrix factorization based on the original combinatorial optimization problem formulation and on quadratic unconstrained binary optimization reformulations. The proposed approach employs multiple discrete Hopfield networks operating concurrently in search of local optima. In addition, a particle swarm optimization rule is used to reinitialize neuronal states iteratively to escape from local minima toward better ones. Experimental results on eight benchmark datasets demonstrate the superior performance of the proposed approach against six baseline algorithms in terms of factorization error. Additionally, the viability of the proposed approach for pattern discovery is demonstrated on three datasets.
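The collaborative-neurodynamic loop described here, concurrent local searchers plus swarm-guided restarts, can be caricatured in a few lines. In the sketch below, a greedy bit-flip descent stands in for each discrete Hopfield network, and the restart rule biases fresh states toward the best factors found so far; all names, sizes, and parameter values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = (rng.random((20, 15)) < 0.3).astype(int)   # binary data matrix
m, n, r, pop, rounds = 20, 15, 3, 6, 10        # rank r, 6 searchers, 10 restarts

def err(U, V):                                 # Hamming factorization error
    return np.sum(X != ((U @ V) > 0).astype(int))

def local_search(U, V):
    # greedy bit-flip descent: a stand-in for one discrete Hopfield network
    improved = True
    while improved:
        improved = False
        for M in (U, V):
            for idx in np.ndindex(*M.shape):
                before = err(U, V)
                M[idx] ^= 1                    # try flipping one bit
                if err(U, V) < before:
                    improved = True
                else:
                    M[idx] ^= 1                # revert: no improvement

swarm = [(rng.integers(0, 2, (m, r)), rng.integers(0, 2, (r, n)))
         for _ in range(pop)]
gU = gV = None
gE = np.inf
for _ in range(rounds):
    for U, V in swarm:
        local_search(U, V)
        if err(U, V) < gE:
            gU, gV, gE = U.copy(), V.copy(), err(U, V)
    # swarm-style reinitialization: mix random bits with the global best
    swarm = [(np.where(rng.random((m, r)) < 0.7, gU, rng.integers(0, 2, (m, r))),
              np.where(rng.random((r, n)) < 0.7, gV, rng.integers(0, 2, (r, n))))
             for _ in range(pop)]
print("best Hamming error:", gE)
```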
Affiliation(s)
- Hongzong Li: Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong.
- Jun Wang: Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong; School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.
- Nian Zhang: Department of Electrical & Computer Engineering, University of the District of Columbia, Washington, DC, USA.
- Wei Zhang: Chongqing Engineering Research Center of Internet of Things and Intelligent Control Technology, Chongqing Three Gorges University, Chongqing, China.
3. Tuan TA, Dung NV, Thang TN. A Hyper-Transformer model for Controllable Pareto Front Learning with Split Feasibility Constraints. Neural Netw 2024; 179:106571. PMID: 39121789. DOI: 10.1016/j.neunet.2024.106571.
Abstract
Controllable Pareto front learning (CPFL) approximates the Pareto-optimal solution set and then locates a non-dominated point with respect to a given reference vector. In practice, however, the decision-maker's objectives are often restricted to a constraint region, so we train only on that region rather than on the entire decision space. Controllable Pareto front learning with split feasibility constraints (SFC) extends CPFL to split multi-objective optimization problems whose solutions must satisfy such constraints. Previous work on CPFL used a hypernetwork composed of multi-layer perceptron blocks (Hyper-MLP). Owing to their distinctive advantages, transformers can be more effective than earlier architectures on numerous modern deep learning tasks. We therefore develop a hyper-transformer (Hyper-Trans) model for CPFL with SFC. Using the theory of universal approximation for sequence-to-sequence functions, we show that the Hyper-Trans model achieves smaller MED errors in computational experiments than the Hyper-MLP model.
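A minimal illustration of the hypernetwork idea underlying both Hyper-MLP and Hyper-Trans: one network maps a preference vector to a candidate solution and is trained with a scalarized loss, so a single model covers the whole front. The toy problem, architecture, and training loop below are our own assumptions (a small MLP hypernetwork, linear scalarization, and no SFC handling):

```python
import torch
import torch.nn as nn

# Toy bi-objective problem: f1(x) = x^2, f2(x) = (x - 1)^2 with scalar x.
def objectives(x):
    return torch.stack([x[:, 0] ** 2, (x[:, 0] - 1.0) ** 2], dim=1)

hyper = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(hyper.parameters(), lr=1e-2)

for step in range(2000):
    r = torch.rand(128, 2)
    r = r / r.sum(dim=1, keepdim=True)           # preferences on the simplex
    x = hyper(r)                                 # hypernetwork: preference -> solution
    loss = (r * objectives(x)).sum(dim=1).mean() # linear scalarization
    opt.zero_grad(); loss.backward(); opt.step()

# Each preference should now map near its Pareto-optimal point x* = r2/(r1+r2).
print(hyper(torch.tensor([[0.5, 0.5]])))         # expected near x = 0.5
```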
Affiliation(s)
- Tran Anh Tuan: Faculty of Mathematics and Informatics, Hanoi University of Science and Technology; Center for Digital Technology and Economy (BK Fintech), Hanoi University of Science and Technology, Hanoi, Vietnam.
- Nguyen Viet Dung: Faculty of Mathematics and Informatics, Hanoi University of Science and Technology; Center for Digital Technology and Economy (BK Fintech), Hanoi University of Science and Technology, Hanoi, Vietnam.
- Tran Ngoc Thang: Faculty of Mathematics and Informatics, Hanoi University of Science and Technology; Center for Digital Technology and Economy (BK Fintech), Hanoi University of Science and Technology, Hanoi, Vietnam.
4. Liu J, Liao X, Dong JS. A Recurrent Neural Network Approach for Constrained Distributed Fuzzy Convex Optimization. IEEE Trans Neural Netw Learn Syst 2024; 35:9743-9757. PMID: 37022084. DOI: 10.1109/tnnls.2023.3236607.
Abstract
This article investigates a class of constrained distributed fuzzy convex optimization problems, in which the objective function is the sum of local fuzzy convex objective functions and the constraints include partial-order relations and closed convex sets. In an undirected, connected communication network, each node knows only its own objective function and constraints, and the local objective function and partial-order relation functions may be nonsmooth. To solve this problem, a recurrent neural network approach based on a differential-inclusion framework is proposed. The network model is constructed using the idea of a penalty function, eliminating the need to estimate penalty parameters in advance. Theoretical analysis proves that the state solution of the network enters the feasible region in finite time, remains there, and finally reaches consensus at an optimal solution of the distributed fuzzy optimization problem. Furthermore, the stability and global convergence of the network do not depend on the choice of the initial state. A numerical example and an intelligent-ship output power optimization problem illustrate the feasibility and effectiveness of the proposed approach.
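As a single-agent, smooth caricature of the penalty idea (not the paper's differential-inclusion network, which handles nonsmooth fuzzy objectives and consensus over a graph), one can integrate gradient-plus-penalty dynamics and watch the state approach the feasible region and stay near it:

```python
import numpy as np

# Penalty-based neurodynamics (smooth, single agent):
#   x' = -grad f(x) - sigma * max(0, g(x)) * grad g(x)
# for f(x) = (x0-2)^2 + (x1-2)^2 subject to g(x) = x0 + x1 - 2 <= 0.

sigma, h = 50.0, 1e-3
x = np.array([3.0, -1.0])                       # infeasible start
for _ in range(20000):
    grad_f = 2 * (x - 2.0)
    g = x[0] + x[1] - 2.0
    grad_pen = sigma * max(0.0, g) * np.array([1.0, 1.0])
    x = x - h * (grad_f + grad_pen)             # Euler step of the dynamics
print(x)  # approaches (1, 1), the constrained optimum, up to a small penalty bias
```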
5. Xiao L, Cao P, Wang Z, Liu S. A novel fixed-time error-monitoring neural network for solving dynamic quaternion-valued Sylvester equations. Neural Netw 2024; 170:494-505. PMID: 38039686. DOI: 10.1016/j.neunet.2023.11.058.
Abstract
This paper addresses the dynamic quaternion-valued Sylvester equation (DQSE) using the quaternion real representation and the neural network method. To transform the Sylvester equation in the quaternion field into an equivalent equation in the real field, three different real representation modes for the quaternion are adopted by considering the non-commutativity of quaternion multiplication. Based on the equivalent Sylvester equation in the real field, a novel recurrent neural network model with an integral design formula is proposed to solve the DQSE. The proposed model, referred to as the fixed-time error-monitoring neural network (FTEMNN), achieves fixed-time convergence through the action of a state-of-the-art nonlinear activation function. The fixed-time convergence of the FTEMNN model is theoretically analyzed. Two examples are presented to verify the performance of the FTEMNN model with a specific focus on fixed-time convergence. Furthermore, the chattering phenomenon of the FTEMNN model is discussed, and a saturation function scheme is designed. Finally, the practical value of the FTEMNN model is demonstrated through its application to image fusion denoising.
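For readers unfamiliar with real representation modes: a quaternion q = a + bi + cj + dk admits a real 4 x 4 left-multiplication matrix, and the non-commutativity mentioned above (ij = k but ji = -k) is precisely why several distinct modes exist. A small self-check of this standard fact (independent of the FTEMNN model itself):

```python
import numpy as np

def real_rep(q):
    """One standard real representation of q = (a, b, c, d): the quaternion
    product q1 * q2 becomes the real product real_rep(q1) @ vec(q2)."""
    a, b, c, d = q
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]])

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(real_rep(i) @ np.array(j))   # -> [0 0 0 1], i.e. i*j =  k
print(real_rep(j) @ np.array(i))   # -> [0 0 0 -1], i.e. j*i = -k
```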
Affiliation(s)
- Lin Xiao: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan 410081, China.
- Penglin Cao: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan 410081, China.
- Zidong Wang: Department of Computer Science, Brunel University London, Uxbridge, Middlesex, UB8 3PH, United Kingdom.
- Sai Liu: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan 410081, China.
6. Huang B, Liu Y, Jiang YL, Wang J. Two-timescale projection neural networks in collaborative neurodynamic approaches to global optimization and distributed optimization. Neural Netw 2024; 169:83-91. PMID: 37864998. DOI: 10.1016/j.neunet.2023.10.011.
Abstract
In this paper, we propose a two-timescale projection neural network (PNN) for solving optimization problems with nonconvex functions. We prove the convergence of the PNN with sufficiently different timescales to a local optimal solution. We develop a collaborative neurodynamic approach with multiple such PNNs to search for global optimal solutions. In addition, we develop a collaborative neurodynamic approach with multiple PNNs connected via a directed graph for distributed global optimization. We elaborate on four numerical examples to illustrate the characteristics of the approaches.
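For reference, the standard one-timescale projection neural network (PNN) on which such approaches build is x' = -x + P_Omega(x - alpha * grad f(x)); the paper's contribution lies in coupling two timescales and running multiple PNNs collaboratively. A minimal Euler simulation on a box-constrained quadratic (a toy example of our choosing):

```python
import numpy as np

# Standard PNN: x' = -x + P_Omega(x - alpha * grad f(x)),
# here for f(x) = ||x - c||^2 on the box Omega = [0, 1]^n.

alpha, h = 0.5, 1e-2
c = np.array([1.5, -0.3, 0.4])                  # unconstrained minimizer
proj = lambda z: np.clip(z, 0.0, 1.0)           # projection onto the box

x = np.array([0.5, 0.5, 0.5])
for _ in range(2000):
    grad = 2 * (x - c)
    x = x + h * (-x + proj(x - alpha * grad))   # Euler step of the PNN
print(x)                                        # -> approx. [1.0, 0.0, 0.4]
```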
Affiliation(s)
- Banghua Huang: School of Mathematical Sciences, Zhejiang Normal University, Jinhua, Zhejiang 321004, China.
- Yang Liu: School of Mathematical Sciences, Zhejiang Normal University, Jinhua, Zhejiang 321004, China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, Zhejiang 321004, China.
- Yun-Liang Jiang: School of Computer Science and Technology, Zhejiang Normal University, Jinhua, Zhejiang 321004, China; School of Information Engineering, Huzhou University, Huzhou, Zhejiang 313000, China.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.
7. Tuan TA, Hoang LP, Le DD, Thang TN. A framework for controllable Pareto front learning with completed scalarization functions and its applications. Neural Netw 2024; 169:257-273. PMID: 37913657. DOI: 10.1016/j.neunet.2023.10.029.
Abstract
Pareto front learning (PFL) was recently introduced as an efficient method for approximating the entire Pareto front, the set of all optimal solutions to a multi-objective optimization (MOO) problem. In previous work, the mapping between a preference vector and a Pareto-optimal solution remained ambiguous, rendering the results unreliable. This study establishes the convergence and completeness of solving MOO with pseudoconvex scalarization functions, and combines these results with a hypernetwork to offer a comprehensive framework for PFL, called controllable Pareto front learning. Extensive experiments demonstrate that our approach is highly accurate and significantly less computationally expensive than prior methods in terms of inference time.
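A concrete scalarization example helps fix ideas: the weighted Chebyshev function below maps a preference vector to a single objective whose minimizers are (weakly) Pareto optimal. This is one classical member of the family; the paper works with completed pseudoconvex scalarization functions more generally:

```python
import numpy as np

def chebyshev(f_vals, pref, ideal):
    """Weighted Chebyshev scalarization: max_i pref_i * (f_i - ideal_i).
    Minimizing it over x yields (weakly) Pareto-optimal points for any
    positive preference vector -- the property controllable PFL relies on."""
    return np.max(pref * (np.asarray(f_vals) - ideal))

# Toy bi-objective: f1 = x^2, f2 = (x - 1)^2, ideal point (0, 0).
xs = np.linspace(-0.5, 1.5, 2001)
pref = np.array([0.3, 0.7])
scores = [chebyshev([x**2, (x - 1)**2], pref, np.zeros(2)) for x in xs]
print("minimizer for preference", pref, "->", xs[int(np.argmin(scores))])
```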
Affiliation(s)
- Tran Anh Tuan: School of Applied Mathematics and Informatics, Hanoi University of Science and Technology, Ha Noi, Viet Nam.
- Long P Hoang: College of Engineering and Computer Science, VinUniversity, Ha Noi, Viet Nam.
- Dung D Le: College of Engineering and Computer Science, VinUniversity, Ha Noi, Viet Nam.
- Tran Ngoc Thang: School of Applied Mathematics and Informatics, Hanoi University of Science and Technology, Ha Noi, Viet Nam.
8. Wu D, Lisser A. Enhancing neurodynamic approach with physics-informed neural networks for solving non-smooth convex optimization problems. Neural Netw 2023; 168:419-430. PMID: 37804745. DOI: 10.1016/j.neunet.2023.08.014.
Abstract
This paper proposes a deep learning approach for solving non-smooth convex optimization problems (NCOPs), which have broad applications in computer science, engineering, and physics. Our approach combines neurodynamic optimization with physics-informed neural networks (PINNs) to provide an efficient and accurate solution. We first use neurodynamic optimization to formulate an initial value problem (IVP) involving a system of ordinary differential equations for the NCOP. We then introduce a modified PINN as an approximate state solution to the IVP. Finally, we develop a dedicated algorithm to train the model to solve the IVP and minimize the NCOP objective simultaneously. Unlike existing numerical integration methods, a key advantage of our approach is that it does not require computing a series of intermediate states to produce a prediction of the NCOP solution. Our experimental results show that this computational feature leads to fewer iterations while producing more accurate solutions. Furthermore, our approach is effective in finding feasible solutions that satisfy the NCOP constraints.
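The core training idea, penalizing the ODE residual of a trial solution that satisfies the initial condition by construction, can be sketched for a one-dimensional gradient-flow IVP. This is a smooth stand-in of our own; the paper's modified PINN and joint NCOP objective are more involved:

```python
import torch
import torch.nn as nn

# PINN sketch for the neurodynamic IVP x'(t) = -grad f(x(t)), x(0) = x0,
# with f(x) = 0.5 * (x - 1)^2. The trial form x(t) = x0 + t * net(t)
# enforces the initial condition exactly.

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x0 = torch.tensor(3.0)

for step in range(3000):
    t = 5.0 * torch.rand(256, 1, requires_grad=True)   # collocation points
    x = x0 + t * net(t)
    dxdt = torch.autograd.grad(x.sum(), t, create_graph=True)[0]
    residual = dxdt + (x - 1.0)                        # x' + grad f(x) = 0
    loss = (residual ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Exact solution: x(t) = 1 + (x0 - 1) * exp(-t); compare at t = 1.
t1 = torch.tensor([[1.0]])
print(float(x0 + t1 * net(t1)), float(1 + 2 * torch.exp(torch.tensor(-1.0))))
```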
Affiliation(s)
- Dawen Wu: Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des signaux et systèmes, 91190 Gif-sur-Yvette, France.
- Abdel Lisser: Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des signaux et systèmes, 91190 Gif-sur-Yvette, France.
9. Wu W, Zhang Y. Novel adaptive zeroing neural dynamics schemes for temporally-varying linear equation handling applied to arm path following and target motion positioning. Neural Netw 2023; 165:435-450. PMID: 37331233. DOI: 10.1016/j.neunet.2023.05.056.
Abstract
While the handling of temporally-varying linear equations (TVLEs) has received extensive attention, most methods focus on trading off computational precision against convergence rate. Different from previous studies, this paper proposes two complete adaptive zeroing neural dynamics (ZND) schemes, comprising a novel adaptive continuous ZND (ACZND) model, two general variable time discretization techniques, and two resultant adaptive discrete ZND (ADZND) algorithms, to essentially eliminate this conflict. Specifically, an error-related varying-parameter ACZND model with global and exponential convergence is first designed. To further adapt to digital hardware, two novel variable time discretization techniques are proposed to discretize the ACZND model into two ADZND algorithms. The convergence rate and precision of the ADZND algorithms are proved via rigorous mathematical analyses. Comparisons with traditional discrete ZND (TDZND) algorithms show the superiority of the ADZND algorithms in convergence rate and computational precision, both theoretically and experimentally. Finally, simulation experiments, including numerical experiments on solving a specific TVLE and four application experiments on arm path following and target motion positioning, substantiate the efficacy, superiority, and practicability of the ADZND algorithms.
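The fixed-step baseline that variable time discretization improves on is a plain Euler discretization of a continuous ZND model: its steady-state error shrinks with the step size tau while its response speed grows with gamma, which is exactly the precision-versus-rate conflict described above. A scalar illustration on a toy TVLE of our choosing (not the paper's ADZND algorithms):

```python
import numpy as np

# Fixed-step discretization of continuous ZND for the scalar TVLE
# a(t) x(t) = b(t): e = a x - b, impose e' = -gamma * e, step with Euler.

tau, gam = 1e-3, 100.0
a = lambda t: 2.0 + np.sin(t)
b = lambda t: np.cos(2.0 * t)
da = lambda t: np.cos(t)
db = lambda t: -2.0 * np.sin(2.0 * t)

x, errs = 0.0, []
for k in range(int(10.0 / tau)):
    t = k * tau
    e = a(t) * x - b(t)
    x_dot = (db(t) - da(t) * x - gam * e) / a(t)   # continuous ZND
    x += tau * x_dot                               # discrete ZND step
    errs.append(abs(e))
print("steady-state error ~", errs[-1])            # shrinks as tau -> 0, gam -> inf
```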
Affiliation(s)
- Wenqi Wu: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China; Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, Guangzhou 510006, China.
- Yunong Zhang: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China; Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, Guangzhou 510006, China.
10. Ju X, Yang X, Feng G, Che H. Neurodynamic optimization approaches with finite/fixed-time convergence for absolute value equations. Neural Netw 2023; 165:971-981. PMID: 37454612. DOI: 10.1016/j.neunet.2023.06.041.
Abstract
This paper proposes three novel accelerated inverse-free neurodynamic approaches to solving absolute value equations (AVEs). The first two converge in finite time, and the third converges in fixed time. It is shown that the first two approaches converge to the solution of the concerned AVEs in finite time while, under some mild conditions, the third converges to the solution in fixed time. It is also shown that the settling time of the fixed-time approach has a uniform upper bound for all initial conditions, whereas the settling times of the finite-time approaches depend on the initial conditions. The proposed neurodynamic approaches have the advantage of being robust against bounded vanishing perturbations. The theoretical results are validated by means of a numerical example and an application to boundary value problems.
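The fixed-time property can be seen already at the level of the error dynamics: driving e by e' = -alpha*sig(e)^p - beta*sig(e)^q with 0 < p < 1 < q settles within a time bound independent of e(0). The sketch below demonstrates this generic mechanism, not the paper's AVE-specific models:

```python
import numpy as np

# Generic fixed-time error dynamics: e' = -alpha*sig(e)^p - beta*sig(e)^q,
# where sig(e)^r = |e|^r * sign(e) and 0 < p < 1 < q. The settling time is
# bounded by 1/(alpha*(1-p)) + 1/(beta*(q-1)) for every initial error.

alpha, beta, p, q, h = 2.0, 2.0, 0.5, 2.0, 1e-4
sig = lambda e, r: np.abs(e) ** r * np.sign(e)

for e0 in (0.1, 10.0, 1000.0):            # wildly different initial errors
    e, t = np.array([e0]), 0.0
    while np.abs(e[0]) > 1e-6:
        e = e + h * (-alpha * sig(e, p) - beta * sig(e, q))
        t += h
    print(f"e0 = {e0:>7}: settled at t ~ {t:.3f}")   # all below the 1.5 bound
```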
Affiliation(s)
- Xingxing Ju: College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China.
- Xinsong Yang: College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China.
- Gang Feng: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong.
- Hangjun Che: School of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
11. CCGnet: A deep learning approach to predict Nash equilibrium of chance-constrained games. Inf Sci (N Y) 2023. DOI: 10.1016/j.ins.2023.01.064.
12. Xie X, Pu YF, Wang J. A fractional gradient descent algorithm robust to the initial weights of multilayer perceptron. Neural Netw 2023; 158:154-170. PMID: 36450188. DOI: 10.1016/j.neunet.2022.11.018.
Abstract
For a multilayer perceptron (MLP), the initial weights significantly influence its performance. Based on an enhanced fractional derivative extended from convex optimization, this paper proposes a robust fractional gradient descent (RFGD) algorithm that is robust to the initial weights of the MLP. We analyze the effectiveness and convergence of the RFGD algorithm. The computational complexity of the RFGD algorithm is generally larger than that of the gradient descent (GD) algorithm but smaller than that of the Adam, Padam, AdaBelief, and AdaDiff algorithms. Numerical experiments show that the RFGD algorithm is strongly robust to the order of the fractional calculus, which is the only parameter added relative to the GD algorithm. More importantly, compared with the GD, Adam, Padam, AdaBelief, and AdaDiff algorithms, the experimental results show that the RFGD algorithm is the most robust to the initial weights of the MLP. Meanwhile, the correctness of the theoretical analysis is verified.
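For orientation, one widely used first-order Caputo-style approximation multiplies the ordinary gradient by |w - c|^(1-alpha) / Gamma(2-alpha), where c is a lower terminal and alpha in (0, 1) is the fractional order; whether this matches the enhanced derivative behind RFGD is not claimed here. A toy run from very different initial points:

```python
import math
import numpy as np

# Illustrative fractional gradient step (a common Caputo-style
# approximation, not necessarily the RFGD update):
#   w <- w - lr * grad(w) * |w - c|^(1 - alpha) / Gamma(2 - alpha)

def frac_gd(grad_fn, w, alpha=0.9, lr=0.1, c=0.0, steps=300):
    scale = 1.0 / math.gamma(2.0 - alpha)
    for _ in range(steps):
        w = w - lr * grad_fn(w) * np.abs(w - c) ** (1.0 - alpha) * scale
    return w

grad = lambda w: 2.0 * (w - 3.0)           # f(w) = (w - 3)^2, minimum at 3
for w0 in (-50.0, 0.1, 80.0):              # very different initializations
    print(w0, "->", frac_gd(grad, np.array([w0])))
```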
Affiliation(s)
- Xuetao Xie: College of Computer Science, Sichuan University, Chengdu 610065, China.
- Yi-Fei Pu: College of Computer Science, Sichuan University, Chengdu 610065, China.
- Jian Wang: College of Science, China University of Petroleum (East China), Qingdao 266580, China.
13. Wang Y, Wang J. Neurodynamics-driven holistic approaches to semi-supervised feature selection. Neural Netw 2022; 157:377-386. DOI: 10.1016/j.neunet.2022.10.029.
14. Wang J, Gan X. Neurodynamics-driven portfolio optimization with targeted performance criteria. Neural Netw 2022; 157:404-421. DOI: 10.1016/j.neunet.2022.10.018.
15. Xu C, Wang M, Chi G, Liu Q. An inertial neural network approach for loco-manipulation trajectory tracking of mobile robot with redundant manipulator. Neural Netw 2022; 155:215-223. DOI: 10.1016/j.neunet.2022.08.012.
16. Che H, Wang J, Cichocki A. Sparse signal reconstruction via collaborative neurodynamic optimization. Neural Netw 2022; 154:255-269. PMID: 35908375. DOI: 10.1016/j.neunet.2022.07.018.
Abstract
In this paper, we formulate a mixed-integer problem for sparse signal reconstruction and reformulate it as a global optimization problem with a surrogate objective function subject to underdetermined linear equations. We propose a sparse signal reconstruction method based on collaborative neurodynamic optimization with multiple recurrent neural networks for scattered search and a particle swarm optimization rule for repeated repositioning. Experimental results demonstrate that the proposed approach outperforms ten state-of-the-art algorithms for sparse signal reconstruction.
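As a point of reference for such comparisons, the classical convex-surrogate baseline replaces the cardinality objective with an l1 norm and solves it with ISTA; this is a standard competitor, not the paper's collaborative neurodynamic method:

```python
import numpy as np

# ISTA for the l1 surrogate: min 0.5*||A x - b||^2 + lam*||x||_1,
# a classical baseline for sparse reconstruction from an
# underdetermined linear system b = A x_true.

rng = np.random.default_rng(1)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

lam = 0.01
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(n)
for _ in range(2000):
    x = soft(x - (A.T @ (A @ x - b)) / L, lam / L)   # proximal gradient step
print("recovered support:", np.nonzero(np.abs(x) > 1e-3)[0])
print("true support:     ", np.nonzero(x_true)[0])
```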
Affiliation(s)
- Hangjun Che: College of Electronic and Information Engineering and Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing 400715, China.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.
- Andrzej Cichocki: Skolkovo Institute of Science and Technology, Moscow 143026, Russia.
17. Boolean matrix factorization based on collaborative neurodynamic optimization with Boltzmann machines. Neural Netw 2022; 153:142-151. PMID: 35728336. DOI: 10.1016/j.neunet.2022.06.006.
Abstract
This paper presents a collaborative neurodynamic approach to Boolean matrix factorization. Based on a binary optimization formulation to minimize the Hamming distance between a given data matrix and its low-rank reconstruction, the proposed approach employs a population of Boltzmann machines operating concurrently for scatter search of factorization solutions. In addition, a particle swarm optimization rule is used to re-initialize the neuronal states of Boltzmann machines upon their local convergence to escape from local minima toward global solutions. Experimental results demonstrate the superior convergence and performance of the proposed approach against six baseline methods on ten benchmark datasets.
18. Leung MF, Wang J, Che H. Cardinality-constrained portfolio selection based on two-timescale duplex neurodynamic optimization. Neural Netw 2022; 153:399-410. DOI: 10.1016/j.neunet.2022.06.023.