1
Zhong J, Feng Y, Tang S, Xiong J, Dai X, Zhang N. A collaborative neurodynamic optimization algorithm to traveling salesman problem. Complex Intell Syst 2022. [DOI: 10.1007/s40747-022-00884-6]
Abstract
This paper proposes a collaborative neurodynamic optimization (CNO) method to solve the traveling salesman problem (TSP). First, we construct a Hopfield neural network (HNN) with $n \times n$ neurons for the n cities. Second, to ensure the convergence of the continuous HNN (CHNN), we reformulate the TSP to satisfy the convergence condition of CHNN and solve the TSP by CHNN. Finally, a population of CHNNs is used to search for local optimal solutions of the TSP, and the globally optimal solution is obtained using particle swarm optimization. Experimental results show the effectiveness of the CNO approach for solving the TSP.
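The construction described above can be sketched in miniature. The following is a classic Hopfield-Tank-style continuous network for the TSP, not the paper's reformulation; the penalty weights A, B, D, the step size, and the activation temperature are illustrative assumptions.

```python
import numpy as np

def hopfield_tsp(dist, A=500.0, B=500.0, D=200.0, steps=2000, dt=1e-5, seed=0):
    """Continuous Hopfield network for the TSP with n x n neurons.

    v[i, k] is a soft indicator that city i occupies tour position k.
    The energy penalizes rows and columns of v that do not sum to one
    (weights A, B) plus the tour length (weight D); the dynamics follow
    the negative energy gradient through the internal state u.
    """
    n = dist.shape[0]
    rng = np.random.default_rng(seed)
    u = 0.02 * rng.standard_normal((n, n))           # internal states
    for _ in range(steps):
        v = 0.5 * (1.0 + np.tanh(u / 0.02))          # sigmoid activation
        row = v.sum(axis=1, keepdims=True) - 1.0     # row-constraint error
        col = v.sum(axis=0, keepdims=True) - 1.0     # column-constraint error
        nbr = np.roll(v, 1, axis=1) + np.roll(v, -1, axis=1)  # tour neighbours
        grad = A * row + B * col + D * dist @ nbr    # dE/dv
        u += dt * (-u - grad)
    v = 0.5 * (1.0 + np.tanh(u / 0.02))
    return v.argmax(axis=1)                          # position chosen per city
```

For small instances the row-wise argmax usually yields a permutation, but in general a decoding or repair step is still needed; escaping poor local minima is one reason the paper combines a population of CHNNs with particle swarm search.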
2
Neural Networks without Training for Optimization. Neural Netw 2005. [DOI: 10.1007/3-540-28847-3_8]
3
Choosing and using a neural net. ACTA ACUST UNITED AC 2005. [DOI: 10.1007/bfb0027034]
4
Cochrane EM, Beasley JE. The co-adaptive neural network approach to the Euclidean Travelling Salesman Problem. Neural Netw 2003; 16:1499-525. [PMID: 14622879] [DOI: 10.1016/s0893-6080(03)00056-x]
Abstract
In this paper we consider the Euclidean Travelling Salesman Problem (ETSP). This is the problem of finding the shortest tour around a number of cities, where the cities correspond to points in the Euclidean plane and the distances between cities are given by the usual Euclidean distance metric. We present a review of the literature on neural network (NN) approaches for the ETSP and the computational results that have been reported. Based upon this review we highlight two areas that are, in our judgement, currently neglected or lacking in the literature: the failure to make significant use of publicly available ETSP test problems in computational work, and the failure to address co-operation between neurons. Drawing upon our literature survey, this paper presents a new self-organising NN approach, called the Co-Adaptive Net, which involves not just unsupervised learning to train neurons, but also allows neurons to co-operate and compete amongst themselves depending on their situation. Our Co-Adaptive Net algorithm also includes a number of algorithmic mechanisms that, based upon our literature review, we consider to have contributed to the computational success of previous algorithms. Results for 91 publicly available standard ETSPs are presented; the largest of these problems involves 85,900 cities. This paper thus presents the most extensive computational evaluation of any NN approach on publicly available ETSP test problems made to date in the literature, and a NN approach that performs better, with respect to solution quality and/or computation time, than other NN approaches given previously in the literature. Drawing upon computational results produced in the DIMACS TSP Challenge, we highlight the fact that none of the current NN approaches for the ETSP can compete with state-of-the-art Operations Research heuristics. We discuss why we consider continuing to study and develop NN approaches for the ETSP to be of value.
Affiliation(s)
- E M Cochrane
- Department of Computing, Glasgow Caledonian University, Glasgow G4 0BA, Scotland, UK.
5
Xu ZB, Jin HD, Leung KS, Leung Y, Wong CK. An automata network for performing combinatorial optimization. Neurocomputing 2002. [DOI: 10.1016/s0925-2312(01)00580-x]
6
7
Chang S, Wong KW, Zhang W, Zhang Y. Algorithm for optimizing bipolar interconnection weights with applications in associative memories and multitarget classification. Appl Opt 1999; 38:5032-5038. [PMID: 18323995] [DOI: 10.1364/ao.38.005032]
Abstract
An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network the interconnection weights are biased to yield a nonnegative weight matrix. Moreover, a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from the applications in associative memories and multitarget classification with rotation invariance are shown.
Affiliation(s)
- S Chang
- Department of Electronic Engineering, City University of Hong Kong, Hong Kong, China.
8
Abstract
In this paper we first show some of the bifurcation properties of Potts mean-field-theory annealing applied to traveling salesman problems. Due to these bifurcation properties, this approach, in general, produces non-optimal and non-unique solutions. As an alternative approach, we propose a nonequilibrium version of the Potts spin neural network, called chaotic Potts spin (CPS). CPS has several parameters, and bifurcations over each parameter are investigated. Next, experimental results are shown comparing CPS with several related approaches. CPS is good at obtaining optimal solutions for small-scale problems and semi-optimal solutions for relatively large-scale problems. We also describe a couple of CPS modifications: CPS with a heuristic method and CPS with a "chaotic annealing" method. These modified algorithms can produce even better CPS solutions. Copyright 1997 Elsevier Science Ltd.
Affiliation(s)
- Masaaki Sato
- ATR Human Information Processing Research Laboratories, Kyoto 619-02, Japan
9
Sato M, Ishii S. Bifurcations in mean-field-theory annealing. Phys Rev E 1996; 53:5153-5168. [PMID: 9964848] [DOI: 10.1103/physreve.53.5153]
10
Peterson C, Sommelius O, Söderberg B. Variational approach for minimizing Lennard-Jones energies. Phys Rev E 1996; 53:1725-1731. [PMID: 9964432] [DOI: 10.1103/physreve.53.1725]
11
Gee A, Prager R. Limitations of neural networks for solving traveling salesman problems. IEEE Trans Neural Netw 1995; 6:280-2. [DOI: 10.1109/72.363424]
12
Abstract
In recent years there has been significant interest in adapting techniques from statistical physics, in particular mean field theory, to provide deterministic heuristic algorithms for obtaining approximate solutions to optimization problems. Although these algorithms have been shown experimentally to be successful, there has been little theoretical analysis of them. In this paper we demonstrate connections between mean field theory methods and other approaches, in particular barrier function and interior point methods. As an explicit example, we summarize our work on the linear assignment problem. In that previous work we defined a number of algorithms, including deterministic annealing, for solving the assignment problem; we proved convergence, gave bounds on the convergence times, and showed relations to other optimization algorithms.
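The deterministic-annealing flavour of assignment solver the abstract alludes to can be sketched very compactly: exponentiate the benefit matrix at an inverse temperature and alternate row and column normalisation (Sinkhorn balancing), which drives the matrix toward a doubly stochastic one concentrated on a permutation. The function name and parameters below are illustrative, not taken from the paper.

```python
import numpy as np

def softassign(benefit, beta=20.0, iters=200):
    """Mean-field-style relaxation of the linear assignment problem.

    M starts as exp(beta * benefit); repeated row/column normalisation
    makes M approximately doubly stochastic, and for large beta its
    mass concentrates on a single high-benefit permutation.
    """
    M = np.exp(beta * (benefit - benefit.max()))  # shift avoids overflow
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)         # normalise rows
        M /= M.sum(axis=0, keepdims=True)         # normalise columns
    return M.argmax(axis=1)                       # column chosen per row
```

Raising beta gradually (annealing) rather than fixing it is what connects this scheme to barrier and interior point methods: small beta keeps the iterate in the interior of the doubly stochastic polytope, large beta pushes it toward a vertex.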
Affiliation(s)
- A. L. Yuille
- Division of Applied Sciences, Harvard University, Cambridge, MA 02138 USA
- J. J. Kosowsky
- Division of Applied Sciences, Harvard University, Cambridge, MA 02138 USA
13
Yuille AL, Stolorz P, Utans J. Statistical Physics, Mixtures of Distributions, and the EM Algorithm. Neural Comput 1994. [DOI: 10.1162/neco.1994.6.2.334]
Abstract
We show that there are strong relationships between approaches to optimization and learning based on statistical physics or mixtures of experts. In particular, the EM algorithm can be interpreted as converging either to a local maximum of the mixtures model or to a saddle point solution to the statistical physics system. An advantage of the statistical physics approach is that it naturally gives rise to a heuristic continuation method, deterministic annealing, for finding good solutions.
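The EM/statistical-physics correspondence is easy to see in a toy mixture: the E-step responsibilities are exactly a softmax (Boltzmann) distribution whose "temperature" is the component variance. A minimal two-mean EM sketch follows; fixing equal weights and a shared variance is an assumption made here for brevity, not part of the paper.

```python
import numpy as np

def em_gmm_1d(x, mu, sigma=1.0, iters=50):
    """EM for a two-component 1-D Gaussian mixture, updating only the means.

    E-step: responsibilities = softmax of -squared error / (2 sigma^2),
    i.e. a Boltzmann distribution at temperature sigma^2.
    M-step: responsibility-weighted means.
    """
    x = np.asarray(x, dtype=float)
    mu = np.asarray(mu, dtype=float)
    for _ in range(iters):
        logits = -((x[:, None] - mu[None, :]) ** 2) / (2.0 * sigma ** 2)
        r = np.exp(logits - logits.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)          # E-step responsibilities
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)  # M-step
    return mu
```

Shrinking sigma toward zero hardens the soft assignments, which is the deterministic-annealing continuation the abstract mentions.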
Affiliation(s)
- Alan L. Yuille
- Division of Applied Sciences, Harvard University, Cambridge, MA 02138 USA
- Paul Stolorz
- Jet Propulsion Laboratory, MS 198-219, Pasadena, CA 91109 and Santa Fe Institute, Santa Fe, NM 87501 USA
- Joachim Utans
- International Computer Science Institute, 1947 Center Street, Suite 600, Berkeley, CA 94704 USA
14
Ohlsson M, Peterson C, Söderberg B. Neural Networks for Optimization Problems with Inequality Constraints: The Knapsack Problem. Neural Comput 1993. [DOI: 10.1162/neco.1993.5.2.331]
Abstract
A strategy for finding approximate solutions to discrete optimization problems with inequality constraints using mean field neural networks is presented. The constraints x ≤ 0 are encoded by xΘ(x) terms in the energy function. A careful treatment of the mean field approximation for the self-coupling parts of the energy is crucial, and results in an essentially parameter-free algorithm. This methodology is extensively tested on knapsack problems of size up to 10³ items. The algorithm scales like N·M for problems with N items and M constraints. Comparisons are made with an exact branch and bound algorithm when this is computationally possible (N ≤ 30). The quality of the neural network solutions consistently lies above 95% of the optimal ones at a significantly lower CPU expense. For the larger problem sizes the algorithm is compared with simulated annealing and a modified linear programming approach. For "nonhomogeneous" problems these produce good solutions, whereas for the more difficult "homogeneous" problems the neural approach is a winner with respect to solution quality and/or CPU time consumption. The approach is of course also applicable to other problems of similar structure, like set covering.
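A rough sketch in the spirit of this encoding: the xΘ(x) penalty contributes a gradient only where a constraint is actually violated (the Θ factor), and the mean field variables in (0, 1) are iterated to a fixed point. The temperature, penalty weight, and the greedy repair step below are illustrative assumptions, not the paper's essentially parameter-free scheme.

```python
import numpy as np

def mf_knapsack(c, a, b, T=0.5, alpha=None, iters=500, seed=0):
    """Mean-field heuristic for 0/1 knapsack with constraints a @ x <= b.

    c: item values (N,); a: constraint weights (M, N); b: capacities (M,).
    Violations are penalised only where slack > 0 (the Theta term), the
    soft variables v are iterated, then rounded and greedily repaired.
    """
    if alpha is None:
        alpha = 2.0 * c.max()                      # penalty weight (assumed)
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.4, 0.6, size=c.shape)
    for _ in range(iters):
        load = a @ v - b                           # per-constraint slack
        push = alpha * (a.T @ (load > 0))          # gradient of x*Theta(x) part
        v = 1.0 / (1.0 + np.exp(-(c - push) / T))  # mean-field update
    x = (v > 0.5).astype(int)
    while np.any(a @ x > b):                       # greedy repair to feasibility
        active = np.flatnonzero(x)
        x[active[np.argmin(c[active])]] = 0        # drop least valuable item
    return x
```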
Affiliation(s)
- Mattias Ohlsson
- Department of Theoretical Physics, University of Lund, Sölvegatan 14A, S-22362 Lund, Sweden
- Carsten Peterson
- Department of Theoretical Physics, University of Lund, Sölvegatan 14A, S-22362 Lund, Sweden
- Bo Söderberg
- Department of Theoretical Physics, University of Lund, Sölvegatan 14A, S-22362 Lund, Sweden
15
Abstract
In a recent paper (Gislén et al. 1989) a convenient encoding and an efficient mean field algorithm for solving scheduling problems using a Potts neural network were developed and numerically explored on simplified and synthetic problems. In this work the approach is extended to realistic applications, both with respect to problem complexity and size. This extension requires, among other things, the interaction of Potts neurons with different numbers of components. We analyze the corresponding linearized mean field equations with respect to estimating the phase transition temperature. A brief comparison with the linear programming approach is also given. Testbeds consisting of generated problems within the Swedish high school system are solved efficiently, with high-quality solutions.
Affiliation(s)
- Lars Gislén
- Department of Theoretical Physics, University of Lund, Sölvegatan 14A, S-22362 Lund, Sweden
- Carsten Peterson
- Department of Theoretical Physics, University of Lund, Sölvegatan 14A, S-22362 Lund, Sweden
- Bo Söderberg
- Department of Theoretical Physics, University of Lund, Sölvegatan 14A, S-22362 Lund, Sweden
16
Abstract
Rotor neurons are introduced to encode states living on the surface of a sphere in D dimensions. Such rotors can be regarded as continuous generalizations of binary (Ising) neurons. The corresponding mean field equations are derived, and phase transition properties based on linearized dynamics are given. The power of this approach is illustrated with an optimization problem—placing N identical charges on a sphere such that the overall repulsive energy is minimized. The rotor approach appears superior to other methods for this problem both with respect to solution quality and computational effort needed.
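The test problem in this abstract (minimising the repulsive energy of N identical charges on a sphere, i.e. the Thomson problem) has a simple baseline against which approaches like the rotor network can be compared: projected gradient descent on the Coulomb energy. The sketch below is that baseline, not the rotor mean-field equations; the step size and iteration count are illustrative.

```python
import numpy as np

def coulomb_energy(p):
    """Sum of 1/r over all pairs of points p, shape (N, 3)."""
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    i, j = np.triu_indices(len(p), k=1)
    return float(np.sum(1.0 / d[i, j]))

def thomson_descent(n, steps=2000, lr=0.01, seed=0):
    """Projected gradient descent for n unit charges on the unit sphere."""
    rng = np.random.default_rng(seed)
    p = rng.standard_normal((n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)   # start on the sphere
    for _ in range(steps):
        diff = p[:, None, :] - p[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, np.inf)                 # ignore self-interaction
        grad = -np.sum(diff / d[:, :, None] ** 3, axis=1)   # dE/dp_i
        p -= lr * grad                              # descend the energy
        p /= np.linalg.norm(p, axis=1, keepdims=True)       # project back
    return p
```

For n = 4 this settles into a regular tetrahedron, whose energy (about 3.674) is the known optimum for four charges.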
Affiliation(s)
- Lars Gislén
- Department of Theoretical Physics, University of Lund, Sölvegatan 14A, S-22362 Lund, Sweden
- Carsten Peterson
- Department of Theoretical Physics, University of Lund, Sölvegatan 14A, S-22362 Lund, Sweden
- Bo Söderberg
- Department of Theoretical Physics, University of Lund, Sölvegatan 14A, S-22362 Lund, Sweden
17
18
Roysam B, Bhattacharjya A. Hierarchically structured unit-simplex transformations for parallel distributed optimization problems. IEEE Trans Neural Netw 1992; 3:108-14. [DOI: 10.1109/72.105423]