1. Wang X, Li Y, Liang Y, Wu B, Xuan Y. A novel ensemble estimation of distribution algorithm with distribution modification strategies. Complex Intell Syst 2023. DOI: 10.1007/s40747-023-00975-y
Abstract
The canonical estimation of distribution algorithm (EDA) easily falls into a local optimum when its population distribution becomes ill-shaped, leading to weak convergence and poor stability on global optimization problems. To overcome this defect, we explore a novel EDA variant with an ensemble of three distribution modification strategies, namely archive-based population updating (APU), multileader-based search diversification (MSD), and triggered distribution shrinkage (TDS), named E3-EDA. The APU strategy utilizes historical population information to rebuild the search scope and avoid ill-shaped distributions, and it continuously updates the archive to avoid overfitting the distribution model. The MSD strategy makes full use of the location differences among populations to steer sampling toward promising regions. TDS is triggered when the search stagnates, shrinking the distribution scope to achieve local exploitation. The performance of E3-EDA is evaluated on the CEC 2014 and CEC 2018 test suites with 10-, 30-, 50- and 100-dimensional problems, and the method is comprehensively compared with several prominent EDA variants and other top methods from the CEC competitions. Nonparametric test results support the competitive performance of E3-EDA in solving complex problems.
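The archive-based distribution estimation described above can be illustrated with a minimal sketch. This is not the authors' E3-EDA (the MSD and TDS strategies are omitted, and all objective, population, and archive settings are illustrative assumptions): a plain Gaussian EDA whose model is re-estimated from the current elites pooled with an archive of recent elites, so the distribution is not overfit to one generation.

```python
import numpy as np

def sphere(x):
    # Separable test objective; global minimum 0 at the origin.
    return np.sum(x * x, axis=-1)

def archive_eda(obj, dim=5, pop=50, archive_size=50, iters=200, seed=0):
    # Minimal Gaussian EDA with an archive of historical elites (APU-like idea).
    rng = np.random.default_rng(seed)
    mean = np.full(dim, 3.0)          # start away from the optimum
    std = np.full(dim, 5.0)
    archive = np.empty((0, dim))
    best = np.inf
    for _ in range(iters):
        x = rng.normal(mean, std, size=(pop, dim))
        f = obj(x)
        best = min(best, f.min())
        elite = x[np.argsort(f)[: pop // 2]]                   # truncation selection
        archive = np.vstack([archive, elite])[-archive_size:]  # rolling archive of elites
        pool = np.vstack([elite, archive])                     # widen the model with history
        mean = pool.mean(axis=0)
        std = pool.std(axis=0) + 1e-12                         # guard against a degenerate model
    return best
```

Pooling the archive with the current elites keeps the estimated standard deviation from collapsing in a single generation, which is one way the archive can counter ill-shaped distributions.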
2. Predominant Cognitive Learning Particle Swarm Optimization for Global Numerical Optimization. Mathematics 2022. DOI: 10.3390/math10101620
Abstract
Particle swarm optimization (PSO) has achieved great success in problem optimization. Nevertheless, its performance degrades seriously on optimization problems with many local optima. To alleviate this issue, this paper designs a predominant cognitive learning particle swarm optimization (PCLPSO) method to tackle complicated optimization problems effectively. Specifically, for each particle, a new promising exemplar is constructed by letting its personal best position cognitively learn from a better personal experience randomly selected from those of the others, based on a novel predominant cognitive learning strategy. As a result, different particles preserve different guiding exemplars, which is expected to improve both the learning effectiveness and the learning diversity of particles. To alleviate the sensitivity of PCLPSO to its parameters, we propose dynamic adjustment strategies so that different particles preserve different parameter settings, which further promotes learning diversity. With the above techniques, PCLPSO is expected to balance search intensification and diversification well and thus to search the complex solution space properly. Comprehensive experiments are conducted on the commonly adopted CEC 2017 benchmark function set to verify the effectiveness of the devised PCLPSO. Experimental results show that PCLPSO achieves highly competitive or even superior performance compared with several representative state-of-the-art peer methods.
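The core idea, each particle's personal best cognitively learning from a randomly chosen better personal best, can be sketched as follows. This is a loose illustration, not the paper's PCLPSO: the coefficients (0.7, 1.5), the sphere objective, and the swarm settings are assumptions, and the paper's dynamic parameter adjustment is omitted.

```python
import numpy as np

def sphere(x):
    # Separable test objective; global minimum 0 at the origin.
    return np.sum(x * x, axis=-1)

def pclpso_sketch(obj, dim=10, pop=30, iters=300, seed=1):
    # Each particle builds an exemplar by pulling its pbest toward a
    # randomly selected *better* pbest, then flies toward that exemplar.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-10, 10, (pop, dim))
    v = np.zeros((pop, dim))
    pbest, pfit = x.copy(), obj(x)
    for _ in range(iters):
        order = np.argsort(pfit)                  # pbest indices, best first
        for i in range(pop):
            rank = int(np.where(order == i)[0][0])
            if rank == 0:
                exemplar = pbest[i]               # globally best pbest keeps itself
            else:
                j = order[rng.integers(rank)]     # a randomly chosen better pbest
                r = rng.random(dim)
                exemplar = pbest[i] + r * (pbest[j] - pbest[i])
            v[i] = 0.7 * v[i] + 1.5 * rng.random(dim) * (exemplar - x[i])
            x[i] = x[i] + v[i]
        f = obj(x)
        improved = f < pfit
        pbest[improved], pfit[improved] = x[improved], f[improved]
    return pfit.min()
```

Because the exemplar is a random convex-like blend of two personal bests rather than the single global best, different particles follow different guides, which is the diversity mechanism the abstract describes.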
3. A Dimension Group-Based Comprehensive Elite Learning Swarm Optimizer for Large-Scale Optimization. Mathematics 2022. DOI: 10.3390/math10071072
Abstract
High-dimensional optimization problems are increasingly common in the era of big data and the Internet of Things (IoT), and they seriously challenge the performance of existing optimizers. To solve such problems effectively, this paper devises a dimension group-based comprehensive elite learning swarm optimizer (DGCELSO), which integrates valuable evolutionary information from different elite particles in the swarm to guide the updating of inferior ones. Specifically, the swarm is first separated into two exclusive sets: the elite set (ES), containing the top best individuals, and the non-elite set (NES), consisting of the remaining individuals. Then, the dimensions of each particle in the NES are randomly divided into several groups of equal size. Subsequently, each dimension group of each non-elite particle is guided by two different elites randomly selected from the ES. In this way, each non-elite particle is comprehensively guided by multiple elite particles, so not only can high diversity be maintained, but fast convergence is also likely to be achieved. To alleviate the sensitivity of DGCELSO to its parameters, we further devise dynamic adjustment strategies that change the parameter settings during the evolution. With the above mechanisms, DGCELSO is expected to explore and exploit the solution space properly to find optimal solutions. Extensive experiments conducted on two commonly used large-scale benchmark problem sets demonstrate that DGCELSO achieves highly competitive or even much better performance than several state-of-the-art large-scale optimizers.
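The dimension-group elite learning step can be sketched as below. This is only an illustration of the grouping idea under assumed settings (sphere objective, random uniform coefficients, elite-set size, group size), not the paper's DGCELSO with its dynamic parameter adjustment.

```python
import numpy as np

def sphere(x):
    # Separable test objective; global minimum 0 at the origin.
    return np.sum(x * x, axis=-1)

def dgcelso_sketch(obj, dim=20, pop=40, n_elite=8, group_size=5, iters=300, seed=2):
    # Non-elite particles split their dimensions into equal-size groups;
    # each group is pulled toward two distinct randomly selected elites.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-10, 10, (pop, dim))
    v = np.zeros((pop, dim))
    fit = obj(x)
    for _ in range(iters):
        order = np.argsort(fit)
        elites = x[order[:n_elite]]                    # elite set (ES); kept as-is
        for i in order[n_elite:]:                      # update non-elite set (NES) only
            dims = rng.permutation(dim)
            for g in range(0, dim, group_size):
                idx = dims[g:g + group_size]           # one dimension group
                e1, e2 = elites[rng.choice(n_elite, 2, replace=False)]
                r1, r2, r3 = rng.random(3)
                v[i, idx] = (r1 * v[i, idx]
                             + r2 * (e1[idx] - x[i, idx])
                             + r3 * (e2[idx] - x[i, idx]))
            x[i] = x[i] + v[i]
        fit = obj(x)
    return fit.min()
```

Since each group draws its own pair of elites, a single non-elite particle ends up guided by many elites at once across its dimensions, which is what "comprehensive" elite learning refers to in the abstract.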
4. Stochastic Triad Topology Based Particle Swarm Optimization for Global Numerical Optimization. Mathematics 2022. DOI: 10.3390/math10071032
Abstract
Particle swarm optimization (PSO) has demonstrated well-recognized effectiveness in problem optimization. However, its performance still encounters challenges on complicated optimization problems with many local optima. In PSO, the interaction among particles and the utilization of communication information play crucial roles in improving the learning effectiveness and learning diversity of particles. To promote communication effectiveness, this paper proposes a stochastic triad topology that lets each particle communicate with two random peers in the swarm via their personal best positions. Then, unlike existing studies that employ the personal best position of the updated particle and the neighboring best position of the topology to direct its update, this paper adopts the best one and the mean of the three personal best positions in the associated triad topology as the two guiding exemplars for each particle. To further promote interaction diversity, an archive is maintained to store the obsolete personal best positions of particles and is used to interact with particles in the triad topology. To enhance the chance of escaping from local regions, a random restart strategy is probabilistically triggered to introduce initialized solutions into the archive. To alleviate sensitivity to parameters, dynamic adjustment strategies adjust the associated parameter settings during the evolution. Integrating the above mechanisms, a stochastic triad topology-based PSO (STTPSO) is developed to search complex solution spaces effectively. With these techniques, the learning diversity and learning effectiveness of particles are largely promoted, and the developed STTPSO is expected to explore and exploit the solution space appropriately to find high-quality solutions. Extensive experiments conducted on the commonly used CEC 2017 benchmark problem set with different dimension sizes substantiate that the proposed STTPSO achieves highly competitive or even much better performance than state-of-the-art and representative PSO variants.
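The triad-exemplar update can be sketched as follows. This illustrates only the core topology idea under assumed coefficients and test settings; the paper's archive interaction, random restart, and dynamic parameter adjustment are omitted.

```python
import numpy as np

def sphere(x):
    # Separable test objective; global minimum 0 at the origin.
    return np.sum(x * x, axis=-1)

def sttpso_sketch(obj, dim=10, pop=30, iters=300, seed=3):
    # Each particle forms a triad with two random peers; the best of the
    # three pbests and their mean position serve as the two guiding exemplars.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-10, 10, (pop, dim))
    v = np.zeros((pop, dim))
    pbest, pfit = x.copy(), obj(x)
    for _ in range(iters):
        for i in range(pop):
            others = rng.choice([k for k in range(pop) if k != i], 2, replace=False)
            triad = np.array([i, *others])
            best = triad[np.argmin(pfit[triad])]   # best pbest in the triad
            mean = pbest[triad].mean(axis=0)       # mean of the three pbests
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = (0.7 * v[i]
                    + 1.5 * r1 * (pbest[best] - x[i])
                    + 1.5 * r2 * (mean - x[i]))
            x[i] = x[i] + v[i]
        f = obj(x)
        improved = f < pfit
        pbest[improved], pfit[improved] = x[improved], f[improved]
    return pfit.min()
```

Re-sampling the triad every iteration is what makes the topology stochastic: no particle is tied to a fixed neighborhood, so guidance stays diverse across generations.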
5. Stochastic Cognitive Dominance Leading Particle Swarm Optimization for Multimodal Problems. Mathematics 2022. DOI: 10.3390/math10050761
Abstract
Optimization problems have become increasingly complicated in the era of big data and the Internet of Things, which significantly challenges the effectiveness and efficiency of existing optimization methods. To solve such problems effectively, this paper puts forward a stochastic cognitive dominance leading particle swarm optimization algorithm (SCDLPSO). Specifically, for each particle, two personal cognitive best positions are first randomly selected from those of all particles. Only when the cognitive best position of the particle is dominated by at least one of the two selected ones is the particle updated, by cognitively learning from the better personal positions; otherwise, the particle is not updated and directly enters the next generation. With this stochastic cognitive dominance leading mechanism, the learning diversity and learning efficiency of particles are expected to be promoted, so that the optimizer explores and exploits the solution space properly. Finally, extensive experiments are conducted on a widely acknowledged benchmark problem set with different dimension sizes to evaluate the effectiveness of the proposed SCDLPSO. Experimental results demonstrate that the devised optimizer achieves highly competitive or even much better performance than several state-of-the-art PSO variants.
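The dominance-gated update can be sketched as below. This is a loose single-objective reading of the mechanism (here "dominated" is taken simply as "has worse fitness"), with illustrative coefficients and test settings rather than the paper's exact SCDLPSO.

```python
import numpy as np

def sphere(x):
    # Separable test objective; global minimum 0 at the origin.
    return np.sum(x * x, axis=-1)

def scdlpso_sketch(obj, dim=10, pop=30, iters=300, seed=4):
    # A particle moves only when its pbest is worse than at least one of two
    # randomly picked pbests; it then learns from the better one(s).
    rng = np.random.default_rng(seed)
    x = rng.uniform(-10, 10, (pop, dim))
    v = np.zeros((pop, dim))
    pbest, pfit = x.copy(), obj(x)
    for _ in range(iters):
        for i in range(pop):
            c1, c2 = rng.choice([k for k in range(pop) if k != i], 2, replace=False)
            better = [c for c in (c1, c2) if pfit[c] < pfit[i]]
            if not better:
                continue                          # not dominated: enter next generation as-is
            target = pbest[better].mean(axis=0) if len(better) == 2 else pbest[better[0]]
            v[i] = (0.7 * v[i]
                    + 1.5 * rng.random(dim) * (target - x[i])
                    + 1.5 * rng.random(dim) * (pbest[i] - x[i]))
            x[i] = x[i] + v[i]
        f = obj(x)
        improved = f < pfit
        pbest[improved], pfit[improved] = x[improved], f[improved]
    return pfit.min()
```

Note that the currently best particle is never dominated, so it is never perturbed, matching the abstract's "otherwise, the particle is not updated" behavior.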