1
Marot J, Zidane F, El-Abed M, Lanteri J, Dauvignac JY, Migliaccio C. GWO-Based Joint Optimization of Millimeter-Wave System and Multilayer Perceptron for Archaeological Application. Sensors (Basel) 2024; 24:2749. [PMID: 38732855] [PMCID: PMC11086245] [DOI: 10.3390/s24092749]
Abstract
Recently, low-THz radar-based measurement and classification has emerged as a new imaging modality for archaeology. In this paper, we investigate the classification of pottery shards, a key enabler for understanding how agriculture was introduced from the Fertile Crescent to Europe. Our purpose is to jointly design the measuring radar system and the classification neural network, seeking maximal compactness and minimal cost, both directly related to the number of sensors. We aim to select the smallest possible number of sensors and place them adequately, while minimizing the false recognition rate. For this, we propose a novel version of the Binary Grey Wolf Optimizer, designed to reduce the number of sensors, and a Ternary Grey Wolf Optimizer. Together with the Continuous Grey Wolf Optimizer, they yield the CBTGWO (Continuous Binary Ternary Grey Wolf Optimizer). Working with 7 frequencies and starting with 37 sensors, the CBTGWO selects a single sensor and yields a false recognition rate of 0. In a single-frequency scenario, starting with 217 sensors, the CBTGWO selects 2 sensors, with a false recognition rate of 2%. The acquisition time is 3.2 s, outperforming the GWO and adaptive mixed GWO, which require 86.4 and 396.6 s, respectively.
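As an illustration of the sensor-selection step described in this abstract, the sketch below runs a generic binary grey-wolf-style update that prunes a 37-element sensor mask against a made-up surrogate fitness. The fitness function, its weights, and all constants are assumptions for demonstration only; this is not the authors' CBTGWO nor their radar data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_wolves, n_iters = 37, 12, 60

def fitness(mask):
    # hypothetical surrogate: trade off a stand-in for recognition quality against sensor count
    if mask.sum() == 0:
        return 1e6
    coverage = np.tanh(0.3 * mask.sum())          # fake "classification quality" term
    return (1.0 - coverage) + 0.05 * mask.sum()   # reward few sensors and good coverage

wolves = rng.integers(0, 2, size=(n_wolves, n_sensors))
for t in range(n_iters):
    scores = np.array([fitness(w) for w in wolves])
    order = np.argsort(scores)
    alpha, beta, delta = wolves[order[:3]]        # three best wolves lead the pack
    a = 2 - 2 * t / n_iters                       # exploration factor decays over iterations
    for i in range(n_wolves):
        step = np.zeros(n_sensors)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(n_sensors), rng.random(n_sensors)
            A, C = 2 * a * r1 - a, 2 * r2
            step += leader - A * np.abs(C * leader - wolves[i])
        prob = 1 / (1 + np.exp(-(step / 3 - 0.5)))            # sigmoid transfer to [0, 1]
        wolves[i] = (rng.random(n_sensors) < prob).astype(int)  # stochastic binarization

best = wolves[np.argmin([fitness(w) for w in wolves])]
print("selected sensors:", np.flatnonzero(best), "fitness:", round(fitness(best), 3))
```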
Affiliation(s)
- Julien Marot
- Centrale Méditerranée, CNRS, Aix-Marseille Université, Institut Fresnel, 13397 Marseille, France
- Flora Zidane
- Université Côte d’Azur, Laboratoire d’Electronique, Antennes et Télécommunications (LEAT), Campus SophiaTech, Bât. Forum, 930 Route des Colles, BP 145, 06903 Sophia Antipolis, France
- Maha El-Abed
- Université Côte d’Azur, Laboratoire d’Electronique, Antennes et Télécommunications (LEAT), Campus SophiaTech, Bât. Forum, 930 Route des Colles, BP 145, 06903 Sophia Antipolis, France
- Jerome Lanteri
- Université Côte d’Azur, Laboratoire d’Electronique, Antennes et Télécommunications (LEAT), Campus SophiaTech, Bât. Forum, 930 Route des Colles, BP 145, 06903 Sophia Antipolis, France
- Jean-Yves Dauvignac
- Université Côte d’Azur, Laboratoire d’Electronique, Antennes et Télécommunications (LEAT), Campus SophiaTech, Bât. Forum, 930 Route des Colles, BP 145, 06903 Sophia Antipolis, France
- Claire Migliaccio
- Université Côte d’Azur, Laboratoire d’Electronique, Antennes et Télécommunications (LEAT), Campus SophiaTech, Bât. Forum, 930 Route des Colles, BP 145, 06903 Sophia Antipolis, France
2
Kaveh M, Mesgari MS. Application of Meta-Heuristic Algorithms for Training Neural Networks and Deep Learning Architectures: A Comprehensive Review. Neural Process Lett 2022; 55:1-104. [PMID: 36339645] [PMCID: PMC9628382] [DOI: 10.1007/s11063-022-11055-6]
Abstract
The learning process and hyper-parameter optimization of artificial neural networks (ANNs) and deep learning (DL) architectures are considered among the most challenging machine learning problems. Many past studies have used gradient-based back-propagation methods to train DL architectures. However, gradient-based methods have major drawbacks: they can become stuck in local minima of multi-objective cost functions, they are expensive to run because gradient information must be computed over thousands of iterations, and they require the cost function to be continuous. Since training ANNs and DLs is an NP-hard optimization problem, optimizing their structure and parameters with meta-heuristic (MH) algorithms has attracted considerable attention. MH algorithms can formulate the estimation of DL components (such as hyper-parameters, weights, number of layers, number of neurons, learning rate, etc.) as an optimization problem. This paper provides a comprehensive review of the optimization of ANNs and DLs using MH algorithms. We review the latest developments in the use of MH algorithms in DL and ANN methods, present their advantages and disadvantages, and point out research directions to fill the gaps between MHs and DL methods. We also explain that evolutionary hybrid architectures still have limited applicability in the literature. In addition, this paper classifies the latest MH algorithms in the literature to demonstrate their effectiveness in DL and ANN training for various applications. Most researchers tend to develop novel hybrid algorithms by combining MHs to optimize the hyper-parameters of DLs and ANNs. Hybrid MHs improve algorithm performance and are capable of solving complex optimization problems. In general, a well-performing MH should achieve a suitable trade-off between exploration and exploitation. Hence, this paper summarizes various MH algorithms in terms of their convergence trend, exploration, exploitation, and ability to avoid local minima. The integration of MHs with DLs is expected to accelerate the training process in the coming few years, although relevant publications remain rare.
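To make the review's central idea concrete, here is a minimal sketch of gradient-free, population-based training of a tiny one-hidden-layer network on XOR, using a simple (mu + lambda) evolution strategy as a stand-in for any of the surveyed MH algorithms. The network size, mutation scale, and loss are illustrative assumptions, not a method from the review.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)            # XOR target
n_hidden = 4
n_params = 2 * n_hidden + n_hidden + n_hidden + 1  # W1, b1, W2, b2 flattened into one vector

def forward(theta, X):
    W1 = theta[:2 * n_hidden].reshape(2, n_hidden)
    b1 = theta[2 * n_hidden:3 * n_hidden]
    W2 = theta[3 * n_hidden:4 * n_hidden]
    b2 = theta[-1]
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def loss(theta):
    return np.mean((forward(theta, X) - y) ** 2)   # MSE used as the fitness to minimize

# (mu + lambda) evolution strategy: no gradients, only fitness evaluations
pop = rng.normal(0, 1, size=(30, n_params))
for gen in range(300):
    scores = np.array([loss(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]         # keep the 10 fittest candidates
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.3, (20, n_params))
    pop = np.vstack([parents, children])

best = pop[np.argmin([loss(p) for p in pop])]
print("predictions:", forward(best, X).round(2), "loss:", round(loss(best), 4))
```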
Affiliation(s)
- Mehrdad Kaveh
- Department of Geodesy and Geomatics, K. N. Toosi University of Technology, Tehran 19967-15433, Iran
- Mohammad Saadi Mesgari
- Department of Geodesy and Geomatics, K. N. Toosi University of Technology, Tehran 19967-15433, Iran
5
A Multi-User Interactive Coral Reef Optimization Algorithm for Considering Expert Knowledge in the Unequal Area Facility Layout Problem. Applied Sciences (Basel) 2021. [DOI: 10.3390/app11156676]
Abstract
The problem of Unequal Area Facility Layout Planning (UA-FLP) has been addressed by a large number of approaches considering a set of quantitative criteria. More recently, the personal qualitative preferences of an expert designer or decision-maker (DM) have been taken into account as well. This article addresses capturing the preferences of more than a single DM, so that a common, collaborative design incorporating the whole set of preferences from all the DMs yields more complex, complete, and realistic solutions. To the best of our knowledge, this is the first time that the preferences of more than one expert designer have been considered in the UA-FLP. The new strategy has been implemented on a Coral Reef Optimization (CRO) algorithm using two techniques to acquire the DMs' evaluations: the first demands the simultaneous presence of all the DMs, while the second does not. Both techniques have been tested on three well-known problem instances taken from the literature, and the results show that it is possible to obtain satisfactory designs that capture all the DMs' personal preferences while maintaining low values of the quantitative fitness function.
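A minimal sketch of how a quantitative cost and several decision-makers' qualitative ratings might be blended into one fitness inside a CRO-style population loop. The dm_scores surrogate, the weighting, and the toy layout encoding are all hypothetical stand-ins; in the actual interactive algorithm the ratings come from human experts, not from a formula.

```python
import numpy as np

rng = np.random.default_rng(2)

def quantitative_cost(layout):
    # stand-in for a material-handling cost of a facility layout (lower is better)
    return float(np.sum(np.abs(np.diff(layout))))

def dm_scores(layout, n_dms=3):
    # hypothetical surrogate for the decision-makers' qualitative ratings in [0, 1]
    return np.clip(1 - np.abs(layout - layout.mean()).mean()
                   + rng.normal(0, 0.05, n_dms), 0, 1)

def fitness(layout, w_quant=0.6):
    prefs = dm_scores(layout)
    # aggregate all DMs' preferences (mean) and blend with the quantitative term
    return w_quant * quantitative_cost(layout) - (1 - w_quant) * prefs.mean()

# toy reef: a population of candidate layouts; settlement keeps the fittest each generation
reef = rng.random((20, 8))
for _ in range(50):
    larvae = reef + rng.normal(0, 0.1, reef.shape)             # broadcast spawning / mutation
    pool = np.vstack([reef, larvae])
    reef = pool[np.argsort([fitness(p) for p in pool])[:20]]   # keep the 20 best corals

print("best blended fitness:", round(fitness(reef[0]), 3))
```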
7
Abstract
Convolutional neural networks have a broad spectrum of practical applications in computer vision. Much of today's data comes from images, and efficient techniques for processing these large amounts of data are crucial. Convolutional neural networks have proven very successful at image processing tasks. However, designing a network structure for a given problem entails fine-tuning the hyperparameters in order to achieve better accuracy, a process that takes considerable time and requires effort and domain expertise. Designing a convolutional neural network architecture is a typical NP-hard optimization problem, and several frameworks for generating network structures for specific image classification tasks have been proposed. To address this issue, in this paper we propose a hybridized monarch butterfly optimization algorithm. Based on the observed deficiencies of the original monarch butterfly optimization approach, we hybridize it with two other state-of-the-art swarm intelligence algorithms. The proposed hybrid algorithm was first tested on a set of standard unconstrained benchmark instances and then adapted to the convolutional neural network design problem. A comparative analysis with other state-of-the-art methods and algorithms, as well as with the original monarch butterfly optimization implementation, was performed for both groups of simulations. The experimental results show that our proposed method obtains higher classification accuracy than other approaches published in the modern computer science literature.
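A hedged sketch of the encode/decode idea behind metaheuristic CNN architecture search: each candidate is a vector in [0, 1] decoded into hyperparameters and scored. The decode ranges, the best-directed move, and especially the placeholder fitness (a real run would build, train, and validate the decoded CNN) are assumptions for illustration, not the paper's hybridized monarch butterfly optimizer.

```python
import numpy as np

rng = np.random.default_rng(3)

def decode(x):
    # hypothetical search space: 5 genes in [0, 1] mapped to CNN hyperparameters
    return {
        "n_conv_layers": int(round(1 + x[0] * 3)),        # 1 .. 4 layers
        "filters":       int(round(16 + x[1] * 112)),     # 16 .. 128 filters
        "kernel_size":   [3, 5, 7][int(x[2] * 2.999)],    # 3, 5, or 7
        "dropout":       0.5 * x[3],                      # 0 .. 0.5
        "learning_rate": 10 ** (-4 + 2 * x[4]),           # 1e-4 .. 1e-2 (log scale)
    }

def fitness(x):
    cfg = decode(x)
    # placeholder surrogate: a real run would train the CNN built from cfg and
    # return its validation error; here we only score a made-up preference
    return abs(cfg["learning_rate"] - 1e-3) * 100 + abs(cfg["filters"] - 64) / 64 + cfg["dropout"]

pop = rng.random((15, 5))
for _ in range(40):
    scores = np.array([fitness(p) for p in pop])
    best = pop[np.argmin(scores)]
    # butterfly-style migration stand-in: drift each candidate toward the best, plus noise
    pop = np.clip(pop + 0.4 * (best - pop) + rng.normal(0, 0.05, pop.shape), 0, 1)

print(decode(pop[np.argmin([fitness(p) for p in pop])]))
```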
8
Optimizing Convolutional Neural Network Hyperparameters by Enhanced Swarm Intelligence Metaheuristics. Algorithms 2020. [DOI: 10.3390/a13030067]
Abstract
Computer vision is one of the frontier technologies in computer science. It is used to build artificial systems that extract valuable information from images and has a broad range of applications in areas such as agriculture, business, and healthcare. Convolutional neural networks are key algorithms in computer vision, and in recent years they have attained notable advances in many real-world problems. The accuracy of a network on a particular task profoundly relies on the hyperparameter configuration, and obtaining the right set of hyperparameters is a time-consuming process that requires expertise. To address this concern, we propose an automatic method for hyperparameter optimization and structure design based on enhanced metaheuristic algorithms. The aim of this paper is twofold. First, we propose enhanced versions of the tree growth and firefly algorithms that improve on the original implementations. Second, we adopt the proposed enhanced algorithms for hyperparameter optimization. The modified metaheuristics are first evaluated on standard unconstrained benchmark functions and compared to the original algorithms; the improved algorithms are then employed for network design. The experiments are carried out on the well-known MNIST image classification benchmark dataset, and a comparative analysis with other outstanding approaches tested on the same problem is conducted. The experimental results show that both proposed improved methods achieve higher performance than the other existing techniques in terms of classification accuracy and the use of computational resources.
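Since the enhanced algorithms are first validated on unconstrained benchmark functions, the sketch below runs a standard firefly-algorithm loop on the sphere function. The parameter values (beta0, gamma, alpha) and the benchmark choice are assumptions; in the hyperparameter-optimization setting the objective would instead be the decoded network's validation error.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):
    # standard unconstrained benchmark; a CNN-design run would replace this
    # with the validation error of a network decoded from x
    return float(np.sum(x ** 2))

n_fireflies, dim, n_iters = 20, 5, 100
beta0, gamma, alpha = 1.0, 1.0, 0.2                   # attraction, absorption, random-step weight
pop = rng.uniform(-5, 5, (n_fireflies, dim))

for t in range(n_iters):
    brightness = -np.array([sphere(x) for x in pop])  # brighter = lower objective
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if brightness[j] > brightness[i]:         # firefly i moves toward any brighter j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)    # attraction decays with squared distance
                pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(dim) - 0.5)
    alpha *= 0.98                                     # shrink the random walk over time

best = pop[np.argmin([sphere(x) for x in pop])]
print("best value:", round(sphere(best), 6))
```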