1. Yousaf A, Kayvanfar V, Mazzoni A, Elomri A. Artificial intelligence-based decision support systems in smart agriculture: Bibliometric analysis for operational insights and future directions. Frontiers in Sustainable Food Systems 2023. DOI: 10.3389/fsufs.2022.1053921. Open access.
Abstract
With the world population expected to reach 9.73 billion by 2050, according to the Food and Agriculture Organization (FAO), demand for agricultural output is increasing proportionately. Smart Agriculture is replacing conventional farming systems, employing advanced technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning (ML) to ensure higher productivity and more precise farm management and so meet food demand. In recent years, researcher interest in Smart Agriculture has grown. Previous literature reviews have conducted similar bibliometric analyses; however, there is a lack of work examining Smart Agriculture from an Operations Research (OR) perspective. This paper conducts a bibliometric analysis of OR research carried out in Agriculture 4.0 over the last two decades to identify trends and gaps. Biblioshiny, an advanced data mining tool, was used to analyze 1,305 articles collected from the Scopus database for the years 2000–2022. Researchers and decision makers can see how newer OR theories are being applied and how they can address the research gaps highlighted in this review, while governments and policymakers will benefit from understanding how Unmanned Aerial Vehicles (UAVs) and robotic units are being used on farms to optimize resource allocation. Nations with arid climates will also learn how satellite imagery and mapping can help identify new irrigable land and make better use of scarce agricultural resources.
2. Comparative Analysis between Two Operational Irrigation Mapping Models over Study Sites in Mediterranean and Semi-Oceanic Regions. Water 2022. DOI: 10.3390/w14091341.
Abstract
Accurate information about irrigated areas is essential to assess the impact of irrigation on water consumption, the hydrological cycle and regional climate. In this study, we compare two recently developed operational and spatially transferable classification models proposed for irrigation mapping. The first model uses spatio-temporal soil moisture indices derived from the Sentinel-1/2 soil moisture product (S2MP) at plot scale to map irrigated areas with the unsupervised K-means clustering algorithm (Dari model). The second model, called Sentinel-1/2 Irrigation Mapping (S2IM), is a classification model based on Sentinel-1 (S1) and Sentinel-2 (S2) time series data. Five study cases were examined: four years (2017 to 2020) in a semi-oceanic area of north-central France and one year (2020) in a Mediterranean context in southern France. The main results showed that the soil-moisture-based K-means model (Dari model) performs well for irrigation mapping but remains less accurate than S2IM. The overall accuracy of the Dari model ranged between 72.1% and 78.4% across the five study cases. The Dari model was found to be limited under humid conditions, as it fails to correctly distinguish rain-fed plots from irrigated plots, with the accuracy of the rain-fed class reaching only 24.2%. S2IM showed the best accuracy in the five study cases, with an overall accuracy ranging between 72.8% and 93.0%; under humid climatic conditions its accuracy for the rain-fed class reached 62.0%. S2IM is thus superior in terms of accuracy but more complex to apply than the Dari model, which remains simple yet effective for irrigation mapping.
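To make the clustering step of such a soil-moisture-based approach concrete, the following is a minimal Python sketch, not the published Dari model implementation: it assumes a plot-level feature matrix of spatio-temporal soil-moisture indices (for example, derived from the S2MP product) has already been prepared, and all names and data below are illustrative placeholders.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical features: rows = plots, columns = soil-moisture indices.
features = rng.normal(size=(500, 4))

# Two clusters: one is interpreted as "irrigated", the other as "rain-fed".
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# In practice the cluster with the higher mean soil-moisture index over the
# irrigation season would be labelled "irrigated"; here we use column 0 as a proxy.
irrigated_cluster = int(np.argmax([features[labels == c, 0].mean() for c in (0, 1)]))
irrigated_mask = labels == irrigated_cluster
print(f"{irrigated_mask.sum()} of {len(features)} plots flagged as irrigated")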
3. Irrigation Mapping on Two Contrasted Climatic Contexts Using Sentinel-1 and Sentinel-2 Data. Water 2022. DOI: 10.3390/w14050804.
Abstract
This study proposes an operational approach to map irrigated areas based on the synergy of Sentinel-1 (S1) and Sentinel-2 (S2) data. An application is presented at two study sites in Europe, one in Spain and one in Italy, with contrasting climatic contexts (semiarid and humid, respectively), with the objective of demonstrating the essential role of multi-site training for a robust application of the proposed methodologies. Several classifiers are proposed to separate irrigated and rainfed areas. They are based on statistical variables computed from Sentinel-1 and Sentinel-2 time series at the agricultural field scale, as well as on the contrasting behavior between the field scale and its 5 km surroundings. The support vector machine (SVM) classification approach was tested with different options to evaluate the robustness of the proposed methodologies. The optimal number of metrics was found to be five; these metrics illustrate the importance of optical/radar synergy and of multi-scale spatial information. The highest classification accuracy, approximately 85%, is obtained with a training dataset that mixes reference fields from the two study sites, and the accuracy is consistent across both sites. These results confirm the potential of the proposed approaches for more general use on sites with different climatic and agricultural contexts.
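As an illustration of an SVM-based separation of irrigated and rainfed fields in this spirit, here is a minimal Python sketch under the assumption that five statistical S1/S2 metrics per field have already been extracted; the feature values, labels and site indicators below are synthetic placeholders rather than the paper's data.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
# Hypothetical fields from two sites: five metrics per field, e.g. temporal
# statistics of S1 backscatter and S2 NDVI at field scale and 5 km scale.
X = rng.normal(size=(400, 5))
y = rng.integers(0, 2, size=400)      # 1 = irrigated, 0 = rainfed
site = rng.integers(0, 2, size=400)   # 0 = site A, 1 = site B

# Split while preserving the site mix, so training remains multi-site,
# as the abstract recommends.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=site, random_state=0
)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))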
4. Edge-Preserving Convolutional Generative Adversarial Networks for SAR-to-Optical Image Translation. Remote Sensing 2021. DOI: 10.3390/rs13183575.
Abstract
With its ability to acquire data day and night and in all weather conditions, synthetic aperture radar (SAR) remote sensing is an important technique in modern Earth observation. However, the interpretation of SAR images is a highly challenging task, even for well-trained experts, due to the SAR imaging principle and high-frequency speckle noise. Image-to-image translation methods have been used to convert SAR images into optical images that are closer to what we perceive with our eyes, but they have two weaknesses: (1) they are not designed for the SAR-to-optical translation task and therefore overlook the complexity of SAR images and their speckle noise; (2) the same convolution filters in a standard convolution layer are applied to the whole feature map, which ignores the local details of SAR images in each window and yields images of unsatisfactory quality. In this paper, we propose an edge-preserving convolutional generative adversarial network (EPCGAN) to enhance the structure and visual quality of the output image by leveraging the edge information of the SAR image and implementing content-adaptive convolution. The proposed edge-preserving convolution (EPC) decomposes the convolution input into texture components and content components and then generates a content-adaptive kernel that modifies the standard convolutional filter weights for the content components. Built on the EPC, EPCGAN is presented for SAR-to-optical image translation; it uses a gradient branch to assist in recovering structural image information. Experiments on the SEN1-2 dataset demonstrate that the proposed method outperforms other SAR-to-optical methods, recovering more structure and achieving better evaluation scores.
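To illustrate the content/texture decomposition idea behind a content-adaptive convolution, the sketch below is a simplified, hypothetical PyTorch layer written in the spirit of the abstract, not the authors' EPCGAN code: the class name EdgePreservingConv and the gating design are our own assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgePreservingConv(nn.Module):
    """Split the input into a smooth (content) part and a residual (texture)
    part, then modulate a standard convolution of the content part with
    per-pixel weights predicted from the texture part."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.content_conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        # Small branch that predicts a per-pixel gate from the texture residual.
        self.gate = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Low-pass "content" via average pooling; "texture" is the residual.
        content = F.avg_pool2d(x, 3, stride=1, padding=1)
        texture = x - content
        return self.content_conv(content) * self.gate(texture)

if __name__ == "__main__":
    sar_patch = torch.randn(1, 1, 64, 64)   # fake single-channel SAR patch
    layer = EdgePreservingConv(in_ch=1, out_ch=8)
    print(layer(sar_patch).shape)           # torch.Size([1, 8, 64, 64])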