1
Khatibi SMH, Ali J. Harnessing the power of machine learning for crop improvement and sustainable production. Frontiers in Plant Science 2024; 15:1417912. [PMID: 39188546] [PMCID: PMC11346375] [DOI: 10.3389/fpls.2024.1417912] [Received: 04/15/2024] [Accepted: 07/15/2024]
Abstract
Crop improvement and production domains generate large and expanding amounts of data with multi-layer complexity, which compels researchers to adopt machine-learning approaches to establish predictive and informative models of the sophisticated mechanisms underlying these processes. All machine-learning approaches aim to fit models to target data; nevertheless, the wide range of specialized methods can initially appear confusing. The principal objective of this study is to offer researchers an explicit introduction to some of the essential machine-learning approaches and their applications, including the most modern and widely adopted methods in crop improvement and similar domains. This article explains how different machine-learning methods can be applied to given agricultural data, highlights newly emerging techniques for machine-learning users, and lays out technical strategies for agricultural and crop-research practitioners.
Affiliation(s)
- Jauhar Ali
- Rice Breeding Platform, International Rice Research Institute, Los Baños, Laguna, Philippines
2
Ballesta P, Maldonado C, Mora-Poblete F, Mieres-Castro D, del Pozo A, Lobos GA. Spectral-Based Classification of Genetically Differentiated Groups in Spring Wheat Grown under Contrasting Environments. Plants (Basel, Switzerland) 2023; 12:440. [PMID: 36771526] [PMCID: PMC9920124] [DOI: 10.3390/plants12030440] [Received: 12/21/2022] [Revised: 01/06/2023] [Accepted: 01/16/2023]
Abstract
The global concern about the gap between food production and consumption has intensified research on the genetics, ecophysiology, and breeding of cereal crops. In this sense, several genetic studies have been conducted to assess the effectiveness and sustainability of collections of germplasm accessions of major crops. In this study, a spectral-based classification approach for the assignment of wheat cultivars to genetically differentiated subpopulations (genetic structure) was carried out using a panel of 316 spring bread cultivars grown in two environments with different water regimes (rainfed and fully irrigated). To this end, different machine-learning models were trained with foliar spectral and genetic information to assign the wheat cultivars to subpopulations. The results revealed that, in general, the hyperparameters ReLU (as the activation function), adam (as the optimizer), and a batch size of 10 gave neural network models the best accuracy. Genetically differentiated groups showed smaller differences in mean wavelengths under rainfed than under full irrigation, which coincided with a reduction in clustering accuracy in neural network models. The comparison of models indicated that the Convolutional Neural Network (CNN) was significantly more accurate in classifying individuals into their respective subpopulations, with 92 and 93% of correct individual assignments in water-limited and fully irrigated environments, respectively, whereas 92% (full irrigation) and 78% (rainfed) of cultivars were correctly assigned to their respective classes by the multilayer perceptron method and partial least squares discriminant analysis, respectively. Notably, CNN did not show significant differences between the two environments, indicating that its predictions are stable across water regimes.
It is concluded that foliar spectral variation can be used to accurately infer the belonging of a cultivar to its respective genetically differentiated group, even considering radically different environments, which is highly desirable in the context of crop genetic resources management.
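The reported hyperparameter finding (ReLU activation, adam optimizer, batch size 10) can be illustrated with a small scikit-learn sketch. This is a hedged illustration, not the study's code: the spectral data, subpopulation labels, hidden-layer size, and class separation below are all simulated stand-ins for the real panel of 316 cultivars.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_cultivars, n_bands, n_groups = 300, 50, 3
X = rng.normal(size=(n_cultivars, n_bands))      # simulated foliar reflectance
y = rng.integers(0, n_groups, size=n_cultivars)  # simulated subpopulation labels
X += y[:, None] * 0.8                            # shift group means so classes separate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Hyperparameters the study found best: ReLU, adam, batch size 10
clf = MLPClassifier(activation="relu", solver="adam", batch_size=10,
                    hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"holdout accuracy: {acc:.2f}")
```

On real spectra the separation between genetic groups, and hence the accuracy, depends on the water regime, as the abstract notes.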
Affiliation(s)
- Paulina Ballesta
- Instituto de Nutrición y Tecnología de Los Alimentos, Universidad de Chile, Santiago 7830490, Chile
- Carlos Maldonado
- Centro de Genómica y Bioinformática, Facultad de Ciencias, Universidad Mayor, Santiago 8580745, Chile
- Alejandro del Pozo
- Plant Breeding and Phenomic Center, Faculty of Agricultural Sciences, University of Talca, Talca 3460000, Chile
- Gustavo A. Lobos
- Plant Breeding and Phenomic Center, Faculty of Agricultural Sciences, University of Talca, Talca 3460000, Chile
3
Havivi S, Rotman SR, Blumberg DG, Maman S. Damage Assessment in Rural Environments Following Natural Disasters Using Multi-Sensor Remote Sensing Data. Sensors (Basel, Switzerland) 2022; 22:9998. [PMID: 36560367] [PMCID: PMC9788353] [DOI: 10.3390/s22249998] [Received: 09/29/2022] [Revised: 12/01/2022] [Accepted: 12/02/2022]
Abstract
The damage caused by natural disasters in rural areas differs in nature, extent, landscape, and structure from the damage caused in urban environments. Previous and current studies have focused mainly on mapping damaged structures in urban areas after catastrophic events such as earthquakes or tsunamis. However, research focusing on the level of damage or its distribution in rural areas is lacking. This study presents a methodology for mapping, characterizing, and assessing the damage in rural environments following natural disasters, both in built-up and vegetation areas, by combining synthetic-aperture radar (SAR) and optical remote sensing data. As a case study, we applied the methodology to characterize the rural areas affected by the Sulawesi earthquake and the subsequent tsunami event in Indonesia that occurred on 28 September 2018. High-resolution COSMO-SkyMed images obtained pre- and post-event, alongside Sentinel-2 images, were used as inputs. This study's results emphasize that remote sensing data from rural areas must be treated differently from data from urban areas following a disaster. Additionally, the analysis must include the surrounding features, not only the damaged structures. Furthermore, the results highlight the applicability of the methodology to a variety of disaster events, as well as multiple hazards, and it can be adapted using a combination of different optical and SAR sensors.
Affiliation(s)
- Shiran Havivi
- Geography and Environmental Development, Ben-Gurion University of the Negev, Beer Sheva 8410501, Israel
- Stanley R. Rotman
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer Sheva 8410501, Israel
- Dan G. Blumberg
- Geography and Environmental Development, Ben-Gurion University of the Negev, Beer Sheva 8410501, Israel
- Homeland Security Institute, Ben-Gurion University of the Negev, Beer Sheva 8410501, Israel
- Shimrit Maman
- Homeland Security Institute, Ben-Gurion University of the Negev, Beer Sheva 8410501, Israel
4
CerealNet: A Hybrid Deep Learning Architecture for Cereal Crop Mapping Using Sentinel-2 Time-Series. Informatics 2022. [DOI: 10.3390/informatics9040096]
Abstract
Remote sensing-based crop mapping has continued to grow in economic importance over the last two decades. Given the ever-increasing rate of population growth and the resulting need to multiply global food production, timely, accurate, and reliable agricultural data are of the utmost importance. When it comes to ensuring high accuracy in crop maps, spectral similarities between crops represent a serious limiting factor. Crops that display similar spectral responses are notoriously difficult to discriminate using classical multi-spectral imagery analysis. Chief among these crops are soft wheat, durum wheat, oats, and barley. In this paper, we propose a unique multi-input deep learning approach for cereal crop mapping, called "CerealNet". Two input time series, the Sentinel-2 bands and the NDVI (Normalized Difference Vegetation Index), were fed into separate branches of the LSTM-Conv1D (Long Short-Term Memory, 1-D Convolutional) model to extract the temporal and spectral features necessary for pixel-based crop mapping. The approach was evaluated using ground-truth data collected in the Gharb region (northwest of Morocco). We noted a categorical accuracy and an F1-score of 95% and 94%, respectively, with minimal confusion between the four cereal classes. CerealNet proved insensitive to sample size, as the least-represented crop, oats, had the highest F1-score. This model was compared with several state-of-the-art crop mapping classifiers and was found to outperform them. The modularity of CerealNet could possibly allow for injecting additional data such as Synthetic Aperture Radar (SAR) bands, especially when optical imagery is not available.
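The dual-branch idea, one branch per input time series with features fused before classification, can be sketched in plain NumPy. Everything here is a hypothetical stand-in: the shapes, the untrained smoothing kernel, and the random classifier head replace CerealNet's trained LSTM-Conv1D branches, and only the branch-then-fuse structure is illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_bands, n_classes = 12, 10, 4            # time steps, S2 bands, cereal classes

def conv1d_feature(x, kernel):
    """Valid 1-D convolution along time, then global max pooling."""
    k = len(kernel)
    out = np.array([x[t:t + k] @ kernel for t in range(len(x) - k + 1)])
    return out.max()

bands = rng.normal(size=(T, n_bands))        # per-pixel band time series (simulated)
ndvi = rng.uniform(-1, 1, size=T)            # per-pixel NDVI time series (simulated)

kernel = np.array([0.25, 0.5, 0.25])         # untrained stand-in for a learned filter
band_feats = np.array([conv1d_feature(bands[:, b], kernel) for b in range(n_bands)])
ndvi_feat = np.array([conv1d_feature(ndvi, kernel)])

fused = np.concatenate([band_feats, ndvi_feat])   # fuse the two branches
W = rng.normal(size=(n_classes, fused.size))      # untrained classifier head
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()                              # softmax over cereal classes
print("class probabilities:", np.round(probs, 3))
```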
5
Determination of Wheat Heading Stage Using Convolutional Neural Networks on Multispectral UAV Imaging Data. Computational Intelligence and Neuroscience 2022; 2022:3655804. [DOI: 10.1155/2022/3655804] [Received: 07/04/2022] [Revised: 10/18/2022] [Accepted: 11/03/2022]
Abstract
The heading and flowering stages are crucial for wheat growth and are the appropriate window for fusarium head blight (FHB) prevention and other plant-protection operations. Rapid and accurate monitoring of wheat growth in hilly areas is critical for determining plant-protection operations and strategies. Currently, the operation time for FHB prevention and plant protection is primarily determined by manual tour inspection of plant growth, which is slow to gather information and subjective. In this study, an unmanned aerial vehicle (UAV) equipped with a multispectral camera was used to collect wheat canopy multispectral images and heading-rate information during the heading and flowering stages in order to develop a method for detecting the appropriate time for preventive control of FHB. A 1D convolutional neural network + decision tree model (1D CNN + DT) was designed. All the multispectral information was input into the model for feature extraction and regression. The regression revealed that the coefficient of determination (R2) between multispectral information in the wheat canopy and the heading rate was 0.95, and the root mean square error of prediction (RMSE) was 0.24. This result was superior to that obtained by directly inputting multispectral data into neural networks (NN), or by inputting multispectral data into NN via traditional VI calculation, support vector machine regression (SVR), or decision tree (DT). On the basis of FHB prevention and control production guidelines and field research results, a discrimination model for FHB prevention and plant-protection operation time was developed. After the output values of the regression model were input into the discrimination model, a precision of 97.50% was obtained.
The method proposed in this study can efficiently monitor the growth status of wheat during the heading and flowering stages and provide crop growth information for determining the timing and strategy of FHB prevention and plant protection operations.
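The two-stage idea, features extracted from canopy multispectral data feeding a tree-based regressor whose output is scored with R2 and RMSE, can be sketched with scikit-learn. This is a hedged illustration: the simulated features stand in for the paper's 1D-CNN front end, and the linear target, tree depth, and sample sizes are assumptions.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(400, 5))               # stand-in multispectral features
# simulated heading rate, driven mainly by one band plus small noise
heading_rate = 0.8 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.02, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, heading_rate, random_state=0)
model = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
r2 = r2_score(y_te, pred)                    # coefficient of determination
rmse = mean_squared_error(y_te, pred) ** 0.5 # root mean square error
print(f"R2={r2:.2f}  RMSE={rmse:.3f}")
```

The predicted heading rate would then be thresholded by the discrimination model to decide the operation time.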
6
Crop Classification Based on the Physically Constrained General Model-Based Decomposition Using Multi-Temporal RADARSAT-2 Data. Remote Sensing 2022. [DOI: 10.3390/rs14112668]
Abstract
Crop identification and classification are of great significance to agricultural land use management. The physically constrained general model-based decomposition (PCGMD) has proven to be a promising method in comparison with typical four-component decomposition methods for scattering-mechanism interpretation and identifying vegetation types. However, the robustness of PCGMD requires further investigation from the perspective of final applications. This paper aims to validate the efficiency of the PCGMD method for crop classification for the first time. Seven C-band time-series RADARSAT-2 images were exploited, covering the entire growing season over an agricultural region near London, Ontario, Canada. Firstly, the response and temporal evolution of the four scattering components obtained by PCGMD were analyzed. Then, a forward selection approach was applied to achieve the highest classification accuracy by searching for an optimum combination of multi-temporal SAR data with the random forest (RF) algorithm. For comparison, the general model-based decomposition method (GMD) and the original Yamaguchi four-component decomposition approach and its three improved variants (Y4O, Y4R, S4R, G4U) were used in all tests. The results reveal that the PCGMD method is highly sensitive to seasonal crop changes and matches well with the real physical characteristics of the crops. Among all methods tested, the PCGMD method using six images obtained the optimum classification performance, reaching an overall accuracy of 91.83%.
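The forward-selection step can be sketched as a greedy loop: at each round, the acquisition date whose addition most improves cross-validated random-forest accuracy joins the subset, stopping when no date helps. The data below are simulated (labels and per-date features are random, with odd-indexed dates made informative by construction); the forest size, fold count, and stopping rule are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_px, n_dates, feats_per_date = 300, 5, 4
y = rng.integers(0, 3, n_px)                      # simulated crop labels
X_dates = [rng.normal(size=(n_px, feats_per_date)) + y[:, None] * (d % 2)
           for d in range(n_dates)]               # odd dates carry class signal

selected, remaining, best_score = [], list(range(n_dates)), 0.0
while remaining:
    scores = {}
    for d in remaining:                           # try adding each unused date
        X = np.hstack([X_dates[i] for i in selected + [d]])
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        scores[d] = cross_val_score(clf, X, y, cv=3).mean()
    d_best = max(scores, key=scores.get)
    if scores[d_best] <= best_score:              # stop when no date improves accuracy
        break
    best_score = scores[d_best]
    selected.append(d_best)
    remaining.remove(d_best)
print("selected dates:", selected, f"accuracy: {best_score:.2f}")
```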
7
Identification of Crop Type Based on C-AENN Using Time Series Sentinel-1A SAR Data. Remote Sensing 2022. [DOI: 10.3390/rs14061379]
Abstract
Crop type identification is the initial stage and an important part of an agricultural monitoring system. It is well known that synthetic aperture radar (SAR) Sentinel-1A imagery provides a reliable data source for crop type identification. However, a single-temporal SAR image does not contain enough features, and the unique physical characteristics of radar images are relatively lacking, which limits its potential in crop mapping. In addition, current methods may not be applicable to time-series SAR data. To address the above issues, a new crop type identification method was proposed. Specifically, a farmland mask was first generated by the object Markov random field (OMRF) model to remove the interference of non-farmland factors. Then, the features of the standard backscatter coefficient, Sigma-naught (σ0), and the backscatter coefficient normalized by the incident angle, Gamma-naught (γ0), were extracted for each type of crop, and the optimal feature combination was found from time-series SAR images by means of Jeffries-Matusita (J-M) distance analysis. Finally, to make efficient use of the optimal multi-temporal feature combination, a new network, the convolutional-autoencoder neural network (C-AENN), was developed for the crop type identification task. To prove the effectiveness of the method, several classical machine learning methods, such as support vector machine (SVM) and random forest (RF), and deep learning methods, such as the one-dimensional convolutional neural network (1D-CNN) and stacked auto-encoder (SAE), were used for comparison. In terms of quantitative assessment, the proposed method achieved the highest accuracy, with a macro-F1 score of 0.9825, an overall accuracy (OA) score of 0.9794, and a Kappa coefficient (Kappa) score of 0.9705.
In terms of qualitative assessment, four typical regions were chosen for intuitive comparison with the sample maps, and the identification result covering the study area was compared with a contemporaneous optical image, which indicated the high accuracy of the proposed method. In short, this study enables the effective identification of crop types, which demonstrates the importance of multi-temporal radar images in feature combination and the necessity of deep learning networks to extract complex features.
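The J-M distance used for feature selection has a closed form under a Gaussian class model: JM = 2(1 - exp(-B)), where B is the Bhattacharyya distance. The sketch below computes it for a single feature with synthetic backscatter samples; the crop names, means, and variances are invented for illustration, and in practice the multivariate form would be applied per feature combination.

```python
import numpy as np

def jm_distance(x1, x2):
    """J-M distance between two one-feature samples, Gaussian class model."""
    m1, m2 = x1.mean(), x2.mean()
    v1, v2 = x1.var(ddof=1), x2.var(ddof=1)
    # Bhattacharyya distance for two univariate Gaussians
    b = 0.125 * (m1 - m2) ** 2 / ((v1 + v2) / 2) \
        + 0.5 * np.log(((v1 + v2) / 2) / np.sqrt(v1 * v2))
    return 2.0 * (1.0 - np.exp(-b))

rng = np.random.default_rng(4)
crop_a = rng.normal(-12.0, 1.0, 500)    # synthetic sigma-naught (dB), crop A
crop_b = rng.normal(-7.0, 1.5, 500)     # synthetic sigma-naught (dB), crop B
jm = jm_distance(crop_a, crop_b)
print(f"J-M distance: {jm:.3f}")        # values near 2 indicate high separability
```

Feature combinations whose pairwise J-M distances approach the upper bound of 2 would be retained for the classifier.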
8
ClassHyPer: ClassMix-Based Hybrid Perturbations for Deep Semi-Supervised Semantic Segmentation of Remote Sensing Imagery. Remote Sensing 2022. [DOI: 10.3390/rs14040879]
Abstract
Inspired by the tremendous success of deep learning (DL) and the increased availability of remote sensing data, DL-based image semantic segmentation has attracted growing interest in the remote sensing community. The ideal scenario of DL application requires a vast amount of annotated data with the same feature distribution as the area of interest. However, obtaining such enormous training sets that suit the data distribution of the target area is highly time-consuming and costly. Consistency-regularization-based semi-supervised learning (SSL) methods have gained growing popularity thanks to their ease of implementation and remarkable performance. However, there have been limited applications of SSL in remote sensing. This study comprehensively analyzed several advanced SSL methods based on consistency regularization from the perspective of data- and model-level perturbation. Then, an end-to-end SSL approach based on a hybrid perturbation paradigm was introduced to improve the DL model's performance with a limited number of labels. The proposed method integrates semantic boundary information to generate more meaningful mixed images when performing data-level perturbation. Additionally, by using implicit pseudo-supervision based on model-level perturbation, it eliminates the need to set extra threshold parameters in training. Furthermore, it can be flexibly paired with the DL model in an end-to-end manner, as opposed to the separate training stages used in traditional pseudo-labeling. Experimental results for five remote sensing benchmark datasets in the segmentation of roads, buildings, and land cover demonstrated the effectiveness and robustness of the proposed approach. It is particularly encouraging that the ratio of accuracy obtained using the proposed method with 5% labels to that using the purely supervised method with 100% labels was more than 89% on all benchmark datasets.
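The ClassMix-style data-level perturbation at the core of this family of methods can be sketched in a few NumPy lines: half of the classes predicted in one image are cut out via their mask and pasted onto a second image, with pseudo-labels mixed the same way. The image sizes, class count, and random pseudo-labels below are hypothetical; ClassHyPer additionally uses boundary information, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(5)
H, W, n_classes = 8, 8, 4
img_a = rng.uniform(size=(H, W))                  # simulated image A
img_b = rng.uniform(size=(H, W))                  # simulated image B
pred_a = rng.integers(0, n_classes, (H, W))       # pseudo-labels predicted for A
pred_b = rng.integers(0, n_classes, (H, W))       # pseudo-labels predicted for B

half = rng.permutation(n_classes)[: n_classes // 2]   # pick half of A's classes
mask = np.isin(pred_a, half)                      # binary paste mask from A

mixed_img = np.where(mask, img_a, img_b)          # paste A's regions onto B
mixed_lbl = np.where(mask, pred_a, pred_b)        # mix pseudo-labels identically
print("pasted pixels:", int(mask.sum()), "of", H * W)
```

The student network is then trained to be consistent on `mixed_img` against `mixed_lbl`.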
9
Crop Type Mapping from Optical and Radar Time Series Using Attention-Based Deep Learning. Remote Sensing 2021. [DOI: 10.3390/rs13224668]
Abstract
Crop maps are key inputs for crop inventory production and yield estimation and can inform the implementation of effective farm management practices. Producing these maps at detailed scales requires exhaustive field surveys that can be laborious, time-consuming, and expensive to replicate. With a growing archive of remote sensing data, there are enormous opportunities to exploit dense satellite image time series (SITS), temporal sequences of images over the same area. Generally, crop type mapping relies on single-sensor inputs and is solved with traditional learning algorithms such as random forests or support vector machines. Nowadays, deep learning techniques have brought significant improvements by leveraging information in both the spatial and temporal dimensions, which are relevant in crop studies. The concurrent availability of Sentinel-1 (synthetic aperture radar) and Sentinel-2 (optical) data offers a great opportunity to utilize them jointly; however, optimizing their synergy has been understudied with deep learning techniques. In this work, we analyze and compare three fusion strategies (input, layer, and decision levels) to identify the strategy that optimizes optical-radar classification performance. They are applied to a recent architecture, the pixel-set encoder-temporal attention encoder (PSE-TAE), developed specifically for object-based classification of SITS and based on self-attention mechanisms. Experiments were carried out in Brittany, in the northwest of France, with Sentinel-1 and Sentinel-2 time series. Input- and layer-level fusion competitively achieved the best overall F-score, surpassing decision-level fusion by 2%. On a per-class basis, decision-level fusion increased the accuracy of dominant classes, whereas layer-level fusion improved accuracy by up to 13% for minority classes.
Against the single-sensor baselines, multi-sensor fusion strategies identified crop types more accurately: for example, input-level fusion outperformed Sentinel-2 and Sentinel-1 by 3% and 9% in F-score, respectively. We also conducted experiments that showed the importance of fusion for early time-series classification and under high-cloud-cover conditions.
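The three fusion levels compared above can be sketched schematically in NumPy: input-level fusion concatenates the raw time series before a shared encoder, layer-level fusion concatenates per-sensor features, and decision-level fusion averages per-sensor class probabilities. The encoder and head below are untrained random stand-ins for the PSE-TAE components, and all shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
T, d_s1, d_s2, n_classes = 10, 2, 10, 5
s1 = rng.normal(size=(T, d_s1))        # simulated Sentinel-1 series (VV, VH)
s2 = rng.normal(size=(T, d_s2))        # simulated Sentinel-2 series (10 bands)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def encode(x, out_dim, seed):          # random stand-in for a PSE-TAE encoder
    w = np.random.default_rng(seed).normal(size=(x.size, out_dim))
    return np.tanh(x.ravel() @ w)

def head(f, seed):                     # random stand-in classification head
    w = np.random.default_rng(seed).normal(size=(f.size, n_classes))
    return softmax(f @ w)

p_input = head(encode(np.hstack([s1, s2]), 16, 1), 10)                # input level
p_layer = head(np.hstack([encode(s1, 8, 2), encode(s2, 8, 3)]), 11)   # layer level
p_decision = (head(encode(s1, 16, 4), 12) + head(encode(s2, 16, 5), 13)) / 2
for name, p in [("input", p_input), ("layer", p_layer), ("decision", p_decision)]:
    print(f"{name:8s} fusion -> class {p.argmax()}")
```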
10
Mapping Crop Rotation by Using Deeply Synergistic Optical and SAR Time Series. Remote Sensing 2021. [DOI: 10.3390/rs13204160]
Abstract
Crop rotations, the farming practice of growing crops in sequential seasons, occupy a core position in agricultural management and have a key influence on food security and agro-ecosystem sustainability. Despite improvements in the accuracy of mapping mono-agricultural crop distribution, crop rotation patterns remain poorly mapped. In this study, a hybrid convolutional neural network (CNN) and long short-term memory (LSTM) architecture, namely crop rotation mapping (CRM), was proposed to synergize synthetic aperture radar (SAR) and optical time series in a rotation-mapping task. The proposed end-to-end architecture achieved reasonable accuracies (i.e., accuracy > 0.85) in mapping crop rotation, outperforming other state-of-the-art non-deep and deep-learning solutions. For some confusable rotation types, such as fallow-single rice and crayfish-single rice, CRM showed substantial improvements over traditional methods. Furthermore, the deeply synergistic SAR-optical time-series data, with a corresponding attention mechanism, were effective in extracting crop rotation features, with an overall accuracy gain of four points compared with ablation models. Therefore, our proposed method advances dynamic crop rotation mapping and yields important information for agro-ecosystem management of the study area.
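The attention mechanism mentioned above can be sketched in NumPy as scaled dot-product attention over the fused SAR-optical time steps: each step gets a weight, and the weighted sum forms the temporal feature used for rotation classification. The sequence length, feature size, and the random "learned" query are hypothetical stand-ins; CRM's actual attention sits inside a trained CNN-LSTM.

```python
import numpy as np

rng = np.random.default_rng(7)
T, d = 24, 6                                  # time steps, fused SAR+optical dims
x = rng.normal(size=(T, d))                   # simulated per-step fused features

q = rng.normal(size=d)                        # random stand-in for a learned query
scores = x @ q / np.sqrt(d)                   # scaled dot-product scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()                      # attention distribution over time
feature = weights @ x                         # temporal summary vector
print("attention weights sum:", round(float(weights.sum()), 6))
```

The summary `feature` would then feed the rotation-type classifier; steps from diagnostic seasons receive larger weights after training.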
11
Benos L, Tagarakis AC, Dolias G, Berruto R, Kateris D, Bochtis D. Machine Learning in Agriculture: A Comprehensive Updated Review. Sensors (Basel, Switzerland) 2021; 21:3758. [PMID: 34071553] [PMCID: PMC8198852] [DOI: 10.3390/s21113758] [Received: 04/06/2021] [Revised: 05/21/2021] [Accepted: 05/24/2021]
Abstract
The digital transformation of agriculture has turned various aspects of management into artificial-intelligence systems that extract value from the ever-increasing data originating from numerous sources. A subset of artificial intelligence, namely machine learning, has considerable potential to handle numerous challenges in the establishment of knowledge-based farming systems. The present study aims at shedding light on machine learning in agriculture by thoroughly reviewing the recent scholarly literature based on keyword combinations of "machine learning" along with "crop management", "water management", "soil management", and "livestock management", in accordance with PRISMA guidelines. Only journal papers published within 2018-2020 were considered eligible. The results indicated that this topic pertains to different disciplines that favour convergence research at the international level. Furthermore, crop management was observed to be at the centre of attention. A plethora of machine learning algorithms were used, with those belonging to Artificial Neural Networks proving the most efficient. In addition, maize and wheat as well as cattle and sheep were the most investigated crops and animals, respectively. Finally, a variety of sensors, mounted on satellites and unmanned ground and aerial vehicles, have been utilized as a means of getting reliable input data for the analyses. It is anticipated that this study will constitute a beneficial guide to all stakeholders, enhancing awareness of the potential advantages of using machine learning in agriculture and contributing to more systematic research on this topic.
Affiliation(s)
- Lefteris Benos
- Centre of Research and Technology-Hellas (CERTH), Institute for Bio-Economy and Agri-Technology (IBO), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
- Aristotelis C. Tagarakis
- Centre of Research and Technology-Hellas (CERTH), Institute for Bio-Economy and Agri-Technology (IBO), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
- Georgios Dolias
- Centre of Research and Technology-Hellas (CERTH), Institute for Bio-Economy and Agri-Technology (IBO), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
- Remigio Berruto
- Department of Agriculture, Forestry and Food Science (DISAFA), University of Turin, Largo Braccini 2, 10095 Grugliasco, Italy
- Dimitrios Kateris
- Centre of Research and Technology-Hellas (CERTH), Institute for Bio-Economy and Agri-Technology (IBO), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
- Dionysis Bochtis
- Centre of Research and Technology-Hellas (CERTH), Institute for Bio-Economy and Agri-Technology (IBO), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
- FarmB Digital Agriculture P.C., Doiranis 17, GR 54639 Thessaloniki, Greece
12
Crop Monitoring and Classification Using Polarimetric RADARSAT-2 Time-Series Data Across Growing Season: A Case Study in Southwestern Ontario, Canada. Remote Sensing 2021. [DOI: 10.3390/rs13071394]
Abstract
Multitemporal polarimetric synthetic aperture radar (PolSAR) has proven to be a very effective technique for agricultural monitoring and crop classification. This study presents a comprehensive evaluation of crop monitoring and classification over an agricultural area in southwestern Ontario, Canada. Time-series RADARSAT-2 C-band PolSAR images spanning the entire growing season were exploited. A set of 27 representative polarimetric observables categorized into ten groups was selected and analyzed in this research. First, the responses and temporal evolutions of each of the polarimetric observables over different crop types were quantitatively analyzed. The results reveal that the backscattering coefficients in the cross-pol and Pauli second channels, the backscattering ratio between the HV and VV channels (HV/VV), the polarimetric decomposition outputs, the correlation coefficient between the HH and VV channels (ρHHVV), and the radar vegetation index (RVI) show the highest sensitivity to crop growth. Then, the capability of PolSAR time-series data of the same beam mode was also explored for crop classification using the Random Forest (RF) algorithm. The results using single groups of polarimetric observables show that polarimetric decompositions, backscattering coefficients in the Pauli and linear polarimetric channels, and correlation coefficients produced the best classification accuracies, with overall accuracies (OAs) higher than 87%. A forward selection procedure to pursue optimal classification accuracy was expanded to different perspectives, enabling an optimal combination of polarimetric observables and/or multitemporal SAR images. The results of the optimal classifications show that a few polarimetric observables or a few images on certain critical dates may produce better accuracies than the whole dataset. The best result was achieved using an optimal combination of eight groups of polarimetric observables and six SAR images, with an OA of 94.04%.
This suggests that an optimal combination considering both perspectives may be valuable for crop classification, which could serve as a guideline and is transferable for future research.
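Two of the growth-sensitive observables named above are simple ratios of linear-power backscatter and can be sketched directly; RVI is commonly defined as 8σHV / (σHH + σVV + 2σHV). The backscatter values below are synthetic, and this is an illustration of the observables only, not of the paper's processing chain.

```python
import numpy as np

rng = np.random.default_rng(8)
# synthetic sigma-naught backscatter in linear power units (not dB)
hh = rng.uniform(0.05, 0.3, 100)     # co-pol HH
vv = rng.uniform(0.05, 0.3, 100)     # co-pol VV
hv = rng.uniform(0.005, 0.1, 100)    # cross-pol HV

hv_vv_ratio = hv / vv                          # cross/co-pol ratio observable
rvi = 8 * hv / (hh + vv + 2 * hv)              # radar vegetation index
print(f"mean HV/VV: {hv_vv_ratio.mean():.3f}  mean RVI: {rvi.mean():.3f}")
```

Both observables rise as volume scattering from the crop canopy grows, which is why they track crop development through the season.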