1
Spatial Transferability of Random Forest Models for Crop Type Classification Using Sentinel-1 and Sentinel-2. REMOTE SENSING 2022. [DOI: 10.3390/rs14061493]
Abstract
Large-scale crop type mapping often requires prediction beyond the environmental settings of the training sites. Shifts in crop phenology, field characteristics, or ecological site conditions in previously unseen areas may reduce the classification performance of machine learning classifiers, which often overfit to the training sites. This study assesses the spatial transferability of Random Forest models for crop type classification across Germany. The effects of different input datasets (optical only, Synthetic Aperture Radar (SAR) only, and the optical-SAR combination) and the impact of spatial feature selection were systematically tested to identify the approach with the highest accuracy in the transfer region. Spatial feature selection, a feature selection approach combined with spatial cross-validation, removes features that carry site-specific information in the training data, information that can otherwise reduce the accuracy of the classification model in previously unseen areas. Seven study sites distributed across Germany were analyzed using reference data for the 11 major crops grown in 2018. Sentinel-1 and Sentinel-2 data from October 2017 to October 2018 served as input. Accuracy was estimated on spatially independent sample sets. The optical-SAR combination outperformed single sensors both in the training sites (maximum F1-score of 0.85) and in areas not covered by training data (maximum F1-score of 0.79). Random Forest models based on SAR features alone showed the smallest accuracy losses when transferred to unseen regions (average F1-score loss of 0.04). Compared with using the entire feature set, spatial feature selection substantially reduces the number of input features while preserving good predictive performance on unseen sites.
Altogether, applying spatial feature selection to a combination of optical-SAR features or using SAR-only features is beneficial for large-scale crop type classification where training data is not evenly distributed over the complete study region.
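The spatial feature selection described in this abstract can be sketched in scikit-learn terms: features are retained only if they improve performance under spatial cross-validation, where folds are split by site so the model is always scored on sites it never saw. This is a minimal illustrative sketch, not the authors' implementation; the synthetic data, the `site_ids` grouping, and the greedy forward search are all assumptions.

```python
# Illustrative sketch of spatial feature selection: a feature is kept only
# if it improves macro-F1 under site-wise (spatial) cross-validation.
# Data, site_ids, and the greedy search are assumptions, not the paper's code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features, n_sites = 200, 6, 5
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 3, size=n_samples)               # 3 toy crop classes
site_ids = rng.integers(0, n_sites, size=n_samples)  # spatial group per sample

cv = GroupKFold(n_splits=n_sites)                    # folds never mix sites
rf = RandomForestClassifier(n_estimators=50, random_state=0)

def spatial_score(feature_idx):
    """Mean macro-F1 over spatial folds using only the selected features."""
    scores = cross_val_score(rf, X[:, feature_idx], y,
                             groups=site_ids, cv=cv, scoring="f1_macro")
    return scores.mean()

# Greedy forward selection: add the feature that helps most on unseen sites.
selected, remaining, best = [], list(range(n_features)), -np.inf
while remaining:
    cand_scores = {f: spatial_score(selected + [f]) for f in remaining}
    f_best = max(cand_scores, key=cand_scores.get)
    if cand_scores[f_best] <= best:
        break                                        # no further improvement
    best = cand_scores[f_best]
    selected.append(f_best)
    remaining.remove(f_best)

print("selected features:", selected, "spatial CV macro-F1:", round(best, 3))
```

Because `GroupKFold` never places samples from the same site in both training and validation folds, the score rewards features that generalize across sites rather than site-specific artifacts, which is the point of spatial feature selection.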
2
Differentiation of River Sediments Fractions in UAV Aerial Images by Convolution Neural Network. REMOTE SENSING 2021. [DOI: 10.3390/rs13163188]
Abstract
Riverbed material serves multiple functions in river ecosystems, such as habitat, feeding ground, spawning ground, and shelter for aquatic organisms, and the particle size of riverbed material reflects the tractive force of the channel flow. Regular surveys of riverbed material are therefore conducted for environmental protection and river flood control projects. Field sampling is the conventional survey method, but it requires considerable labor, time, and cost to collect material on site, and its spatial representativeness is limited because only a small area of a wide riverbed can be surveyed. To address these problems, this study attempted automatic classification of riverbed conditions using aerial photography from an unmanned aerial vehicle (UAV) and image recognition with artificial intelligence (AI) to improve survey efficiency. Because AI handles the image processing, a large number of images can be processed regardless of whether they show fine or coarse particles. Aerial riverbed images with differing particle size characteristics were classified with a convolutional neural network (CNN). GoogLeNet, AlexNet, VGG-16, and ResNet, common pre-trained networks, were retrained for the new task on 70 riverbed images using transfer learning. Among the networks tested, GoogLeNet showed the best performance for this study, with an overall classification accuracy of 95.4%. On the other hand, shadows cast by the gravel appeared to cause classification errors. A network retrained with images taken in a single temporal period classified images from that same period more accurately. The results suggest the potential of evaluating riverbed materials using UAV aerial photography and CNN-based image recognition.