1. Google Earth Engine and Artificial Intelligence (AI): A Comprehensive Review. REMOTE SENSING 2022. [DOI: 10.3390/rs14143253] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Indexed: 02/04/2023]
Abstract
Remote sensing (RS) plays an important role in gathering data for many critical domains (e.g., global climate change, risk assessment and vulnerability reduction of natural hazards, resilience of ecosystems, and urban planning). Retrieving, managing, and analyzing large amounts of RS imagery poses substantial challenges. Google Earth Engine (GEE) provides a scalable, cloud-based platform for geospatial retrieval and processing. GEE also provides access to the vast majority of freely available, public, multi-temporal RS data and offers free cloud-based computational power for geospatial data analysis. Artificial intelligence (AI) methods are a critical enabling technology for automating the interpretation of RS imagery, particularly in object-based domains, so the integration of AI methods into GEE represents a promising path towards operationalizing automated RS-based monitoring programs. In this article, we provide a systematic review of relevant literature to identify recent research that incorporates AI methods in GEE. We then discuss some of the major challenges of integrating GEE and AI and identify several priorities for future research. We also developed an interactive web application that allows readers to intuitively and dynamically explore the publications included in this literature review.
2. A Dual Attention Convolutional Neural Network for Crop Classification Using Time-Series Sentinel-2 Imagery. REMOTE SENSING 2022. [DOI: 10.3390/rs14030498] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Indexed: 01/27/2023]
Abstract
Accurate and timely mapping of crop types, together with reliable information on cultivation patterns and areas, plays a key role in various applications, including food security and sustainable agriculture management. Remote sensing (RS) has been extensively employed for crop type classification. However, accurate mapping of crop types and extents remains a challenge, especially with traditional machine learning methods. Therefore, in this study, a novel framework based on a deep convolutional neural network (CNN) with a dual attention module (DAM), applied to Sentinel-2 time-series datasets, was proposed to classify crops. A new DAM was implemented to extract informative deep features by taking advantage of both the spectral and spatial characteristics of Sentinel-2 datasets. The spectral and spatial attention modules (AMs) were applied, respectively, to investigate the behavior of crops during the growing season and their neighborhood properties (e.g., textural characteristics and spatial relations to surrounding crops). The proposed network contained two streams: (1) convolution blocks for deep feature extraction and (2) several DAMs, employed after each convolution block. The first stream included three multi-scale residual convolution blocks, where the spectral attention blocks were mainly applied to extract deep spectral features. The second stream was built from four multi-scale convolution blocks with a spatial AM. In this study, over 200,000 samples from six crop types (i.e., alfalfa, broad bean, wheat, barley, canola, and garden) and three non-crop classes (i.e., built-up, barren, and water) were collected to train and validate the proposed framework. The results demonstrated that the proposed method achieved an overall accuracy of 98.54% and a Kappa coefficient of 0.981. It also outperformed other state-of-the-art classification methods, including RF, XGBoost, R-CNN, 2D-CNN, 3D-CNN, and CBAM, indicating its high potential to discriminate different crop types.
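The spectral/spatial dual-attention idea can be illustrated with a minimal CBAM-style sketch in NumPy. This is not the paper's architecture (which uses learned multi-scale residual convolutions); the pooling-plus-sigmoid weighting below is a simplified, parameter-free stand-in, and the cube shape is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spectral_attention(cube):
    """Channel-wise (spectral) attention: reweight each band by a score
    derived from its global average- and max-pooled responses."""
    # cube: (bands, height, width)
    avg = cube.mean(axis=(1, 2))           # (bands,)
    mx = cube.max(axis=(1, 2))             # (bands,)
    weights = sigmoid(avg + mx)            # one weight per band, in (0, 1)
    return cube * weights[:, None, None]

def spatial_attention(cube):
    """Spatial attention: reweight each pixel by band-pooled statistics."""
    avg = cube.mean(axis=0)                # (height, width)
    mx = cube.max(axis=0)
    weights = sigmoid(avg + mx)            # one weight per pixel
    return cube * weights[None, :, :]

cube = np.random.rand(10, 8, 8)            # 10 spectral bands, 8x8 patch
out = spatial_attention(spectral_attention(cube))
print(out.shape)  # (10, 8, 8)
```

In the paper, the weights would instead come from small learned sub-networks applied after each convolution block, but the reweighting pattern is the same.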
3. Classifying Crop Types Using Two Generations of Hyperspectral Sensors (Hyperion and DESIS) with Machine Learning on the Cloud. REMOTE SENSING 2021. [DOI: 10.3390/rs13224704] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 12/24/2022]
Abstract
Advances in spaceborne hyperspectral (HS) remote sensing, cloud computing, and machine learning can help measure, model, map, and monitor agricultural crops to address global food and water security issues, for example by providing accurate estimates of crop area and yield for modeling agricultural productivity. Leveraging these advances, we used the Earth Observing-1 (EO-1) Hyperion historical archive and the new-generation DLR Earth Sensing Imaging Spectrometer (DESIS) data to evaluate the performance of hyperspectral narrowbands in classifying major agricultural crops of the U.S. with machine learning (ML) on Google Earth Engine (GEE). EO-1 Hyperion images from the 2010–2013 growing seasons and DESIS images from the 2019 growing season were used to classify three world crops (corn, soybean, and winter wheat) along with other crops and non-crops near Ponca City, Oklahoma, USA. The supervised classification algorithms Random Forest (RF), Support Vector Machine (SVM), and Naive Bayes (NB), along with the unsupervised clustering algorithm WekaXMeans (WXM), were run using selected optimal Hyperion and DESIS HS narrowbands (HNBs). RF and SVM returned the highest overall, producer's, and user's accuracies, with the performances of NB and WXM being substantially lower. The best accuracies were achieved with two or three images throughout the growing season, especially a combination of an earlier month (June or July) and a later month (August or September). The narrow 2.55 nm bandwidth of DESIS provided numerous spectral features along the 400–1000 nm spectral range relative to the smoother Hyperion spectral signatures with 10 nm bandwidth in the 400–2500 nm spectral range. Out of 235 DESIS HNBs, 29 were deemed optimal for agricultural study. Advances in ML and cloud computing can greatly facilitate HS data analysis, especially as more HS datasets, tools, and algorithms become available on the Cloud.
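Of the classifiers compared above, Naive Bayes is simple enough to sketch from scratch. The snippet below fits a Gaussian NB on synthetic "narrowband reflectance" clusters (29 features, echoing the optimal DESIS band count reported above); the data and separability are invented for illustration, and the real study ran these classifiers on GEE rather than locally:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hyperspectral narrowband reflectance: three
# "crop" classes, each a Gaussian cluster in a 29-band feature space.
n_bands, n_per_class = 29, 100
means = rng.uniform(0.1, 0.9, size=(3, n_bands))
X = np.vstack([rng.normal(m, 0.05, size=(n_per_class, n_bands)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)

# Gaussian Naive Bayes: per-class band means/variances, bands treated
# as conditionally independent, uniform class prior.
class_means = np.array([X[y == c].mean(axis=0) for c in range(3)])
class_vars = np.array([X[y == c].var(axis=0) + 1e-9 for c in range(3)])

def predict(samples):
    # log-likelihood of each sample under each class's independent Gaussians
    ll = -0.5 * (((samples[:, None, :] - class_means) ** 2) / class_vars
                 + np.log(2 * np.pi * class_vars)).sum(axis=2)
    return ll.argmax(axis=1)

acc = (predict(X) == y).mean()
print(f"training accuracy: {acc:.3f}")
```

On well-separated synthetic clusters NB performs well; the study's finding is that on real crop spectra it falls substantially behind RF and SVM.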
4. Identifying Dynamic Changes in Water Surface Using Sentinel-1 Data Based on Genetic Algorithm and Machine Learning Techniques. REMOTE SENSING 2021. [DOI: 10.3390/rs13183745] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 12/19/2022]
Abstract
Knowledge of water surface changes provides invaluable information for water resources management and flood monitoring. However, the accurate identification of water bodies is a long-term challenge due to human activities and climate change. Sentinel-1 synthetic aperture radar (SAR) data have drawn increasing attention for water extraction due to their all-weather capability, sensitivity to water, and high spatial and temporal resolutions. This study investigated the abilities of random forest (RF), Extreme Gradient Boosting (XGB), and support vector machine (SVM) methods to identify water bodies using Sentinel-1 imagery in the upper reaches of the Yangtze River, China. Three sets of hyper-parameters (default values, and values optimized by grid search and by a genetic algorithm) were examined for each model. Model performances were evaluated using Sentinel-1 images of both the development site and a transfer site. The results showed that SVM outperformed RF and XGB under all three scenarios on both the validation and transfer sites. Among them, the SVM optimized by the genetic algorithm obtained the best accuracy, with precisions of 0.9917 and 0.985, kappa statistics of 0.9833 and 0.97, and F1-scores of 0.9919 and 0.9848 on the validation and transfer sites, respectively. The best model was then used to identify the dynamic changes in water surfaces during the 2020 flood season in the study area. Overall, the study further demonstrated that SVM optimized using a genetic algorithm is a suitable method for monitoring water surface changes with a Sentinel-1 dataset.
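The genetic-algorithm hyperparameter search can be sketched generically. Below, a toy quadratic objective stands in for the cross-validated SVM error over (log C, log gamma); in the actual study the objective would be a full SVM training/validation run, and the population size, mutation scale, and bounds here are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def cv_error(log_c, log_gamma):
    """Toy stand-in for cross-validated SVM error as a function of
    log10(C) and log10(gamma); the real objective would train an SVM.
    Minimum placed arbitrarily at (1, -2)."""
    return (log_c - 1.0) ** 2 + (log_gamma + 2.0) ** 2

def genetic_search(pop_size=30, n_gen=40, bounds=(-3.0, 3.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(n_gen):
        fitness = np.array([cv_error(*ind) for ind in pop])
        order = np.argsort(fitness)
        parents = pop[order[: pop_size // 2]]           # selection: keep best half
        # crossover: average randomly paired parents
        idx = rng.integers(0, len(parents), size=(pop_size - len(parents), 2))
        children = parents[idx].mean(axis=1)
        children += rng.normal(0, 0.2, children.shape)  # mutation
        pop = np.clip(np.vstack([parents, children]), lo, hi)
    best = pop[np.argmin([cv_error(*ind) for ind in pop])]
    return best

best = genetic_search()
print(best)  # should approach the objective's minimum at (1.0, -2.0)
```

Compared with grid search, the GA spends its evaluations adaptively, which is why it can find better hyper-parameters for the same budget.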
5. Characterizing the Up-To-Date Land-Use and Land-Cover Change in Xiong’an New Area from 2017 to 2020 Using the Multi-Temporal Sentinel-2 Images on Google Earth Engine. ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION 2021. [DOI: 10.3390/ijgi10070464] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 11/16/2022]
Abstract
Land use and land cover (LULC) are fundamental units of human activities. It is therefore important to obtain accurate and timely LULC maps of regions undergoing dramatic LULC changes. In April 2017, a new state-level area, Xiong’an New Area, was established in China. To better characterize the LULC changes in Xiong’an New Area, this study makes full use of multi-temporal 10-m Sentinel-2 images, the cloud-computing Google Earth Engine (GEE) platform, and the powerful classification capability of random forest (RF) models to generate continuous LULC maps from 2017 to 2020. To do so, a novel multiple-RF classification framework is adopted: a classification probability is output for each monthly composite, and the multiple probability maps are aggregated to generate the final classification map. Based on the obtained LULC maps, this study analyzes the spatio-temporal changes of LULC types over the four years and the different change patterns in three counties. Experimental results indicate that the derived LULC maps achieve high accuracy for each year, with overall accuracy and Kappa values no less than 0.95. It is also found that the changed areas account for nearly 36%, and that dry farmland, impervious surfaces, and other land-cover types have changed dramatically and present varying change patterns across the three counties, which might be caused by the latest planning of Xiong’an New Area. The 10-m, four-year LULC maps obtained in this study are expected to provide valuable information for monitoring and understanding the LULC changes that have taken place in Xiong’an New Area.
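The aggregation step described above (per-month RF probability maps combined into one final map) can be sketched as follows. The array shapes and the simple averaging rule are assumptions for illustration; the study's RF runs and composites live on GEE:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-month class-probability maps, e.g. from one RF run per
# monthly Sentinel-2 composite: shape (months, height, width, classes),
# with probabilities summing to 1 per pixel.
months, h, w, n_classes = 12, 4, 4, 5
raw = rng.random((months, h, w, n_classes))
monthly_probs = raw / raw.sum(axis=-1, keepdims=True)

# Aggregate by averaging probabilities across months, then take the
# most probable class per pixel as the final LULC label.
mean_probs = monthly_probs.mean(axis=0)          # (h, w, classes)
lulc_map = mean_probs.argmax(axis=-1)            # (h, w)
print(lulc_map.shape)  # (4, 4)
```

Averaging soft probabilities before the argmax, rather than majority-voting hard monthly labels, lets confident months outweigh ambiguous ones.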
6. Air Pollution Prediction with Multi-Modal Data and Deep Neural Networks. REMOTE SENSING 2020. [DOI: 10.3390/rs12244142] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Indexed: 12/17/2022]
Abstract
Air pollution is a growing and serious environmental problem, especially in urban areas affected by increasing migration rates. The large availability of sensor data enables the adoption of analytical tools to provide decision support capabilities. Employing sensors facilitates air pollution monitoring, but the lack of predictive capability limits such systems’ potential in practical scenarios. On the other hand, forecasting methods offer the opportunity to predict future pollution in specific areas, potentially suggesting useful preventive measures. To date, many works have tackled the problem of air pollution forecasting, most of them based on sequence models trained on raw pollution data and subsequently used to make predictions. This paper proposes a novel approach, evaluating four different architectures that utilize camera images to estimate air pollution in the imaged areas. The images are further enriched with weather data to boost classification accuracy. The proposed approach exploits generative adversarial networks combined with data augmentation techniques to mitigate the class imbalance problem. The experiments show that the proposed method achieves a robust accuracy of up to 0.88, comparable to sequence models and conventional models that utilize air pollution data. This is a remarkable result, considering that historic air pollution data are directly related to the output (future air pollution), whereas the proposed architecture recognizes air pollution from camera images, an inherently much more difficult problem.
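The class-imbalance mitigation step can be illustrated with a deliberately simplified stand-in. The paper uses GAN-generated samples plus data augmentation; the sketch below substitutes plain random oversampling with additive jitter, which shows only the rebalancing mechanics, and the dataset shapes are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

def balance_with_augmentation(X, y, noise_scale=0.01):
    """Oversample minority classes up to the majority-class count, adding
    small random jitter to each duplicate as a crude augmentation
    stand-in (the paper generates new samples with a GAN instead)."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_out, y_out = [X], [y]
    for c, n in zip(classes, counts):
        deficit = target - n
        if deficit > 0:
            idx = rng.choice(np.flatnonzero(y == c), size=deficit)
            X_out.append(X[idx] + rng.normal(0, noise_scale, X[idx].shape))
            y_out.append(np.full(deficit, c))
    return np.vstack(X_out), np.concatenate(y_out)

# Toy 90/10 imbalanced "image feature" dataset (hypothetical shapes)
X = rng.random((100, 16))
y = np.array([0] * 90 + [1] * 10)
Xb, yb = balance_with_augmentation(X, y)
print(np.bincount(yb))  # [90 90]
```

A GAN replaces the jitter step with samples drawn from a learned generator, producing more varied minority-class examples than perturbed copies.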