1. Development of a Novel Burned-Area Subpixel Mapping (BASM) Workflow for Fire Scar Detection at Subpixel Level. REMOTE SENSING 2022. [DOI: 10.3390/rs14153546] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 02/01/2023]
Abstract
The accurate detection of burned forest area is essential for post-fire management and assessment and for quantifying carbon budgets, so it is imperative to map burned areas accurately. Currently, few burned-area products are available worldwide, and researchers have typically mapped burned areas directly at the pixel level, even though a pixel is usually a mixture of burned area and other land cover types. To improve burned-area mapping at the subpixel level, we proposed a Burned Area Subpixel Mapping (BASM) workflow and applied it to Sentinel-2 datasets. In this study, true fire scar information was provided by the Department of Emergency Management of Hunan Province, China. To validate the accuracy of the BASM workflow for detecting burned areas at the subpixel level, we applied it to Sentinel-2 imagery and compared the detected burned area with in situ measurements at fifteen fire-scar reference sites located in Hunan Province, China. The results show that the proposed method successfully generated burned-area maps at the subpixel level. Among the variants, the BASM-Feature Extraction Rule Based (BASM-FERB) method minimized misclassification and noise effects more effectively than the BASM-Random Forest (BASM-RF), BASM-Backpropagation Neural Net (BASM-BPNN), BASM-Support Vector Machine (BASM-SVM), and BASM-notra methods. We conducted a comparison study among BASM-FERB, BASM-RF, BASM-BPNN, BASM-SVM, and BASM-notra using five accuracy evaluation indices, i.e., overall accuracy (OA), user’s accuracy (UA), producer’s accuracy (PA), intersection over union (IoU), and Kappa coefficient (Kappa).
BASM-FERB achieved an OA, UA, IoU, and Kappa of 98.11%, 81.72%, 74.32%, and 83.98%, respectively, outperforming BASM-RF, BASM-BPNN, BASM-SVM, and BASM-notra, although the average PA of BASM-RF (89.97%) and BASM-notra (91.36%) exceeded that of BASM-FERB (89.52%). We conclude that the newly proposed BASM workflow can map burned areas at the subpixel level, providing greater accuracy regarding the burned area for post-forest fire management and assessment.
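The five evaluation indices used in this comparison all derive from a binary confusion matrix. Below is a minimal NumPy sketch using the standard definitions (the paper's exact implementation is not given in the abstract):

```python
import numpy as np

def binary_map_metrics(pred, truth):
    """Compute OA, UA, PA, IoU, and Kappa for a binary burned-area map.

    `pred` and `truth` are arrays of 0/1 (1 = burned). Standard
    definitions; the paper's exact implementation may differ.
    """
    pred = np.asarray(pred, bool).ravel()
    truth = np.asarray(truth, bool).ravel()
    tp = np.sum(pred & truth)    # burned, correctly detected
    fp = np.sum(pred & ~truth)   # commission error
    fn = np.sum(~pred & truth)   # omission error
    tn = np.sum(~pred & ~truth)  # unburned, correctly rejected
    n = tp + fp + fn + tn
    oa = (tp + tn) / n           # overall accuracy
    ua = tp / (tp + fp)          # user's accuracy (precision)
    pa = tp / (tp + fn)          # producer's accuracy (recall)
    iou = tp / (tp + fp + fn)    # intersection over union
    # Kappa: observed agreement corrected for chance agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (oa - pe) / (1 - pe)
    return oa, ua, pa, iou, kappa
```

UA and PA pull in opposite directions, which is why BASM-RF and BASM-notra can show a higher PA than BASM-FERB while losing on UA, IoU, and Kappa.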
2. Active Fire Detection from Landsat-8 Imagery Using Deep Multiple Kernel Learning. REMOTE SENSING 2022. [DOI: 10.3390/rs14040992] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Indexed: 12/31/2022]
Abstract
Active fires are devastating natural disasters that cause socio-economic damage across the globe. Detecting and mapping these disasters requires efficient tools, scientific methods, and reliable observations. Satellite images have been widely used for active fire detection (AFD) in recent years due to their nearly global coverage. However, accurate AFD and mapping in satellite imagery remains a challenging task for the remote sensing community, which still relies mainly on traditional methods. Deep learning (DL) methods have recently yielded outstanding results in remote sensing applications, yet they have received less attention for AFD in satellite imagery. This study presented a deep convolutional neural network (CNN), “MultiScale-Net”, for AFD in Landsat-8 datasets at the pixel level. The proposed network had two main characteristics: (1) convolution kernels with multiple sizes, and (2) dilated convolution layers (DCLs) with various dilation rates. Moreover, this paper suggested an innovative Active Fire Index (AFI) for AFD. AFI was added to the network inputs, which consist of the SWIR2, SWIR1, and Blue bands, to improve the performance of the MultiScale-Net. In an ablation analysis, three different scenarios were designed for the multi-size kernels, dilation rates, and input variables individually, resulting in 27 distinct models. The quantitative results indicated that the model with AFI-SWIR2-SWIR1-Blue as the input variables, using kernels of sizes 3 × 3, 5 × 5, and 7 × 7 simultaneously and a dilation rate of 2, achieved the highest F1-score and IoU of 91.62% and 84.54%, respectively. Stacking AFI with the three Landsat-8 bands led to fewer false negative (FN) pixels. Furthermore, our qualitative assessment revealed that these models could detect single fire pixels detached from large fire zones by taking advantage of the multi-size kernels.
Overall, the MultiScale-Net met expectations in detecting fires of varying sizes and shapes over challenging test samples.
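Two of the design choices above are easy to make concrete. The receptive-field gain from dilation follows a simple formula, and adding AFI to the inputs is a channel-stacking step. In the sketch below, the AFI formula itself is not reproduced (the abstract does not give it), so the index is passed in precomputed:

```python
import numpy as np

def effective_kernel_size(k, d):
    # A k-tap kernel with dilation rate d spans d*(k-1)+1 input pixels.
    return d * (k - 1) + 1

# Multi-size kernels 3/5/7 at dilation rate 2, as in the best model:
spans = [effective_kernel_size(k, 2) for k in (3, 5, 7)]  # widened extents

def stack_inputs(afi, swir2, swir1, blue):
    """Stack the AFI band with SWIR2, SWIR1, and Blue into an (H, W, 4)
    input tensor. `afi` is assumed precomputed, since the AFI formula
    is not given in the abstract."""
    return np.stack([afi, swir2, swir1, blue], axis=-1)
```

Running multiple kernel sizes in parallel lets one layer see fire pixels at several spatial scales at once, which is the property credited for catching isolated fire pixels away from large fire zones.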
3. A Dual Attention Convolutional Neural Network for Crop Classification Using Time-Series Sentinel-2 Imagery. REMOTE SENSING 2022. [DOI: 10.3390/rs14030498] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Indexed: 01/27/2023]
Abstract
Accurate and timely mapping of crop types, along with reliable information about cultivation patterns and areas, plays a key role in various applications, including food security and sustainable agriculture management. Remote sensing (RS) has been employed extensively for crop type classification. However, accurate mapping of crop types and extents remains a challenge, especially with traditional machine learning methods. Therefore, in this study, a novel framework based on a deep convolutional neural network (CNN) with a dual attention module (DAM) was proposed to classify crops using Sentinel-2 time-series datasets. A new DAM was implemented to extract informative deep features by taking advantage of both the spectral and spatial characteristics of the Sentinel-2 datasets. The spectral and spatial attention modules (AMs) were applied to investigate, respectively, the behavior of crops during the growing season and their neighborhood properties (e.g., textural characteristics and spatial relations to surrounding crops). The proposed network contained two streams: (1) convolution blocks for deep feature extraction and (2) several DAMs, employed after each convolution block. The first stream included three multi-scale residual convolution blocks, where spectral attention blocks were mainly applied to extract deep spectral features. The second stream was built from four multi-scale convolution blocks with a spatial AM. In this study, over 200,000 samples from six crop types (i.e., alfalfa, broad bean, wheat, barley, canola, and garden) and three non-crop classes (i.e., built-up, barren, and water) were collected to train and validate the proposed framework. The results demonstrated that the proposed method achieved a high overall accuracy of 98.54% and a Kappa coefficient of 0.981.
It also outperformed other state-of-the-art classification methods, including RF, XGBoost, R-CNN, 2D-CNN, 3D-CNN, and CBAM, indicating its high potential to discriminate between different crop types.
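The abstract does not specify the DAM's internals, so the following is only a generic, CBAM-style illustration of the two attention directions it describes: spectral (channel) attention reweights bands, spatial attention reweights pixel locations, both on an (H, W, C) feature map:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spectral_attention(x):
    """Reweight channels by softmax over global-average-pooled channel
    descriptors. Generic sketch, not the paper's exact DAM."""
    w = softmax(x.mean(axis=(0, 1)))           # (C,) channel weights
    return x * w                               # broadcast over H and W

def spatial_attention(x):
    """Reweight pixel locations by softmax over a channel-pooled
    saliency map. Generic sketch, not the paper's exact DAM."""
    s = softmax(x.mean(axis=-1).ravel()).reshape(x.shape[:2])
    return x * s[..., None]                    # broadcast over channels
```

In the described architecture these reweighting steps sit after each convolution block, letting the spectral stream emphasize informative bands over the growing season and the spatial stream emphasize informative neighborhoods.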
4. TCD-Net: A Novel Deep Learning Framework for Fully Polarimetric Change Detection Using Transfer Learning. REMOTE SENSING 2022. [DOI: 10.3390/rs14030438] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Indexed: 01/16/2023]
Abstract
Due to anthropogenic and natural activities, the land surface changes continuously over time. The accurate and timely detection of changes is of great importance for environmental monitoring, resource management, and planning activities. In this study, a novel deep learning-based change detection algorithm is proposed for bi-temporal polarimetric synthetic aperture radar (PolSAR) imagery using a transfer learning (TL) method. In particular, the method automatically extracts changes in three main steps: (1) pre-processing, (2) parallel pseudo-label training sample generation based on a pre-trained model and the fuzzy c-means (FCM) clustering algorithm, and (3) classification. Moreover, a new end-to-end three-channel deep neural network, called TCD-Net, is introduced in this study. TCD-Net can learn stronger and more abstract representations of the spatial information around a given pixel. In addition, by adding an adaptive multi-scale shallow block and an adaptive multi-scale residual block to the TCD-Net architecture, the model remains sensitive to objects of various sizes while using far fewer parameters. Experimental results on two Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) bi-temporal datasets demonstrated the effectiveness of the proposed algorithm compared to other well-known methods, with an overall accuracy of 96.71% and a kappa coefficient of 0.82.
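Step (2) relies on fuzzy c-means clustering to generate pseudo-labels. Below is a minimal NumPy sketch of standard FCM only; the pairing with a pre-trained model described in the paper is omitted, and `m` is the usual fuzzifier exponent:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means (sketch). X: (n, d) feature vectors.
    Returns soft memberships U (n, c) and cluster centers (c, d)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # rows sum to 1
    for _ in range(iters):
        Um = U ** m                            # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared distances to each center (epsilon avoids divide-by-zero)
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1) + 1e-12
        inv = 1.0 / d2 ** (1.0 / (m - 1))      # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers
```

Taking `U.argmax(axis=1)` on "changed" vs. "unchanged" feature vectors yields hard pseudo-labels that can then feed the classification step.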