1. Kurban R. Gaussian of Differences: A Simple and Efficient General Image Fusion Method. Entropy 2023; 25:1215. PMID: 37628245; PMCID: PMC10453154; DOI: 10.3390/e25081215.
Abstract
The separate analysis of images obtained from a single source using different camera settings or spectral bands, whether from one or more than one sensor, is quite difficult. To solve this problem, a single image containing all of the distinctive pieces of information in each source image is generally created by combining the images, a process called image fusion. In this paper, a simple and efficient, pixel-based image fusion method is proposed that relies on weighting the edge information associated with each pixel of all of the source images proportional to the distance from their neighbors by employing a Gaussian filter. The proposed method, Gaussian of differences (GD), was evaluated using multi-modal medical images, multi-sensor visible and infrared images, multi-focus images, and multi-exposure images, and was compared to existing state-of-the-art fusion methods by utilizing objective fusion quality metrics. The parameters of the GD method are further enhanced by employing the pattern search (PS) algorithm, resulting in an adaptive optimization strategy. Extensive experiments illustrated that the proposed GD fusion method ranked better on average than others in terms of objective quality metrics and CPU time consumption.
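The pixel-weighted idea described above — per-pixel edge information, smoothed with a Gaussian filter, used as fusion weights — can be sketched as follows. This is an illustrative reconstruction, not the authors' reference implementation; the difference operator (deviation from a Gaussian-blurred copy) and the `sigma` default are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gd_fuse(images, sigma=2.0, eps=1e-12):
    """Illustrative pixel-based fusion: weight each source image by the
    Gaussian-smoothed magnitude of its local differences (edge activity)."""
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    weights = np.empty_like(stack)
    for k, img in enumerate(stack):
        # deviation from a blurred copy approximates local edge information
        diff = np.abs(img - gaussian_filter(img, sigma))
        # smooth the edge map so weights fall off with distance from an edge
        weights[k] = gaussian_filter(diff, sigma)
    weights /= weights.sum(axis=0, keepdims=True) + eps
    return (weights * stack).sum(axis=0)
```

In perfectly flat regions all edge weights collapse toward zero; a real implementation would fall back to a plain average there, whereas this sketch only guards the division with `eps`.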
Affiliation(s)
- Rifat Kurban
- Department of Computer Engineering, Abdullah Gul University, 38080 Kayseri, Turkey
2. Singh P, Singh S, Paprzycki M. DICO: Dingo coot optimization-based ZF net for pansharpening. International Journal of Knowledge-Based and Intelligent Engineering Systems 2023. DOI: 10.3233/kes-221530.
Abstract
With the recent advancements in technology, there has been a tremendous growth in the usage of images captured using satellites in various applications, like defense, academics, resource exploration, land-use mapping, and so on. Certain mission-critical applications need images of higher visual quality, but the images captured by the sensors normally suffer from a tradeoff between high spectral and spatial resolutions. Hence, for obtaining images with high visual quality, it is necessary to combine the low resolution multispectral (MS) image with the high resolution panchromatic (PAN) image, and this is accomplished by means of pansharpening. In this paper, an efficient pansharpening technique is devised by using a hybrid optimized deep learning network. Zeiler and Fergus network (ZF Net) is utilized for performing the fusion of the sharpened and upsampled MS image with the PAN image. A novel Dingo coot (DICO) optimization is created for updating the learning parameters and weights of the ZF Net. Moreover, the devised DICO_ZF Net for pansharpening is examined for its effectiveness by considering measures, like Peak Signal To Noise Ratio (PSNR) and Degree of Distortion (DD) and is found to have attained values at 50.177 dB and 0.063 dB.
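Of the two figures quoted, PSNR has a standard definition worth keeping at hand; a minimal sketch (the 8-bit peak value of 255 is an assumption — the abstract does not state the variant used):

```python
import numpy as np

def psnr(reference: np.ndarray, fused: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in decibels; higher means the fused
    image is closer to the reference."""
    diff = np.asarray(reference, np.float64) - np.asarray(fused, np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```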
Affiliation(s)
- Preeti Singh
- Department of Computer Science and Engineering, Madan Mohan Malaviya University of Technology, Gorakhpur (U.P.), India
- Sarvpal Singh
- Department of Information Technology and Computer Application, Madan Mohan Malaviya University of Technology, Gorakhpur (U.P.), India
3. Chen G, Lu H, Zou W, Li L, Emam M, Chen X, Jing W, Wang J, Li C. Spatiotemporal Fusion for Spectral Remote Sensing: A Statistical Analysis and Review. Journal of King Saud University - Computer and Information Sciences 2023. DOI: 10.1016/j.jksuci.2023.02.021.
4. Ma W, Wang K, Li J, Yang SX, Li J, Song L, Li Q. Infrared and Visible Image Fusion Technology and Application: A Review. Sensors 2023; 23:599. PMID: 36679396; PMCID: PMC9862268; DOI: 10.3390/s23020599.
Abstract
The images acquired by a single visible light sensor are very susceptible to light conditions, weather changes, and other factors, while the images acquired by a single infrared light sensor generally have poor resolution, low contrast, low signal-to-noise ratio, and blurred visual effects. The fusion of visible and infrared light can avoid the disadvantages of two single sensors and, in fusing the advantages of both sensors, significantly improve the quality of the images. The fusion of infrared and visible images is widely used in agriculture, industry, medicine, and other fields. In this study, firstly, the architecture of mainstream infrared and visible image fusion technology and application was reviewed; secondly, the application status in robot vision, medical imaging, agricultural remote sensing, and industrial defect detection fields was discussed; thirdly, the evaluation indicators of the main image fusion methods were combined into the subjective evaluation and the objective evaluation, the properties of current mainstream technologies were then specifically analyzed and compared, and the outlook for image fusion was assessed; finally, infrared and visible image fusion was summarized. The results show that the definition and efficiency of the fused infrared and visible image had been improved significantly. However, there were still some problems, such as the poor accuracy of the fused image, and irretrievably lost pixels. There is a need to improve the adaptive design of the traditional algorithm parameters, to combine the innovation of the fusion algorithm and the optimization of the neural network, so as to further improve the image fusion accuracy, reduce noise interference, and improve the real-time performance of the algorithm.
Affiliation(s)
- Weihong Ma
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Kun Wang
- School of Electrical Engineering, Chongqing University of Science & Technology, Chongqing 401331, China
- Jiawei Li
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Simon X. Yang
- Advanced Robotics and Intelligent Systems Laboratory, School of Engineering, University of Guelph, Guelph, ON N1G 2W1, Canada
- Junfei Li
- Advanced Robotics and Intelligent Systems Laboratory, School of Engineering, University of Guelph, Guelph, ON N1G 2W1, Canada
- Lepeng Song
- School of Electrical Engineering, Chongqing University of Science & Technology, Chongqing 401331, China
- Qifeng Li
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
5. Scalable Semiparametric Spatio-temporal Regression for Large Data Analysis. Journal of Agricultural, Biological and Environmental Statistics 2022. DOI: 10.1007/s13253-022-00525-y.
6. A New Orbiting Deployable System for Small Satellite Observations for Ecology and Earth Observation. Remote Sensing 2022. DOI: 10.3390/rs14092066.
Abstract
In this paper, we present several study cases focused on marine, oceanographic, and atmospheric environments, which would greatly benefit from the use of a deployable system for small satellite observations. As opposed to the large standard ones, small satellites have become an effective and affordable alternative access to space, owing to their lower costs, innovative design and technology, and higher revisiting times, when launched in a constellation configuration. One of the biggest challenges is created by the small satellite instrumentation working in the visible (VIS), infrared (IR), and microwave (MW) spectral ranges, for which the resolution of the acquired data depends on the physical dimension of the telescope and the antenna collecting the signal. In this respect, a deployable payload, fitting the limited size and mass imposed by the small satellite architecture, once unfolded in space, can reach performances similar to those of larger satellites. In this study, we show how ecology and Earth Observations can benefit from data acquired by small satellites, and how they can be further improved thanks to deployable payloads. We focus on DORA—Deployable Optics for Remote sensing Applications—in the VIS to TIR spectral range, and on a planned application in the MW spectral range, and we carry out a radiometric analysis to verify its performances for Earth Observation studies.
7. Mizuochi H, Iwao K, Yamamoto S. Thermal remote sensing over heterogeneous urban and suburban landscapes using sensor-driven super-resolution. PLoS One 2022; 17:e0266541. PMID: 35385560; PMCID: PMC8986004; DOI: 10.1371/journal.pone.0266541.
Abstract
Thermal remote sensing is an important tool for monitoring regional climate and environment, including urban heat islands. However, it suffers from a relatively lower spatial resolution compared to optical remote sensing. To improve the spatial resolution, various “data-driven” image processing techniques (pan-sharpening, kernel-driven methods, and machine learning) have been developed in the previous decades. Such empirical super-resolution methods create visually appealing thermal images; however, they may sacrifice radiometric consistency because they are not necessarily sensitive to specific sensor features. In this paper, we evaluated a “sensor-driven” super-resolution approach that explicitly considers the sensor blurring process, to ensure radiometric consistency with the original thermal image during high-resolution thermal image retrieval. The sensor-driven algorithm was applied to a cloud-free Moderate Resolution Imaging Spectroradiometer (MODIS) scene of heterogeneous urban and suburban landscape that included built-up areas, low mountains with a forest, a lake, croplands, and river channels. Validation against the reference high-resolution thermal image obtained by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) shows that the sensor-driven algorithm can downscale the MODIS image to 250-m resolution, while maintaining a high statistical consistency with the original MODIS and ASTER images. Part of our algorithm, such as radiometric offset correction based on the Mahalanobis distance, may be integrated with other existing approaches in the future.
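The radiometric offset correction mentioned at the end rests on the Mahalanobis distance. The abstract does not detail how that distance is turned into an offset, so only the standard distance itself is sketched here:

```python
import numpy as np

def mahalanobis(x, mean, cov) -> float:
    """Mahalanobis distance of an observation x from a distribution
    described by its mean vector and covariance matrix."""
    d = np.asarray(x, np.float64) - np.asarray(mean, np.float64)
    # solve cov @ y = d rather than forming an explicit inverse
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```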
Affiliation(s)
- Hiroki Mizuochi
- Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology, Tsukuba, Ibaraki, Japan
- Koki Iwao
- Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology, Tsukuba, Ibaraki, Japan
- Satoru Yamamoto
- Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology, Tsukuba, Ibaraki, Japan
8. An object-based sparse representation model for spatiotemporal image fusion. Sci Rep 2022; 12:5021. PMID: 35322054; PMCID: PMC8943014; DOI: 10.1038/s41598-022-08728-6.
Abstract
Many algorithms have been proposed for spatiotemporal image fusion on simulated data, yet only a few deal with spectral changes in real satellite images. An innovative spatiotemporal sparse representation (STSR) image fusion approach is introduced in this study to generate global dense high spatial and temporal resolution images from real satellite images. It aimed to minimize the data gap, especially when fine spatial resolution images are unavailable for a specific period. The proposed approach uses a set of real coarse- and fine-spatial resolution satellite images acquired simultaneously and another coarse image acquired at a different time to predict the corresponding unknown fine image. During the fusion process, pixels located between object classes with different spectral responses are more vulnerable to spectral distortion. Therefore, firstly, a rule-based fuzzy classification algorithm is used in STSR to classify input data and extract accurate edge candidates. Then, an object-based estimation of physical constraints and brightness shift between input data is utilized to construct the proposed sparse representation (SR) model that can deal with real input satellite images. Initial rules to adjust spatial covariance and equalize spectral response of object classes between input images are introduced as prior information to the model, followed by an optimization step to improve the STSR approach. The proposed method is applied to real fine Sentinel-2 and coarse Landsat-8 satellite data. The results showed that introducing objects in the fusion process improved spatial detail, especially over the edge candidates, and eliminated spectral distortion by preserving the spectral continuity of extracted objects. Experiments revealed the promising performance of the proposed object-based STSR image fusion approach based on its quantitative results, where it preserved almost 96.9% and 93.8% of the spectral detail over the smooth and urban areas, respectively.
9. Spatiotemporal Fusion Modelling Using STARFM: Examples of Landsat 8 and Sentinel-2 NDVI in Bavaria. Remote Sensing 2022. DOI: 10.3390/rs14030677.
Abstract
The increasing availability and variety of global satellite products provide a new level of data with different spatial, temporal, and spectral resolutions; however, identifying the most suited resolution for a specific application consumes increasingly more time and computation effort. The region’s cloud coverage additionally influences the choice of the best trade-off between spatial and temporal resolution, and different pixel sizes of remote sensing (RS) data may hinder the accurate monitoring of different land cover (LC) classes such as agriculture, forest, grassland, water, urban, and natural-seminatural. To investigate the importance of RS data for these LC classes, the present study fuses NDVIs of two high spatial resolution datasets (high pair) (Landsat (30 m, 16 days; L) and Sentinel-2 (10 m, 5–6 days; S)) with four low spatial resolution datasets (low pair) (MOD13Q1 (250 m, 16 days), MCD43A4 (500 m, one day), MOD09GQ (250 m, one day), and MOD09Q1 (250 m, eight days)) using the spatial and temporal adaptive reflectance fusion model (STARFM), which fills regions’ cloud or shadow gaps without losing spatial information. These eight synthetic NDVI STARFM products (2 high pairs × 4 low pairs) offer a spatial resolution of 10 or 30 m and a temporal resolution of 1, 8, or 16 days for the entire state of Bavaria (Germany) in 2019. Due to their higher revisit frequency and more cloud- and shadow-free scenes (S = 13, L = 9), Sentinel-2 (overall R2 = 0.71 and RMSE = 0.11) synthetic NDVI products provide more accurate results than Landsat (overall R2 = 0.61 and RMSE = 0.13). Likewise, for the agriculture class, synthetic products obtained using Sentinel-2 resulted in higher accuracy than Landsat, except for L-MOD13Q1 (R2 = 0.62, RMSE = 0.11), which resulted in accuracy similar to that of S-MOD13Q1 (R2 = 0.68, RMSE = 0.13). Similarly, comparing L-MOD13Q1 (R2 = 0.60, RMSE = 0.05) and S-MOD13Q1 (R2 = 0.52, RMSE = 0.09) for the forest class, the former resulted in higher accuracy and precision than the latter. Conclusively, both L-MOD13Q1 and S-MOD13Q1 are suitable for agricultural and forest monitoring; however, the 30-m spatial resolution and lower storage requirement make L-MOD13Q1 more prominent and faster than S-MOD13Q1 with its 10-m spatial resolution.
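All products fused in this study are NDVI layers; for reference, NDVI is the standard normalized difference of the near-infrared and red reflectances (band B8 vs. B4 on Sentinel-2, B5 vs. B4 on Landsat 8):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index, (NIR - Red) / (NIR + Red),
    with zero-denominator pixels mapped to 0."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out
```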
10. A Spatiotemporal Fusion Method Based on Multiscale Feature Extraction and Spatial Channel Attention Mechanism. Remote Sensing 2022. DOI: 10.3390/rs14030461.
Abstract
Remote sensing satellite images with a high spatial and temporal resolution play a crucial role in Earth science applications. However, due to technology and cost constraints, it is difficult for a single satellite to achieve both a high spatial resolution and high temporal resolution. The spatiotemporal fusion method is a cost-effective solution for generating a dense temporal data resolution with a high spatial resolution. In recent years, spatiotemporal image fusion based on deep learning has received wide attention. In this article, a spatiotemporal fusion method based on multiscale feature extraction and a spatial channel attention mechanism is proposed. Firstly, the method uses a multiscale mechanism to fully utilize the structural features in the images. Then a novel attention mechanism is used to capture both spatial and channel information; finally, the rich features and spatial and channel information are used to fuse the images. Experimental results obtained from two datasets show that the proposed method outperforms existing fusion methods in both subjective and objective evaluations.
11. MSNet: A Multi-Stream Fusion Network for Remote Sensing Spatiotemporal Fusion Based on Transformer and Convolution. Remote Sensing 2021. DOI: 10.3390/rs13183724.
Abstract
Remote sensing products with high temporal and spatial resolution can hardly be obtained under the constraints of existing technology and cost. Therefore, the spatiotemporal fusion of remote sensing images has attracted considerable attention. Spatiotemporal fusion algorithms based on deep learning have gradually developed, but they also face some problems. For example, the amount of data affects the model’s ability to learn, and the robustness of the model is not high. The features extracted through the convolution operation alone are insufficient, and the complex fusion method also introduces noise. To solve these problems, we propose a multi-stream fusion network for remote sensing spatiotemporal fusion based on Transformer and convolution, called MSNet. We introduce the structure of the Transformer, which aims to learn the global temporal correlation of the image. At the same time, we also use a convolutional neural network to establish the relationship between input and output and to extract features. Finally, we adopt the fusion method of average weighting to avoid using complicated methods to introduce noise. To test the robustness of MSNet, we conducted experiments on three datasets and compared them with four representative spatiotemporal fusion algorithms to prove the superiority of MSNet (Spectral Angle Mapper (SAM) < 0.193 on the CIA dataset, erreur relative globale adimensionnelle de synthèse (ERGAS) < 1.687 on the LGC dataset, and root mean square error (RMSE) < 0.001 on the AHB dataset).
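Of the three metrics listed in the parenthesis, SAM is the least self-describing; a minimal sketch of the usual definition (the abstract does not say whether radians or degrees are reported, so radians are assumed here):

```python
import numpy as np

def sam(reference: np.ndarray, estimate: np.ndarray, eps: float = 1e-12) -> float:
    """Mean Spectral Angle Mapper in radians between two (H, W, bands)
    images: the angle between the spectral vectors at each pixel, averaged."""
    ref = np.asarray(reference, np.float64).reshape(-1, reference.shape[-1])
    est = np.asarray(estimate, np.float64).reshape(-1, estimate.shape[-1])
    dot = np.sum(ref * est, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(est, axis=1) + eps
    # clip guards against round-off pushing the cosine outside [-1, 1]
    return float(np.mean(np.arccos(np.clip(dot / norms, -1.0, 1.0))))
```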
12. Resolution Enhancement of Remotely Sensed Land Surface Temperature: Current Status and Perspectives. Remote Sensing 2021. DOI: 10.3390/rs13071306.
Abstract
Remotely sensed land surface temperature (LST) distribution has played a valuable role in land surface processes studies from local to global scales. However, it is still difficult to acquire concurrently high spatiotemporal resolution LST data due to the trade-off between spatial and temporal resolutions in thermal remote sensing. To address this problem, various methods have been proposed to enhance the resolutions of LST data, and substantial progress in this field has been achieved in recent years. Therefore, this study reviewed the current status of resolution enhancement methods for LST data. First, three groups of enhancement methods—spatial resolution enhancement, temporal resolution enhancement, and simultaneous spatiotemporal resolution enhancement—were comprehensively investigated and analyzed. Then, the quality assessment strategies for LST resolution enhancement methods and their advantages and disadvantages were specifically discussed. Finally, key directions for future studies in this field were suggested, i.e., synergy between process-driven and data-driven methods, cross-comparison among different methods, and improvement in localization strategy.
13. Dynamic Mapping of Subarctic Surface Water by Fusion of Microwave and Optical Satellite Data Using Conditional Adversarial Networks. Remote Sensing 2021. DOI: 10.3390/rs13020175.
Abstract
Surface water monitoring with fine spatiotemporal resolution in the subarctic is important for understanding the impact of climate change upon hydrological cycles in the region. This study provides dynamic water mapping with daily frequency and a moderate (500 m) resolution over a heterogeneous thermokarst landscape in eastern Siberia. A combination of random forest and conditional generative adversarial networks (pix2pix) machine learning (ML) methods were applied to data fusion between the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Microwave Scanning Radiometer 2, with the addition of ancillary hydrometeorological information. The results show that our algorithm successfully filled in observational gaps in the MODIS data caused by cloud interference, thereby improving MODIS data availability from 30.3% to almost 100%. The water fraction estimated by our algorithm was consistent with that derived from the reference MODIS data (relative mean bias: −2.43%; relative root mean squared error: 14.7%), and effectively rendered the seasonality and heterogeneous distribution of the Lena River and the thermokarst lakes. Practical knowledge of the application of ML to surface water monitoring also resulted from the preliminary experiments involving the random forest method, including timing of the water-index thresholding and selection of the input features for ML training.
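The two agreement statistics quoted (relative mean bias, relative RMSE) are standard; a minimal sketch, assuming both are expressed as percentages of the reference mean (the exact normalization used in the paper is not spelled out in the abstract):

```python
import numpy as np

def relative_bias_and_rmse(reference: np.ndarray, estimate: np.ndarray):
    """Relative mean bias and relative RMSE (both in %) of an estimate
    against a reference, normalized by the reference mean."""
    ref = np.asarray(reference, np.float64)
    est = np.asarray(estimate, np.float64)
    scale = 100.0 / ref.mean()
    bias = (est - ref).mean() * scale
    rmse = np.sqrt(((est - ref) ** 2).mean()) * scale
    return bias, rmse
```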
14. Guirado E, Blanco-Sacristán J, Rodríguez-Caballero E, Tabik S, Alcaraz-Segura D, Martínez-Valderrama J, Cabello J. Mask R-CNN and OBIA Fusion Improves the Segmentation of Scattered Vegetation in Very High-Resolution Optical Sensors. Sensors 2021; 21:320. PMID: 33466513; PMCID: PMC7796453; DOI: 10.3390/s21010320.
Abstract
Vegetation generally appears scattered in drylands. Its structure, composition and spatial patterns are key controls of biotic interactions, water, and nutrient cycles. Applying segmentation methods to very high-resolution images for monitoring changes in vegetation cover can provide relevant information for dryland conservation ecology. For this reason, improving segmentation methods and understanding the effect of spatial resolution on segmentation results is key to improve dryland vegetation monitoring. We explored and analyzed the accuracy of Object-Based Image Analysis (OBIA) and Mask Region-based Convolutional Neural Networks (Mask R-CNN) and the fusion of both methods in the segmentation of scattered vegetation in a dryland ecosystem. As a case study, we mapped Ziziphus lotus, the dominant shrub of a habitat of conservation priority in one of the driest areas of Europe. Our results show for the first time that the fusion of the results from OBIA and Mask R-CNN increases the accuracy of the segmentation of scattered shrubs up to 25% compared to both methods separately. Hence, by fusing OBIA and Mask R-CNNs on very high-resolution images, the improved segmentation accuracy of vegetation mapping would lead to more precise and sensitive monitoring of changes in biodiversity and ecosystem services in drylands.
Affiliation(s)
- Emilio Guirado
- Multidisciplinary Institute for Environment Studies “Ramon Margalef”, University of Alicante, Edificio Nuevos Institutos, Carretera de San Vicente del Raspeig s/n, San Vicente del Raspeig, 03690 Alicante, Spain
- Andalusian Center for Assessment and Monitoring of Global Change (CAESCG), University of Almeria, 04120 Almeria, Spain
- Javier Blanco-Sacristán
- College of Engineering, Mathematics and Physical Sciences, University of Exeter, Penryn Campus, Cornwall TR10 9EZ, UK
- Emilio Rodríguez-Caballero
- Agronomy Department, University of Almeria, 04120 Almeria, Spain
- Centro de Investigación de Colecciones Científicas de la Universidad de Almería (CECOUAL), 04120 Almeria, Spain
- Siham Tabik
- Department of Computer Science and Artificial Intelligence, University of Granada, 18071 Granada, Spain
- Domingo Alcaraz-Segura
- Department of Botany, Faculty of Science, University of Granada, 18071 Granada, Spain
- iEcolab, Inter-University Institute for Earth System Research, University of Granada, 18006 Granada, Spain
- Jaime Martínez-Valderrama
- Multidisciplinary Institute for Environment Studies “Ramon Margalef”, University of Alicante, Edificio Nuevos Institutos, Carretera de San Vicente del Raspeig s/n, San Vicente del Raspeig, 03690 Alicante, Spain
- Javier Cabello
- Andalusian Center for Assessment and Monitoring of Global Change (CAESCG), University of Almeria, 04120 Almeria, Spain
- Department of Biology and Geology, University of Almeria, 04120 Almeria, Spain
15. A Machine Learning Approach for Remote Sensing Data Gap-Filling with Open-Source Implementation: An Example Regarding Land Surface Temperature, Surface Albedo and NDVI. Remote Sensing 2020. DOI: 10.3390/rs12233865.
Abstract
Satellite remote sensing has now become a unique tool for continuous and predictable monitoring of geosystems at various scales, observing the dynamics of different geophysical parameters of the environment. One of the essential problems with most satellite environmental monitoring methods is their sensitivity to atmospheric conditions, in particular cloud cover, which leads to the loss of a significant part of data, especially at high latitudes, potentially reducing the quality of observation time series until it is useless. In this paper, we present a toolbox for filling gaps in remote sensing time-series data based on machine learning algorithms and spatio-temporal statistics. The first implemented procedure allows us to fill gaps based on spatial relationships between pixels, obtained from historical time-series. Then, the second procedure is dedicated to filling the remaining gaps based on the temporal dynamics of each pixel value. The algorithm was tested and verified on Sentinel-3 SLSTR and Terra MODIS land surface temperature data and under different geographical and seasonal conditions. As a result of validation, it was found that in most cases the error did not exceed 1 °C. The algorithm was also verified for gaps restoration in Terra MODIS derived normalized difference vegetation index and land surface broadband albedo datasets. The software implementation is Python-based and distributed under conditions of GNU GPL 3 license via public repository.
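The toolbox described above pairs spatial machine learning with per-pixel temporal statistics. As a much simpler illustration of the second, temporal step only, gaps in a pixel's time series can be filled by linear interpolation along the time axis (the actual package uses learned spatio-temporal relationships, not this):

```python
import numpy as np

def fill_temporal_gaps(series: np.ndarray) -> np.ndarray:
    """Fill NaN gaps in a (time, height, width) stack by per-pixel linear
    interpolation along the time axis. Pixels that are NaN at every time
    step are left unchanged."""
    t, h, w = series.shape
    out = series.astype(np.float64).copy()
    times = np.arange(t)
    flat = out.reshape(t, -1)  # view: columns are per-pixel time series
    for i in range(flat.shape[1]):
        col = flat[:, i]
        good = ~np.isnan(col)
        if good.any() and not good.all():
            col[~good] = np.interp(times[~good], times[good], col[good])
    return flat.reshape(t, h, w)
```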
16. An Effective High Spatiotemporal Resolution NDVI Fusion Model Based on Histogram Clustering. Remote Sensing 2020. DOI: 10.3390/rs12223774.
Abstract
The normalized difference vegetation index (NDVI) is a powerful tool for understanding past vegetation, monitoring the current state, and predicting its future. Due to technological and budget limitations, the existing global NDVI time-series data cannot simultaneously meet the needs of high spatial and temporal resolution. This study proposes a high spatiotemporal resolution NDVI fusion model based on histogram clustering (NDVI_FMHC), which uses a new spatiotemporal fusion framework to predict phenological and shape changes. Meanwhile, this model also uses four strategies to reduce error, including the construction of an overdetermined linear mixed model, multiscale prediction, residual distribution, and Gaussian filtering. Five groups of real MODIS_NDVI and Landsat_NDVI datasets were used to verify the predictive performance of the NDVI_FMHC. The results indicate that NDVI_FMHC has higher accuracy and robustness in forest areas (r = 0.9488 and ADD = 0.0229) and cultivated land areas (r = 0.9493 and ADD = 0.0605), while the prediction effect is relatively weak in areas subject to shape changes, such as flooded areas (r = 0.8450 and ADD = 0.0968), urban areas (r = 0.8855 and ADD = 0.0756), and fire areas (r = 0.8417 and ADD = 0.0749). Compared with ESTARFM, NDVI_LMGM, and FSDAF, NDVI_FMHC has the highest prediction accuracy, the best spatial detail retention, and the strongest ability to capture shape changes. Therefore, the NDVI_FMHC can obtain NDVI time-series data with high spatiotemporal resolution, which can be used to realize long-term land surface dynamic process research in a complex environment.
17. Automatic Mapping of Rice Growth Stages Using the Integration of SENTINEL-2, MOD13Q1, and SENTINEL-1. Remote Sensing 2020. DOI: 10.3390/rs12213613.
Abstract
Rice (Oryza sativa L.) is a staple food crop for more than half of the world’s population. Rice production is facing a myriad of problems, including water shortage, climate, and land-use change. Accurate maps of rice growth stages are critical for monitoring rice production and assessing its impacts on national and global food security. Rice growth stages are typically monitored by coarse-resolution satellite imagery. However, it is difficult to accurately map due to the occurrence of mixed pixels in fragmented and patchy rice fields, as well as cloud cover, particularly in tropical countries. To solve these problems, we developed an automated mapping workflow to produce near real-time multi-temporal maps of rice growth stages at a 10-m spatial resolution using multisource remote sensing data (Sentinel-2, MOD13Q1, and Sentinel-1). This study was investigated between 1 June and 29 September 2018 in two (wet and dry) areas of Java Island in Indonesia. First, we built prediction models based on Sentinel-2, and fusion of MOD13Q1/Sentinel-1 using the ground truth information. Second, we applied the prediction models on all images in area and time and separation between the non-rice planting class and rice planting class over the cropping pattern. Moreover, the model’s consistency on the multitemporal map with a 5–30-day lag was investigated. The result indicates that the Sentinel-2 based model classification gives a high overall accuracy of 90.6% and the fusion model MOD13Q1/Sentinel-1 shows 78.3%. The performance of multitemporal maps was consistent between time lags with an accuracy of 83.27–90.39% for Sentinel-2 and 84.15% for the integration of Sentinel-2/MOD13Q1/Sentinel-1. The results from this study show that it is possible to integrate multisource remote sensing for regular monitoring of rice phenology, thereby generating spatial information to support local-, national-, and regional-scale food security applications.
|
18
|
Remote Sensing Applied in Forest Management to Optimize Ecosystem Services: Advances in Research. FORESTS 2020. [DOI: 10.3390/f11090969] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Abstract
Research Highlights: the wide variety of multispectral sensors currently available makes it possible to improve the study of forest systems and ecosystem services. Background and Objectives: this study aims to analyze the current usefulness of remote sensing in forest management and ecosystem services science, and to identify future lines of research on these issues worldwide during the period 1976–2019. Materials and Methods: a bibliometric technique is applied to 2066 articles published between 1976 and 2019 on these topics to extract findings on scientific production and key subject areas. Results: scientific production has increased annually, such that 50.34% of all articles were published in the last five years. The thematic areas to which most articles were linked were environmental science; agricultural and biological sciences; and earth and planetary sciences. Seven lines of research that generate contributions on this topic were identified. In addition, an analysis of keyword relevance detected the ten main future directions of research. The growing worldwide trend of scientific production shows interest in developing this field of study. Conclusions: this study contributes to the academic, scientific, and institutional discussion to improve decision-making, and proposes new scenarios and uses of this technology to improve the administration and management of forest resources.
|
19
|
Multi-Decadal Changes in Mangrove Extent, Age and Species in the Red River Estuaries of Viet Nam. REMOTE SENSING 2020. [DOI: 10.3390/rs12142289] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
This research investigated the performance of four supervised machine-learning image classifiers: artificial neural network (ANN), decision tree (DT), random forest (RF), and support vector machine (SVM), using SPOT-7 and Sentinel-1 images to classify mangrove age and species in 2019 in a Red River estuary typical of others found in northern Viet Nam. The four classifiers were chosen because they are considered to have high accuracy; however, their use in mangrove age and species classification has thus far been limited. A time series of Landsat images from 1975 to 2019 was used to map changes in mangrove extent using the unsupervised iterative self-organizing data analysis technique (ISODATA), with a comparison against the accuracy of K-means classification. This showed that mangrove extent has increased, despite a fall in the 1980s, indicating the success of mangrove plantation and forest protection efforts by local people in the study area. To evaluate the supervised image classifiers, 183 in situ training plots were assessed; 70% of them were used to train the supervised algorithms and 30% to validate the results. To improve mangrove species separation, Gram–Schmidt and principal component analysis image fusion techniques were applied to generate better-quality images. All supervised and unsupervised (2019) results for mangrove age, species, and extent were mapped and their accuracy evaluated. Confusion matrices showed that the classified layers agreed with the ground-truth data, with most producer and user accuracies greater than 80%. The overall accuracies and Kappa coefficients (around 0.9) indicated that the image classifications were very good. The tests showed that SVM was the most accurate in this case study, followed by DT, ANN, and RF.
The changes in mangrove extent identified in this study, and the methods tested for using remotely sensed data, will be valuable for monitoring and evaluating mangrove plantation projects.
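Not taken from the paper itself — a minimal sketch of the standard accuracy measures the abstract reports (producer's/user's accuracy, overall accuracy, and the Kappa coefficient) computed from a confusion matrix; the matrix values below are hypothetical:

```python
import numpy as np

def accuracy_metrics(cm):
    """cm[i, j] = pixels of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    diag = np.diag(cm)
    producers = diag / cm.sum(axis=1)   # per true class (1 - omission error)
    users = diag / cm.sum(axis=0)       # per predicted class (1 - commission error)
    overall = diag.sum() / total
    # Cohen's kappa: agreement beyond what chance alone would give
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2
    kappa = (overall - expected) / (1 - expected)
    return producers, users, overall, kappa

# Hypothetical 3-class confusion matrix (e.g., three mangrove species)
cm = [[45, 3, 2],
      [4, 38, 3],
      [1, 2, 42]]
prod, user, oa, kappa = accuracy_metrics(cm)
```

A Kappa near 0.9, as the abstract reports, indicates agreement far beyond chance.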
|
20
|
Abstract
The combination of freely accessible satellite imagery from multiple programs improves the spatio-temporal coverage of remote sensing data, but it raises barriers regarding the variety of web services, file formats, and data standards. R is an open-source software environment with state-of-the-art statistical packages for the analysis of optical imagery. However, it lacks tools providing unified access to multi-program archives for customizing and processing time series of images. This manuscript introduces RGISTools, a new software package that addresses these issues, and provides a working example on water mapping, a socially and environmentally relevant research field. The case study uses a digital elevation model and a rarely assessed combination of Landsat-8 and Sentinel-2 imagery to determine the water level of a reservoir in Northern Spain, and demonstrates how to acquire and process time series of surface-reflectance data in an efficient manner. Our method achieves reasonably accurate results, with a root mean squared error of 0.90 m. Future improvements of the package involve expanding the workflow to cover the processing of radar images, which should counteract the cloud-coverage limitation of multi-spectral images.
|
21
|
Modelling Crop Biomass from Synthetic Remote Sensing Time Series: Example for the DEMMIN Test Site, Germany. REMOTE SENSING 2020. [DOI: 10.3390/rs12111819] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
This study compares the performance of five widely used crop growth models (CGMs): World Food Studies (WOFOST), Coalition for Environmentally Responsible Economies (CERES)-Wheat, AquaCrop, the cropping systems simulation model (CropSyst), and the semi-empirical light use efficiency approach (LUE), for the prediction of winter wheat biomass at the Durable Environmental Multidisciplinary Monitoring Information Network (DEMMIN) test site, Germany. The study focuses on the use in CGMs of remote sensing (RS) data acquired in 2015, as these offer spatial information on the actual condition of the vegetation. Along with this, the study investigates the fusion of Landsat (30 m) and Moderate Resolution Imaging Spectroradiometer (MODIS) (500 m) data using the spatial and temporal adaptive reflectance fusion model (STARFM) algorithm. The resulting synthetic RS data offer a 30-m spatial and one-day temporal resolution. The dataset therefore provides the information needed to run the CGMs, making it possible to examine fine-scale spatial and temporal changes in crop phenology for specific fields, or subsections of them, and to monitor crop growth daily while considering the impact of daily climate variability. The analysis includes a detailed comparison of the simulated and measured crop biomass; the biomass modelled from synthetic RS data is also compared with the model outputs obtained from the original MODIS time series. The study finds the CGMs more reliable, precise, and statistically significant with the synthetic time series than with the MODIS product. Using synthetic RS data, the AquaCrop and LUE models, in contrast to the others, simulate winter wheat biomass best, with high R2 (>0.82), low RMSE (<600 g/m2), and significant p-values (<0.05) during the study period, whereas with MODIS input the models underperform, with low R2 (<0.68) and high RMSE (>600 g/m2).
The study shows that the models requiring fewer input parameters (AquaCrop and LUE) are highly applicable and precise for simulating crop biomass, and are easier to implement than models that need more input parameters (WOFOST and CERES-Wheat).
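Not from the paper itself — a minimal sketch of the R2/RMSE comparison used above to rank model outputs against measured biomass; the biomass values are made up for demonstration:

```python
import math

def rmse(obs, pred):
    """Root mean squared error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

# Hypothetical winter wheat biomass over a season, in g/m^2
measured  = [120.0, 450.0, 900.0, 1400.0, 1750.0]
simulated = [150.0, 420.0, 980.0, 1350.0, 1700.0]
err = rmse(measured, simulated)
r2 = r_squared(measured, simulated)
```

By the abstract's thresholds (R2 > 0.82, RMSE < 600 g/m2), a model producing these predictions would count as performing well.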
|
22
|
Effect of Image Fusion on Vegetation Index Quality—A Comparative Study from Gaofen-1, Gaofen-2, Gaofen-4, Landsat-8 OLI and MODIS Imagery. REMOTE SENSING 2020. [DOI: 10.3390/rs12101550] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In recent years, the use of image fusion methods has received increasing attention in remote sensing, vegetation cover change analysis, vegetation index (VI) mapping, and related fields. To produce high-resolution, good-quality (and low-cost) VI maps from a fused image, the image’s quality and its underlying factors need to be identified properly. For example, same-sensor image fusion generally has a higher spatial resolution ratio (SRR) (1:3 to 1:5), whereas multi-sensor fusion has a lower SRR (1:8 to 1:10). Beyond SRR, other factors may affect the fused vegetation index (FVI), and these have not been investigated in detail before. In this research, we used an image fusion and quality assessment strategy to determine the effect of image fusion on VI quality using Gaofen-1 (GF1), Gaofen-2 (GF2), Gaofen-4 (GF4), Landsat-8 OLI, and MODIS imagery with their panchromatic (PAN) and multispectral (MS) bands at low SRR (1:6 to 1:15). We acquired a total of nine images (4 PAN + 5 MS) on (almost) the same date (the GF1, GF2, GF4, and MODIS images were acquired on 2017/07/13 and the Landsat-8 OLI image on 2017/07/17). The results show that image fusion has the least impact on the Green Normalized Difference Vegetation Index (GNDVI) and the Atmospherically Resistant Vegetation Index (ARVI) compared with other VIs. VI quality is mostly insensitive to image fusion, except for the high-pass filter (HPF) algorithm. Subjective and objective quality evaluation shows that Gram–Schmidt (GS) fusion has the least impact on FVI quality, and that with decreasing SRR, FVI quality decreases at a slow rate. FVI quality varies with the type of image fusion algorithm and the SRR, along with the spectral response function (SRF) and signal-to-noise ratio (SNR). However, FVI quality remains good even at low SRR (1:6 to 1:15 or lower) as long as the images have a good SNR and minimal SRF effects.
These findings make cost-effective, high-quality VI mapping feasible even at low SRR (1:15 or lower).
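Not from the paper itself — a minimal sketch of the standard formulas for the indices named above (NDVI, GNDVI, ARVI), computed from per-band surface reflectance; the reflectance values in the example are hypothetical, and ARVI's gamma is commonly set to 1.0:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI: substitutes the green band for red."""
    return (nir - green) / (nir + green)

def arvi(nir, red, blue, gamma=1.0):
    """Atmospherically Resistant VI: the blue band corrects aerosol
    effects on the red band before the NDVI-style ratio."""
    rb = red - gamma * (blue - red)
    return (nir - rb) / (nir + rb)

# Hypothetical surface reflectances in [0, 1] for a vegetated pixel
v_ndvi = ndvi(0.45, 0.08)
v_gndvi = gndvi(0.45, 0.10)
v_arvi = arvi(0.45, 0.08, 0.06)
```

In a fusion study such as this one, the same index would be computed from the original MS image and from the fused image, and the two maps compared.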
|
23
|
Abstract
Earth observation data with high spatiotemporal resolution are critical for dynamic monitoring and prediction in geoscience applications; however, owing to technical and budget limitations, it is not easy to acquire satellite images with both high spatial and high temporal resolution. Spatiotemporal image fusion techniques provide a feasible and economical solution for generating dense-time data with high spatial resolution, pushing the limits of current satellite observation systems. Among the various existing fusion algorithms, deep-learning-based models show a promising prospect, with higher accuracy and robustness. This paper refines and improves the existing deep convolutional spatiotemporal fusion network (DCSTFN) to further boost model prediction accuracy and enhance image quality. The contributions of this paper are twofold. First, the fusion result is improved considerably with a brand-new network architecture and a novel compound loss function; experiments conducted in two different areas demonstrate these improvements through comparison with existing algorithms, and the enhanced DCSTFN model shows superior performance in accuracy, visual quality, and robustness. Second, the advantages and disadvantages of existing deep-learning-based spatiotemporal fusion models are discussed comparatively, and a network design guide for spatiotemporal fusion is provided as a reference for future research. These comparisons and guidelines are based on a number of actual experiments and have promising potential for application to other image sources with customized spatiotemporal fusion networks.
|
24
|
A Comprehensive and Automated Fusion Method: The Enhanced Flexible Spatiotemporal DAta Fusion Model for Monitoring Dynamic Changes of Land Surface. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9183693] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Spatiotemporal fusion methods provide an effective way to generate data with both high temporal and high spatial resolution for monitoring dynamic changes of the land surface. However, existing fusion methods face two main challenges: monitoring abrupt change events and accurately preserving the spatial details of objects. The Flexible Spatiotemporal DAta Fusion method (FSDAF) can monitor abrupt change events, but its predicted images lack intra-class variability and spatial detail. To overcome these limitations, this study proposed a comprehensive and automated fusion method, the Enhanced FSDAF (EFSDAF), and tested it for Landsat–MODIS image fusion. Compared with FSDAF, EFSDAF has the following strengths: (1) it considers the mixed-pixel phenomenon in Landsat images, so its predicted images have more intra-class variability and spatial detail; (2) it adjusts the differences between Landsat images and MODIS images; and (3) it improves fusion accuracy in abrupt change areas by introducing a new residual index (RI). Vegetation phenology and flood events were selected to evaluate the performance of EFSDAF, which was compared with the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the Spatial and Temporal Reflectance Unmixing Model (STRUM), and FSDAF. Results show that EFSDAF can monitor both vegetation changes (gradual) and floods (abrupt), and the images fused by EFSDAF are the best in both visual and quantitative evaluations. More importantly, EFSDAF accurately reproduces the spatial details of objects and is strongly robust. Given these advantages, EFSDAF has great potential for monitoring long-term dynamic changes of the land surface.
|
25
|
Development of an Operational Algorithm for Automated Deforestation Mapping via the Bayesian Integration of Long-Term Optical and Microwave Satellite Data. REMOTE SENSING 2019. [DOI: 10.3390/rs11172038] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The frequent fine-scale monitoring of deforestation using satellite sensors is important for the sustainable management of forests. Traditional optical satellite sensors suffer from cloud interruption, particularly in tropical regions, and recent active microwave sensors (i.e., synthetic aperture radar, SAR) are difficult to interpret owing to inherent sensor noise and the complicated backscatter features of forests. Although the integration of optical and microwave sensors is of compelling research interest, particularly for deforestation monitoring, this topic has not been widely studied. In this paper, we introduce an operational algorithm for automated deforestation mapping using long-term optical and L-band SAR data. It combines a simple time-series analysis of Landsat stacks with a multilayered neural network applied to Advanced Spaceborne Thermal Emission and Reflection Radiometer and Phased Array-type L-band Synthetic Aperture Radar-2 data, followed by sensor integration based on the Bayesian Updating of Land-Cover. We applied the algorithm to a deciduous tropical forest in Cambodia over 2003–2018 for validation, where it demonstrated better accuracy than existing approaches that depend only on optical data or only on SAR data. Owing to the cloud-penetration ability of SAR, observation gaps in the optical data under cloudy conditions were filled, enabling prompter detection of deforestation even in the tropical rainy season. We also investigated the effect of posterior probability constraints in the Bayesian approach. The land-cover maps (forest/deforestation) created by the well-tuned Bayesian approach achieved 94.0% ± 4.5%, 80.0% ± 10.1%, and 96.4% ± 1.9% for the user’s accuracy, producer’s accuracy, and overall accuracy, respectively.
In the future, the small commission errors in the resulting maps should be reduced by using more sophisticated machine-learning approaches and by considering reforestation effects in the algorithm. Applying the algorithm to other landscapes, with other sensor combinations, is also desirable.
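Not from the paper itself — a minimal sketch of the Bayesian-updating idea behind the sensor integration described above: a per-pixel "deforested" probability is revised as each new optical or SAR observation arrives. The per-sensor likelihoods below are hypothetical placeholders; in practice they would be derived from each classifier's measured accuracy:

```python
def bayes_update(prior, p_obs_given_defor, p_obs_given_forest):
    """Posterior P(deforested | observation) via Bayes' rule."""
    num = p_obs_given_defor * prior
    den = num + p_obs_given_forest * (1.0 - prior)
    return num / den

p = 0.5                        # uninformative prior for one pixel
p = bayes_update(p, 0.8, 0.1)  # optical image flags change (clear day)
p = bayes_update(p, 0.7, 0.3)  # SAR also flags change (works through cloud)
```

Two independent detections push the posterior well above either sensor alone, which is why the combination detects deforestation more promptly than optical data by itself in the rainy season.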
|
26
|
Multi-Source Geo-Information Fusion in Transition: A Summer 2019 Snapshot. ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION 2019. [DOI: 10.3390/ijgi8080330] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Since the launch of Landsat-1 in 1972, the scientific domain of geo-information has been incrementally shaped through different periods by technology evolution: in devices (satellites, UAVs, IoT), in sensors (optical, radar, LiDAR), in software (GIS, WebGIS, 3D), and in communication (Big Data). Land cover and disaster management remain the big issues for which these technologies are most needed. Data fusion methods and tools have been adapted progressively to new data sources, which are growing in volume, variety, and speed of access. This Special Issue gives a snapshot of the current status of that adaptation and looks at the challenges coming soon.
|