1. Deng Y, Shao Z, Dang C, Huang X, Wu W, Zhuang Q, Ding Q. Assessing urban wetlands dynamics in Wuhan and Nanchang, China. Science of the Total Environment 2023; 901:165777. PMID: 37524189. DOI: 10.1016/j.scitotenv.2023.165777. Received 2023-05-04; revised 2023-07-06; accepted 2023-07-23.
Abstract
Urban wetlands play a crucial role in sustainable social development. However, current research mainly focuses on specific wetland types, and fine-grained extraction of urban wetlands remains a challenge. This study proposes a fine extraction framework for urban wetlands based on hierarchical decision trees and shape features, using Sentinel-2 remote sensing data to obtain detailed wetland data for Wuhan and Nanchang from 2016 to 2022. Our framework applies random forests to classify land cover, extracts fine-grained urban wetlands using hierarchical decision trees and shape features, and assesses the dynamics of wetlands in the two cities. We also analyzed and discussed the characteristics of urban wetlands in the two cities. The results show that the wetland accuracies for Wuhan and Nanchang are greater than 84.5% and 82.9%, respectively. The wetland areas of Wuhan in 2016, 2019, and 2022 are 1969.4 km², 1713.8 km², and 1681.1 km², while those of Nanchang are 1405.9 km², 1361.6 km², and 766.9 km². Inland wetlands are the main wetland type in both regions, with lake wetlands accounting for the highest proportion (over 40%). The urban wetlands in the two cities exhibit different spatial and temporal evolution patterns, with differing trends in wetland area and in the structural proportions of fine wetland types. In addition, Wuhan's urban wetlands are primarily located in the south, while Nanchang's are concentrated in the east and exhibit higher spatial and temporal dynamics. Analysis suggests that the reduction in urban wetlands from 2016 to 2022 is related to declining precipitation, a growing population, and rising gross domestic product (GDP). Our study provides support for the conservation of urban wetland resources in Wuhan and Nanchang and highlights the need for targeted management strategies.
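The index-plus-threshold stage that such hierarchical extraction frameworks typically build on can be sketched in a few lines. The NDWI formulation below is the standard McFeeters index; the band values and the zero threshold are illustrative assumptions, not the paper's calibrated decision rules:

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """McFeeters NDWI: positive values generally indicate open water."""
    return (green - nir) / (green + nir + eps)

def water_mask(green, nir, threshold=0.0):
    """Threshold NDWI to obtain a binary water mask (threshold is scene-dependent)."""
    return ndwi(green, nir) > threshold

# Toy 2x2 scene: top row is water-like (high green, low NIR), bottom row vegetation-like.
green = np.array([[0.30, 0.28], [0.10, 0.12]])
nir = np.array([[0.05, 0.06], [0.40, 0.35]])
mask = water_mask(green, nir)
```

A full pipeline would then pass the mask through shape features (area, compactness) to separate lake, river, and pond classes, as the framework describes.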
Affiliation(s)
- Ying Deng
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
- Zhenfeng Shao
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
- Chaoya Dang
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
- Xiao Huang
- Department of Geosciences, University of Arkansas, Fayetteville, AR 72701, USA
- Wenfu Wu
- School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
- Qingwei Zhuang
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
- Qing Ding
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
2. Xu W, Zhang C, Wang Q, Dai P. FEA-Swin: Foreground Enhancement Attention Swin Transformer Network for Accurate UAV-Based Dense Object Detection. Sensors 2022; 22:6993. PMID: 36146340. PMCID: PMC9502707. DOI: 10.3390/s22186993. Received 2022-08-18; revised 2022-09-10; accepted 2022-09-12.
Abstract
UAV-based object detection has recently attracted considerable attention due to its diverse applications. Most existing convolutional neural network (CNN)-based object detection models perform well in common object detection cases. However, because objects in UAV images are spatially distributed in a very dense manner, these methods have limited performance for UAV-based object detection. In this paper, we propose a novel transformer-based object detection model to improve the accuracy of object detection in UAV images. To detect dense objects competently, an advanced foreground enhancement attention Swin Transformer (FEA-Swin) framework is designed by integrating context information into the original backbone of a Swin Transformer. Moreover, to avoid losing information about small objects, an improved weighted bidirectional feature pyramid network (BiFPN) is presented by designing a skip connection operation. The proposed method aggregates feature maps from four stages and retains abundant information about small objects. Specifically, to balance detection accuracy and efficiency, we introduce an efficient neck for the BiFPN by removing a redundant network layer. Experimental results on both public datasets and a self-built dataset demonstrate the superior detection accuracy of our method compared with state-of-the-art methods.
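The weighted fusion at the heart of BiFPN can be illustrated compactly. This is a minimal numpy sketch of the published "fast normalized fusion" idea (ReLU-ed weights normalized without softmax); the feature maps and weights below are toy values, not the paper's learned parameters:

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN-style weighted fusion: ReLU the weights, then normalize so they sum to ~1."""
    w = np.maximum(weights, 0.0)      # ReLU keeps weights non-negative
    w = w / (w.sum() + eps)           # fast normalization (cheaper than softmax)
    return sum(wi * f for wi, f in zip(w, features))

f1 = np.ones((4, 4)) * 2.0  # feature map from one scale
f2 = np.ones((4, 4)) * 6.0  # feature map from another scale (e.g. a skip connection)
fused = fast_normalized_fusion([f1, f2], np.array([1.0, 1.0]))
```

A skip connection in this scheme is simply an extra entry in `features`, which is why adding one preserves small-object detail at negligible cost.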
Affiliation(s)
- Wenyu Xu
- Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
- Science Island Branch of Graduate School, University of Science and Technology of China, Hefei 230026, China
- Chaofan Zhang
- Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
- Qi Wang
- Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
- Science Island Branch of Graduate School, University of Science and Technology of China, Hefei 230026, China
- Pangda Dai
- Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
3. Performance Evaluation of Feature Matching Techniques for Detecting Reinforced Soil Retaining Wall Displacement. Remote Sensing 2022. DOI: 10.3390/rs14071697.
Abstract
Image registration technology is widely applied in various matching methods. In this study, we aim to evaluate feature matching performance and to find an optimal technique for detecting three types of behavior in reinforced soil retaining walls (RSWs): facing displacement, settlement, and combined displacement. For a single block with an artificial target and a multiblock structure with artificial and natural targets, five popular detectors and descriptors (KAZE, SURF, MinEigen, ORB, and BRISK) were used to evaluate resolution performance. For comparison, the repeatability, matching score, and inlier matching features were analyzed based on the number of extracted and matched features. The axial registration error (ARE) was used to verify the accuracy of the methods by comparing the positions of the estimated and real features. The results showed that the KAZE method was the best detector and descriptor for RSWs (block-shaped targets), with the highest probability of successfully matching features. In the multiblock experiment, the block used as a natural target showed matching performance similar to that of the block with an artificial target attached. Therefore, the behavior of RSW blocks can be analyzed using the KAZE method without installing an artificial target.
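The repeatability metric used in such detector evaluations can be written in a few lines of numpy. This sketch assumes both keypoint sets are already aligned into one coordinate frame; the two-pixel tolerance is a common but illustrative choice:

```python
import numpy as np

def repeatability(kp_a, kp_b, eps=2.0):
    """Fraction of keypoints in A that have a keypoint in B within eps pixels
    (assumes both sets are expressed in the same coordinate frame)."""
    if len(kp_a) == 0:
        return 0.0
    # Pairwise distances between every keypoint in A and every keypoint in B
    d = np.linalg.norm(kp_a[:, None, :] - kp_b[None, :, :], axis=2)
    return float((d.min(axis=1) <= eps).mean())

kp_a = np.array([[10.0, 10.0], [50.0, 50.0], [90.0, 90.0]])
kp_b = np.array([[10.5, 10.2], [49.0, 51.0]])  # only two points re-detected
r = repeatability(kp_a, kp_b)                  # 2 of 3 keypoints repeated
```

The matching score adds a descriptor-agreement requirement on top of this geometric criterion, so it is always less than or equal to repeatability.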
4. Dang C, Shao Z, Huang X, Qian J, Cheng G, Ding Q, Fan Y. Assessment of the importance of increasing temperature and decreasing soil moisture on global ecosystem productivity using solar-induced chlorophyll fluorescence. Global Change Biology 2022; 28:2066-2080. PMID: 34918427. DOI: 10.1111/gcb.16043. Received 2021-10-03; accepted 2021-12-03.
Abstract
The accurate assessment of the global gross primary productivity (GPP) of vegetation is key to estimating the global carbon cycle. Temperature (Ts) and soil moisture (SM) are essential for vegetation growth. It is acknowledged that global Ts has shown an increasing trend, yet SM has shown a decreasing trend. However, the importance of SM and Ts changes for the productivity of global ecosystems remains unclear, as SM and Ts are strongly coupled through soil-atmosphere interactions. Using solar-induced chlorophyll fluorescence (SIF) as a proxy for GPP and by decoupling SM and Ts changes, our investigation shows that Ts plays the more important role for SIF in 60% of vegetated areas. Overall, increased Ts promotes SIF by mitigating the resistance from SM's reduction. However, the importance of SM and Ts varies across vegetation types. The results show that in the humid zone, the variation of Ts plays the more important role for SIF, whereas in the arid and semi-arid zones, the variation of SM plays the more important role; in the semi-humid zone, the disparity in the importance of SM and Ts is difficult to unravel. In addition, our results suggest that SIF is very sensitive to aridity gradients in arid and semi-arid ecosystems. By decoupling the intertwined SM-Ts impact on SIF, our study provides essential evidence that benefits future investigations of the factors that influence ecosystem productivity at regional or global scales.
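One standard way to decouple two strongly coupled drivers, as the abstract describes for SM and Ts, is partial correlation: correlate the residuals that remain after regressing out the control variable. The numpy sketch below uses synthetic data with assumed coupling strengths, not the study's SIF record:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after linearly removing the control variable z."""
    def resid(a, b):
        B = np.column_stack([np.ones_like(b), b])       # design matrix with intercept
        coef, *_ = np.linalg.lstsq(B, a, rcond=None)    # least-squares fit a ~ b
        return a - B @ coef                             # what b cannot explain
    return float(np.corrcoef(resid(x, z), resid(y, z))[0, 1])

# Synthetic example: SIF driven mainly by Ts, with SM strongly anti-correlated with Ts.
rng = np.random.default_rng(0)
ts = rng.normal(size=200)
sm = -0.7 * ts + 0.3 * rng.normal(size=200)
sif = 2.0 * ts + 0.1 * rng.normal(size=200)
r_ts = partial_corr(sif, ts, sm)  # Ts effect on SIF with SM held fixed
```

Because the Ts signal survives residualization on SM, the partial correlation stays high, which is the kind of evidence the decoupling analysis relies on.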
Affiliation(s)
- Chaoya Dang
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
- Zhenfeng Shao
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
- Xiao Huang
- Department of Geosciences, University of Arkansas, Fayetteville, Arkansas, USA
- Jiaxin Qian
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
- Gui Cheng
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
- Qing Ding
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
- Yewen Fan
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
5. Attention Enhanced U-Net for Building Extraction from Farmland Based on Google and WorldView-2 Remote Sensing Images. Remote Sensing 2021. DOI: 10.3390/rs13214411.
Abstract
High-resolution remote sensing images contain abundant building information and provide an important data source for extracting buildings, which is of great significance to farmland preservation. However, the types of ground features in farmland are complex, and the buildings are scattered and may be obscured by clouds or vegetation, leading to problems such as low extraction accuracy in existing methods. In response to these problems, this paper proposes an attention-enhanced U-Net method for building extraction from farmland, based on Google and WorldView-2 remote sensing images. First, a ResNet unit is adopted as the infrastructure of the U-Net encoding part; spatial and channel attention mechanism modules are then introduced between the ResNet unit and the maximum pooling layer, and a multi-scale fusion module is added to improve the U-Net network. Second, the buildings found in WorldView-2 and Google images are extracted under farmland boundary constraints. Third, boundary optimization and fusion processing are carried out on the building extraction results from the WorldView-2 and Google images. Fourth, a case experiment is performed. The method in this paper is compared with semantic segmentation models such as FCN8, U-Net, Attention_UNet, and DeepLabv3+. The experimental results indicate that this method attains higher accuracy and a better effect in terms of building extraction within farmland; the accuracy is 97.47%, the F1 score is 85.61%, the recall is 93.02%, and the intersection over union (IoU) value is 74.85%. Hence, buildings within farming areas can be effectively extracted, which is conducive to the preservation of farmland.
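Channel attention of the kind introduced above follows a squeeze-and-excitation pattern: pool each channel to a scalar, gate it through a sigmoid, and rescale the channel. This is a minimal numpy sketch, not the paper's module; the weight matrix is a hypothetical stand-in for learned parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w):
    """SE-style channel attention for a (C, H, W) feature map.
    w is a (C, C) weight matrix standing in for the learned excitation layer."""
    squeeze = feat.mean(axis=(1, 2))   # (C,) global average pool per channel
    gate = sigmoid(w @ squeeze)        # (C,) attention weights in (0, 1)
    return feat * gate[:, None, None]  # rescale each channel by its weight

feat = np.ones((2, 4, 4))
feat[1] *= 3.0            # second channel carries a stronger response
w = np.eye(2) * 5.0       # identity-like gating, purely for illustration
out = channel_attention(feat, w)
```

Spatial attention is the transposed idea: pool across channels to a (H, W) map and gate each pixel instead of each channel.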
6. Cheng ML, Matsuoka M. An Efficient and Precise Remote Sensing Optical Image Matching Technique Using Binary-Based Feature Points. Sensors 2021; 21:6035. PMID: 34577242. PMCID: PMC8469316. DOI: 10.3390/s21186035. Received 2021-07-23; revised 2021-09-05; accepted 2021-09-06.
Abstract
Matching local feature points is a crucial step for various optical image processing applications, such as image registration, image mosaicking, and structure-from-motion (SfM). Three significant issues associated with this subject have been the focus for years: the robustness of the detected image features, the number of matches obtained, and the efficiency of data processing. This paper proposes a systematic algorithm that incorporates the synthetic-colored enhanced accelerated binary robust invariant scalar keypoints (SC-EABRISK) method and the affine transformation with bounding box (ATBB) procedure to address these three issues. The SC-EABRISK approach selects the most representative feature points from an image and rearranges their descriptors by adding color information for more precise image matching. The ATBB procedure, meanwhile, applies geometric mapping to retrieve additional matches from the feature points ignored during SC-EABRISK processing. Experimental results obtained using benchmark imagery datasets, close-range photos (CRPs), and aerial and satellite images indicate that the developed algorithm can perform up to 20 times faster than the previous EABRISK method, achieve thousands of matches, and improve matching precision by more than 90%. Consequently, SC-EABRISK with the ATBB algorithm can address image matching efficiently and precisely.
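Binary descriptors in the BRISK family are matched by Hamming distance, which the sketch below implements directly in numpy over 0/1 bit arrays; the four-bit descriptors are toy values (real BRISK descriptors are 512 bits):

```python
import numpy as np

def hamming_match(desc_a, desc_b):
    """Nearest-neighbour matching of binary descriptors by Hamming distance.
    desc_a: (Na, B) and desc_b: (Nb, B) arrays of 0/1 bits."""
    # XOR-like comparison, then count differing bits for every descriptor pair
    d = (desc_a[:, None, :] != desc_b[None, :, :]).sum(axis=2)
    nn = d.argmin(axis=1)                       # best match in B for each A
    return nn, d[np.arange(len(desc_a)), nn]    # match indices and their distances

a = np.array([[0, 0, 1, 1], [1, 1, 0, 0]])
b = np.array([[1, 1, 0, 1], [0, 0, 1, 1]])
nn, dist = hamming_match(a, b)
```

This bitwise comparison is why binary descriptors match so much faster than floating-point ones such as SIFT, which require Euclidean distances.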
7. Deng L, Yuan X, Deng C, Chen J, Cai Y. Image Stitching Based on Nonrigid Warping for Urban Scene. Sensors 2020; 20:7050. PMID: 33317036. PMCID: PMC7763989. DOI: 10.3390/s20247050. Received 2020-10-30; revised 2020-12-05; accepted 2020-12-06.
Abstract
Image stitching based on a global alignment model is widely used in computer vision. However, the resulting stitched image may look blurry or ghosted due to parallax. To solve this problem, we propose a parallax-tolerant image stitching method based on nonrigid warping. Given a group of putative feature correspondences between overlapping images, we first use semiparametric function fitting, which introduces a motion coherence constraint, to remove outliers. Then, the input images are warped according to a nonrigid warp model based on Gaussian radial basis functions. The nonrigid warping is a kind of elastic deformation that is flexible and smooth enough to eliminate moderate parallax errors. This leads to high-precision alignment in the overlapped region. For the nonoverlapping region, we use a rigid similarity model to reduce distortion. Through an effective transition, the nonrigid warping of the overlapped region and the rigid warping of the nonoverlapping region can be used jointly. Our method obtains more accurate local alignment while maintaining the overall shape of the image. Experimental results on several challenging urban-scene datasets show that the proposed approach outperforms state-of-the-art approaches in both qualitative and quantitative evaluations.
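A Gaussian-RBF nonrigid warp of the kind described can be fitted by interpolating the displacements of matched points. The sketch below is a minimal numpy version with an assumed kernel width and a small regularizer; a real stitcher would add the motion-coherence outlier filtering and the rigid transition discussed above:

```python
import numpy as np

def fit_gaussian_rbf_warp(src, dst, sigma=50.0):
    """Fit a nonrigid warp as a Gaussian-RBF interpolation of the displacements
    between matched points. Returns a function mapping points into the dst frame."""
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2 * sigma ** 2))                       # RBF kernel matrix
    coef = np.linalg.solve(K + 1e-8 * np.eye(len(src)), dst - src)

    def warp(pts):
        d2p = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(axis=2)
        return pts + np.exp(-d2p / (2 * sigma ** 2)) @ coef  # point + interpolated shift
    return warp

src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
dst = src + np.array([[1.0, 2.0], [1.5, 2.0], [1.0, 2.5], [1.5, 2.5]])
warp = fit_gaussian_rbf_warp(src, dst)
```

Because the kernel decays smoothly with distance, the warp interpolates the correspondences exactly while deforming the space between them elastically, which is what absorbs moderate parallax.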
8. Object Detection in UAV Images via Global Density Fused Convolutional Network. Remote Sensing 2020. DOI: 10.3390/rs12193140.
Abstract
Object detection in unmanned aerial vehicle (UAV) images plays a fundamental role in a wide variety of applications. As UAVs are maneuverable, with high speed, multiple viewpoints, and varying altitudes, objects in UAV images are distributed with great heterogeneity, varying in size and with high density, which brings great difficulty to object detection using existing algorithms. To address these issues, we propose a novel global density fused convolutional network (GDF-Net) optimized for object detection in UAV images. We test the effectiveness and robustness of the proposed GDF-Nets on the VisDrone dataset and the UAVDT dataset. The designed GDF-Net consists of a backbone network, a global density model (GDM), and an object detection network. Specifically, the GDM refines density features via dilated convolutional networks, aiming to deliver larger receptive fields and to generate global density fused features. Compared with the base networks, the addition of the GDM improves model performance in both recall and precision. We also find that the designed GDM facilitates the detection of objects in congested scenes with high distribution density. The presented GDF-Net framework can be instantiated with not only the base networks selected in this study but also other popular object detection models.
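The receptive-field benefit of dilated convolutions, which the GDM exploits, follows from a simple recurrence: with stride 1, each layer widens the receptive field by (kernel - 1) * dilation pixels. A short sketch of that arithmetic:

```python
def receptive_field(layers):
    """Receptive field of stacked stride-1 conv layers given (kernel, dilation)
    pairs: each layer adds (kernel - 1) * dilation to the field size."""
    rf = 1
    for kernel, dilation in layers:
        rf += (kernel - 1) * dilation
    return rf

# Three 3x3 layers: plain convs vs. dilations 1, 2, 4 at the same parameter cost.
plain = receptive_field([(3, 1)] * 3)                # 7 pixels
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])  # 15 pixels
```

Doubling the dilation at each layer grows the receptive field exponentially with depth while the parameter count stays fixed, which is why it suits global density estimation.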
9.
Abstract
This paper addresses the fusion of optical and synthetic aperture radar (SAR) imagery. Intensity-Hue-Saturation (IHS) is an easily implemented fusion method that can separate Red-Green-Blue (RGB) images into three independent components; however, using this method directly for optical and SAR image fusion causes spectral distortion. The Gradient Transfer Fusion (GTF) algorithm was first proposed for fusing infrared and grayscale visible images; it formulates image fusion as an optimization problem and preserves radiation information and spatial details simultaneously. However, the algorithm assumes that the spatial details come from only one of the source images, which is inconsistent with the actual situation in optical and SAR image fusion. In this paper, a fusion algorithm named IHS-GTF is proposed for optical and SAR images; it combines the advantages of IHS and GTF and considers the spatial details from both images based on pixel saliency. The proposed method was assessed by visual analysis and ten indices, and was further tested by extracting impervious surfaces (IS) from the fused image with a random forest classifier. The results show that our method preserves spatial details and spectral information well, and the overall accuracy of IS extraction is 2% higher than that obtained using the optical image alone. These results demonstrate the ability of the proposed method to fuse optical and SAR data effectively and generate useful data.
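The fast additive variant of IHS substitution conveys the core idea behind such fusion schemes: compute the intensity component, then add the difference between the high-resolution band and that intensity to every channel, so the fused image inherits the new intensity while keeping hue and saturation. This is a generic sketch of that trick, not the IHS-GTF algorithm itself, and the reflectance values are toy numbers:

```python
import numpy as np

def fast_ihs_fusion(rgb, pan):
    """Fast IHS-style fusion: substitute the intensity component with the
    high-resolution band by adding (pan - I) to every channel.
    rgb: (H, W, 3) in [0, 1]; pan: (H, W) in [0, 1]."""
    intensity = rgb.mean(axis=-1, keepdims=True)   # I component of the IHS model
    fused = rgb + (pan[..., None] - intensity)     # shift all channels equally
    return np.clip(fused, 0.0, 1.0)

rgb = np.full((2, 2, 3), 0.4)   # uniform toy optical patch
pan = np.full((2, 2), 0.6)      # brighter high-resolution (e.g. SAR-derived) band
fused = fast_ihs_fusion(rgb, pan)
```

Shifting all channels by the same amount is exactly what preserves hue; the spectral distortion the abstract mentions arises when the substituted intensity differs strongly from the optical intensity, which IHS-GTF mitigates by blending details from both sources.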