1
Yang G, Zhao Y, Xing H, Fu Y, Liu G, Kang X, Mai X. Understanding the changes in spatial fairness of urban greenery using time-series remote sensing images: A case study of Guangdong-Hong Kong-Macao Greater Bay. Sci Total Environ 2020; 715:136763. [PMID: 32007872] [DOI: 10.1016/j.scitotenv.2020.136763] [Received: 12/17/2019] [Revised: 01/15/2020] [Accepted: 01/15/2020]
Abstract
Urban greenery is essential to the human living environment. Objectively assessing the rationality of the spatial distribution of green space resources contributes to regional greening plans and thereby reduces social injustice. However, a lack of baseline information makes it difficult to propose a reasonable greening policy aimed at the coordinated development of an urban agglomeration. This study investigated changes in the spatial fairness of the greenery surrounding residents of the Guangdong-Hong Kong-Macao Greater Bay by examining time-series remote sensing images from 1997 to 2017. Using impervious artificial surfaces as a proxy for areas of human activity, we quantified the amount of surrounding greenery at the pixel level, from the perspective of human activities, by utilizing a nested buffer. The Gini coefficient was then calculated for each city to quantify the spatial fairness of the surrounding greenery. The results indicated that areas with little surrounding greenery decreased between 1997 and 2017 in the Greater Bay. Spatial fairness did not necessarily increase with improvements in the overall greening level: the spatial fairness of 4 cities showed an increasing trend, but the Gini coefficients of 5 cities still exceeded 0.6 in 2017. We further proposed greening policy suggestions for individual cities based on the amount of greenery surrounding people and the trend in fairness. These results and conclusions will help improve future regional greening policies and reduce environmental injustice.
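The fairness measure in this entry is the Gini coefficient applied to per-pixel amounts of surrounding greenery. A minimal sketch of that calculation (the function and toy inputs are illustrative, not taken from the paper):

```python
import numpy as np

def gini(values):
    """Gini coefficient of a 1-D array of non-negative amounts.

    0 means a perfectly equal distribution; values near 1 mean a
    few pixels hold almost all of the surrounding greenery.
    """
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    # Sorted-rank identity: G = 2*sum(i*v_i)/(n*sum(v)) - (n+1)/n
    ranks = np.arange(1, n + 1)
    return float(2.0 * np.sum(ranks * v) / (n * v.sum()) - (n + 1) / n)

equal = gini([1, 1, 1, 1])           # 0.0: everyone sees the same amount
concentrated = gini([0, 0, 0, 100])  # 0.75: greenery piled on one pixel
```

A city whose residents all have similar surrounding greenery scores near 0, while concentration around a few locations pushes the value toward 1, which is why the paper flags cities still above 0.6 in 2017.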
2
Fang L, Dong B, Wang C, Yang F, Cui Y, Xu W, Peng L, Wang Y, Li H. Research on the influence of land use change to habitat of cranes in Shengjin Lake wetland. Environ Sci Pollut Res Int 2020; 27:7515-7525. [PMID: 31885059] [DOI: 10.1007/s11356-019-07096-5] [Received: 03/22/2019] [Accepted: 11/18/2019]
Abstract
The Shengjin Lake wetland reserve is an important habitat for wintering cranes in China, and changes in the land use structure of the area have had a vital influence on the wintering cranes and their habitat. In this paper, TM remote sensing images from 1986 to 2015 were selected, and a land use change model and a gray relational analysis model were used to analyze the effect of the degree of land use on the habitat of the wintering cranes and the degree of correlation between the cranes and land use of the Shengjin Lake wetland. A land use transformation method was employed to analyze transfers of crane habitat and the relationship between the size of the crane habitat and the population. The results showed that the degree of land use change fluctuated greatly across periods, with the comprehensive index of land use degree ranging between 220 and 260; land use was dominated by woodland, grassland, and water, whose effect on the habitat was limited. Among the crane habitats, marshland had the highest retention rate, at 34.44%, while reed flat had the lowest, at only 15.36%; reed flat was mainly transferred to marsh and dry land (23.22% and 18.16%, respectively), and mudflat was mainly transferred to water and farmland (31.79% and 27.75%, respectively). Except for the period from 2011 to 2015, the change in habitat area was basically consistent with the change in the number of cranes.
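The gray relational analysis mentioned above ranks how closely each land use series tracks a reference series such as crane numbers. A compact sketch of Deng's grey relational degree, assuming the series are already normalized to a common scale (variable names and data are illustrative):

```python
import numpy as np

def grey_relational_degree(reference, comparisons, rho=0.5):
    """Deng's grey relational degree of each comparison series
    against a reference series; rho is the distinguishing
    coefficient, conventionally 0.5. Series are assumed to be
    pre-normalized to a common scale."""
    x0 = np.asarray(reference, dtype=float)
    xs = np.asarray(comparisons, dtype=float)
    delta = np.abs(xs - x0)                  # pointwise deviations
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeff.mean(axis=1)                # one degree per series

# A series identical to the reference gets the maximum degree 1.0;
# more divergent series score lower.
degrees = grey_relational_degree([1, 2, 3], [[1, 2, 3], [3, 2, 1]])
```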
3
Dong Y, Liu Y, Cheng Y, Gao G, Chen K, Li C. Adaptive adjacent context negotiation network for object detection in remote sensing imagery. PeerJ Comput Sci 2024; 10:e2199. [PMID: 39145254] [PMCID: PMC11323134] [DOI: 10.7717/peerj-cs.2199] [Received: 01/15/2024] [Accepted: 06/25/2024]
Abstract
Accurate localization of objects of interest in remote sensing images (RSIs) is of great significance for object identification, resource management, decision-making, and disaster relief. However, many difficulties, such as complex backgrounds, dense targets, large scale variations, and small objects, make detection accuracy unsatisfactory. To improve detection accuracy, we propose an Adaptive Adjacent Context Negotiation Network (A2CN-Net). First, a composite fast Fourier convolution (CFFC) module is introduced to reduce the information loss of small objects; it is inserted into the backbone network to obtain spectral global context information. Then, a Global Context Information Enhancement (GCIE) module is introduced to capture and aggregate global spatial features, which is beneficial for locating objects of different scales. Furthermore, to alleviate the aliasing effect caused by the fusion of adjacent feature layers, a novel adaptive adjacent context negotiation mechanism (A2CN) adaptively integrates multi-level features. It consists of local and adjacent branches: the local branch adaptively highlights feature information, while the adjacent branch introduces global information from the adjacent level to enhance feature representation. Meanwhile, considering the variability in focus across feature layers of different dimensions, learnable weights are applied to the local and adjacent branches for adaptive feature fusion. Finally, extensive experiments are performed on several public datasets, including DIOR and DOTA-v1.0. The experiments show that A2CN-Net significantly boosts detection performance, with mAP increasing to 74.2% and 79.2%, respectively.
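The fast Fourier convolution idea rests on the convolution theorem: convolution in the spatial domain equals pointwise multiplication in the frequency domain, which yields an image-wide receptive field in one step. A 1-D illustration of that identity (just the underlying idea, not the paper's CFFC module):

```python
import numpy as np

def fft_circular_conv(signal, kernel):
    """Circular convolution via the convolution theorem:
    FFT both inputs, multiply pointwise, inverse-FFT back."""
    n = len(signal)
    s = np.fft.fft(np.asarray(signal, float), n)
    k = np.fft.fft(np.asarray(kernel, float), n)
    return np.real(np.fft.ifft(s * k))

def direct_circular_conv(signal, kernel):
    """Reference O(n * m) circular convolution for comparison."""
    n = len(signal)
    out = np.zeros(n)
    for i in range(n):
        for j, kj in enumerate(kernel):
            out[i] += signal[(i - j) % n] * kj
    return out

x = [1.0, 2.0, 3.0, 4.0]
h = [1.0, 0.5]
spectral = fft_circular_conv(x, h)   # frequency-domain result
spatial = direct_circular_conv(x, h)  # spatial-domain result, identical
```

Both paths give the same output; the spectral path mixes every input position into every output position at once, which is why a spectral branch can capture global context cheaply.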
4
Li Y, Li Y, Zhu X, Fang H, Ye L. A method for extracting buildings from remote sensing images based on 3DJA-UNet3+. Sci Rep 2024; 14:19067. [PMID: 39154127] [PMCID: PMC11330448] [DOI: 10.1038/s41598-024-70019-z] [Received: 04/30/2024] [Accepted: 08/12/2024]
Abstract
Building extraction aims to extract building pixels from remote sensing imagery, which plays a significant role in urban planning, dynamic urban monitoring, and many other applications. UNet3+ is widely applied to building extraction from remote sensing images; however, it still faces issues such as low segmentation accuracy, imprecise boundary delineation, and model complexity. Therefore, based on the UNet3+ model, this paper proposes a 3D Joint Attention (3DJA) module that effectively enhances the correlation between local and global features, obtaining more accurate object semantic information and enhancing feature representation. The 3DJA module models semantic interdependence in the vertical and horizontal dimensions to obtain spatial encoding information for the feature maps, as well as in the channel dimension to increase the correlation between dependent channel graphs. In addition, a bottleneck module is constructed to reduce the number of network parameters and improve training efficiency. Extensive experiments are conducted on the publicly accessible WHU, INRIA, and Massachusetts building datasets, and the benchmark BOMSC-Net, CVNet, SCA-Net, SPCL-Net, ACMFNet, and MFCF-Net models are selected for comparison with the 3DJA-UNet3+ model proposed in this paper. The experimental results show that 3DJA-UNet3+ achieves competitive results on three evaluation indicators: overall accuracy, mean intersection over union, and F1-score. The code will be available at https://github.com/EnjiLi/3DJA-UNet3Plus .
5
Alamgeer M, Al Mazroa A, S. Alotaibi S, Alanazi MH, Alonazi M, S. Salama A. Improving remote sensing scene classification using dung beetle optimization with enhanced deep learning approach. Heliyon 2024; 10:e37154. [PMID: 39318799] [PMCID: PMC11420495] [DOI: 10.1016/j.heliyon.2024.e37154] [Received: 03/05/2024] [Revised: 08/14/2024] [Accepted: 08/28/2024]
Abstract
Remote sensing (RS) scene classification has received significant attention because of its extensive use by the RS community. Scene classification in satellite images has widespread applications in remote surveillance, environmental observation, remote scene analysis, urban planning, and Earth observation. Because of the immense benefits of the land scene classification task, various approaches have been presented recently for automatically classifying land scenes from remote sensing images (RSIs). Several approaches based on convolutional neural networks (CNNs) have been presented for classifying challenging RS scenes; however, they can only partially capture the context of RSIs due to problematic texture, cluttered context, tiny object sizes, and considerable differences in object scale. This article designs a Remote Sensing Scene Classification using Dung Beetle Optimization with Enhanced Deep Learning (RSSC-DBOEDL) approach. The purpose of the RSSC-DBOEDL technique is to categorize the different varieties of scenes present in RSIs. In the presented RSSC-DBOEDL technique, an enhanced MobileNet model is first deployed as a feature extractor, and the DBO method is implemented for hyperparameter tuning of the enhanced MobileNet model. The RSSC-DBOEDL technique then uses a multi-head attention-based long short-term memory (MHA-LSTM) model to classify the scenes in the RSIs. The RSSC-DBOEDL approach was evaluated on benchmark RSI datasets. The results exhibited accuracies of 98.75% and 95.07% on the UC Merced and EuroSAT datasets, respectively, outperforming other existing methods on distinct measures.
6
Lee KY, Shih SS, Huang ZZ. Mangrove colonization on tidal flats causes straightened tidal channels and consequent changes in the hydrodynamic gradient and siltation potential. J Environ Manage 2022; 314:115058. [PMID: 35452881] [DOI: 10.1016/j.jenvman.2022.115058] [Received: 09/03/2021] [Revised: 03/29/2022] [Accepted: 04/08/2022]
Abstract
A healthy mangrove ecosystem includes diverse landscape structures, such as tidal flats, tidal channels, and areas with circulating waters, in addition to mangrove stands. The complex structure of mangrove forests affects the hydrodynamics and sediment transport behaviour of tidal channels, so understanding the influence of mangrove invasion of tidal flats on the pattern and stability of tidal channels is essential. In this study, two types of remote sensing images, Google Earth images and aerial photographs, were collected to analyze the relationship between mangrove colonization and changes in tidal channel patterns. After binary image processing, the two kinds of images showed similar abilities to discriminate the locations, extents, and boundaries of mangroves and tidal channels. We found that the mangrove area was inversely proportional to tidal channel sinuosity and width. The tidal channels exhibited a meandering pattern with a wider width before the mangroves invaded the tidal flats; after the expansion of the mangroves, the channels gradually transformed into a straight shape with a narrower width, and once the mangroves developed into forests, the channels maintained a straight and stable pattern. Since mangroves promote siltation and increase the elevation of the surrounding mudflats, the habitat suitability for mangroves in the neighbouring tidal flat areas may vary. These processes may help expand mangrove habitats, thereby compressing the area of the flats and changing the shape of the tidal channels. Due to tidal current effects, the unit stream power of a straight tidal channel is approximately twice that of a meandering channel, indicating that straight tidal channels have a stronger anti-siltation capability. Our research also found that the tidal channels may return to a meandering pattern when mangroves degrade or die and their area decreases. This study provides key evidence that mangroves affect tidal channel types and hydrodynamic characteristics, providing a useful reference for restoring and managing estuarine mangrove ecosystems.
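The channel-shape comparison above hinges on sinuosity: along-channel length divided by the straight-line distance between the channel endpoints. A minimal sketch of that index for a digitized centreline (the coordinates are illustrative):

```python
import math

def sinuosity(path):
    """Sinuosity index of a channel centreline given as (x, y)
    vertices: total along-channel length divided by the chord
    length between the endpoints. 1.0 is perfectly straight;
    meandering channels score noticeably higher."""
    along = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    chord = math.dist(path[0], path[-1])
    return along / chord

straight = sinuosity([(0, 0), (1, 0), (2, 0)])  # 1.0
meander = sinuosity([(0, 0), (1, 1), (2, 0)])   # ~1.414
```

Values near 1 correspond to the straightened channels observed after mangrove expansion, while the pre-invasion meandering channels score higher.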
7
Sun Y, Gu X, Zhou X, Yang J, Shen W, Cheng Y, Zhang JM, Chen Y. DPIF-Net: a dual path network for rural road extraction based on the fusion of global and local information. PeerJ Comput Sci 2024; 10:e2079. [PMID: 38855245] [PMCID: PMC11157547] [DOI: 10.7717/peerj-cs.2079] [Received: 06/28/2023] [Accepted: 05/03/2024]
Abstract
Background: Automatic extraction of roads from remote sensing images can facilitate many practical applications. However, thousands of kilometers or more of roads worldwide have not yet been recorded, especially low-grade roads in rural areas. Moreover, rural roads have varied shapes and are influenced by complex environments and other interference factors, which has led to a scarcity of dedicated low-grade road datasets. Methods: To address these issues, this article proposes the Dual Path Information Fusion Network (DPIF-Net), based on convolutional neural networks (CNNs) and transformers. In addition, given the severe lack of low-grade road datasets, we constructed the GaoFen-2 (GF-2) rural road dataset, which spans three regions in China and covers an area of over 2,300 km², almost entirely composed of low-grade roads. To comprehensively test the low-grade road extraction performance and generalization ability of the model, comparative experiments were carried out on the DeepGlobe and Massachusetts regular road datasets. Results: DPIF-Net achieves the highest IoU and F1 score on all three datasets compared with methods such as U-Net, SegNet, DeepLabv3+, and D-LinkNet, with notable performance on the GF-2 dataset, reaching 0.6104 and 0.7608, respectively. Furthermore, multiple validation experiments demonstrate that DPIF-Net preserves improved connectivity in low-grade road extraction with a modest parameter count of 63.9 MB. The constructed low-grade road dataset and proposed methods will facilitate further research on rural roads and hold promise for assisting governmental authorities in making informed decisions and strategies to enhance rural road infrastructure.
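The IoU and F1 scores reported above are the standard pixelwise segmentation metrics; for binary masks F1 equals the Dice coefficient. A minimal sketch of how they are computed from a predicted and a ground-truth road mask (the toy masks are illustrative):

```python
import numpy as np

def iou_f1(pred, truth):
    """IoU and F1 for binary masks (1 = road pixel)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()   # correctly predicted road
    fp = np.logical_and(pred, ~truth).sum()  # false road pixels
    fn = np.logical_and(~pred, truth).sum()  # missed road pixels
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)         # identical to Dice
    return float(iou), float(f1)

# One true positive, one false positive, no misses.
iou, f1 = iou_f1([[1, 1], [0, 0]], [[1, 0], [0, 0]])  # 0.5, 2/3
```

For thin structures like rural roads these overlap metrics are stricter than pixel accuracy, which is why road extraction papers report them alongside connectivity checks.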
8
Liu Y, Xu H, Shi X. Reconstruction of super-resolution from high-resolution remote sensing images based on convolutional neural networks. PeerJ Comput Sci 2024; 10:e2218. [PMID: 39678281] [PMCID: PMC11639152] [DOI: 10.7717/peerj-cs.2218] [Received: 03/15/2024] [Accepted: 07/05/2024]
Abstract
In this study, a novel algorithm named the Edge-enhanced Generative Adversarial Network (EGAN) is proposed to address noise corruption and edge fuzziness in the super-resolution of remote sensing images. Building upon the baseline model, the Deep Blind Super-Resolution GAN (DBSR-GAN), an edge enhancement module is introduced to enhance the edge information of the images, and the mask branch within the edge enhancement structure is further optimized to enlarge the receptive field of the algorithm. Moreover, an image consistency loss is introduced to guide edge reconstruction, and subpixel convolution is employed for upsampling, resulting in sharper edge contours and more consistent stylized results. To tackle the low utilization of global information and the reconstruction of super-resolution artifacts in remote sensing images, a second algorithm named Nonlocal Module and Artifact Discrimination EGAN (END-GAN) is proposed. END-GAN introduces a nonlocal module, based on the EGAN, into the feature extraction stage of the algorithm, enabling better utilization of the internal correlations of remote sensing images and enhancing the algorithm's capability to extract global target features. Additionally, an artifact discrimination method is implemented to distinguish artifacts from real structures in reconstructed images, and the algorithm is optimized by introducing an artifact discrimination loss alongside the original loss function. Experimental comparisons on two remote sensing image datasets, NWPU VHR-10 and UCAS-AOD, demonstrate significant improvements in the evaluation indexes for the proposed algorithms.
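Super-resolution results are conventionally scored with fidelity indexes such as peak signal-to-noise ratio (PSNR); the abstract does not name its exact evaluation indexes, so the following is a generic sketch of the usual measure, not necessarily the one used in the paper:

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and its super-resolved reconstruction; higher is better."""
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(reconstructed, dtype=float)
    mse = np.mean((ref - rec) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(peak ** 2 / mse))

# A uniform error of 0.1 on a [0, 1]-scaled image gives exactly 20 dB.
value = psnr([0.0, 0.5], [0.1, 0.6], peak=1.0)
```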
9
Elgamily KM, Mohamed MA, Abou-Taleb AM, Ata MM. Enhanced object detection in remote sensing images by applying metaheuristic and hybrid metaheuristic optimizers to YOLOv7 and YOLOv8. Sci Rep 2025; 15:7226. [PMID: 40021716] [PMCID: PMC11871368] [DOI: 10.1038/s41598-025-89124-8] [Received: 06/04/2024] [Accepted: 02/03/2025]
Abstract
Developments in object detection algorithms are critical for urban planning, environmental monitoring, surveillance, and many other applications. The primary objective of this article was to improve detection precision and model efficiency. The paper compared the performance of six metaheuristic optimization algorithms, the Gray Wolf Optimizer (GWO), Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), the Remora Optimization Algorithm (ROA), the Aquila Optimizer (AO), and hybrid PSO-GWO (HPSGWO), combined with YOLOv7 and YOLOv8. The study used two distinct remote sensing datasets, RSOD and VHR-10. Performance measures such as precision, recall, and mean average precision (mAP) were used during the training, validation, and testing processes, as well as the fit score. The results show significant improvements in both YOLO variants following optimization with these strategies. The GWO-optimized YOLOv7, with 0.96 mAP@50 and 0.69 mAP@50:95, and the HPSGWO-optimized YOLOv8, with 0.97 mAP@50 and 0.72 mAP@50:95, performed best on the RSOD dataset. Similarly, the GWO-optimized versions of YOLOv7 and YOLOv8 performed best on the VHR-10 dataset, with 0.87 mAP@50 and 0.58 mAP@50:95 for YOLOv7 and 0.99 mAP@50 and 0.69 mAP@50:95 for YOLOv8. The findings support the usefulness of metaheuristic optimization in increasing the precision and recall rates of YOLO algorithms and demonstrate major significance for improving object recognition tasks in remote sensing imaging, opening up a viable route for applications in a variety of disciplines.
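mAP@50 and mAP@50:95 average per-class average precision (AP) over classes and IoU thresholds. Ignoring the IoU-matching step for brevity, the core AP computation over confidence-ranked detections can be sketched as follows (the scores and labels are illustrative):

```python
import numpy as np

def average_precision(scores, is_positive):
    """AP for one class: the mean of precision evaluated at each
    true positive when detections are ranked by confidence.
    mAP is the mean of this value over classes (and, for
    mAP@50:95, over IoU thresholds as well)."""
    order = np.argsort(scores)[::-1]          # highest confidence first
    hits = np.asarray(is_positive)[order]
    cum_tp = np.cumsum(hits)                  # true positives so far
    ranks = np.arange(1, len(hits) + 1)
    precision_at_hit = (cum_tp / ranks)[hits.astype(bool)]
    return float(precision_at_hit.mean())

# Ranked detections [TP, FP, TP]: precisions at the hits are 1/1 and 2/3.
ap = average_precision([0.9, 0.8, 0.7], [1, 0, 1])  # (1 + 2/3) / 2
```

In a full detector evaluation, `is_positive` comes from matching each detection to an unmatched ground-truth box at the chosen IoU threshold; that matching step is what the hyperparameter optimizers indirectly improve.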
10
Lin Y, Zhang J, Huang J. Centralised visual processing center for remote sensing target detection. Sci Rep 2024; 14:17021. [PMID: 39043706] [PMCID: PMC11266420] [DOI: 10.1038/s41598-024-67451-6] [Received: 02/21/2024] [Accepted: 07/11/2024]
Abstract
Target detection in satellite images is an essential topic in remote sensing and computer vision. Despite extensive research efforts, accurate and efficient target detection in remote sensing images remains unsolved due to large spans of target scale, dense distributions, overhead imaging, and complex backgrounds, which result in high feature similarity between targets and serious occlusion. To address these issues comprehensively, this paper first proposes a Centralised Visual Processing Center (CVPC), a parallel visual processing center combining a Transformer encoder and a CNN that employs a lightweight encoder to capture broad, long-range interdependencies. A Pixel-level Learning Center (PLC) module is used to establish pixel-level correlations and improve the depiction of detailed features. The CVPC effectively improves detection of remote sensing targets with high feature similarity and severe occlusion. Second, we propose a centralised cross-layer feature fusion pyramid that fuses results with the CVPC in a top-down manner to enhance the detailed feature representation at each layer. Finally, we present a Context Enhanced Adaptive Sparse Convolutional Network (CEASC), which improves accuracy while ensuring detection efficiency. Based on these modules, we designed and conducted a series of experiments on three challenging public datasets, DOTA-v1.0, DIOR, and RSOD, showing that the proposed 3CNet achieves more advanced detection accuracy while balancing detection speed (78.62% mAP on DOTA-v1.0, 79.12% mAP on DIOR, and 95.50% mAP on RSOD).
11
Alotaibi Y, Rajendran B, Rani K. G, Rajendran S. Dipper throated optimization with deep convolutional neural network-based crop classification for remote sensing image analysis. PeerJ Comput Sci 2024; 10:e1828. [PMID: 38435591] [PMCID: PMC10909238] [DOI: 10.7717/peerj-cs.1828] [Received: 11/16/2023] [Accepted: 12/29/2023]
Abstract
Problem: With the rapid advancement of remote sensing technology, the need for efficient and accurate crop classification methods has become increasingly important, driven by the ever-growing demand for food security and environmental monitoring. Traditional crop classification methods have limitations in accuracy and scalability, especially when dealing with large datasets of high-resolution remote sensing images. This study develops a novel crop classification technique, named Dipper Throated Optimization with Deep Convolutional Neural Networks based Crop Classification (DTODCNN-CC), for analyzing remote sensing images, with the objective of achieving high classification accuracy for various food crops. Methods: The proposed DTODCNN-CC approach consists of the following key components. Deep convolutional neural network (DCNN): a GoogleNet architecture is employed to extract robust feature vectors from the remote sensing images, and the Dipper Throated Optimization (DTO) optimizer is used for hyperparameter tuning of the GoogleNet model to achieve optimal feature extraction performance. Extreme Learning Machine (ELM): this machine learning algorithm is utilized to classify the different food crops based on the extracted features, and the modified sine cosine algorithm (MSCA) is used to fine-tune the parameters of the ELM for improved classification accuracy. Results: Extensive experimental analyses were conducted to evaluate the performance of the proposed DTODCNN-CC approach. The results demonstrate that DTODCNN-CC achieves significantly higher crop classification accuracy than other state-of-the-art deep learning methods. Conclusion: The proposed DTODCNN-CC technique provides a promising solution for efficient and accurate crop classification using remote sensing images, with potential as a valuable tool for applications in agriculture, food security, and environmental monitoring.
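The ELM classifier named in the Methods trains only its output layer: hidden weights are random and fixed, and the output weights come from a single least-squares solve. A toy sketch of that idea (a generic stand-in, not the paper's MSCA-tuned model; the clusters and sizes are illustrative):

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: one hidden layer with
    fixed random weights; only the output weights are fitted,
    via the pseudoinverse (least squares)."""

    def __init__(self, n_hidden=32, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        # One-hot encode the labels, then solve H @ beta = Y.
        self.classes = np.unique(y)
        Y = (np.asarray(y)[:, None] == self.classes[None, :]).astype(float)
        self.beta = np.linalg.pinv(self._hidden(X)) @ Y
        return self

    def predict(self, X):
        scores = self._hidden(np.asarray(X, dtype=float)) @ self.beta
        return self.classes[np.argmax(scores, axis=1)]

# Two well-separated clusters are fitted exactly.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = np.array([0, 0, 1, 1])
pred = ELM(n_hidden=32).fit(X, y).predict(X)
```

Because training reduces to one linear solve, fitting is fast, which is the appeal of ELMs as the final classification stage after deep feature extraction.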