1
Unsupervised Building Extraction from Multimodal Aerial Data Based on Accurate Vegetation Removal and Image Feature Consistency Constraint. REMOTE SENSING 2022. [DOI: 10.3390/rs14081912] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Indexed: 01/08/2023]
Abstract
Accurate building extraction from remotely sensed data is difficult to perform automatically because of complex environments and the complex shapes, colours and textures of buildings. Supervised deep-learning-based methods offer a possible solution, but they generally require many high-quality, manually labelled samples to obtain satisfactory test results, and producing these samples is time and labour intensive. For multimodal data with sufficient information, it is therefore desirable to extract buildings accurately in as unsupervised a manner as possible. Combining remote sensing images and LiDAR point clouds for unsupervised building extraction is not a new idea, but existing methods often suffer from two problems: (1) the accuracy of vegetation detection is often low, which limits building extraction accuracy, and (2) they lack a proper mechanism to further refine the building masks. We propose two methods to address these problems, combining aerial images and aerial LiDAR point clouds. First, we improve two recently developed vegetation detection methods to generate accurate initial building masks. We then refine the building masks under an image feature consistency constraint, which replaces inaccurate LiDAR-derived boundaries with accurate image-based boundaries, removes the remaining vegetation points and recovers some missing building points.
Our methods require no manual parameter tuning or manual data labelling, yet remain competitive against 29 methods: they achieve accuracies higher than or comparable to 19 state-of-the-art methods (including 8 deep-learning-based and 11 unsupervised methods, 9 of which combine remote sensing images and 3D data), and they outperform, in average area quality, the top 10 methods (4 of which combine remote sensing images and LiDAR data) evaluated on all three test areas of the Vaihingen dataset on the official website of the ISPRS Test Project on Urban Classification and 3D Building Reconstruction. These comparative results verify that our unsupervised methods combining multisource data are highly effective.
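As a concrete reference point for the image-based side of such pipelines, the following is a minimal sketch of vegetation masking by NDVI thresholding. This is a generic stand-in, not the two improved vegetation detection methods of the paper, and the 0.3 threshold is an illustrative assumption:

```python
import numpy as np

def ndvi_vegetation_mask(nir, red, threshold=0.3):
    """Flag vegetation pixels with a plain NDVI threshold.
    Illustrative only: real pipelines combine such image cues with
    LiDAR-derived height and geometry, and the 0.3 cut-off is an
    assumed, not calibrated, value."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    ndvi = (nir - red) / np.maximum(nir + red, 1e-9)  # guard divide-by-zero
    return ndvi > threshold
```

Pixels passing the mask would be removed from the candidate building set before any boundary refinement.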
2
Automatic Filtering of Lidar Building Point Cloud in Case of Trees Associated to Building Roof. REMOTE SENSING 2022. [DOI: 10.3390/rs14020430] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Indexed: 01/25/2023]
Abstract
This paper suggests a new algorithm for automatic building point cloud filtering based on the Z-coordinate histogram. The operation selects the roof-class points from the building point cloud, and the algorithm considers the general case where high trees overlap the building roof. The Z-coordinate histogram is analyzed to divide the building point cloud into three zones: the surrounding terrain and low vegetation, the facades, and the tree crowns and/or roof points. This allows the elimination of the first two classes, which obstruct the distinction between roof and tree points. Analysis of the normal vectors, together with the change-of-curvature factor of the roof class, then identifies the high tree crown points. The suggested approach was tested on five datasets with different point densities and urban typologies. Accuracy is quantified by the average correctness, completeness, and quality indices, which equal 97.9%, 97.6%, and 95.6%, respectively. These results confirm the high efficacy of the suggested approach.
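The three-zone histogram split described above can be sketched as follows. The bin size, the sparsity fraction used to detect the "quiet" facade zone, and the peak-search heuristic are all illustrative assumptions, not the authors' calibrated procedure:

```python
import numpy as np

def zone_split(z, bin_size=0.5, sparse_frac=0.10):
    """Derive two height thresholds from the Z histogram separating
    (0) terrain/low vegetation, (1) facades, (2) roof and/or tree crowns.
    Simplified sketch: assumes the ground mass is the densest peak in the
    lower half of the histogram and that facades form a sparse band."""
    z = np.asarray(z, dtype=float)
    counts, edges = np.histogram(z, bins=np.arange(z.min(), z.max() + bin_size, bin_size))
    peak = int(np.argmax(counts[: len(counts) // 2]))   # assumed ground peak
    thresh = sparse_frac * counts[peak]
    # t1: histogram falls silent above the ground mass -> facades start here
    quiet = np.where(counts[peak:] < thresh)[0]
    t1 = edges[peak + quiet[0]] if len(quiet) else edges[-1]
    # t2: histogram fills up again -> roof / tree-crown mass starts here
    i1 = peak + (quiet[0] if len(quiet) else 0)
    dense = np.where(counts[i1:] >= thresh)[0]
    t2 = edges[i1 + dense[0]] if len(dense) else edges[-1]
    labels = np.where(z < t1, 0, np.where(z < t2, 1, 2))
    return t1, t2, labels
```

Eliminating labels 0 and 1 leaves only the roof/crown zone, on which the normal-vector and curvature analysis would then operate.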
3
Building Extraction from Airborne LiDAR Data Based on Multi-Constraints Graph Segmentation. REMOTE SENSING 2021. [DOI: 10.3390/rs13183766] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Indexed: 01/19/2023]
Abstract
Building extraction from airborne Light Detection and Ranging (LiDAR) point clouds is a significant step in digital urban construction. Although existing building extraction methods perform well in simple urban environments, they cannot achieve satisfactory results in complicated city environments with irregular building shapes or varying building sizes. To address these challenges, this paper proposes a building extraction method for airborne LiDAR data based on multi-constraints graph segmentation. The method converts point-based building extraction into object-based building extraction through multi-constraints graph segmentation, and derives the initial building points from the spatial geometric features of the different object primitives. Finally, a multi-scale progressive growth optimization method is proposed to recover omitted building points and improve the completeness of building extraction. The method was tested and validated on three datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). Experimental results show that, in terms of both average quality and average F1 score, the proposed method outperforms ten other investigated building extraction methods.
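A toy analogue of grouping points into object primitives under multiple constraints might look like the union-find pass below. The neighbourhood radius and normal-angle threshold are illustrative assumptions; the paper's actual graph segmentation uses richer constraints and spatial geometric features:

```python
import numpy as np

def constrained_segments(points, normals, radius=1.0, angle_thresh_deg=10.0):
    """Merge neighbouring points whose unit normals agree into segments
    (object primitives) via union-find. A toy sketch of multi-constraint
    segmentation; both thresholds are assumed values."""
    pts = np.asarray(points, dtype=float)
    nrm = np.asarray(normals, dtype=float)
    n = len(pts)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    cos_t = np.cos(np.deg2rad(angle_thresh_deg))
    for i in range(n):
        for j in range(i + 1, n):
            close = np.linalg.norm(pts[i] - pts[j]) <= radius
            aligned = abs(np.dot(nrm[i], nrm[j])) >= cos_t
            if close and aligned:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]  # one segment label per point
```

Segment-level (object-based) features can then be computed per label instead of per point, which is the conversion the abstract describes.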
4
A Segmentation Approach to Identify Underwater Dunes from Digital Bathymetric Models. GEOSCIENCES 2021. [DOI: 10.3390/geosciences11090361] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 11/17/2022]
Abstract
The recognition of underwater dunes plays a central role in ensuring safe navigation. The presence of these dynamic landforms on the seafloor represents a hazard, especially in navigation channels, and should at least be highlighted to avoid collisions with vessels. This paper proposes a novel method for segmenting these landforms in the fluvio-marine context. Its originality lies in the use of a conceptual model in which dunes are characterized by three salient features: the crest line, the stoss trough, and the lee trough. The proposed segmentation implements this conceptual model by treating the DBM (digital bathymetric model) as the seafloor surface from which the dunes are segmented. A geomorphometric analysis of the seabed identifies the salient features of the dunes, followed by an OBIA (object-based image analysis) approach that moves beyond pixel-based analysis of the seabed surface, forming objects that better describe the dunes on the seafloor. To validate the segmentation method, more than 850 dunes were segmented in the fluvio-marine context of the Northern Traverse of the Saint-Lawrence river. Nearly 92% of the dunes were well segmented (i.e., true positives).
5
Honório LM, Pinto MF, Hillesheim MJ, de Araújo FC, Santos AB, Soares D. Photogrammetric Process to Monitor Stress Fields Inside Structural Systems. SENSORS 2021; 21:s21124023. [PMID: 34200918] [PMCID: PMC8230454] [DOI: 10.3390/s21124023] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 04/22/2021] [Revised: 06/04/2021] [Accepted: 06/06/2021] [Indexed: 11/16/2022]
Abstract
This research employs displacement fields photogrammetrically captured on the surface of a solid or structure to estimate, in real time, the stress distributions it undergoes during a given loading period. The displacement fields are determined from a series of images taken of the solid surface while it deforms. Image displacements are used to estimate the deformations in the plane of the beam surface, and Poisson’s method is subsequently applied to reconstruct these surfaces, at a given time, by extracting triangular meshes from the corresponding point clouds. With the aid of the measured displacement fields, the Boundary Element Method (BEM) is used to evaluate stress values throughout the solid; here, the unknown boundary forces must additionally be calculated. As the photogrammetrically reconstructed deformed surfaces may be defined by several million points, the boundary displacement values of boundary-element models with a convenient number of nodes are determined from an optimized displacement surface that best fits the measured data. The results showed the effectiveness and potential of the proposed methodology for determining real-time stress distributions in structures.
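The step from measured displacement fields to stresses can be illustrated with small-strain kinematics and plane-stress Hooke's law on a regular grid. This is a simplified stand-in for the paper's BEM evaluation (which also recovers the unknown boundary forces); the Young's modulus E and Poisson's ratio nu below are assumed material constants:

```python
import numpy as np

def plane_stress_from_displacements(ux, uy, dx, E=200e9, nu=0.3):
    """Estimate in-plane stresses from displacement fields ux, uy sampled
    on a regular grid (spacing dx, arrays indexed [row=y, col=x]), using
    finite-difference strains and plane-stress Hooke's law. Sketch only:
    material constants are illustrative assumptions."""
    dux_dy, dux_dx = np.gradient(ux, dx)   # derivatives along y (rows), x (cols)
    duy_dy, duy_dx = np.gradient(uy, dx)
    exx, eyy = dux_dx, duy_dy              # normal strains
    gxy = dux_dy + duy_dx                  # engineering shear strain
    c = E / (1.0 - nu**2)                  # plane-stress stiffness factor
    sxx = c * (exx + nu * eyy)
    syy = c * (eyy + nu * exx)
    sxy = E / (2.0 * (1.0 + nu)) * gxy     # shear stress = G * gamma_xy
    return sxx, syy, sxy
```

For a uniaxial stretch ux = a*x, uy = -nu*a*y, the sketch recovers sxx = E*a and zero transverse and shear stress, as expected from the constitutive law.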
Affiliation(s)
- Leonardo M. Honório, Department of Electrical Engineering, UFJF, Juiz de Fora 36036-900, MG, Brazil (corresponding author)
- Milena F. Pinto, Department of Electronics, Federal Center for Technological Education of Rio de Janeiro, CEFET-RJ, Rio de Janeiro 20271-110, RJ, Brazil
- Maicon J. Hillesheim, Faculty of Exact and Technological Sciences, UNEMAT, Sinop 78555-000, MT, Brazil
- Francisco C. de Araújo, Department of Civil Engineering, School of Mines, UFOP, Ouro Preto 35400-000, MG, Brazil
- Alexandre B. Santos, Department of Structural Engineering, UFJF, Juiz de Fora 36036-900, MG, Brazil
- Delfim Soares, Department of Electrical Engineering, UFJF, Juiz de Fora 36036-900, MG, Brazil
6
Snake-Based Model for Automatic Roof Boundary Extraction in the Object Space Integrating a High-Resolution Aerial Images Stereo Pair and 3D Roof Models. REMOTE SENSING 2021. [DOI: 10.3390/rs13081429] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 11/16/2022]
Abstract
The accelerated urban development of recent decades has made it necessary to update spatial information rapidly and constantly. Three-dimensional city models have therefore been widely used as a basis for studying various urban problems. However, although many efforts have been made to develop new building extraction methods, reliable and automatic extraction remains a major challenge for the remote sensing and computer vision communities, mainly due to the complexity and variability of urban scenes. This paper presents a method to extract building roof boundaries in the object space by integrating a stereo pair of high-resolution aerial images, three-dimensional roof models reconstructed from light detection and ranging (LiDAR) data, and contextual information of the scenes involved. The proposed method focuses on overcoming three common problems that can disturb automatic roof extraction in the urban environment: perspective occlusions caused by high buildings, occlusions caused by vegetation covering the roof, and shadows adjacent to the roofs, which can be misinterpreted as roof edges. To this end, an improved Snake-based mathematical model is developed that considers the radiometric and geometric properties of roofs to represent the roof boundary in the image space; a new approach for calculating the corner response and a shadow compensation factor are added to the model. The model is then adapted to represent the boundaries in the object space using the stereo pair of aerial images. Finally, the optimal polyline representing a selected roof boundary is obtained by optimizing the proposed Snake-based model with a dynamic programming (DP) approach that considers the contextual information of the scene.
The results showed that the proposed method works properly for boundary extraction of roofs with occlusions and shadowed areas, presenting average completeness and correctness values above 90% and average RMSE values below 0.5 m for the E and N components and below 1 m for the H component.
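The dynamic-programming optimization of a boundary polyline can be sketched as a column-by-column Viterbi pass over an edge-strength map. This toy energy keeps only an edge term and a quadratic smoothness term, whereas the paper's Snake model also includes the corner response and the shadow compensation factor:

```python
import numpy as np

def dp_boundary(edge_strength, lam=0.5):
    """Trace one open boundary (one row index per column) through an
    edge-strength map by dynamic programming: maximize accumulated edge
    response minus a quadratic smoothness penalty between neighbouring
    columns. Toy analogue of Snake-plus-DP optimization; lam is an
    assumed smoothness weight."""
    H, W = edge_strength.shape
    cost = np.full((H, W), -np.inf)
    back = np.zeros((H, W), dtype=int)
    cost[:, 0] = edge_strength[:, 0]
    rows = np.arange(H)
    for j in range(1, W):
        # trans[r, p]: score of moving from row p (col j-1) to row r (col j)
        trans = cost[:, j - 1][None, :] - lam * (rows[:, None] - rows[None, :]) ** 2
        back[:, j] = np.argmax(trans, axis=1)
        cost[:, j] = edge_strength[:, j] + np.max(trans, axis=1)
    # backtrack the optimal polyline
    path = [int(np.argmax(cost[:, -1]))]
    for j in range(W - 1, 0, -1):
        path.append(int(back[path[-1], j]))
    return path[::-1]
```

On a map whose strongest responses line up along one row, the pass recovers exactly that row, and the quadratic penalty keeps the polyline from jumping between distant rows in noisier maps.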
7
Development of a Parcel-Level Land Boundary Extraction Algorithm for Aerial Imagery of Regularly Arranged Agricultural Areas. REMOTE SENSING 2021. [DOI: 10.3390/rs13061167] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Indexed: 11/16/2022]
Abstract
The boundary extraction of objects from remote sensing imagery has long been an important research issue. The automation of farmland boundary extraction is particularly in demand for rapid updates of the digital farm maps in Korea. This study aimed to develop a boundary extraction algorithm by systematically combining a series of computational and mathematical methods, including the Suzuki85 contour-tracing algorithm, Canny edge detection, and the Hough transform. Since most irregular farmlands in Korea have been consolidated into large rectangular arrangements for agricultural productivity, the boundary between two adjacent land parcels was assumed to be a straight line. The developed algorithm was applied to six study sites to evaluate its performance at the boundary level and the sectional-area level. The correctness, completeness, and quality of the extracted boundaries were approximately 80.7%, 79.7%, and 67.0% at the boundary level and 89.7%, 90.0%, and 81.6% at the area-based level, respectively. These performances are comparable with the results of previous studies on similar subjects; thus, the algorithm can be used for land parcel boundary extraction. The algorithm tended to subdivide land parcels at distinctive features, such as greenhouse structures or isolated irregular land parcels within the land blocks. It is currently applicable only to regularly arranged land parcels, and further study coupled with decision trees or artificial intelligence may allow boundary extraction from irregularly shaped land parcels.
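The Hough-transform stage of such a pipeline, under the straight-boundary assumption, can be sketched with a plain (rho, theta) accumulator. This is a minimal illustration of the voting step only, not the full algorithm with Suzuki85 contour tracing and Canny preprocessing:

```python
import numpy as np

def hough_peak(edge_points, thetas_deg=np.arange(0.0, 180.0, 1.0), rho_res=1.0):
    """Vote edge pixels into (rho, theta) space and return the dominant
    straight line as (rho, theta_in_degrees). Minimal sketch with assumed
    1-degree / 1-pixel resolutions."""
    pts = np.asarray(edge_points, dtype=float)      # (N, 2) array of (x, y)
    thetas = np.deg2rad(thetas_deg)
    # rho = x cos(theta) + y sin(theta) for every point/angle pair
    rho = pts[:, 0:1] * np.cos(thetas)[None, :] + pts[:, 1:2] * np.sin(thetas)[None, :]
    rho_max = np.abs(rho).max() + rho_res
    bins = np.round((rho + rho_max) / rho_res).astype(int)
    acc = np.zeros((int(2 * rho_max / rho_res) + 2, len(thetas)), dtype=int)
    cols = np.broadcast_to(np.arange(len(thetas)), bins.shape)
    np.add.at(acc, (bins, cols), 1)                 # accumulate votes
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return i * rho_res - rho_max, thetas_deg[j]
```

Collinear edge pixels all vote into the same (rho, theta) cell, so the accumulator peak identifies the shared parcel boundary line.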
8
Polish Cadastre Modernization with Remotely Extracted Buildings from High-Resolution Aerial Orthoimagery and Airborne LiDAR. REMOTE SENSING 2021. [DOI: 10.3390/rs13040611] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Indexed: 01/26/2023]
Abstract
Automatic building extraction from remote sensing data is a popular but challenging research topic for cadastre verification, modernization and updating. Deep learning algorithms are perceived as the most promising way to overcome the difficulties of extracting semantic features from complex scenes and large differences in buildings’ appearance. This paper explores a modified fully convolutional U-Shape Network (U-Net) for segmenting high-resolution aerial orthoimagery, combined with dense LiDAR data, to extract building outlines automatically. The three-step end-to-end computational procedure achieves automated building extraction with an 89.5% overall accuracy and an 80.7% completeness, which makes it very promising for cadastre modernization in Poland. The applied algorithms work well in both densely and sparsely built-up areas, typical of the peripheries of cities, where uncontrolled development has recently been observed. Discussing the possibilities and limitations, the authors also provide information that could help local authorities decide on the use of remote sensing data in land administration.
9
Building Extraction from Airborne Multi-Spectral LiDAR Point Clouds Based on Graph Geometric Moments Convolutional Neural Networks. REMOTE SENSING 2020. [DOI: 10.3390/rs12193186] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Indexed: 11/16/2022]
Abstract
Building extraction has attracted much attention for decades as a prerequisite for many applications, yet it is still a challenging topic in photogrammetry and remote sensing. Owing to the lack of spectral information, the massive volume of data to process, and the limited universality of existing approaches, building extraction from point clouds remains a thorny problem. In this paper, a novel deep-learning-based framework is proposed for building extraction from point cloud data. First, a sample generation method is proposed that splits the raw preprocessed multi-spectral light detection and ranging (LiDAR) data into numerous samples, which are fed directly into convolutional neural networks and completely cover the original inputs. Then, a graph geometric moments (GGM) convolution is proposed to encode the local geometric structure of point sets. In addition, a hierarchical architecture equipped with GGM convolution, called GGM convolutional neural networks, is proposed to train on and recognize building points. Finally, test scenes of varying sizes can be fed into the framework to obtain point-wise extraction results. We evaluate the proposed framework and methods on airborne multi-spectral LiDAR point clouds collected by an Optech Titan system. Compared with previous state-of-the-art networks designed for point cloud segmentation, our method achieves the best performance, with a correctness of 95.1%, a completeness of 93.7%, an F-measure of 94.4%, and an intersection over union (IoU) of 89.5% on two test areas. The experimental results confirm the effectiveness and efficiency of the proposed framework and methods.
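The kind of local geometric structure a GGM convolution encodes can be illustrated by per-point second-order moments over the k nearest neighbours. This numpy sketch covers only a hand-computed moment descriptor, not the learned convolution or the hierarchical network:

```python
import numpy as np

def neighborhood_moments(points, k=8):
    """For each 3D point, compute the second-order geometric moments
    (3x3 covariance, 6 unique entries) of its k nearest neighbours after
    centring on the point. Illustrative descriptor only; k is an assumed
    neighbourhood size and a learned GGM layer would aggregate such
    structure differently."""
    pts = np.asarray(points, dtype=float)                 # (N, 3)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    knn = np.argsort(d, axis=1)[:, 1:k + 1]               # skip the point itself
    feats = []
    for i in range(len(pts)):
        nb = pts[knn[i]] - pts[i]                         # centred neighbourhood
        cov = nb.T @ nb / k                               # 3x3 moment matrix
        feats.append(cov[np.triu_indices(3)])             # xx, xy, xz, yy, yz, zz
    return np.stack(feats)                                # (N, 6)
```

For points sampled along a line, only the xx moment is non-zero, so the descriptor distinguishes linear, planar and volumetric neighbourhoods, which is the geometric cue useful for separating roofs from vegetation.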