1. Fotsing C, Tchuitcheu WC, Besong LI, Cunningham DW, Bobda C. A Specialized Pipeline for Efficient and Reliable 3D Semantic Model Reconstruction of Buildings from Indoor Point Clouds. J Imaging 2024;10:261. PMID: 39452424; PMCID: PMC11508631; DOI: 10.3390/jimaging10100261
Abstract
Recent advances in laser scanning systems have enabled the acquisition of 3D point cloud representations of scenes, revolutionizing the fields of Architecture, Engineering, and Construction (AEC). This paper presents a novel pipeline for the automatic generation of 3D semantic models of multi-level buildings from indoor point clouds. The architectural components are extracted hierarchically. After segmenting the point clouds into potential building floors, a wall detection process is performed on each floor segment. Then, room, ground, and ceiling extraction are conducted using the 2D constellation of walls obtained by projecting the walls onto the ground plan. Openings in the walls are identified using a deep learning-based classifier that separates doors and windows from inconsistent holes. Based on the geometric and semantic information of the previously detected elements, the final model is generated in IFC format. The effectiveness and reliability of the proposed pipeline are demonstrated through extensive experiments and visual inspections. The results reveal high precision and recall in the extraction of architectural elements, ensuring the fidelity of the generated models. In addition, the pipeline's efficiency and accuracy offer valuable contributions to future advancements in point cloud processing.
Affiliation(s)
- Cedrique Fotsing
- Department of Graphic Systems, Institute for Computer Science, Brandenburg University of Technology Cottbus-Senftenberg, Platz der Deutschen Einheit 1, 03046 Cottbus, Germany
- Willy Carlos Tchuitcheu
- Department of Mathematics and Data Science, Faculty of Sciences and Bio-Engineering Sciences, Vrije Universiteit Brussel, 1050 Brussels, Belgium
- Lemopi Isidore Besong
- Institute of Metallurgy, Clausthal University of Technology, 38678 Clausthal-Zellerfeld, Germany
- Douglas William Cunningham
- Department of Graphic Systems, Institute for Computer Science, Brandenburg University of Technology Cottbus-Senftenberg, Platz der Deutschen Einheit 1, 03046 Cottbus, Germany
- Christophe Bobda
- Department of Electrical and Computer Engineering, University of Florida, 36A Larsen Hall, Gainesville, FL 116200, USA
2. Cao J, Zhao X, Schwertfeger S. Large-Scale Indoor Visual-Geometric Multimodal Dataset and Benchmark for Novel View Synthesis. Sensors (Basel) 2024;24:5798. PMID: 39275709; PMCID: PMC11397877; DOI: 10.3390/s24175798
Abstract
The accurate reconstruction of indoor environments is crucial for applications in augmented reality, virtual reality, and robotics. However, existing indoor datasets are often limited in scale, lack ground truth point clouds, and provide insufficient viewpoints, which impedes the development of robust novel view synthesis (NVS) techniques. To address these limitations, we introduce a new large-scale indoor dataset that features diverse and challenging scenes, including basements and long corridors. This dataset offers panoramic image sequences for comprehensive coverage, high-resolution point clouds, meshes, and textures as ground truth, and a novel benchmark specifically designed to evaluate NVS algorithms in complex indoor environments. Our dataset and benchmark aim to advance indoor scene reconstruction and facilitate the creation of more effective NVS solutions for real-world applications.
Affiliation(s)
- Junming Cao
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Xiting Zhao
- Key Laboratory of Intelligent Perception and Human-Machine Collaboration, ShanghaiTech University, Ministry of Education, Shanghai 201210, China
- Sören Schwertfeger
- Key Laboratory of Intelligent Perception and Human-Machine Collaboration, ShanghaiTech University, Ministry of Education, Shanghai 201210, China
3. Ren J, Dai Y, Liu B, Xie P, Wang G. Hierarchical Vision Navigation System for Quadruped Robots with Foothold Adaptation Learning. Sensors (Basel) 2023;23:5194. PMID: 37299923; DOI: 10.3390/s23115194
Abstract
Legged robots can traverse complex scenes via dynamic foothold adaptation. However, exploiting the robot's dynamics in cluttered environments to achieve efficient navigation remains challenging. We present a novel hierarchical vision navigation system that combines a foothold adaptation policy with locomotion control of quadruped robots. The high-level policy trains an end-to-end navigation policy, generating an optimal path to approach the target while avoiding obstacles. Meanwhile, the low-level policy trains the foothold adaptation network through auto-annotated supervised learning to adjust the locomotion controller and to provide more feasible foot placements. Extensive experiments in both simulation and the real world show that the system navigates efficiently in dynamic and cluttered environments without prior information.
Affiliation(s)
- Junli Ren
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Yingru Dai
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Bowen Liu
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Pengwei Xie
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Guijin Wang
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China
4. An Indoor Space Model of Building Considering Multi-Type Segmentation. ISPRS Int J Geo-Inf 2022. DOI: 10.3390/ijgi11070367
Abstract
Indoor space is a core component supporting indoor applications. Most existing indoor space models are expressed at three spatial scales: building, floor, and room. This granularity is not fine enough, as it lacks an expression of the functional subspaces inside each room. In this study, we first analyzed the spatio-temporal segmentation characteristics of indoor space and proposed a multi-level indoor space model framework that accounts for multiple types of segmentation. In addition, based on the IFC (Industry Foundation Classes) standard, we realized an extension for indoor functional subspaces. The experimental results showed that the proposed indoor space model can effectively support the expression of functional subspaces under multi-type segmentation based on indoor elements, particularly in terms of semantics, geometry, relationships, and attributes. This study enriches the granularity of existing indoor models and supports refined indoor navigation and evacuation applications.
5.
Abstract
An efficient 3D survey of a complex indoor environment remains a challenging task, especially when the accuracy requirements for the geometric data are high, for instance in building information modeling (BIM) or construction. The registration of non-overlapping terrestrial laser scanning (TLS) point clouds is laborious. We propose a novel indoor mapping strategy that uses a simultaneous localization and mapping (SLAM) laser scanner (LS) to support the building-scale registration of non-overlapping TLS point clouds in order to reconstruct comprehensive floor and 3D maps of buildings. This strategy improves efficiency, since it allows georeferenced TLS data to be collected only from those parts of the building that require such accuracy; the rest of the building is measured at SLAM LS accuracy. Based on the results of the case study, the introduced method can locate non-overlapping TLS point clouds with an accuracy of 18–51 mm, as measured by target sphere comparison.
6. An Efficient Approach to Automatic Construction of 3D Watertight Geometry of Buildings Using Point Clouds. Remote Sens 2021. DOI: 10.3390/rs13101947
Abstract
Recent years have witnessed an increasing use of 3D models in general, and 3D geometric models of the built environment specifically, for various applications, owing to advances in mapping techniques for accurate 3D information. Depending on the application scenario, various approaches exist to automate the construction of 3D building geometry. However, those studies have paid less attention to watertight geometries derived from point cloud data, which are useful for the management and simulation of building energy. To this end, an efficient reconstruction approach is introduced in this study, involving the following key steps. The point cloud data are first voxelised for a ray-casting analysis to obtain the 3D indoor space. By projecting this space onto a horizontal plane, an image representing the indoor area is obtained and used for room segmentation. The 2D boundary of each room candidate is extracted using new grammar rules and extruded using the room height to generate 3D models of individual room candidates. Room connection analyses are applied to the individual models to determine the locations of doors and the topological relations between adjacent room candidates, forming an integrated and watertight geometric model. The proposed approach was tested using point cloud data representing six building sites with distinct spatial configurations of rooms, corridors, and openings. The experimental results showed that accurate watertight building geometries were successfully created. The average differences between the point cloud data and the resulting geometric models ranged from 12 to 21 mm. The maximum computation time was less than 5 min for a point cloud of approximately 469 million points, more efficient than typical reconstruction methods in the literature.
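The voxelise-then-project step described above can be sketched in a few lines; this is a minimal illustration under the assumption of an axis-aligned grid, not the authors' implementation (`voxelise` and `floor_plan_image` are hypothetical names):

```python
import numpy as np

def voxelise(points, voxel=0.1):
    """Map 3D points to integer voxel indices on an axis-aligned grid."""
    idx = np.floor(points / voxel).astype(int)
    idx -= idx.min(axis=0)          # shift so indices start at zero
    return np.unique(idx, axis=0)   # occupied voxels, duplicates collapsed

def floor_plan_image(occupied):
    """Project occupied voxels onto the horizontal plane -> binary 2D image."""
    shape = occupied[:, :2].max(axis=0) + 1
    img = np.zeros(shape, dtype=bool)
    img[occupied[:, 0], occupied[:, 1]] = True
    return img

# toy cloud: the first two points fall in the same voxel and collapse to one
pts = np.array([[0.02, 0.03, 0.0], [0.04, 0.06, 0.0], [0.95, 0.15, 0.3]])
vox = voxelise(pts, voxel=0.1)
print(len(vox))                       # 2 occupied voxels
print(floor_plan_image(vox).sum())    # 2 occupied floor cells
```

The binary floor image produced this way is what the room segmentation would then operate on.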
7. Integration of Laser Scanner and Photogrammetry for Heritage BIM Enhancement. ISPRS Int J Geo-Inf 2021. DOI: 10.3390/ijgi10050316
Abstract
Digital 3D capture and reliable reproduction of architectural features is the first and most difficult step towards defining a heritage BIM (HBIM). Three-dimensional digital survey technologies, such as TLS and photogrammetry, enable experts to scan buildings with a new level of detail. Challenges in tracing parametric objects in a TLS point cloud include the reconstruction of occluded parts, measurement uncertainties related to surface reflectivity, and edge detection and location. Image-based techniques, besides being cost-effective, highly flexible, and efficient in producing a high-quality 3D textured model, also provide a better interpretation of linear surface characteristics. This article addresses an architectural survey workflow using photogrammetry and TLS to optimize a point cloud sufficient for a reliable HBIM. Fusion-based workflows were applied during the recording of two heritage sites: the Matbouli House Museum in Historic Jeddah, a UNESCO World Heritage Site, and Asfan Castle. In the Matbouli House Museum, a building rich in complex architectural features, multi-sensor recording was implemented at different resolutions and levels of detail. The TLS data were used to reconstruct the basic shape of the main structural elements, while the imagery's superior radiometric data and accessibility were used to enhance the TLS point clouds, improving the geometry, data interpretation, and parametric tracing of irregular objects in the facade. In the workflow for the rugged terrain of Asfan Castle, the TLS point cloud was supplemented with UAV data in the upper building zones where shadow data originated. Both datasets were registered using an ICP algorithm to scale the photogrammetric data and define their actual position in the construction system. The hybrid scans were imported and processed in the BIM environment. The building components were segmented and classified into regular and irregular surfaces in order to perform detailed building information modeling of the architectural elements. The proposed workflows demonstrated appropriate performance in terms of reliable and complete BIM mapping of the complex structures.
8. From a Point Cloud to a Simulation Model: Bayesian Segmentation and Entropy-Based Uncertainty Estimation for 3D Modelling. Entropy 2021;23:301. PMID: 33802360; PMCID: PMC8000650; DOI: 10.3390/e23030301
Abstract
The 3D modelling of indoor environments and the generation of process simulations play an important role in factory and assembly planning. In brownfield planning cases, existing data are often outdated and incomplete, especially for older plants, which were mostly planned in 2D. Thus, current environment models cannot be generated directly from existing data, and a holistic approach to building such a factory model in a highly automated fashion is mostly non-existent. Major steps in generating an environment model of a production plant include data collection, data pre-processing, object identification, and pose estimation. In this work, we elaborate on a methodical modelling approach that starts with the digitalization of large-scale indoor environments and ends with the generation of a static environment or simulation model. The object identification step is realized using a Bayesian neural network capable of point cloud segmentation. We examine the impact of the uncertainty information estimated by the Bayesian segmentation framework on the accuracy of the generated environment model. The steps of data collection and point cloud segmentation, as well as the resulting model accuracy, are evaluated on a real-world dataset collected at the assembly line of a large-scale automotive production plant. The Bayesian segmentation network clearly surpasses the performance of the frequentist baseline and allows us to considerably increase the accuracy of model placement in the simulation scene.
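The entropy-based uncertainty such a Bayesian segmentation network provides is commonly computed as the entropy of the mean class distribution over several stochastic forward passes; a toy NumPy sketch of that computation (array shapes and names are assumptions, not the paper's code):

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the mean softmax over T stochastic forward passes.
    mc_probs: (T, N, C) class probabilities for N points, C classes."""
    mean = mc_probs.mean(axis=0)                       # (N, C) predictive distribution
    return -(mean * np.log(mean + 1e-12)).sum(axis=1)  # (N,) per-point uncertainty

# two points: samples agree on the first, disagree on the second
samples = np.array([
    [[0.9, 0.1], [1.0, 0.0]],   # pass 1
    [[0.9, 0.1], [0.0, 1.0]],   # pass 2
])
h = predictive_entropy(samples)
print(h[0] < h[1])   # True: disagreement across passes -> higher entropy
```

Points with high entropy would then be down-weighted or flagged before model placement in the simulation scene.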
9. Autonomous Indoor Scanning System Collecting Spatial and Environmental Data for Efficient Indoor Monitoring and Control. Processes (Basel) 2020. DOI: 10.3390/pr8091133
Abstract
As activities related to entertainment, business, shopping, and conventions increasingly take place indoors, the demand for indoor spatial information and indoor environmental data is growing. Unlike in outdoor environments, obtaining spatial information indoors is difficult. Given the absence of GNSS (Global Navigation Satellite System) signals, various technologies for indoor positioning, mapping, and modeling have been proposed, and related business models for indoor space services, safety, convenience, facility management, and disaster response have been suggested. An autonomous scanning system for the collection of indoor spatial and environmental data is proposed in this paper. The proposed system can collect spatial dimensions suitable for extracting a two-dimensional indoor drawing and acquiring spatial imagery, as well as indoor environmental data on temperature, humidity, and particulate matter. For these operations, the system has two modes, autonomous and manual; the autonomous mode is its main function, and the manual mode is implemented additionally. The system can be applied in facilities without infrastructure for indoor data collection, such as for routine indoor data collection, and can also be used for immediate data collection in emergencies (e.g., accidents, disasters).
10. Vision Measurement of Tunnel Structures with Robust Modelling and Deep Learning Algorithms. Sensors (Basel) 2020;20:4945. PMID: 32882882; PMCID: PMC7506875; DOI: 10.3390/s20174945
Abstract
The health monitoring of tunnel structures is vital to the safe operation of railway transportation systems. With increasing tunnel mileage, regular inspection and health monitoring are urgently demanded for tunnel structures, especially regarding deformation and damage. However, traditional methods of tunnel inspection are time-consuming, expensive, and highly dependent on human subjectivity. In this paper, an automatic tunnel monitoring method is investigated based on image data collected by a moving vision measurement unit consisting of a camera array. Geometric modelling and crack inspection algorithms are proposed, in which a robust three-dimensional tunnel model is reconstructed using a B-spline method and crack identification is conducted by means of a Mask R-CNN network. The innovation of this investigation is the combination of robust modelling, which can be applied to deformation analysis, with crack detection, in which a deep learning method recognizes tunnel cracks intelligently from image sensors. In this study, experiments were conducted on a subway tunnel several kilometers long; a robust three-dimensional model was generated and the cracks were identified automatically from the image data. The advantage of this approach is that comprehensive information on geometric deformation and crack damage ensures the reliability and improves the accuracy of health monitoring.
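A full B-spline surface fit of a tunnel is beyond a short sketch, but its one-dimensional analogue, smoothing a noisy cross-section with SciPy's parametric spline routines, conveys the idea (synthetic circular data; not the paper's pipeline or parameters):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# noisy samples of a circular tunnel cross-section (synthetic stand-in
# for profile points extracted from the imagery)
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
x = np.cos(t) + rng.normal(0, 0.01, t.size)
y = np.sin(t) + rng.normal(0, 0.01, t.size)

# parametric smoothing B-spline; s trades data fidelity against smoothness
tck, u = splprep([x, y], s=0.05)
xs, ys = splev(np.linspace(0, 1, 400), tck)
r = np.hypot(xs, ys)
print(abs(np.mean(r) - 1.0) < 0.05)   # True: recovered radius stays close to 1
```

Deformation analysis would then compare such fitted profiles across epochs rather than raw noisy points.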
11.
Abstract
Interpreting 3D point cloud data of the interior and exterior of buildings is essential for automated navigation, interaction, and 3D reconstruction. However, direct exploitation of the geometry is challenging due to inherent obstacles such as noise, occlusions, sparsity, and variance in density. Alternatively, 3D mesh geometries derived from point clouds benefit from preprocessing routines that can surmount these obstacles and potentially result in more refined geometry and topology descriptions. In this article, we provide a rigorous comparison of both geometries for scene interpretation. We present an empirical study on the suitability of both geometries for feature extraction and classification. More specifically, we study the impact on the retrieval of structural building components in a realistic environment, a major endeavor in Building Information Modeling (BIM) reconstruction. The study uses a segment-based structuring of both geometries and shows that both achieve recognition rates above a 75% F1 score when suitable features are used.
12. Procedural Reconstruction of 3D Indoor Models from Lidar Data Using Reversible Jump Markov Chain Monte Carlo. Remote Sens 2020. DOI: 10.3390/rs12050838
Abstract
Automated reconstruction of Building Information Models (BIMs) from point clouds has been an intensive and challenging research topic for decades. Traditionally, 3D models of indoor environments are reconstructed purely by data-driven methods, which are susceptible to erroneous and incomplete data. Procedural-based methods such as the shape grammar are more robust to uncertainty and incompleteness of the data as they exploit the regularity and repetition of structural elements and architectural design principles in the reconstruction. Nevertheless, these methods are often limited to simple architectural styles: the so-called Manhattan design. In this paper, we propose a new method based on a combination of a shape grammar and a data-driven process for procedural modelling of indoor environments from a point cloud. The core idea behind the integration is to apply a stochastic process based on reversible jump Markov Chain Monte Carlo (rjMCMC) to guide the automated application of grammar rules in the derivation of a 3D indoor model. Experiments on synthetic and real data sets show the applicability of the method to efficiently generate 3D indoor models of both Manhattan and non-Manhattan environments with high accuracy, completeness, and correctness.
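At the core of the stochastic guidance of grammar-rule application is the Metropolis-Hastings acceptance test; a minimal sketch (the extra log-ratio argument stands in for the proposal and Jacobian terms that rjMCMC adds for dimension-changing moves; names are illustrative, not the authors' code):

```python
import math
import random

def mh_accept(log_post_current, log_post_proposed, log_jump_ratio=0.0):
    """Accept or reject a proposed model state. In rjMCMC, log_jump_ratio
    carries the proposal and Jacobian corrections for trans-dimensional
    moves (e.g. adding or removing a wall in the grammar derivation)."""
    log_alpha = min(0.0, log_post_proposed - log_post_current + log_jump_ratio)
    return random.random() < math.exp(log_alpha)

# a proposal that improves the posterior is always accepted;
# worse proposals are accepted with probability exp(log_alpha)
print(mh_accept(-10.0, -5.0))   # True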
13. An Offline Coarse-To-Fine Precision Optimization Algorithm for 3D Laser SLAM Point Cloud. Remote Sens 2019. DOI: 10.3390/rs11202352
Abstract
3D laser simultaneous localization and mapping (SLAM) is one of the most efficient technologies for capturing spatial information. However, the low precision of 3D laser SLAM point clouds limits their application in many fields. To improve this precision, we present an offline coarse-to-fine precision optimization algorithm. The point clouds are first segmented and registered at the local level. Then, a pose graph of point cloud segments is constructed using feature similarity and global registration. Finally, all segments are aligned and merged into the final optimized result. In addition, a cycle-based error-edge elimination method is used to guarantee the consistency of the pose graph. The experimental results demonstrate that our algorithm performs well both on our test datasets and on the public Cartographer dataset. Compared with reference data obtained by terrestrial laser scanning (TLS), the average point-to-point root mean square errors (RMSE) of point clouds generated by Google's Cartographer and the LOAM laser SLAM algorithm are reduced by 47.3% and 53.4%, respectively, after optimization on our datasets, and their average plane-to-plane distances are reduced by 50.9% and 52.1%, respectively.
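The cycle-based error-edge check can be illustrated in 2D: composing the relative transforms around a loop in the pose graph should return the identity, so a large residual flags a bad edge. A sketch under that assumption (not the authors' implementation):

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D rigid transform (translation x, y; rotation theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

def cycle_error(edges):
    """Compose relative poses around a loop; a consistent cycle composes to
    (near-)identity, so a large residual indicates an erroneous edge."""
    T = np.eye(3)
    for e in edges:
        T = T @ e
    return np.linalg.norm(T - np.eye(3))

good = [se2(1, 0, 0), se2(0, 1, 0), se2(-1, -1, 0)]   # closes the loop
bad  = [se2(1, 0, 0), se2(0, 1, 0), se2(-2, -1, 0)]   # one corrupted edge
print(cycle_error(good) < 1e-9)   # True
print(cycle_error(bad) > 0.5)     # True
```

Edges that break many cycles in this way would be removed before the final graph optimization.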
14. Lipuš B, Žalik B. 3D Convex Hull-Based Registration Method for Point Cloud Watermark Extraction. Sensors (Basel) 2019;19:3268. PMID: 31349567; PMCID: PMC6695679; DOI: 10.3390/s19153268
Abstract
Most 3D point cloud watermarking techniques apply Principal Component Analysis (PCA) to protect the watermark against affine transformation attacks. Unfortunately, they fail in the case of cropping and random point removal attacks. In this work, an alternative approach is proposed that solves these issues efficiently. A point cloud registration technique is developed based on a 3D convex hull. The scale and the initial rigid affine transformation between the watermarked and the original point cloud are estimated in this way to obtain a coarse registration. An iterative closest point algorithm is then performed to align the attacked watermarked point cloud to the original one completely, after which the watermark can be extracted easily. Extensive experiments confirmed that the proposed approach resists affine transformation, cropping, random point removal, and various combinations of these attacks. The most challenging is a noise attack, which can be handled only to some extent; however, this issue is common to the other state-of-the-art approaches.
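The iterative-closest-point refinement applied after the coarse convex-hull alignment follows the standard recipe: match nearest neighbours, then solve the rigid transform in closed form (Kabsch/SVD). A single-iteration sketch on synthetic data (assumed grid points so correspondences are unambiguous; not the paper's implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One ICP iteration: nearest-neighbour correspondences, then the
    optimal rigid transform via SVD (Kabsch), applied to src."""
    nn = cKDTree(dst).query(src)[1]      # index of each point's nearest neighbour
    matched = dst[nn]
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t

# grid target and a slightly translated copy: correspondences are exact,
# so one step recovers the alignment
g = np.arange(4.0)
dst = np.array([[x, y, z] for x in g for y in g for z in g])
src = dst + np.array([0.05, 0.0, 0.0])
aligned = icp_step(src, dst)
print(np.abs(aligned - dst).max() < 1e-6)   # True
```

In practice the step is iterated until the mean residual stops decreasing; the coarse hull-based alignment is what makes the nearest-neighbour matching reliable at the start.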
Affiliation(s)
- Bogdan Lipuš
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška Cesta 46, SI-2000 Maribor, Slovenia
- Borut Žalik
- Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška Cesta 46, SI-2000 Maribor, Slovenia
15. Obstacle-Aware Indoor Pathfinding Using Point Clouds. ISPRS Int J Geo-Inf 2019. DOI: 10.3390/ijgi8050233
Abstract
With the rise of urban population, updated spatial information of indoor environments is needed in a growing number of applications. Navigational assistance for disabled or aged people, guidance for robots, augmented reality for gaming, and tourism or training emergency assistance units are just a few examples of the emerging applications requiring real three-dimensional (3D) spatial data of indoor scenes. This work proposes the use of point clouds for obstacle-aware indoor pathfinding. Point clouds are firstly used for reconstructing semantically rich 3D models of building structural elements in order to extract initial navigational information. Potential obstacles to navigation are classified in the point cloud and directly used to correct the path according to the mobility skills of different users. The methodology is tested in several real case studies for wheelchair and ordinary users. Experiments show that, after several iterations, paths are readapted to avoid obstacles.
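Obstacle-aware path correction of this kind ultimately reduces to search over traversable cells; a minimal A* on a 2D occupancy grid shows the mechanics (grid and function names are illustrative, not the authors' method):

```python
from heapq import heappush, heappop

def astar(grid, start, goal):
    """A* over a 2D occupancy grid (truthy cell = obstacle), 4-connected
    moves, Manhattan heuristic. Returns the cell path or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set, came, cost = [(h(start), start)], {}, {start: 0}
    while open_set:
        _, cur = heappop(open_set)
        if cur == goal:                      # reconstruct path backwards
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and not grid[nxt[0]][nxt[1]]
                    and cost[cur] + 1 < cost.get(nxt, 1e9)):
                cost[nxt] = cost[cur] + 1
                came[nxt] = cur
                heappush(open_set, (cost[nxt] + h(nxt), nxt))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall forces a detour
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path) - 1)   # 6 moves instead of the blocked direct 2
```

Marking a classified obstacle simply flips grid cells to occupied, after which re-running the search "readapts" the path, mirroring the iterative correction described above.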
16. Semantic Interpretation of Mobile Laser Scanner Point Clouds in Indoor Scenes Using Trajectories. Remote Sens 2018. DOI: 10.3390/rs10111754
Abstract
The data acquisition with Indoor Mobile Laser Scanners (IMLS) is quick, low-cost, and accurate for indoor 3D modeling. Besides a point cloud, an IMLS also provides the trajectory of the mobile scanner. We analyze this trajectory jointly with the point cloud to support the labeling of noisy, highly reflective, and cluttered points in indoor scenes. An adjacency-graph-based method is presented for detecting and labeling permanent structures, such as walls, floors, ceilings, and stairs. Through occlusion reasoning and the use of the trajectory as a set of scanner positions, gaps are discriminated from real openings in the data. Furthermore, a voxel-based method is applied for labeling navigable space and separating it from obstacles. The results show that 80% of the doors and 85% of the rooms are correctly detected, and most of the walls and openings are reconstructed. The experimental outcomes indicate that the trajectory of MLS systems plays an essential role in the understanding of indoor scenes.
17. Indoor Building Reconstruction from Occluded Point Clouds Using Graph-Cut and Ray-Tracing. Appl Sci (Basel) 2018. DOI: 10.3390/app8091529
Abstract
Despite the increasing demand of updated and detailed indoor models, indoor reconstruction from point clouds is still in an early stage in comparison with the reconstruction of outdoor scenes. Specific challenges are related to the complex building layouts and the high presence of elements such as pieces of furniture causing clutter and occlusions. This work proposes an automatic method for modelling Manhattan-World indoors acquired with a mobile laser scanner in the presence of highly occluded walls. The core of the methodology is the transformation of indoor reconstruction into a labelling problem of structural cells in a 2D floor plan. Assuming the prevalence of orthogonal intersections between walls, indoor completion is formulated as an energy minimization problem using graph cuts. Doors and windows are detected from occlusions by implementing a ray-tracing algorithm. The methodology is tested in a real case study. Except for one window partially covered by a curtain, all building elements were successfully reconstructed.
18. Space Subdivision in Indoor Mobile Laser Scanning Point Clouds Based on Scanline Analysis. Sensors (Basel) 2018;18:1838. PMID: 29874873; PMCID: PMC6022126; DOI: 10.3390/s18061838
Abstract
Indoor space subdivision is an important aspect of scene analysis that provides essential information for many applications, such as indoor navigation and evacuation route planning. Until now, most proposed scene understanding algorithms have been based on whole point clouds, which leads to complicated operations, high computational loads, and low processing speed. This paper presents novel methods to efficiently extract the location of openings (e.g., doors and windows) and to subdivide space by analyzing scanlines. An opening detection method is demonstrated that analyzes the local geometric regularity in scanlines to refine the extracted openings. Moreover, a space subdivision method based on the extracted openings and the scanning system trajectory is described. Finally, the opening detection and space subdivision results are saved as point cloud labels for further investigation. The method has been tested on a real dataset collected by a ZEB-REVO scanner. The experimental results validate the completeness and correctness of the proposed method for different indoor environments and scanning paths.
19. Sensors for Indoor Mapping and Navigation. Sensors (Basel) 2016;16:655. PMID: 27171079; PMCID: PMC4883346; DOI: 10.3390/s16050655
|
20
|
Lee TJ, Yi DH, Cho DID. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots. SENSORS 2016; 16:s16030311. [PMID: 26938540 PMCID: PMC4813886 DOI: 10.3390/s16030311] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Received: 12/29/2015] [Revised: 02/03/2016] [Accepted: 02/17/2016] [Indexed: 11/23/2022]
Abstract
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each image pixel in the bottom region of interest is labeled as belonging either to an obstacle or to the floor. While conventional methods depend on point tracking for the geometric cues used in obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This is much more advantageous when the camera is mounted close to the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested on 70 datasets, 20 of which contain non-obstacle images with considerable changes in floor appearance. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those of a conventional method are 57.5% and 9.9 cm. For non-obstacle datasets, the proposed method gives a 0.0% false positive rate, while the conventional method gives 17.6%.
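A minimal sketch of the flat-floor geometry behind inverse perspective mapping: back-projecting a pixel onto the floor plane given the camera intrinsics and mounting height. All parameter values here are hypothetical, and the paper's full pipeline additionally involves MRF-based segmentation; this only shows how floor pixels map to metric distances.

```python
def ipm_ground_point(u, v, fx, fy, cx, cy, cam_height):
    """Back-project pixel (u, v) onto the floor plane, assuming a pinhole
    camera at height `cam_height` above a flat floor with its optical axis
    parallel to the floor (image y grows downward). Returns floor
    coordinates (X, Z) in metres, or None if the ray never hits the floor
    (pixel at or above the horizon)."""
    x = (u - cx) / fx          # normalized ray direction, lateral component
    y = (v - cy) / fy          # normalized ray direction, downward component
    if y <= 0:
        return None            # ray points at or above the horizon
    t = cam_height / y         # scale at which the ray reaches the floor
    return (x * t, t)          # lateral offset X, forward distance Z
```

For a pixel that truly lies on the floor, Z is its metric forward distance, which is how the shortest robot-to-obstacle distance can be read off at the segmented obstacle/floor boundary; obstacle pixels violate the flat-floor assumption, and that inconsistency is what the segmentation exploits.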
Affiliation(s)
- Tae-Jae Lee
- Department of Electrical and Computer Engineering, Automation and Systems Research Institute (ASRI), Seoul National University, Seoul 151-742, Korea.
- Dong-Hoon Yi
- Department of Electrical and Computer Engineering, Automation and Systems Research Institute (ASRI), Seoul National University, Seoul 151-742, Korea.
- Dong-Il Dan Cho
- Department of Electrical and Computer Engineering, Automation and Systems Research Institute (ASRI), Seoul National University, Seoul 151-742, Korea.
- Inter-University Semiconductor Research Center (ISRC), Seoul National University, Seoul 151-742, Korea.
|
21
|
Automatic Detection and Segmentation of Columns in As-Built Buildings from Point Clouds. REMOTE SENSING 2015. [DOI: 10.3390/rs71115651] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Indexed: 11/16/2022]
|
22
|
Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps. SENSORS 2015; 15:20894-924. [PMID: 26308003 PMCID: PMC4570453 DOI: 10.3390/s150820894] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Received: 06/23/2015] [Revised: 08/07/2015] [Accepted: 08/17/2015] [Indexed: 11/21/2022]
Abstract
Depth estimation is a classical problem in computer vision that typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective, whereas stereo matching obtains more accurate results in richly textured regions and at object boundaries, where the depth sensor often fails. We fuse stereo matching and the depth sensor, using their complementary characteristics to improve depth estimation. Texture information is incorporated as a constraint to restrict each pixel's range of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities at different pixels and segments; treating the depth-sensor measurements as prior knowledge makes the model more robust to luminance variation. Segmentation is treated as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the previous state-of-the-art average error rate of 3.27%, our method achieves an average error rate of 2.61% on the Middlebury datasets, i.e., almost 20% better than other “fused” algorithms in terms of accuracy.
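The disparity-range restriction described above can be sketched as follows: for each pixel, the stereo matching cost is minimized only within a window around the depth-sensor prior, falling back to the full disparity range where the sensor gives no reading. The cost values and the `radius` parameter here are illustrative, not the paper's actual model (which adds the pseudo-two-layer MRF on top).

```python
def fused_disparity(costs, prior, radius):
    """Per-pixel winner-take-all disparity selection with a sensor prior.

    costs[p][d]  -- stereo matching cost of assigning disparity d to pixel p
    prior[p]     -- depth-sensor disparity for pixel p, or -1 if no reading
    radius       -- half-width of the search window around the prior

    Where a prior exists, only disparities within `radius` of it are
    considered, pruning ambiguous matches in textureless regions; otherwise
    the full disparity range is searched."""
    out = []
    for p, c in enumerate(costs):
        if prior[p] >= 0:
            lo = max(0, prior[p] - radius)
            hi = min(len(c), prior[p] + radius + 1)
        else:
            lo, hi = 0, len(c)          # no sensor reading: full search
        out.append(min(range(lo, hi), key=lambda d: c[d]))
    return out
```

Note how the restricted pixel ignores a spuriously low matching cost outside its window: that is exactly the noise-suppression effect the fusion is after.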
|