1
Large-Scale Reality Modeling of a University Campus Using Combined UAV and Terrestrial Photogrammetry for Historical Preservation and Practical Use. Drones 2021. DOI: 10.3390/drones5040136
Abstract
Unmanned aerial vehicles (UAVs) enable detailed historical preservation of large-scale infrastructure and contribute to cultural heritage preservation, improved maintenance, public relations, and development planning. Aerial and terrestrial photo data coupled with high-accuracy GPS create hyper-realistic mesh and texture models, high-resolution point clouds, orthophotos, and digital elevation models (DEMs) that preserve a snapshot of history. A case study is presented of the development of a hyper-realistic 3D model that spans the complex 1.7 km² area of the Brigham Young University campus in Provo, Utah, USA, and includes over 75 significant structures. The model leverages photos obtained during a rare mandatory campus closure prompted by the COVID-19 pandemic, and the study details a large-scale modeling workflow and best-practice data acquisition and processing techniques. The model combines 80,384 images with high-accuracy GPS survey points to create a 1.65 trillion-pixel textured structure-from-motion (SfM) model with an average ground sampling distance (GSD) near structures of 0.5 cm and a maximum of 4 cm. Thirty-one separate model segments, built from data gathered between April and August 2020, are combined into one cohesive final model with an average absolute error of 3.3 cm and a full-model absolute error of <1 cm (relative accuracies from 0.25 cm to 1.03 cm). Optimized and automated UAV techniques complement the data acquisition of the large-scale model, and opportunities are explored to archive as-is building and campus information to enable historical building preservation, facility maintenance, campus planning, public outreach, 3D-printed miniatures, and education through virtual reality (VR) and augmented reality (AR) tours.
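The GSD figures quoted in the abstract follow from the standard ground-sampling-distance relation for a nadir-pointing camera. A minimal sketch is below; the camera parameters are illustrative values typical of a small survey UAV, not the ones used in the study.

```python
# GSD: the ground footprint of one image pixel, for a nadir camera.
# Lower flight altitude or a longer focal length gives a finer GSD.

def gsd_cm_per_px(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Ground sampling distance in cm/px (nadir camera assumption)."""
    return (altitude_m * sensor_width_mm * 100.0) / (focal_length_mm * image_width_px)

# Illustrative parameters: 1-inch sensor (13.2 mm wide, 5472 px across),
# 8.8 mm lens, flown at 30 m above ground.
print(round(gsd_cm_per_px(30, 13.2, 8.8, 5472), 2))  # ≈ 0.82 cm/px
```

Sub-centimeter GSDs like the 0.5 cm reported near structures require correspondingly low flight altitudes or close terrestrial capture.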
2
Evaluating Feature Extraction Methods with Synthetic Noise Patterns for Image-Based Modelling of Texture-Less Objects. Remote Sensing 2020. DOI: 10.3390/rs12233886
Abstract
Image-based three-dimensional (3D) reconstruction is the process of extracting 3D information from an object or an entire scene using low-cost vision sensors. Structure-from-motion coupled with multi-view stereo (SfM-MVS) is a widely used pipeline that enables 3D reconstruction from a collection of unordered images. The SfM-MVS pipeline comprises several processing steps, including feature extraction and feature matching, which provide the basis for automatic 3D reconstruction. However, surfaces with poor visual texture (repetitive, monotone, etc.) challenge the feature extraction and matching stage and degrade the quality of the reconstruction. Projecting image patterns onto the surface with a video projector during image acquisition is a well-known technique that has proven successful for such surfaces. In this study, we evaluate the performance of different feature extraction methods on texture-less surfaces under synthetically generated noise patterns (images). Seven state-of-the-art feature extraction methods (HARRIS, Shi-Tomasi, MSER, SIFT, SURF, KAZE, and BRISK) are evaluated on problematic surfaces in two experimental phases. In Phase One, 3D reconstructions of real and virtual planar surfaces are used to evaluate the candidate image patterns with all feature extraction methods; the patterns with uniform histograms exhibit the most suitable morphological features. The best-performing pattern from Phase One is then used in Phase Two to recreate a polygonal model of a 3D-printed object with each of the feature extraction methods. The KAZE algorithm achieved the lowest standard deviation and mean distance values, at 0.0635 mm and −0.00921 mm, respectively.
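The uniform-histogram property identified as most suitable in Phase One can be reproduced with a simple synthetic generator. The sketch below is not the authors' code: it shuffles an exactly even allocation of gray levels into a random pattern, so every gray value occurs the same number of times; the dimensions and level count are illustrative.

```python
import random

def uniform_noise_pattern(width=64, height=64, levels=256, seed=0):
    """Random gray-level pattern with an exactly uniform histogram:
    each of the `levels` gray values occurs (width*height)/levels times."""
    assert (width * height) % levels == 0, "pixel count must divide evenly by levels"
    values = list(range(levels)) * (width * height // levels)
    random.Random(seed).shuffle(values)  # deterministic shuffle for repeatability
    # Reshape the flat list into image rows
    return [values[r * width:(r + 1) * width] for r in range(height)]

pattern = uniform_noise_pattern()
flat = [v for row in pattern for v in row]
# 64 x 64 = 4096 pixels over 256 levels -> each level appears exactly 16 times
print(all(flat.count(g) == 16 for g in range(256)))  # True
```

A pattern like this, projected onto a texture-less surface, supplies the dense, non-repetitive gradients that corner and blob detectors need to find matchable features.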