1
Li CZ, Li S, Ya Y, Tam VW. Digital inspection techniques of modular integrated construction. Heliyon 2023; 9:e21399. [PMID: 37954356] [PMCID: PMC10632720] [DOI: 10.1016/j.heliyon.2023.e21399]
Abstract
As a new construction form, modular integrated construction (MiC) can effectively improve construction quality and productivity, especially for high-density and high-rise buildings. However, current MiC quality inspection relies on manual inspection, which is inefficient and unreliable. Systematic research on digital inspection techniques (DITs) is fragmented and unable to fully realize the potential of the MiC industry. This study aims to explore the current state of DIT applications in MiC and to summarize the knowledge in the field through an analysis of 248 relevant publications. Accordingly, this study combines bibliometric analysis and a systems engineering evaluation approach based on a 3D structure (time, knowledge, and logic) to provide an overview of the current state of DIT development. The overview covers the application of DITs from a whole-life-cycle perspective, the DIT knowledge structure, specific DIT applications, and current challenges and future prospects.
Affiliation(s)
- Clyde Zhengdao Li
- Sino-Australia Joint Research Center in BIM and Smart Construction, College of Civil and Transportation Engineering, Shenzhen University, Shenzhen, China
- Shuo Li
- Sino-Australia Joint Research Center in BIM and Smart Construction, College of Civil and Transportation Engineering, Shenzhen University, Shenzhen, China
- Yingyi Ya
- Sino-Australia Joint Research Center in BIM and Smart Construction, College of Civil and Transportation Engineering, Shenzhen University, Shenzhen, China
- Vivian W.Y. Tam
- Western Sydney University, School of Engineering, Design and Built Environment, Penrith, NSW, 2750, Australia
2
Sapkota S, Paudyal DR. Growth Monitoring and Yield Estimation of Maize Plant Using Unmanned Aerial Vehicle (UAV) in a Hilly Region. Sensors (Basel) 2023; 23:5432. [PMID: 37420599] [DOI: 10.3390/s23125432]
Abstract
More than 66% of the Nepalese population actively depends on agriculture for their day-to-day living. Maize is the largest cereal crop in Nepal, both in production and in cultivated area, in the hilly and mountainous regions of the country. The traditional ground-based method for growth monitoring and yield estimation of the maize plant is time-consuming, especially when measuring large areas, and may not provide a comprehensive view of the entire crop. Yield estimation can instead be performed using remote sensing technology such as Unmanned Aerial Vehicles (UAVs), a rapid method for examining large areas that provides detailed data on plant growth and yield. This research paper explores the capability of UAVs for plant growth monitoring and yield estimation in mountainous terrain. A multi-rotor UAV with a multi-spectral camera was used to obtain canopy spectral information of maize at five different stages of the maize plant life cycle. The images taken from the UAV were processed to obtain an orthomosaic and a Digital Surface Model (DSM). Crop yield was estimated using parameters such as plant height, vegetation indices, and biomass. A relationship was established for each sub-plot and then used to calculate the yield of each individual plot. The estimated yield obtained from the model was validated against the ground-measured yield through statistical tests. A comparison of the Normalized Difference Vegetation Index (NDVI) and the Green-Red Vegetation Index (GRVI) indicators from a Sentinel image was also performed. Beyond their difference in spatial resolution, GRVI was found to be the most important parameter and NDVI the least important for yield determination in the hilly region.
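The two vegetation indices compared in this abstract are simple normalized band ratios. A minimal sketch (the reflectance values below are illustrative assumptions, not data from the paper):

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red)

def grvi(green, red):
    # Green-Red Vegetation Index: (Green - Red) / (Green + Red)
    return (green - red) / (green + red)

# Illustrative per-pixel reflectances for a healthy maize canopy
print(round(ndvi(nir=0.45, red=0.08), 3))   # high NDVI for dense vegetation
print(round(grvi(green=0.12, red=0.08), 3))
```

Both indices fall in [-1, 1]; healthy vegetation pushes NDVI toward 1 because near-infrared reflectance far exceeds red reflectance.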
Affiliation(s)
- Sujan Sapkota
- Faculty of Science, Health and Technology, Nepal Open University, Manbhawan, Lalitpur, Nepal
- Dev Raj Paudyal
- Faculty of Science, Health and Technology, Nepal Open University, Manbhawan, Lalitpur, Nepal
- School of Surveying and Built Environment, University of Southern Queensland, Springfield, QLD 4300, Australia
3
Kownacki C, Ambroziak L, Ciężkowski M, Wolniakowski A, Romaniuk S, Bożko A, Ołdziej D. Precision Landing Tests of Tethered Multicopter and VTOL UAV on Moving Landing Pad on a Lake. Sensors (Basel) 2023; 23:2016. [PMID: 36850613] [PMCID: PMC9964198] [DOI: 10.3390/s23042016]
Abstract
Autonomous take-off and landing on a moving landing pad are extraordinarily complex and challenging functionalities of modern UAVs, especially if they must be performed in windy environments. The article presents research focused on achieving such functionalities for two kinds of UAVs, i.e., a tethered multicopter and a VTOL aircraft. Both vehicles are supported by a landing pad navigation station, which communicates with their ROS-based onboard computer. The computer integrates navigational data from the UAV and the landing pad navigation station through an extended Kalman filter, which is a typical approach in such applications. The novelty of the presented system is the extension of the navigational data with data from an ultra-wideband (UWB) system, which makes it possible to achieve a landing accuracy of about 1 m. In the research, landing tests were carried out in real conditions on a lake for both UAVs. For the tests, a special mobile landing pad was built on a barge. The results show that the expected accuracy of 1 m is indeed achieved, and both UAVs are ready to be tested in real conditions on a ferry.
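The fusion step described above can be illustrated with a deliberately simplified scalar Kalman measurement update; the real system runs a full extended Kalman filter over UAV and pad states, and all numeric values here (variances, measurements) are illustrative assumptions:

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update: fuse the state estimate x
    (variance P) with a measurement z (variance R)."""
    K = P / (P + R)            # Kalman gain: trust measurement vs prior
    x_new = x + K * (z - x)    # corrected estimate
    P_new = (1 - K) * P        # reduced uncertainty after the update
    return x_new, P_new

# Fuse a noisy GNSS-based relative position with a much tighter UWB fix
x, P = 10.0, 4.0                            # prior: 10 m from pad, var 4 m^2
x, P = kalman_update(x, P, z=9.2, R=1.0)    # GNSS-like measurement
x, P = kalman_update(x, P, z=9.0, R=0.04)   # UWB measurement dominates
print(round(x, 3), round(P, 3))
```

The low-variance UWB measurement pulls the estimate strongly toward its value, which is the mechanism behind the improved landing accuracy the abstract reports.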
Affiliation(s)
- Cezary Kownacki
- Robotics and Mechatronics Department, Faculty of Mechanical Engineering, Bialystok University of Technology, Wiejska St. 45C, 15-351 Bialystok, Poland
- Leszek Ambroziak
- Robotics and Mechatronics Department, Faculty of Mechanical Engineering, Bialystok University of Technology, Wiejska St. 45C, 15-351 Bialystok, Poland
- Maciej Ciężkowski
- Automatic Control and Robotics Department, Faculty of Electrical Engineering, Bialystok University of Technology, Wiejska St. 45C, 15-351 Bialystok, Poland
- Adam Wolniakowski
- Automatic Control and Robotics Department, Faculty of Electrical Engineering, Bialystok University of Technology, Wiejska St. 45C, 15-351 Bialystok, Poland
- Sławomir Romaniuk
- Automatic Control and Robotics Department, Faculty of Electrical Engineering, Bialystok University of Technology, Wiejska St. 45C, 15-351 Bialystok, Poland
- Arkadiusz Bożko
- Robotics and Mechatronics Department, Faculty of Mechanical Engineering, Bialystok University of Technology, Wiejska St. 45C, 15-351 Bialystok, Poland
- Daniel Ołdziej
- Robotics and Mechatronics Department, Faculty of Mechanical Engineering, Bialystok University of Technology, Wiejska St. 45C, 15-351 Bialystok, Poland
4
Fryskowska-Skibniewska A, Delis P, Kedzierski M, Matusiak D. The Conception of Test Fields for Fast Geometric Calibration of the FLIR VUE PRO Thermal Camera for Low-Cost UAV Applications. Sensors (Basel) 2022; 22:2468. [PMID: 35408084] [PMCID: PMC9003006] [DOI: 10.3390/s22072468]
Abstract
The dynamic evolution of photogrammetry has led to the development of numerous methods of geometric camera calibration, mostly based on flat targets (test fields) with features that can be distinguished in the images. Geometric calibration of thermal cameras for UAVs is an active research field that attracts numerous researchers. As a result of their low price and general availability, non-metric cameras are increasingly used for measurement purposes. Apart from resolution, non-metric sensors do not have any other known parameters. The commonly applied process is self-calibration, which enables determining the approximate elements of the camera's interior orientation. The purpose of this work was to analyze the possibilities of geometric calibration of thermal UAV cameras using the proposed test field patterns and materials. The experiment was conducted on a FLIR VUE PRO thermal camera dedicated to UAV platforms. The authors propose a selection of image processing methods (histogram equalization, thresholding, brightness correction) to improve the quality of the thermograms. These processing methods achieved effectiveness of 94%, 81%, and 80%, respectively (over 80% on average), compared with 42% for unprocessed images and 38% for processing with the filtering method; only high-pass filtering did not improve the results. The final results of the proposed method and test field structure were verified with selected geometric calibration algorithms. The results of the fast and low-cost calibration are satisfactory, especially in terms of the automation of the process. After geometric correction, the standard deviations obtained with the specific thermogram sharpness-enhancement methods are two to three times better than the results without any correction.
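Of the preprocessing steps listed (histogram equalization, thresholding, brightness correction), histogram equalization is the classic way to recover detail from a low-contrast thermogram. A minimal grayscale sketch, where the toy "thermogram" values are illustrative assumptions:

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of grayscale values via the CDF."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Map each value so the output histogram is approximately flat
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# A low-contrast 'thermogram' crammed into the narrow range [100, 104]
thermo = [100, 100, 101, 102, 102, 103, 104, 104]
print(equalize(thermo))  # values now spread across the full 0-255 range
```

Stretching the intensity range in this way makes calibration-target features easier for a detector to distinguish, which is the goal of the preprocessing stage described above.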
5
Seven Different Lighting Conditions in Photogrammetric Studies of a 3D Urban Mock-Up. Energies 2021. [DOI: 10.3390/en14238002]
Abstract
One of the most important elements of photogrammetric studies is the appropriate lighting of the object or area under investigation. Nevertheless, the concept of "adequate lighting" is relative. Therefore, we have attempted, based on an experimental proof of concept (technology readiness level TRL3), to verify the impact of various types of lighting emitted by LED light sources used for scene illumination, and their direct influence on the quality of the photogrammetric study of a 3D urban mock-up. An important issue in this study was the measurement and evaluation of the artificial light sources used, based on illuminance (E), correlated colour temperature (CCT), colour rendering index (CRI) and spectral power distribution (SPD), and the evaluation of the obtained point clouds (seven photogrammetric products of the same object, developed for seven different lighting conditions). The overall quality values of the photogrammetric studies were compared. Additionally, we determined seventeen features concerning the group of tie-points in the vicinity of each F-point and the type of study. The acquired features relate to the number of tie-points in the vicinity, their luminosities and their spectral characteristics for each of the colours (red, green, blue). The dependencies between the identified features and the obtained XYZ total error were verified, and the possibility of detecting F-points depending on their luminosity was also analysed. The obtained results can be important in developing a photogrammetric method for urban lighting monitoring or in selecting additional lighting for objects that are the subject of a short-range photogrammetric study.
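The per-tie-point "luminosity" derived from the red, green and blue channels can be approximated with the standard Rec. 601 luma weighting; the paper does not state which weighting it used, so both the formula choice and the sample values below are assumptions for illustration:

```python
def luma(r, g, b):
    # Rec. 601 luma: perceptual weighting of the RGB channels
    return 0.299 * r + 0.587 * g + 0.114 * b

# Two tie-points under different LED illumination (illustrative 8-bit values)
print(round(luma(200, 180, 150), 1))   # warm, well-lit point
print(round(luma(40, 45, 60), 1))      # dim, bluish point
```

A single scalar of this kind is what makes it possible to correlate point detectability with illumination, as the abstract's F-point analysis does.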
6
Concept of an Innovative Autonomous Unmanned System for Bathymetric Monitoring of Shallow Waterbodies (INNOBAT System). Energies 2021. [DOI: 10.3390/en14175370]
Abstract
Bathymetry is a subset of hydrography concerned with measuring the depth of waterbodies and waterways. Measurements are taken, inter alia, to detect natural or other navigational obstacles that endanger the safety of navigation; to examine the navigability conditions of anchorages, waterways and other commercial waterbodies; and to determine the parameters of safe depth in waterbodies in the vicinity of ports. It is therefore necessary to produce precise and reliable seabed maps, so that hazards that may occur, particularly in shallow waterbodies with highly dynamic hydromorphological changes, can be prevented. This publication develops the concept of an innovative autonomous unmanned system for bathymetric monitoring of shallow waterbodies. The bathymetric and topographic system will use autonomous unmanned aerial and surface vehicles to study the seabed relief in the littoral zone (even at depths of less than 1 m), in line with the requirements set out for the most stringent International Hydrographic Organization (IHO) order, exclusive. Unlike existing solutions, the INNOBAT system will enable coverage of the entire surveyed area with measurements, allowing a comprehensive assessment of the hydrographic and navigation situation in the waterbody.
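The IHO orders referenced above bound the total vertical uncertainty (TVU) of a survey by the S-44 formula TVU = sqrt(a^2 + (b*d)^2). The coefficients below (a = 0.15 m, b = 0.0075 for the Exclusive Order) are the commonly quoted S-44 values, stated here as an assumption rather than taken from the paper:

```python
import math

def tvu(depth, a=0.15, b=0.0075):
    """Maximum allowable total vertical uncertainty (IHO S-44 form)."""
    return math.sqrt(a**2 + (b * depth)**2)

# In the littoral zone (depths under 1 m) the fixed term a dominates,
# so the allowable uncertainty is essentially constant at ~0.15 m.
print(round(tvu(1.0), 4))
print(round(tvu(10.0), 4))
```

This is why surveying to the Exclusive Order at sub-metre depths is demanding: the depth-dependent term gives almost no extra allowance.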
7
Banfi F, Mandelli A. Computer Vision Meets Image Processing and UAS Photogrammetric Data Integration: From HBIM to the eXtended Reality Project of Arco della Pace in Milan and Its Decorative Complexity. J Imaging 2021; 7:118. [PMID: 39080906] [PMCID: PMC8321386] [DOI: 10.3390/jimaging7070118]
Abstract
This study aims to enrich the knowledge of the monument Arco della Pace in Milan by surveying and modelling the sculpture that crowns the upper part of the building. The statues and the decorative apparatus were recorded with the photogrammetric technique using both a terrestrial camera and an Unmanned Aerial Vehicle (UAV). Research results and performance are oriented towards improving the integration of computer vision and image processing with Unmanned Aerial System (UAS) photogrammetric data, to enhance interactivity and information sharing between the user and digital heritage models. The vast number of images captured from terrestrial and aerial photogrammetry will also make it possible to use the Historic Building Information Modelling (HBIM) model in an eXtended Reality (XR) project developed ad hoc, allowing different types of users (professionals, non-expert users, virtual tourists, and students) and devices (mobile phones, tablets, PCs, VR headsets) to access details and information that are not visible from the ground.
Affiliation(s)
- Fabrizio Banfi
- Architecture, Built Environment and Construction Engineering (ABC) Department, Politecnico di Milano, 20133 Milano, Italy
8
Polymodal Method of Improving the Quality of Photogrammetric Images and Models. Energies 2021. [DOI: 10.3390/en14123457]
Abstract
Photogrammetry using unmanned aerial vehicles has become very popular and is already in common use. The most frequent photogrammetric products are an orthoimage, a digital terrain model and a 3D object model. Measurement flights may be executed in unsuitable lighting conditions, and the flight itself may be fast and not very stable. As a result, noise and blur appear on the images, and the images themselves may have too low a resolution to satisfy the quality requirements for a photogrammetric product. In such cases, the obtained images are useless or significantly reduce the quality of the end-product of low-level photogrammetry. A new polymodal method of improving measurement image quality has been proposed to avoid such issues. The method discussed in this article removes degrading factors from the images and, as a consequence, improves the geometric and interpretative quality of the photogrammetric product. The author analyzed 17 different image degradation cases, developed 34 models based on degraded and recovered images, and conducted an objective analysis of the quality of the recovered images and models. As evidenced, the result was a significant improvement in the interpretative quality of the images themselves and better model geometry.
9
Burdziakowski P, Bobkowska K. UAV Photogrammetry under Poor Lighting Conditions-Accuracy Considerations. Sensors (Basel) 2021; 21:3531. [PMID: 34069500] [PMCID: PMC8161153] [DOI: 10.3390/s21103531]
Abstract
The use of low-level photogrammetry is very broad, and studies in this field are conducted in many aspects. Most research and applications are based on image data acquired during the day, which seems natural and obvious. However, the authors of this paper draw attention to the potential and possible use of UAV photogrammetry during the darker time of the day. The potential of night-time images has not yet been widely recognized, since correct scenery lighting, or the lack of scenery light sources, is an obvious issue. The authors developed typical day- and night-time photogrammetric models. They also present an extensive analysis of the geometry, indicate which process element had the greatest impact on degrading the night-time photogrammetric product, and identify which measurable factor directly correlated with image accuracy. The reduction in geometric quality during night-time tests was greatly impacted by the non-uniform distribution of GCPs within the study area. The calibration of non-metric cameras is sensitive to poor lighting conditions, which leads to higher determination errors for each intrinsic orientation and distortion parameter. As evidenced, uniformly illuminated photos can be used to construct a model with lower reprojection error, and each tie point exhibits greater precision. Furthermore, the authors evaluated whether commercial photogrammetric software enables reaching acceptable image quality and whether the digital camera type impacts interpretative quality. The paper concludes with an extended discussion, conclusions, and recommendations for night-time studies.
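The reprojection error cited above is the image-plane distance between where a tie point is observed and where the estimated camera model projects it. A minimal ideal-pinhole sketch (the focal length and all coordinates are illustrative assumptions, and real bundle adjustment also models distortion):

```python
import math

def project(point3d, f):
    """Project a camera-frame 3D point through an ideal pinhole of focal f."""
    X, Y, Z = point3d
    return (f * X / Z, f * Y / Z)

def reprojection_error(observed, point3d, f):
    """Pixel distance between the observed and the model-predicted position."""
    u, v = project(point3d, f)
    return math.hypot(observed[0] - u, observed[1] - v)

# A tie point observed at (102.1, 50.4) px vs its predicted projection
err = reprojection_error((102.1, 50.4), (0.51, 0.25, 5.0), f=1000.0)
print(round(err, 2))   # sub-pixel residual, in pixels
```

Bundle adjustment minimizes the sum of such residuals over all tie points; poorly lit images yield noisier observations and hence larger residuals, which is the effect the abstract quantifies.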
Affiliation(s)
- Pawel Burdziakowski
- Department of Geodesy, Faculty of Civil and Environmental Engineering, Gdansk University of Technology, Narutowicza 11-12, 80-233 Gdansk, Poland
10
Assessment of DSM Based on Radiometric Transformation of UAV Data. Sensors (Basel) 2021; 21:1649. [PMID: 33673425] [PMCID: PMC7956773] [DOI: 10.3390/s21051649]
Abstract
The Unmanned Aerial Vehicle (UAV) is one of the latest technologies for high-spatial-resolution 3D modeling of the Earth. The objectives of this study are to assess low-cost UAV data using image radiometric transformation techniques and to investigate their effects on the global and local accuracy of the Digital Surface Model (DSM). This research uses UAV Light Detection and Ranging (LIDAR) data from an 80 m flying height and UAV image data from 300 and 500 m flying heights. RAW UAV images acquired from the 500 m flying height are radiometrically transformed in MATLAB. UAV images from the 300 m flying height are processed to generate a 3D point cloud and DSM in Pix4D Mapper. UAV LIDAR data are used for the acquisition of Ground Control Points (GCPs) and for the accuracy assessment of the UAV image data products. The enhanced DSM and the DSM generated from the 300 m flight height were analyzed for point cloud number, density and distribution. The Root Mean Square Error (RMSE) value of Z improved from ±2.15 m to ±0.11 m. For the local accuracy assessment of the DSM, four different land cover types were statistically compared with UAV LIDAR, showing that the enhancement technique is compatible with UAV LIDAR accuracy.
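The reported RMSE of Z follows the usual definition over checkpoint elevation residuals. A sketch with made-up residuals (the elevations below are illustrative, not the study's data):

```python
import math

def rmse(predicted, reference):
    """Root Mean Square Error between two equal-length sequences."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

# DSM elevations at checkpoints vs the LIDAR-derived reference (metres)
dsm = [101.2, 98.7, 105.0, 99.9]
lidar = [101.3, 98.6, 105.1, 100.0]
print(round(rmse(dsm, lidar), 3))   # metres
```

Because residuals are squared before averaging, a single large outlier dominates the score, which is why radiometric enhancement that removes gross matching errors can shrink RMSE as dramatically as the abstract reports.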
11
A Novel Method for the Deblurring of Photogrammetric Images Using Conditional Generative Adversarial Networks. Remote Sensing 2020. [DOI: 10.3390/rs12162586]
Abstract
Visual data acquisition from small unmanned aerial vehicles (UAVs) may encounter situations in which blur appears on the images. Image blurring caused by camera motion during exposure significantly impacts the interpretation quality of the images and, consequently, the quality of photogrammetric products. On blurred images, it is difficult to visually locate ground control points, and the number of identified feature points decreases rapidly as the blur kernel grows. The nature of blur can be non-uniform, which makes it hard for traditional deblurring methods to model. For these reasons, the author of this publication concluded that the neural methods developed in recent years are able to eliminate blur on UAV images with an unpredictable or highly variable nature. In this research, a new, rapid method based on generative adversarial networks (GANs) was applied for deblurring. A data set for neural network training was developed based on real aerial images collected over the last few years. More than 20 full sets of photogrammetric products were developed, including point clouds, orthoimages and digital surface models. The sets were generated from both blurred and deblurred images using the presented method. The results presented in the publication show that the method for improving blurred photo quality significantly contributed to an improvement in the general quality of typical photogrammetric products. The geometric accuracy of the products generated from deblurred photos was maintained despite the rising blur kernel, and the quality of the textures and input photos was increased. This research proves that the developed method based on neural networks can be used for deblurring, even of highly blurred images, and that it significantly increases the final geometric quality of the photogrammetric products. In practical cases, it will be possible to implement an additional feature in photogrammetric software to eliminate unwanted blur and allow almost all blurred images to be used in the modelling process.
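The motion blur the method removes is conventionally modelled as convolution of the sharp image with a blur kernel. A 1-D sketch of how a uniform kernel smears a feature point (the signal and kernel are illustrative, and the GAN in the paper learns to invert far more complex, non-uniform kernels):

```python
def convolve1d(signal, kernel):
    """Full 1-D convolution; models motion blur as kernel smearing."""
    n, k = len(signal), len(kernel)
    out = [0.0] * (n + k - 1)
    for i, s in enumerate(signal):
        for j, w in enumerate(kernel):
            out[i + j] += s * w
    return out

sharp = [0, 0, 10, 0, 0]           # a single bright feature point
kernel = [1 / 3] * 3               # uniform 3-pixel motion blur
print(convolve1d(sharp, kernel))   # the point is smeared over 3 pixels
```

A larger kernel spreads the same energy over more pixels, which is why feature detectors lose points rapidly as the blur kernel grows, as the abstract notes.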
12
Using UAV Photogrammetry to Analyse Changes in the Coastal Zone Based on the Sopot Tombolo (Salient) Measurement Project. Sensors (Basel) 2020; 20:4000. [PMID: 32708434] [PMCID: PMC7411703] [DOI: 10.3390/s20144000]
Abstract
The main factors influencing the shape of the beach, shoreline and seabed include wave action (undulation), wind and coastal currents. These phenomena cause continuous and multidimensional changes in the shape of the seabed and the Earth's surface, and when they occur in an area of intense human activity, they should be constantly monitored. In 2018 and 2019, several measurement campaigns took place in the littoral zone in Sopot, related to the intensive uplift of the seabed and beach caused by the tombolo phenomenon. This research used a unique combination of bathymetric data obtained from an unmanned surface vessel, photogrammetric data obtained from unmanned aerial vehicles and terrestrial laser scanning, along with geodetic data from precise measurements with global navigation satellite system receivers. This paper comprehensively presents the photogrammetric measurements made from unmanned aerial vehicles during these campaigns. It describes in detail the problems of reconstruction within water areas, analyses the accuracy of various photogrammetric measurement techniques, proposes a statistical method of data filtration and presents the changes that occurred within the studied area. The work ends with an interpretation of the causes of changes in the land part of the littoral zone and a summary of the obtained results.
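The abstract mentions a statistical method of data filtration without detailing it; one common form such filtering takes is a z-score cut on point elevations, sketched here as a generic illustration rather than the paper's exact method (the elevation values are made up):

```python
import statistics

def zscore_filter(values, threshold=2.0):
    """Drop values whose z-score exceeds the threshold (generic outlier cut)."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    return [v for v in values if sd == 0 or abs(v - mean) / sd <= threshold]

# Beach-surface elevations with one spurious 'water noise' point (metres)
z = [1.02, 1.05, 0.98, 1.01, 1.03, 4.50]
print(zscore_filter(z))   # the 4.50 m outlier is removed
```

Water surfaces are notoriously hard to reconstruct photogrammetrically, so spurious high or low points near the waterline are exactly what a statistical filter of this kind targets.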
13
Deep Learning-Based Single Image Super-Resolution: An Investigation for Dense Scene Reconstruction with UAS Photogrammetry. Remote Sensing 2020. [DOI: 10.3390/rs12111757]
Abstract
The deep convolutional neural network (DCNN) has recently been applied to the highly challenging and ill-posed problem of single image super-resolution (SISR), which aims to predict high-resolution (HR) images from their corresponding low-resolution (LR) images. In many remote sensing (RS) applications, the spatial resolution of aerial or satellite imagery has a great impact on the accuracy and reliability of the information extracted from the images. In this study, the potential of a DCNN-based SISR model, called the enhanced super-resolution generative adversarial network (ESRGAN), to predict the spatial information degraded or lost in a hyper-spatial-resolution unmanned aircraft system (UAS) RGB image set is investigated. The ESRGAN model is trained on a limited number of original HR images (50 out of 450 total) and virtually generated LR UAS images obtained by downsampling the original HR images using a bicubic kernel with a factor of 4. Quantitative and qualitative assessments of the super-resolved images using standard image quality measures (IQMs) confirm that the DCNN-based SISR approach can be successfully applied to LR UAS imagery for spatial resolution enhancement. The performance of the DCNN-based SISR approach on the UAS image set closely approximates performances reported on standard SISR image sets, with mean peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index values of around 28 dB and 0.85, respectively. Furthermore, by exploiting the rigorous Structure-from-Motion (SfM) photogrammetry procedure, an accurate task-based IQM evaluation of the quality of the super-resolved images is carried out. The results verify that the interior and exterior imaging geometry, which are extremely important for extracting highly accurate spatial information from UAS imagery in photogrammetric applications, can be accurately retrieved from a super-resolved image set. The number of corresponding keypoints and dense points generated by the SfM photogrammetry process are about 6 and 17 times greater, respectively, than those extracted from the corresponding LR image set.
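The PSNR figure quoted above is derived from the mean squared error between the super-resolved and reference images. A minimal sketch over flat 8-bit patches (the toy pixel values are illustrative assumptions):

```python
import math

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size 8-bit patches."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    return float("inf") if mse == 0 else 10 * math.log10(peak**2 / mse)

reference = [52, 55, 61, 66, 70, 61, 64, 73]
restored  = [54, 55, 60, 67, 69, 62, 64, 72]
print(round(psnr(reference, restored), 2))   # dB; higher is better
```

PSNR rises as reconstruction error shrinks and is infinite for identical images, so a mean of about 28 dB over a real image set, as reported above, indicates a moderate residual error typical of ×4 super-resolution.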