1. Luo W, Lu Z, Liao Q. LNMVSNet: A Low-Noise Multi-View Stereo Depth Inference Method for 3D Reconstruction. Sensors (Basel) 2024; 24:2400. [PMID: 38676016] [PMCID: PMC11054877] [DOI: 10.3390/s24082400]
Abstract
With the widespread adoption of modern RGB cameras, an abundance of RGB images is available everywhere. Multi-view stereo (MVS) 3D reconstruction, which builds on multi-view depth estimation and stereo-matching algorithms, has therefore been applied extensively across various fields because of its cost-effectiveness and accessibility. However, MVS tasks face noise challenges because of natural multiplicative noise and negative gain in algorithms, which reduce the quality and accuracy of the generated models and depth maps. Traditional MVS methods often struggle with noise, relying on assumptions that do not always hold under real-world conditions, while deep learning-based MVS approaches tend to suffer from high noise sensitivity. To overcome these challenges, we introduce LNMVSNet, a deep learning network designed to enhance local feature attention and fuse features across different scales, aiming for low-noise, high-precision MVS 3D reconstruction. Extensive evaluation on multiple benchmark datasets demonstrates LNMVSNet's superior performance: it improves reconstruction accuracy and completeness, especially in the recovery of fine details and clear feature delineation. This advancement opens the way for the widespread application of MVS, ranging from precise industrial part inspection to the creation of immersive virtual environments.
Affiliation(s)
- Qingmin Liao
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Beijing 100084, China; (W.L.); (Z.L.)
2. Liu Z, Wu G, Xie T, Li S, Wu C, Zhang Z, Zhou J. A Light Multi-View Stereo Method with Patch-Uncertainty Awareness. Sensors (Basel) 2024; 24:1293. [PMID: 38400452] [PMCID: PMC10892961] [DOI: 10.3390/s24041293]
Abstract
Multi-view stereo methods utilize image sequences from different views to generate a 3D point cloud model of the scene. However, existing approaches often overlook coarse-stage features, impacting the final reconstruction accuracy. Moreover, using a fixed range for all the pixels during inverse depth sampling can adversely affect depth estimation. To address these challenges, we present a novel learning-based multi-view stereo method incorporating attention mechanisms and an adaptive depth sampling strategy. Firstly, we propose a lightweight, coarse-feature-enhanced feature pyramid network in the feature extraction stage, augmented by a coarse-feature-enhanced module. This module integrates features with channel and spatial attention, enriching the contextual features that are crucial for the initial depth estimation. Secondly, we introduce a novel patch-uncertainty-based depth sampling strategy for depth refinement, dynamically configuring depth sampling ranges within the GRU-based optimization process. Furthermore, we incorporate an edge detection operator to extract edge features from the reference image's feature map. These edge features are additionally integrated into the iterative cost volume construction, enhancing the reconstruction accuracy. Lastly, our method is rigorously evaluated on the DTU and Tanks and Temples benchmark datasets, revealing its low GPU memory consumption and competitive reconstruction quality compared to other learning-based MVS methods.
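The adaptive inverse-depth sampling idea in this abstract can be illustrated with a minimal sketch. Sampling hypotheses uniformly in inverse depth is standard practice in learning-based MVS; the per-pixel adaptive range below is only a plausible reading of the paper's patch-uncertainty strategy, and the function names and the `scale` parameter are illustrative assumptions, not the authors' implementation.

```python
def inverse_depth_samples(d_min, d_max, n):
    # Uniform steps in inverse depth (1/d): hypotheses cluster near the
    # camera, matching how depth resolution degrades with distance.
    inv_lo, inv_hi = 1.0 / d_max, 1.0 / d_min
    step = (inv_hi - inv_lo) / (n - 1)
    return [1.0 / (inv_lo + i * step) for i in range(n)]

def adaptive_range(depth_est, uncertainty, scale=1.0, d_floor=1e-3):
    # Hypothetical per-pixel range: center on the current estimate and
    # widen it where the estimate is uncertain, narrowing as GRU
    # iterations refine the depth.
    half = scale * uncertainty
    return max(depth_est - half, d_floor), depth_est + half

samples = inverse_depth_samples(2.0, 10.0, 5)  # from 10.0 down to 2.0
```

A fixed range would call `inverse_depth_samples` once for the whole image; the adaptive variant recomputes `(lo, hi)` per pixel at each refinement step.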
Affiliation(s)
- Zhen Liu
- College of Science, Zhejiang University of Technology, Hangzhou 310023, China; (Z.L.); (G.W.); (T.X.); (S.L.); (C.W.)
- Guangzheng Wu
- College of Science, Zhejiang University of Technology, Hangzhou 310023, China; (Z.L.); (G.W.); (T.X.); (S.L.); (C.W.)
- Tao Xie
- College of Science, Zhejiang University of Technology, Hangzhou 310023, China; (Z.L.); (G.W.); (T.X.); (S.L.); (C.W.)
- Shilong Li
- College of Science, Zhejiang University of Technology, Hangzhou 310023, China; (Z.L.); (G.W.); (T.X.); (S.L.); (C.W.)
- Chao Wu
- College of Science, Zhejiang University of Technology, Hangzhou 310023, China; (Z.L.); (G.W.); (T.X.); (S.L.); (C.W.)
- Jiali Zhou
- College of Science, Zhejiang University of Technology, Hangzhou 310023, China; (Z.L.); (G.W.); (T.X.); (S.L.); (C.W.)
3. Pan F, Wang P, Wang L, Li L. Multi-View Stereo Vision Patchmatch Algorithm Based on Data Augmentation. Sensors (Basel) 2023; 23:2729. [PMID: 36904934] [PMCID: PMC10006994] [DOI: 10.3390/s23052729]
Abstract
In this paper, a multi-view stereo vision patchmatch algorithm based on data augmentation is proposed. Compared to other works, this algorithm can reduce runtime and save computational memory through efficient cascading of modules; therefore, it can process higher-resolution images. Compared with algorithms utilizing 3D cost volume regularization, this algorithm can be applied on resource-constrained platforms. This paper applies the data augmentation module to an end-to-end multi-scale patchmatch algorithm and adopts adaptive evaluation propagation, avoiding the substantial memory resource consumption characterizing traditional region matching algorithms. Extensive experiments on the DTU and Tanks and Temples datasets show that our algorithm is very competitive in completeness, speed and memory.
4. Li Y, Qi Y, Wang C, Bao Y. A Cluster-Based 3D Reconstruction System for Large-Scale Scenes. Sensors (Basel) 2023; 23:2377. [PMID: 36904582] [PMCID: PMC10007267] [DOI: 10.3390/s23052377]
Abstract
The reconstruction of realistic large-scale 3D scene models using aerial images or videos has significant applications in smart cities, surveying and mapping, the military and other fields. In the current state-of-the-art 3D-reconstruction pipeline, the massive scale of the scene and the enormous amount of input data are still considerable obstacles to the rapid reconstruction of large-scale 3D scene models. In this paper, we develop a professional system for large-scale 3D reconstruction. First, in the sparse point-cloud reconstruction stage, the computed matching relationships are used as the initial camera graph and divided into multiple subgraphs by a clustering algorithm. Multiple computational nodes execute the local structure-from-motion (SFM) technique, and local cameras are registered. Global camera alignment is achieved by integrating and optimizing all local camera poses. Second, in the dense point-cloud reconstruction stage, the adjacency information is decoupled from the pixel level by red-and-black checkerboard grid sampling. The optimal depth value is obtained using normalized cross-correlation (NCC). Additionally, during the mesh-reconstruction stage, feature-preserving mesh simplification, Laplace mesh-smoothing and mesh-detail-recovery methods are used to improve the quality of the mesh model. Finally, the above algorithms are integrated into our large-scale 3D-reconstruction system. Experiments show that the system can effectively improve the reconstruction speed of large-scale 3D scenes.
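The normalized cross-correlation (NCC) step used here to pick the optimal depth value can be sketched in a few lines. This is a generic brute-force formulation of the NCC score, not the authors' implementation; patch extraction and the red-and-black checkerboard propagation scheme are omitted.

```python
from math import sqrt

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equal-size intensity patches
    (flat lists). Returns a score in [-1, 1]; +1 is a perfect linear
    match, making the score invariant to affine brightness changes."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    da = [a - mean_a for a in patch_a]
    db = [b - mean_b for b in patch_b]
    num = sum(x * y for x, y in zip(da, db))
    den = sqrt(sum(x * x for x in da)) * sqrt(sum(y * y for y in db))
    # Textureless (zero-variance) patches get a neutral score of 0.
    return num / den if den > 0 else 0.0
```

In a dense-matching setting, the optimal depth for a pixel is the hypothesis whose warped source patch maximizes this score against the reference patch.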
Affiliation(s)
- Yao Li
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
- Yue Qi
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
- Peng Cheng Laboratory, Shenzhen 518055, China
- Qingdao Research Institute of Beihang University, Qingdao 266104, China
- Chen Wang
- School of Computer Science and Engineering, Beijing Technology and Business University, Beijing 100048, China
- Yongtang Bao
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
5. Verykokou S, Ioannidis C. An Overview on Image-Based and Scanner-Based 3D Modeling Technologies. Sensors (Basel) 2023; 23:596. [PMID: 36679393] [PMCID: PMC9861742] [DOI: 10.3390/s23020596]
Abstract
Advances in the scientific fields of photogrammetry and computer vision have led to the development of automated multi-image methods that solve the problem of 3D reconstruction. Simultaneously, 3D scanners have become a common source of data acquisition for 3D modeling of real objects/scenes/human bodies. This article presents a comprehensive overview of different 3D modeling technologies that may be used to generate 3D reconstructions of outer or inner surfaces of different kinds of targets. In this context, it covers the topics of 3D modeling using images via different methods, it provides a detailed classification of 3D scanners by additionally presenting the basic operating principles of each type of scanner, and it discusses the problem of generating 3D models from scans. Finally, it outlines some applications of 3D modeling, beyond well-established topographic ones.
6. Jia R, Chen X, Cui J, Hu Z. MVS-T: A Coarse-to-Fine Multi-View Stereo Network with Transformer for Low-Resolution Images 3D Reconstruction. Sensors (Basel) 2022; 22:7659. [PMID: 36236760] [PMCID: PMC9571650] [DOI: 10.3390/s22197659]
Abstract
A coarse-to-fine multi-view stereo network with Transformer (MVS-T) is proposed to solve the problems of sparse point clouds and low accuracy in reconstructing 3D scenes from low-resolution multi-view images. The network uses a coarse-to-fine strategy to estimate the depth of the image progressively and reconstruct the 3D point cloud. First, pyramids of image features are constructed to transfer the semantic and spatial information among features at different scales. Then, the Transformer module is employed to aggregate the image's global context information and capture the internal correlation of the feature map. Finally, the image depth is inferred by constructing a cost volume and iterating through the various stages. For 3D reconstruction of low-resolution images, experiment results show that the 3D point cloud obtained by the network is more accurate and complete, which outperforms other advanced algorithms in terms of objective metrics and subjective visualization.
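A cost volume of the kind this abstract mentions is commonly built by warping source-view features onto the reference view at each depth hypothesis and aggregating them across views. The variance aggregation below follows the widespread MVSNet-style convention and is an assumption here, since the abstract does not specify the metric; the per-hypothesis homography warping is omitted.

```python
def variance_cost(features):
    """Variance across views of feature vectors warped to one pixel at one
    depth hypothesis; low variance means the views are photo-consistent,
    so the hypothesis is likely correct. `features` is a list of per-view
    feature vectors of equal length."""
    v = len(features)
    dim = len(features[0])
    cost = 0.0
    for d in range(dim):
        mean = sum(f[d] for f in features) / v
        cost += sum((f[d] - mean) ** 2 for f in features) / v
    return cost / dim

def best_depth(per_hypothesis_features, depths):
    # Pick the hypothesis with the lowest cross-view feature variance
    # (the "argmin over the cost volume" step for a single pixel).
    costs = [variance_cost(f) for f in per_hypothesis_features]
    return depths[costs.index(min(costs))]
```

Coarse-to-fine networks repeat this construction at each pyramid stage, narrowing the depth hypotheses around the previous stage's estimate.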
Affiliation(s)
- Ruiming Jia
- School of Information Science and Technology, North China University of Technology, Beijing 100144, China
- Xin Chen
- School of Information Science and Technology, North China University of Technology, Beijing 100144, China
- Jiali Cui
- School of Information Science and Technology, North China University of Technology, Beijing 100144, China
- Zhenghui Hu
- Hangzhou Innovation Institute, Beihang University, Hangzhou 310051, China
7. Liu C, Jia S, Wu H, Zeng D, Cheng F, Zhang S. A Spatial-Frequency Domain Associated Image-Optimization Method for Illumination-Robust Image Matching. Sensors (Basel) 2020; 20:6489. [PMID: 33202959] [DOI: 10.3390/s20226489]
Abstract
Image matching forms an essential means of data association for computer vision, photogrammetry and remote sensing. The quality of image matching is heavily dependent on image details and naturalness. However, complex illuminations, denoting extreme and changing illuminations, are inevitable in real scenarios, and seriously deteriorate image matching performance due to their significant influence on the image naturalness and details. In this paper, a spatial-frequency domain associated image-optimization method, comprising two main models, is specially designed for improving image matching under complex illuminations. First, an adaptive luminance equalization is implemented in the spatial domain to reduce radiometric variations, instead of removing all illumination components. Second, a frequency domain analysis-based feature-enhancement model is proposed to enhance image features while preserving image naturalness and restraining over-enhancement. The proposed method combines the advantages of the spatial and frequency domain analyses to complete illumination equalization, feature enhancement and naturalness preservation, thereby yielding optimized images that are robust to complex illuminations. More importantly, our method is generic and can be embedded in most image-matching schemes to improve image matching. The proposed method was evaluated on two different datasets and compared with four other state-of-the-art methods. The experimental results indicate that the proposed method outperforms the other methods under complex illuminations, in both matching performance and practical applications such as structure from motion and multi-view stereo.
8. Lee T, Turin SY, Stowers C, Gosain AK, Tepole AB. Personalized Computational Models of Tissue-Rearrangement in the Scalp Predict the Mechanical Stress Signature of Rotation Flaps. Cleft Palate Craniofac J 2020; 58:438-445. [PMID: 32914654] [DOI: 10.1177/1055665620954094]
Abstract
OBJECTIVE: To elucidate the mechanics of scalp rotation flaps through 3D imaging and computational modeling. Excessive tension near a wound or sutured region can delay wound healing or trigger complications. Measuring tension in the operating room is challenging; instead, noninvasive methods to improve surgical planning are needed.
DESIGN: Multi-view stereo allows creation of 3D patient-specific geometries from a set of photographs. The patient-specific 3D geometry is imported into a finite element (FE) platform to perform a virtual procedure. The simulation is compared with the clinical outcome. Additional simulations quantify the effect of individual flap parameters on the resulting tension distribution.
PARTICIPANTS: Rotation flaps for reconstruction of scalp defects following melanoma resection in 2 cases are presented. The rotation flaps were designed without preoperative FE preparation.
MAIN OUTCOME MEASURE: Tension distribution over the operated region.
RESULTS: The tension from FE shows peaks at the base and distal ends of the scalp rotation flap. The predicted geometry from the simulation aligns with postoperative photographs. Simulations exploring the flap design parameters show variation in the tension; lower tensions were achieved when the rotation was oriented with respect to skin tension lines (horizontal tissue fibers) and when rotation angles were smaller.
CONCLUSIONS: Tension distribution following rotation of scalp flaps can be predicted through personalized FE simulations. Flaps can be designed to reduce tension using FE, which may greatly improve the reliability of scalp reconstruction in craniofacial surgery, critical in complex cases when scalp reconstruction is essential for coverage of hardware, implants, and/or bone grafts.
Affiliation(s)
- Taeksang Lee
- Department of Mechanical Engineering, Purdue University, West Lafayette, IN, USA
- Sergey Y Turin
- Department of Plastic Surgery, Feinberg School of Medicine, Chicago, IL, USA
- Casey Stowers
- Department of Mechanical Engineering, Purdue University, West Lafayette, IN, USA
- Arun K Gosain
- Department of Plastic Surgery, Feinberg School of Medicine, Chicago, IL, USA; Department of Plastic Surgery, Lurie Children's Hospital, Chicago, IL, USA
- Adrian Buganza Tepole
- Department of Mechanical Engineering, Purdue University, West Lafayette, IN, USA; Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
9. Lati RN, Filin S, Elnashef B, Eizenberg H. 3-D Image-Driven Morphological Crop Analysis: A Novel Method for Detection of Sunflower Broomrape Initial Subsoil Parasitism. Sensors (Basel) 2019; 19:1569. [PMID: 30939774] [DOI: 10.3390/s19071569]
Abstract
Effective control of the parasitic weed sunflower broomrape (Orobanche cumana Wallr.) can be achieved by herbicide application in early parasitism stages. However, the growing environmental concerns associated with herbicide treatments have motivated the adoption of precise chemical control approaches that detect and treat infested areas exclusively. The main challenge in developing such control practices for O. cumana lies in the fact that most of its life cycle occurs below the soil surface, and by the time shoots emerge and become observable, the damage to the crop is irreversible. This paper approaches early O. cumana detection by hypothesizing that its parasitism already impacts the host plant morphology at the subsoil developmental stage. To validate this hypothesis, O. cumana-infested sunflower and non-infested control plants were grown in pots and imaged weekly over a 45-day period. Three-dimensional plant models were reconstructed using image-based multi-view stereo, followed by derivation of their morphological parameters down to the organ level. Among the parameters estimated, height and first-internode length were the earliest definitive indicators of infection. Furthermore, both parameters were detected early enough for post-emergence herbicide application. Considering that 3D morphological modeling is nondestructive, is based on commercially available RGB sensors and can be used under natural illumination, this approach holds potential for site-specific pre-emergence management of parasitic weeds and as a phenotyping tool in O. cumana-resistant sunflower breeding projects.
10. Kortaberria G, Mutilba U, Gomez-Acedo E, Tellaeche A, Minguez R. Accuracy Evaluation of Dense Matching Techniques for Casting Part Dimensional Verification. Sensors (Basel) 2018; 18:3074. [PMID: 30217026] [PMCID: PMC6164126] [DOI: 10.3390/s18093074]
Abstract
Product optimization for casting and post-casting manufacturing processes is becoming compulsory to compete in the current global manufacturing scenario. Casting design, simulation and verification tools are becoming crucial for eliminating oversized dimensions without affecting the casting component functionality. Thus, material and production costs decrease to maintain the foundry process profitable on the large-scale component supplier market. New measurement methods, such as dense matching techniques, rely on surface texture of casting parts to enable the 3D dense reconstruction of surface points without the need of an active light source as usually applied with 3D scanning optical sensors. This paper presents the accuracy evaluation of dense matching based approaches for casting part verification. It compares the accuracy obtained by dense matching technique with already certified and validated optical measuring methods. This uncertainty evaluation exercise considers both artificial targets and key natural points to quantify the possibilities and scope of each approximation. Obtained results, for both lab and workshop conditions, show that this image data processing procedure is fit for purpose to fulfill the required measurement tolerances for casting part manufacturing processes.
Affiliation(s)
- Gorka Kortaberria
- Department of Mechanical Engineering, IK4-Tekniker, 20600 Eibar, Spain.
- Unai Mutilba
- Department of Mechanical Engineering, IK4-Tekniker, 20600 Eibar, Spain
- Eneko Gomez-Acedo
- Department of Mechanical Engineering, IK4-Tekniker, 20600 Eibar, Spain
- Alberto Tellaeche
- Department of Smart and Autonomous Systems, IK4-Tekniker, 20600 Eibar, Spain
- Rikardo Minguez
- Department of Graphic Design and Engineering Projects, University of the Basque Country, 48013 Bilbao, Spain
11. Hui F, Zhu J, Hu P, Meng L, Zhu B, Guo Y, Li B, Ma Y. Image-based dynamic quantification and high-accuracy 3D evaluation of canopy structure of plant populations. Ann Bot 2018; 121:1079-1088. [PMID: 29509841] [PMCID: PMC5906925] [DOI: 10.1093/aob/mcy016]
Abstract
BACKGROUND AND AIMS: Global agriculture is facing the challenge of a phenotyping bottleneck due to large-scale screening/breeding experiments with improved breeds. Phenotypic analysis with high-throughput, high-accuracy and low-cost technologies has therefore become urgent. Recent advances in image-based 3D reconstruction offer the opportunity of high-throughput phenotyping. The main aim of this study was to quantify and evaluate the canopy structure of plant populations in two and three dimensions based on the multi-view stereo (MVS) approach, and to monitor plant growth and development from seedling stage to fruiting stage.
METHODS: Multi-view images of flat-leaf cucumber, small-leaf pepper and curly-leaf eggplant were obtained by moving a camera around the plant canopy. Three-dimensional point clouds were reconstructed from images based on the MVS approach and were then converted into surfaces with triangular facets. Phenotypic parameters, including leaf length, leaf width, leaf area, plant height and maximum canopy width, were calculated from reconstructed surfaces. Accurate evaluation in 2D and 3D for individual leaves was performed by comparing reconstructed phenotypic parameters with referenced values and by calculating the Hausdorff distance, i.e. the mean distance between two surfaces.
KEY RESULTS: Our analysis demonstrates that there were good agreements in leaf parameters between referenced and estimated values. A high level of overlap was also found between surfaces of image-based reconstructions and laser scanning. Accuracy of 3D reconstruction of curly-leaf plants was relatively lower than that of flat-leaf plants. Plant height of the three plants and maximum canopy width of cucumber and pepper showed an increasing trend during the 70 d after transplanting. Maximum canopy width of eggplants reached its peak at the 40th day after transplanting. The larger leaf phenotypic parameters of cucumber were mostly found at the middle-upper leaf position.
CONCLUSIONS: High-accuracy 3D evaluation of reconstruction quality indicated that dynamic capture of the 3D canopy based on the MVS approach can be potentially used in 3D phenotyping for applications in breeding and field management.
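The surface-comparison metric this abstract describes (a mean distance between the image-based and laser-scanned surfaces) can be approximated on sampled point sets as follows. This brute-force nearest-neighbour sketch is illustrative only; real point clouds would use a k-d tree, and the function names are assumptions.

```python
from math import dist  # Python 3.8+

def mean_surface_distance(points_a, points_b):
    """Directed mean nearest-neighbour distance from point set A to point
    set B, a point-sampled proxy for surface-to-surface distance."""
    return sum(min(dist(p, q) for q in points_b)
               for p in points_a) / len(points_a)

def symmetric_distance(a, b):
    # Average the two directed distances so the measure does not depend
    # on which surface is treated as the reference.
    return 0.5 * (mean_surface_distance(a, b) + mean_surface_distance(b, a))
```

Note that the strict Hausdorff distance takes the maximum rather than the mean of the nearest-neighbour distances; the mean variant used here matches the abstract's wording and is less sensitive to outlier points.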
Affiliation(s)
- Fang Hui
- Key Laboratory of Arable Land Conservation (North China), Ministry of Agriculture, College of Resources and Environmental Sciences, China Agricultural University, Beijing, China
- Jinyu Zhu
- Institute of Vegetables and Flowers, Chinese Academy of Agricultural Science, Beijing, China
- Pengcheng Hu
- Key Laboratory of Arable Land Conservation (North China), Ministry of Agriculture, College of Resources and Environmental Sciences, China Agricultural University, Beijing, China
- Lei Meng
- Department of Geography and Institute of the Environment and Sustainability, Western Michigan University, Kalamazoo, MI, USA
- Binglin Zhu
- Key Laboratory of Arable Land Conservation (North China), Ministry of Agriculture, College of Resources and Environmental Sciences, China Agricultural University, Beijing, China
- Yan Guo
- Key Laboratory of Arable Land Conservation (North China), Ministry of Agriculture, College of Resources and Environmental Sciences, China Agricultural University, Beijing, China
- Baoguo Li
- Key Laboratory of Arable Land Conservation (North China), Ministry of Agriculture, College of Resources and Environmental Sciences, China Agricultural University, Beijing, China
- Yuntao Ma
- Key Laboratory of Arable Land Conservation (North China), Ministry of Agriculture, College of Resources and Environmental Sciences, China Agricultural University, Beijing, China
12. Andújar D, Calle M, Fernández-Quintanilla C, Ribeiro Á, Dorado J. Three-Dimensional Modeling of Weed Plants Using Low-Cost Photogrammetry. Sensors (Basel) 2018; 18:1077. [PMID: 29614039] [DOI: 10.3390/s18041077]
Abstract
Sensing advances in plant phenotyping are of vital importance in basic and applied plant research. Plant phenotyping enables the modeling of complex shapes, which is useful, for example, in decision-making for agronomic management. In this sense, the use of 3D processing algorithms for plant modeling is expanding rapidly with the emergence of new sensors and techniques designed to characterize plants morphologically. However, some technical aspects can still be improved, such as accurate reconstruction of end details. This study adapted low-cost techniques, Structure from Motion (SfM) and Multi-View Stereo (MVS), to create 3D models of plants of three weed species with contrasting shapes and plant structures. Plant reconstruction was performed by applying SfM algorithms to an input set of digital images acquired sequentially, following a track concentric and equidistant with respect to the plant axis, at three different angles from perpendicular to top view, which guaranteed the overlap between images necessary to obtain high-precision 3D models. With this information, a dense point cloud was created using MVS, from which a 3D polygon mesh representing every plant's shape and geometry was generated. These 3D models were validated against ground-truth values (e.g., plant height, leaf area (LA) and plant dry biomass) using regression methods. The results showed, in general, good consistency in the correlation equations between the values estimated from the models and the actual values measured on the weed plants. Indeed, 3D modeling using SfM algorithms proved to be a valuable methodology for weed phenotyping, since it accurately estimated the actual values of plant height and LA. Additionally, image processing using the SfM method was relatively fast. Consequently, our results indicate the potential of this budget system for plant reconstruction at high detail, which may be usable in several scenarios, including outdoor conditions. Future research should address other issues, such as the time-cost relationship and the need for detail in the different approaches.
13. Qu Y, Huang J, Zhang X. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera. Sensors (Basel) 2018; 18:225. [PMID: 29342908] [DOI: 10.3390/s18010225]
Abstract
In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve processing speed, we propose a rapid 3D reconstruction method based on an image queue, which exploits the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points using principal component analysis. To select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud is obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed becomes more noticeable.
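The first step of this pipeline, compressing each image's 2D feature points with principal component analysis, can be sketched using the closed-form eigen-decomposition of the 2x2 covariance matrix. The abstract does not define how the "three principal component points" are constructed, so the summary below (the centroid plus one point either side of it along the major axis) is one plausible, hypothetical reading, and all names here are illustrative.

```python
from math import atan2, cos, sin, sqrt

def principal_axes_2d(points):
    """PCA of 2D feature points: returns the centroid, the eigenvalues
    (major, minor) and the unit major-axis direction of the 2x2
    covariance matrix, all in closed form."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc   # eigenvalues, l1 >= l2
    theta = 0.5 * atan2(2.0 * sxy, sxx - syy)   # major-axis angle
    return (cx, cy), (l1, l2), (cos(theta), sin(theta))

def three_summary_points(points):
    # Hypothetical "three principal component points": the centroid plus
    # one point either side of it along the major axis, offset by one
    # standard deviation of the spread in that direction.
    (cx, cy), (l1, _), (vx, vy) = principal_axes_2d(points)
    s = sqrt(l1)
    return [(cx, cy), (cx + s * vx, cy + s * vy), (cx - s * vx, cy - s * vy)]
```

Comparing such three-point summaries between images is far cheaper than matching full feature sets, which is presumably why the method uses them to estimate inter-image relationships when selecting key frames.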