51
Ge L, Yang Z, Sun Z, Zhang G, Zhang M, Zhang K, Zhang C, Tan Y, Li W. A Method for Broccoli Seedling Recognition in Natural Environment Based on Binocular Stereo Vision and Gaussian Mixture Model. SENSORS 2019; 19:s19051132. [PMID: 30845680 PMCID: PMC6427649 DOI: 10.3390/s19051132] [Received: 01/28/2019] [Revised: 02/27/2019] [Accepted: 02/28/2019]
Abstract
Illumination in the natural environment is uncontrollable, and the field background is complex and changeable, all of which leads to poor-quality broccoli seedling images. The colors of weeds and broccoli seedlings are similar, especially under weedy conditions. These factors strongly affect the stability, speed and accuracy of broccoli seedling recognition based on traditional 2D image processing. Because of the growth advantage of transplanted crops, broccoli seedlings stand higher than the soil background and the weeds. This paper proposes a method for broccoli seedling recognition in natural environments based on binocular stereo vision and a Gaussian Mixture Model. First, binocular images of broccoli seedlings were captured by an integrated, portable and low-cost binocular camera. The left and right images were rectified, and a disparity map of the rectified images was computed with the Semi-Global Matching (SGM) algorithm. The original dense 3D point cloud was reconstructed from the disparity map and the left camera's intrinsic parameters. To reduce computation time, the cloud was thinned with a non-uniform grid sampling method. Gaussian Mixture Model (GMM) clustering was then applied to recognize the broccoli seedling points in the sparse point cloud, and an outlier filter based on k-nearest neighbors (KNN) removed discrete points from the recognized seedling points, yielding a clean point cloud of the broccoli seedlings. The experimental results show that the SGM algorithm meets the matching requirements for broccoli images in the natural environment, with an average running time of 138 ms, and outperforms the Sum of Absolute Differences (SAD) and Sum of Squared Differences (SSD) algorithms. The recognition results of the GMM outperform K-means and Fuzzy c-means, with an average running time of 51 ms. For a pair of 640×480 images, the total running time of the proposed method is 578 ms, and the correct recognition rate over 247 image pairs is 97.98%. The average sensitivity is 85.91%, and the theoretical envelope-box volume averages 95.66% of the measured envelope-box volume. The method provides a low-cost, real-time and high-accuracy solution for crop recognition in natural environments.
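As a rough illustration of the KNN-based outlier-filtering step described above, the sketch below removes discrete points whose mean distance to their k nearest neighbours is anomalously large. The threshold rule (mean plus one standard deviation) and the parameter values are assumptions, not the authors' exact settings:

```python
import math
import statistics

def knn_outlier_filter(points, k=4, std_ratio=1.0):
    """Keep only points whose mean distance to their k nearest
    neighbours is within (mean + std_ratio * std) of the cloud-wide
    statistics; points farther out are treated as discrete outliers."""
    mean_knn = []
    for p in points:
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(dists[:k]) / k)
    threshold = statistics.mean(mean_knn) + std_ratio * statistics.pstdev(mean_knn)
    return [p for p, d in zip(points, mean_knn) if d <= threshold]
```

The brute-force neighbour search is O(n²); a real implementation would use a k-d tree or octree for large clouds.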
52
Multitemporal Terrestrial Laser Scanning for Marble Extraction Assessment in an Underground Quarry of the Apuan Alps (Italy). SENSORS 2019; 19:s19030450. [PMID: 30678272 PMCID: PMC6387222 DOI: 10.3390/s19030450] [Received: 12/19/2018] [Revised: 01/14/2019] [Accepted: 01/18/2019]
Abstract
This article focuses on the use of Terrestrial Laser Scanning (TLS) for change-detection analysis of multitemporal point cloud datasets. Two topographic surveys were carried out in 2016 and 2017 in an underground marble quarry of the Apuan Alps (Italy), combining TLS with Global Navigation Satellite System (GNSS) and Total Station (TS) measurements. Multitemporal 3D point clouds were processed and compared to identify areas subject to significant material extraction. Point clouds representing changed areas were converted into triangular meshes in order to compute the volume of material extracted over one year of quarrying activity. The general purpose of this work is to demonstrate a valid method for examining the morphological changes caused by raw-material extraction, highlighting its benefits, accuracies and drawbacks. The survey was intended to support the planning of quarrying activities with respect to regional regulations, safety and commercial considerations.
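The volume-from-mesh step can be sketched generically: for a closed triangular mesh, the enclosed volume is the sum of the signed tetrahedra formed by each triangle and the origin (divergence theorem). This is an illustration of the principle, not the software the authors used:

```python
def mesh_volume(vertices, triangles):
    """Volume of a closed triangular mesh via the divergence theorem:
    sum the signed volumes of tetrahedra spanned by each triangle
    (i, j, k) and the origin, then take the absolute value."""
    vol = 0.0
    for i, j, k in triangles:
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = vertices[i], vertices[j], vertices[k]
        # Signed tetrahedron volume = det([v1, v2, v3]) / 6
        vol += (x1 * (y2 * z3 - y3 * z2)
                - x2 * (y1 * z3 - y3 * z1)
                + x3 * (y1 * z2 - y2 * z1)) / 6.0
    return abs(vol)
```

The signed sum is exact only when the mesh is closed and the triangle winding is consistent.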
53
Towards Efficient Implementation of an Octree for a Large 3D Point Cloud. SENSORS 2018; 18:s18124398. [PMID: 30545103 PMCID: PMC6308722 DOI: 10.3390/s18124398] [Received: 10/22/2018] [Revised: 12/10/2018] [Accepted: 12/11/2018]
Abstract
The present study introduces an efficient algorithm to construct a file-based octree for a large 3D point cloud. A naive file-based octree, however, is very slow compared with a memory-based approach, and performs even worse on 3D point clouds scanned along elongated objects such as tunnels and corridors. These defects were addressed by implementing a semi-isometric octree group: several semi-isometric octrees in a group tightly cover the 3D point cloud, while each octree, along with its leaf nodes, still maintains an isometric shape. The proposed approach was tested on three 3D point clouds: a long tunnel and a short tunnel captured by a terrestrial laser scanner, and an urban area captured by an airborne laser scanner. The experimental results showed that the performance of the semi-isometric approach is no worse than that of a memory-based approach and considerably better than that of a file-based one, demonstrating that the proposed semi-isometric approach achieves a good balance between query performance and memory efficiency. In conclusion, given enough main memory and a moderately sized 3D point cloud, a memory-based approach is preferable; when the 3D point cloud is larger than the main memory, a file-based approach is inevitable, and the semi-isometric approach is the better option.
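For readers unfamiliar with the underlying structure, a minimal in-memory octree can be sketched as below: each node splits into eight isometric children once it holds more than a fixed number of points. The file-based storage and the semi-isometric grouping of the paper are not reproduced here, and the capacity and depth limits are arbitrary assumptions:

```python
class Octree:
    """Minimal in-memory point octree; splits a node into 8 children
    when it exceeds `capacity` points, up to `max_depth` levels."""

    def __init__(self, center, half, capacity=8, depth=0, max_depth=10):
        self.center, self.half = center, half
        self.capacity, self.depth, self.max_depth = capacity, depth, max_depth
        self.points, self.children = [], None

    def _child_index(self, p):
        # One bit per axis: x -> bit 0, y -> bit 1, z -> bit 2.
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.capacity and self.depth < self.max_depth:
                self._split()
            return
        self.children[self._child_index(p)].insert(p)

    def _split(self):
        h = self.half / 2
        cx, cy, cz = self.center
        self.children = [
            Octree((cx + (h if i & 1 else -h),
                    cy + (h if i & 2 else -h),
                    cz + (h if i & 4 else -h)),
                   h, self.capacity, self.depth + 1, self.max_depth)
            for i in range(8)
        ]
        pts, self.points = self.points, []
        for q in pts:
            self.children[self._child_index(q)].insert(q)
```

A file-based variant would serialize nodes to pages on disk instead of holding `points` lists in memory.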
54
Dong Z, Gao Y, Zhang J, Yan Y, Wang X, Chen F. HoPE: Horizontal Plane Extractor for Cluttered 3D Scenes. SENSORS 2018; 18:s18103214. [PMID: 30249053 PMCID: PMC6210707 DOI: 10.3390/s18103214] [Received: 08/27/2018] [Revised: 09/19/2018] [Accepted: 09/20/2018]
Abstract
Extracting horizontal planes in heavily cluttered three-dimensional (3D) scenes is an essential procedure for many robotic applications. To address the limitations of general plane segmentation methods on this task, we present HoPE, a Horizontal Plane Extractor able to extract multiple horizontal planes from cluttered scenes with both organized and unorganized 3D point clouds. In the first stage, it transforms the source point cloud into the reference coordinate frame using the sensor orientation, acquired either by pre-calibration or from an inertial measurement unit, thereby leveraging the inner structure of the transformed cloud to simplify the subsequent processes, which need only two concise thresholds to produce the results. A revised region-growing algorithm named Z clustering and a principal component analysis (PCA)-based approach are presented for point clustering and refinement, respectively. Furthermore, we provide a nearest neighbor plane matching (NNPM) strategy to preserve the identities of extracted planes across successive sequences. Qualitative and quantitative evaluations on both real and synthetic scenes demonstrate that our approach outperforms several state-of-the-art methods under challenging circumstances in terms of robustness to clutter, accuracy, and efficiency. We release our algorithm as an off-the-shelf, publicly available toolbox.
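The Z-clustering idea, grouping points of a gravity-aligned cloud into horizontal-plane candidates by their z coordinates, can be sketched as a one-dimensional, gap-based clustering. The tolerance and minimum cluster size below are illustrative assumptions, not the paper's thresholds:

```python
def z_cluster(points, z_tol=0.02, min_points=10):
    """Sort points by z and start a new cluster whenever the gap to
    the previous point exceeds z_tol; clusters with too few points
    are discarded. Assumes the cloud is already gravity-aligned."""
    pts = sorted(points, key=lambda p: p[2])
    clusters, current = [], [pts[0]]
    for p in pts[1:]:
        if p[2] - current[-1][2] <= z_tol:
            current.append(p)
        else:
            clusters.append(current)
            current = [p]
    clusters.append(current)
    return [c for c in clusters if len(c) >= min_points]
```

Each surviving cluster is a horizontal-plane candidate that a refinement stage (e.g., PCA on the cluster) would then verify.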
55
Rachakonda P, Muralikrishnan B, Cournoyer L, Sawyer D. Software to Determine Sphere Center from Terrestrial Laser Scanner Data per ASTM Standard E3125-17. JOURNAL OF RESEARCH OF THE NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY 2018; 123:1-4. [PMID: 34877138 PMCID: PMC7339744 DOI: 10.6028/jres.123.006] [Accepted: 05/15/2018]
Abstract
Terrestrial laser scanners (TLSs) are instruments that measure the 3D coordinates of objects at high speed using a laser, producing high-density 3D point cloud data. The Dimensional Metrology Group (DMG) at NIST performed research to support the development of documentary standards within the ASTM E57 committee on 3D imaging systems. This work led to the publication of the ASTM E3125-17 standard on point-to-point distance performance evaluation of 3D imaging systems such as TLSs. To ensure that data from different TLS systems are processed identically, ASTM E3125-17 mandates the use of a common algorithm to determine the center of a sphere from point cloud data. This paper describes that algorithm, and the software code is provided as a download.
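A common way to determine a sphere center from point cloud data is a least-squares fit. The sketch below uses the simple algebraic (Kasa) formulation, which linearizes the problem; the ASTM algorithm specifies a different, orthogonal (geometric) least-squares formulation, so this is an approximation for illustration only:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit (Kasa method): rewrite
    (x-a)^2 + (y-b)^2 + (z-c)^2 = r^2 as the linear system
    x^2 + y^2 + z^2 = 2a*x + 2b*y + 2c*z + d, solve for (a, b, c, d),
    and recover r = sqrt(d + a^2 + b^2 + c^2)."""
    P = np.asarray(points, dtype=float)
    A = np.c_[2 * P, np.ones(len(P))]      # columns: 2x, 2y, 2z, 1
    b = (P ** 2).sum(axis=1)               # x^2 + y^2 + z^2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

For noisy TLS data the algebraic fit is usually a good starting estimate that a geometric fit then refines iteratively.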
56
Thapa S, Zhu F, Walia H, Yu H, Ge Y. A Novel LiDAR-Based Instrument for High-Throughput, 3D Measurement of Morphological Traits in Maize and Sorghum. SENSORS 2018; 18:s18041187. [PMID: 29652788 PMCID: PMC5948551 DOI: 10.3390/s18041187] [Received: 03/06/2018] [Revised: 04/09/2018] [Accepted: 04/10/2018]
Abstract
Recently, image-based approaches have developed rapidly for high-throughput plant phenotyping (HTPP). Imaging reduces a 3D plant to 2D images, which makes the retrieval of plant morphological traits challenging. We developed a novel LiDAR-based phenotyping instrument to generate 3D point clouds of single plants. The instrument combines a LiDAR scanner with a precision rotation stage on which an individual plant is placed. A LabVIEW program was developed to control the scanning and rotation, synchronize the measurements from both devices, and capture a 360° point cloud. A data-processing pipeline was developed for noise removal, voxelization, triangulation, and reconstruction of the plant leaf surfaces. Once the leaf surfaces were reconstructed, plant morphological traits were derived, including individual and total leaf area, leaf inclination angle, and leaf angular distribution. The system was tested with maize and sorghum plants. Leaf area measurements by the instrument were highly correlated with the reference methods (R² > 0.91 for individual leaf area; R² > 0.95 for total leaf area per plant), and the leaf angular distributions of the two species were also derived. This instrument could fill a critical technological gap for indoor HTPP of plant morphological traits in 3D.
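Once a leaf surface is triangulated, its area is simply the sum of the triangle areas (half the cross-product magnitude per triangle). The sketch below illustrates that step in isolation; it is a generic computation, not the authors' pipeline:

```python
def mesh_area(vertices, triangles):
    """Total area of a triangulated surface: for each triangle (i, j, k),
    area = 0.5 * |(v_j - v_i) x (v_k - v_i)|, summed over all triangles."""
    area = 0.0
    for i, j, k in triangles:
        ax, ay, az = (vertices[j][n] - vertices[i][n] for n in range(3))
        bx, by, bz = (vertices[k][n] - vertices[i][n] for n in range(3))
        # Cross product of the two edge vectors.
        cx, cy, cz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
        area += 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5
    return area
```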
57
Sun S, Li C, Paterson AH, Jiang Y, Xu R, Robertson JS, Snider JL, Chee PW. In-field High Throughput Phenotyping and Cotton Plant Growth Analysis Using LiDAR. FRONTIERS IN PLANT SCIENCE 2018; 9:16. [PMID: 29403522 PMCID: PMC5786533 DOI: 10.3389/fpls.2018.00016] [Received: 09/30/2017] [Accepted: 01/04/2018]
Abstract
Plant breeding programs and a wide range of plant science applications would greatly benefit from in-field high throughput phenotyping technologies. In this study, a terrestrial LiDAR-based high throughput phenotyping system was developed. A 2D LiDAR scanned the plants from overhead in the field, and an RTK-GPS provided the spatial coordinates. Precise 3D models of the scanned plants were reconstructed from the LiDAR and RTK-GPS data. The ground plane of the 3D model was separated with the RANSAC algorithm, and a Euclidean clustering algorithm was applied to remove noise generated by weeds. Clean 3D surface models of the cotton plants were thus obtained, from which three plot-level morphological traits were derived: canopy height, projected canopy area, and plant volume. Canopy heights from the 85th percentile to the maximum were computed from the histogram of the z coordinates of all measured points; the projected canopy area was derived by projecting all points onto the ground plane; and a trapezoidal-rule-based algorithm was proposed to estimate plant volume. Validation experiments showed good agreement between LiDAR and manual measurements of maximum canopy height, projected canopy area, and plant volume, with R² values of 0.97, 0.97, and 0.98, respectively. The developed system was used to scan the whole field repeatedly from 43 to 109 days after planting, and growth trend and growth rate curves for all three morphological traits were established over the monitoring period for each cultivar. Overall, the four cultivars showed similar growth trends and growth rate patterns: each continued to grow until about 88 days after planting and varied little thereafter, although the actual values were cultivar specific. Correlation analysis between the morphological traits and final yield was conducted over the monitoring period. When considering each cultivar individually, the three traits correlated best with final yield between around 67 and 109 days after planting, with maximum R² values of up to 0.84, 0.88, and 0.85, respectively. The developed system demonstrated relatively high throughput data collection and analysis.
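The trapezoidal-rule volume estimate can be sketched directly: given the areas of horizontal slices at increasing heights, the plant volume is approximated by V ≈ Σ ½(A_i + A_{i+1})·Δh. The function below is a generic illustration of that rule, not the authors' exact implementation:

```python
def plant_volume(heights, areas):
    """Trapezoidal-rule volume from horizontal slice areas:
    sum 0.5 * (A_i + A_{i+1}) * (h_{i+1} - h_i) over adjacent slices.
    `heights` must be in increasing order, paired with `areas`."""
    vol = 0.0
    for i in range(len(heights) - 1):
        vol += 0.5 * (areas[i] + areas[i + 1]) * (heights[i + 1] - heights[i])
    return vol
```

In practice each slice area would itself come from the point cloud, e.g., the 2D convex hull area of the points falling in that height band.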
58
Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction. SENSORS 2017; 17:s17122791. [PMID: 29207468 PMCID: PMC5751708 DOI: 10.3390/s17122791] [Received: 11/02/2017] [Revised: 11/29/2017] [Accepted: 11/30/2017]
Abstract
This paper presents a practical application of a technique that uses vertical optical scanning with a fisheye camera to generate dense point clouds from a single planimetric station, from which accurate measurements of tree trunks or branches can be extracted. The images collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. In the experiments, a set of images was captured in a forest plot. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on the differences between measured and estimated trunk diameters at different heights. Trunk sections from the image-based point cloud were also compared with the corresponding sections extracted from a dense terrestrial laser scanning (TLS) point cloud; cylindrical fitting of the trunk sections allowed the geometric accuracy of the trunk shapes in both clouds to be assessed. The average difference between the cylinders fitted to the photogrammetric cloud and those fitted to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities obtained with vertical optical scanning were about one third lower than those obtained with TLS, but the density can be improved by using higher-resolution cameras.
59
A Method for Automatic Surface Inspection Using a Model-Based 3D Descriptor. SENSORS 2017; 17:s17102262. [PMID: 28974037 PMCID: PMC5676666 DOI: 10.3390/s17102262] [Received: 08/14/2017] [Revised: 09/19/2017] [Accepted: 09/20/2017]
Abstract
Automatic visual inspection allows the identification of surface defects in manufactured parts. Nevertheless, when defects are on a sub-millimeter scale, detection and recognition are challenging, particularly when a defect produces topological deformations that show little contrast in a 2D image. In this paper, we present a method for recognizing surface defects in 3D point clouds. First, we propose a novel 3D local descriptor for defect detection called the Model Point Feature Histogram (MPFH), inspired by earlier descriptors such as the Point Feature Histogram (PFH). To construct the MPFH descriptor, the models that best fit the local surface and their normal vectors are estimated. For each surface model, its contribution weight to the formation of the surface region is calculated, and from the relative differences between models of the same region a histogram is generated that represents the underlying surface changes. Second, through a classification stage, the surface points are labeled according to five types of primitives and the defect is detected. Third, the connected components of primitives are projected onto a plane, forming a 2D image. Finally, 2D geometric features are extracted and the defects are recognized with a support vector machine. The database used is composed of 3D simulated surfaces and 3D reconstructions of defects in welding, artificial teeth, material indentations, ceramics, and 3D models of defects. The quantitative and qualitative results show that the proposed descriptor is robust to noise and scale and is sufficiently discriminative for detecting some surface defects. In a classification task labeling the 3D point cloud into primitives, the method reported an accuracy of 95%, higher than other state-of-the-art descriptors, and the defect recognition rate was close to 94%.
60
Ahn JS, Park A, Kim JW, Lee BH, Eom JB. Development of Three-Dimensional Dental Scanning Apparatus Using Structured Illumination. SENSORS 2017; 17:s17071634. [PMID: 28714897 PMCID: PMC5539490 DOI: 10.3390/s17071634] [Received: 06/19/2017] [Revised: 07/10/2017] [Accepted: 07/12/2017]
Abstract
We demonstrated a three-dimensional (3D) dental scanning apparatus based on structured illumination. A liquid lens was used to tune the focus, and a piezomotor stage was used to shift the structured-light pattern. A simple algorithm that detects intensity modulation was used to perform optical sectioning under structured illumination. By stacking the sectioned images, we reconstructed a 3D point cloud representing the 3D coordinates of the digitized surface of a dental gypsum cast. We then registered the individual 3D point clouds, aligning and merging them to produce a 3D model of the dental cast.
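Optical sectioning from intensity modulation is commonly done with three images of a pattern shifted by 120°: the classic formula below recovers the per-pixel modulation amplitude, which is large only where the projected pattern is in focus. Whether the authors used exactly this formula is an assumption; it is shown as the standard approach:

```python
import math

def sectioned_intensity(i1, i2, i3):
    """Three-phase optical-sectioning formula: given pixel intensities
    under a sinusoidal pattern shifted by 0, 120 and 240 degrees,
    return the modulation amplitude. In-focus pixels modulate strongly;
    out-of-focus pixels return ~0 and are rejected from the section."""
    return (math.sqrt(2) / 3) * math.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
```

For a pixel with mean 100 and modulation amplitude 50, the three phase-shifted readings are 150, 75, 75, and the formula returns exactly the amplitude 50; a uniform (out-of-focus) pixel returns 0.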
61
Fisheye-Based Method for GPS Localization Improvement in Unknown Semi-Obstructed Areas. SENSORS 2017; 17:s17010119. [PMID: 28106746 PMCID: PMC5298692 DOI: 10.3390/s17010119] [Received: 10/30/2016] [Revised: 12/27/2016] [Accepted: 01/04/2017]
Abstract
Precise GNSS (Global Navigation Satellite System) localization is vital for autonomous road vehicles, especially in cluttered or urban environments where occluded satellites prevent accurate positioning. We propose to fuse GPS (Global Positioning System) data with fisheye stereovision to tackle this problem independently of additional data, which may be outdated, unavailable, or poorly correlated with reality. Our stereoscope is sky-facing, with 360° × 180° fisheye cameras that observe the surrounding obstacles. We propose 3D modelling and plane extraction through the following steps: stereoscope self-calibration for robustness to decalibration, stereo matching that considers neighbouring epipolar curves to compute 3D points, and robust plane fitting based on the generated cartography and a Hough transform. We combine these 3D data with raw GPS data to estimate the pseudorange delays of NLOS (Non-Line-Of-Sight) reflected signals: the extracted planes are used to build a visibility mask for NLOS detection, and a simplified 3D canyon model allows the reflection pseudorange delays to be computed. Finally, the GPS position is computed from the corrected pseudoranges. Experiments on real static scenes show that the generated 3D models reach metric accuracy and that the horizontal GPS positioning accuracy improves by more than 50%. The proposed procedure is effective, and the proposed NLOS detection outperforms C/N0-based (carrier-to-noise-density ratio) methods.
62
Rose JC, Kicherer A, Wieland M, Klingbeil L, Töpfer R, Kuhlmann H. Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions. SENSORS 2016; 16:E2136. [PMID: 27983669 PMCID: PMC5191116 DOI: 10.3390/s16122136] [Received: 10/31/2016] [Revised: 12/08/2016] [Accepted: 12/08/2016]
Abstract
In viticulture, phenotypic data are traditionally collected directly in the field, visually and manually, by an experienced person. This approach is time consuming, subjective and prone to human error. In recent years, research has therefore focused strongly on developing automated, non-invasive, sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity, and reduce labor costs. While many 2D image-processing methods have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle, consisting of a camera system, a real-time kinematic (RTK) GPS system for positioning, and hardware for vehicle control, image storage and acquisition, is used to visually capture a whole vine-row canopy with georeferenced RGB images. In the first post-processing step, these images are used in multi-view stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. In the second step, a classification algorithm automatically labels the raw point cloud with the semantic plant components, grape bunches and canopy. In the third step, phenotypic data are derived from the classification results: the number of grape bunches, the number of berries and the berry diameter.
63
Jochem A, Hollaus M, Rutzinger M, Höfle B. Estimation of aboveground biomass in alpine forests: a semi-empirical approach considering canopy transparency derived from airborne LiDAR data. SENSORS 2010; 11:278-95. [PMID: 22346577 PMCID: PMC3274100 DOI: 10.3390/s110100278] [Received: 10/29/2010] [Revised: 12/01/2010] [Accepted: 12/23/2010]
Abstract
In this study, a semi-empirical model originally developed for stem volume estimation is used to estimate the aboveground biomass (AGB) of a spruce-dominated alpine forest. The reference AGB of the available sample plots is calculated from forest inventory data by means of biomass expansion factors. Furthermore, the semi-empirical model is extended with three different canopy transparency parameters derived from airborne LiDAR data. These parameters have not previously been considered for stem volume estimation and are introduced in order to investigate the behavior of the model for AGB estimation. The additional input parameters are based on the assumption that the transparency of vegetation can be measured by determining the penetration of the laser beams through the canopy. They are calculated for every single point within the 3D point cloud in order to account for the varying properties of the vegetation. Exploratory Data Analysis (EDA) is performed to evaluate the influence of the additional LiDAR-derived canopy transparency parameters on AGB estimation. The study is carried out in a 560 km² alpine area in Austria, where reference forest inventory data and LiDAR data are available. The investigations show that introducing the canopy transparency parameters does not change the results significantly in terms of R² (R² = 0.70 vs. R² = 0.71) compared with the semi-empirical model originally developed for stem volume estimation.
64
Automatic roof plane detection and analysis in airborne lidar point clouds for solar potential assessment. SENSORS 2009; 9:5241-62. [PMID: 22346695 PMCID: PMC3274168 DOI: 10.3390/s90705241] [Received: 05/25/2009] [Revised: 06/25/2009] [Accepted: 07/01/2009]
Abstract
A relative height threshold is defined to separate potential roof points from the point cloud, followed by a segmentation of these points into homogeneous areas fulfilling the defined constraints of roof planes. The normal vector of each laser point is an excellent feature for decomposing the point cloud into segments describing planar patches. An object-based error assessment is performed to determine the accuracy of the presented classification, yielding 94.4% completeness and 88.4% correctness. Once all roof planes are detected in the 3D point cloud, a solar potential analysis is performed for each point. Shadowing effects of nearby objects are taken into account by calculating the horizon of each point within the point cloud, and the effects of cloud cover are considered using data from a nearby meteorological station. As a result, the annual sum of the direct and diffuse radiation on each roof plane is derived. The presented method uses the full 3D information for both feature extraction and solar potential analysis, which opens up a number of new applications in fields where natural processes are influenced by incoming solar radiation (e.g., evapotranspiration, distribution of permafrost). The method fully automatically detected a subset of 809 of the 1,071 roof planes for which the arithmetic mean of the annual incoming solar radiation exceeds 700 kWh/m².
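The per-point horizon computation used for shadowing can be sketched as follows: for a given azimuth, the horizon is the maximum elevation angle subtended by any other point of the cloud in that direction, and a sun position below that angle means the point is shadowed. The azimuth tolerance below is an illustrative assumption:

```python
import math

def horizon_angle(p, cloud, azimuth, az_tol=math.radians(10)):
    """Maximum elevation angle (radians) seen from point p toward a
    given azimuth, over all cloud points whose azimuth lies within
    az_tol of that direction. 0.0 means an unobstructed horizon."""
    best = 0.0
    for q in cloud:
        dx, dy, dz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
        d = math.hypot(dx, dy)
        if d == 0:
            continue
        az = math.atan2(dy, dx)
        # Wrap the azimuth difference into [-pi, pi].
        diff = (az - azimuth + math.pi) % (2 * math.pi) - math.pi
        if abs(diff) <= az_tol and dz > 0:
            best = max(best, math.atan2(dz, d))
    return best
```

A point is then shadowed at a given sun position whenever the solar elevation is below the horizon angle for the solar azimuth.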