1
Vallet A, Dupuy S, Verlynde M, Gaetano R. Generating high-resolution land use and land cover maps for the greater Mariño watershed in 2019 with machine learning. Sci Data 2024; 11:915. [PMID: 39179565] [PMCID: PMC11344052] [DOI: 10.1038/s41597-024-03750-x] [Received: 02/02/2024] [Accepted: 08/05/2024] [Indexed: 08/26/2024] Open Access
Abstract
Land Use and Land Cover (LULC) maps are important tools for environmental planning and social-ecological modeling, as they provide critical information for evaluating risks, managing natural resources, and facilitating effective decision-making. This study aimed to generate a very high spatial resolution (0.5 m) and detailed (21 classes) LULC map of the greater Mariño watershed (Peru) in 2019, using the MORINGA processing chain. This new method for LULC mapping consisted of a supervised object-based LULC classification using the random forest algorithm along with multi-sensor satellite imagery from which spectral and textural predictors were derived (a very high spatial resolution Pléiades image and a time series of high spatial resolution Sentinel-2 images). The random forest classifier showed very good performance, and the LULC map was further improved through additional post-processing steps that included cross-checking against external GIS data sources and manual correction by photointerpretation, resulting in a more accurate and reliable map. The final LULC map provides new information for environmental management and monitoring in the greater Mariño watershed. With this study, we contribute to the effort to develop standardized and replicable methodologies for high-resolution, high-accuracy LULC mapping, which is crucial for informed decision-making and conservation strategies.
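The MORINGA chain itself is not reproduced here, but its core supervised step — a random forest classifier trained on per-object spectral and textural predictors — can be sketched with scikit-learn. The feature layout and the three stand-in classes below are illustrative assumptions, not the study's actual data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# toy per-object features (e.g., band means, NDVI statistics, texture measures)
X = rng.normal(size=(300, 8))
y = rng.integers(0, 3, size=300)  # 3 stand-in LULC classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)  # one LULC label per segmented object
```

In an object-based workflow such as this study's, predictions are made per segmented image object rather than per pixel, and post-processing (GIS cross-checks, manual correction) follows.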
Affiliation(s)
- Améline Vallet
- Université Paris-Saclay, CNRS, AgroParisTech, Ecologie Systématique et Evolution, 91190, Gif-sur-Yvette, France.
- Université Paris-Saclay, AgroParisTech, CNRS, Ecole des Ponts ParisTech, Cirad, EHESS, UMR CIRED, 94130, Nogent-sur-Marne, France.
- Stéphane Dupuy
- TETIS, Univ Montpellier, AgroParisTech, CIRAD, CNRS, INRAE, 34398, Montpellier, France.
- Matthieu Verlynde
- Université Paris-Saclay, CNRS, AgroParisTech, Ecologie Systématique et Evolution, 91190, Gif-sur-Yvette, France.
- Université Paris-Saclay, AgroParisTech, CNRS, Ecole des Ponts ParisTech, Cirad, EHESS, UMR CIRED, 94130, Nogent-sur-Marne, France.
- Raffaele Gaetano
- TETIS, Univ Montpellier, AgroParisTech, CIRAD, CNRS, INRAE, 34398, Montpellier, France.
2
Estrada JS, Fuentes A, Reszka P, Auat Cheein F. Machine learning assisted remote forestry health assessment: a comprehensive state of the art review. Front Plant Sci 2023; 14:1139232. [PMID: 37332724] [PMCID: PMC10272373] [DOI: 10.3389/fpls.2023.1139232] [Received: 01/06/2023] [Accepted: 05/08/2023] [Indexed: 06/20/2023]
Abstract
Forests are suffering water stress due to climate change; in some parts of the globe, forests are being exposed to the highest temperatures on record. Machine learning techniques combined with robotic platforms and artificial vision systems have been used to provide remote monitoring of forest health, including moisture content, chlorophyll and nitrogen estimation, forest canopy, and forest degradation, among others. However, artificial intelligence techniques evolve quickly alongside computational resources, and data acquisition and processing change accordingly. This article gathers the latest developments in remote monitoring of forest health, with special emphasis on the most important vegetation parameters (structural and morphological), using machine learning techniques. The analysis presented here covers 108 articles from the last 5 years, and we conclude by showing the newest developments in AI tools that might be used in the near future.
Affiliation(s)
- Juan Sebastián Estrada
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso, Chile
- Andrés Fuentes
- Department of Industrial Engineering, Universidad Técnica Federico Santa María, Valparaíso, Chile
- Pedro Reszka
- Faculty of Engineering and Sciences, Universidad Adolfo Ibáñez, Santiago, Chile
- Fernando Auat Cheein
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso, Chile
3
Valicharla SK, Li X, Greenleaf J, Turcotte R, Hayes C, Park YL. Precision Detection and Assessment of Ash Death and Decline Caused by the Emerald Ash Borer Using Drones and Deep Learning. Plants (Basel) 2023; 12:798. [PMID: 36840146] [PMCID: PMC9964414] [DOI: 10.3390/plants12040798] [Received: 12/01/2022] [Revised: 01/21/2023] [Accepted: 02/04/2023] [Indexed: 06/18/2023]
Abstract
Emerald ash borer (Agrilus planipennis) is an invasive pest that has killed millions of ash trees (Fraxinus spp.) in the USA since its first detection in 2002. Although the current methods for trapping emerald ash borers (e.g., sticky traps and trap trees) and visual ground and aerial surveys are generally effective, they are inefficient for precisely locating and assessing declining and dead ash trees in large or hard-to-access areas. This study was conducted to develop and evaluate a new tool for safe, efficient, and precise detection and assessment of ash decline and death caused by the emerald ash borer, using aerial surveys with unmanned aerial systems (a.k.a. drones) and a deep learning model. Aerial surveys with drones were conducted to obtain 6174 aerial images covering ash decline in deciduous forests in West Virginia and Pennsylvania, USA. The ash trees in each image were manually annotated for training and validating deep learning models. The models were evaluated using the object recognition metrics mean average precision (mAP) and two average precisions (AP50 and AP75). Our comprehensive analysis with instance segmentation models showed that Mask2Former was the most effective model for detecting declining and dead ash trees, with 0.789, 0.617, and 0.542 for AP50, AP75, and mAP, respectively, on the validation dataset. A follow-up in-situ field study conducted in nine locations with various levels of ash decline and death demonstrated that deep learning, along with aerial surveys using drones, could be an innovative tool for rapid, safe, and efficient detection and assessment of ash decline and death in large or hard-to-access areas.
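The evaluation metrics named above (AP50, AP75, mAP) are standard object-recognition quantities independent of any particular detector. A minimal sketch of mask IoU and a simplified all-point average precision might look like the following; COCO-style evaluation additionally applies a precision envelope and averages over many IoU thresholds for mAP:

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean instance masks."""
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(a, b).sum()) / float(union)

def average_precision(scores, is_tp, n_gt):
    """Simplified area under the precision-recall curve.

    scores: confidence of each detection; is_tp: whether the detection
    matched a ground-truth instance at the chosen IoU threshold
    (0.5 for AP50, 0.75 for AP75); n_gt: number of ground-truth instances.
    """
    order = np.argsort(scores)[::-1]          # sort by descending confidence
    tp = np.cumsum(np.asarray(is_tp, dtype=bool)[order])
    fp = np.cumsum(~np.asarray(is_tp, dtype=bool)[order])
    recall = tp / n_gt
    precision = tp / (tp + fp)
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, precision):
        ap += (r - prev_r) * p                # rectangle under the P-R curve
        prev_r = r
    return ap
```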
Affiliation(s)
- Sruthi Keerthi Valicharla
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506, USA
- Xin Li
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506, USA
- Jennifer Greenleaf
- Division of Plant and Soil Sciences, West Virginia University, Morgantown, WV 26506, USA
- Richard Turcotte
- Division of Plant and Soil Sciences, West Virginia University, Morgantown, WV 26506, USA
- USDA Forest Service, Forest Health Protection, Morgantown, WV 26505, USA
- Christopher Hayes
- USDA Forest Service, Forest Health Protection, Morgantown, WV 26505, USA
- Yong-Lak Park
- Division of Plant and Soil Sciences, West Virginia University, Morgantown, WV 26506, USA
4
The Use of UAV-Acquired Multiband Images for Detecting Rockfall-Induced Injuries at Tree Crown Level. Forests 2022. [DOI: 10.3390/f13071039] [Indexed: 02/01/2023]
Abstract
In this paper, we present an identification of rockfall-injured trees based on multiband images obtained by an unmanned aerial vehicle (UAV). A survey with a multispectral camera was performed at three rockfall sites with various tree species (Fagus sylvatica L., Larix decidua Mill., Pinus sylvestris L., Picea abies (L.) Karsten, and Abies alba Mill.) and with different characterizations of rockfalls and rockfall-induced injuries. At one site, rockfall injuries were induced in the same year as the survey; at the second site, the survey took place one year after the initial injuries; and at the third site, six years after the first injuries. At one site, surveys were performed three years in a row. Multiband images were used to extract different vegetation indices (VIs) at the tree crown level, which were further studied to see which VIs can identify the injured trees and how successfully. A total of 14 VIs were considered, including individual multispectral bands (green, red, red edge, and near-infrared), using regression models to differentiate between the injured and uninjured groups for a single year and for three consecutive years. The same model was also used for VI differentiation among the recorded injury groups and the size of the injuries. Identification of injured trees based on VIs was possible at the sites where rockfall injuries had been induced at least one year before the UAV survey, and the trees were still identifiable six years after the initial injuries. At the site where injuries had been induced only four months before the UAV survey, identification of injured trees was not possible. The VIs that explained the largest variability (R2 > 0.3) between injured and uninjured trees were the inverse ratio index (IRVI), green–red vegetation index (GRVI), normalized difference vegetation index (NDVI), normalized ratio index (NRVI), and ratio vegetation index (RVI). RVI was the most successful, explaining 40% of the variance at two sites. R2 values increased only by a few percentage points (up to 10%) when the VIs of injured trees were observed over a period of three years and mostly did not change significantly, thus not indicating whether the vitality of the trees increased or decreased. Differentiation among the injured groups did not show promising results; on the other hand, there was a strong correlation between the VI values (RVI) and the size of the injury relative to the basal area of the trees (the so-called injury index). For both broadleaves and conifers at two sites, R2 reached a value of 0.82. The presented results indicate that UAV-acquired multiband images at the tree crown level can be used for surveying rockfall protection forests in order to monitor their vitality, which is crucial for maintaining the protective effect through time and space.
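Two of the indices named above, RVI and NDVI, have standard definitions from the red and near-infrared bands; a minimal per-pixel sketch is shown below (the crown-averaging helper and band layout are illustrative assumptions, not the paper's exact processing):

```python
import numpy as np

def vegetation_indices(red: np.ndarray, nir: np.ndarray) -> dict:
    """Per-pixel RVI and NDVI from red and near-infrared reflectance."""
    red, nir = red.astype(float), nir.astype(float)
    eps = 1e-12  # avoid division by zero on dark pixels
    return {
        "RVI": nir / (red + eps),                 # ratio vegetation index
        "NDVI": (nir - red) / (nir + red + eps),  # normalized difference
    }

def crown_mean(index: np.ndarray, crown_mask: np.ndarray) -> float:
    """Average an index image over one delineated tree crown."""
    return float(index[crown_mask].mean())
```

Crown-level analysis, as in this study, aggregates such per-pixel indices over each delineated crown polygon before any regression.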
5
Recent Advances in Forest Insect Pests and Diseases Monitoring Using UAV-Based Data: A Systematic Review. Forests 2022. [DOI: 10.3390/f13060911] [Indexed: 01/03/2023]
Abstract
Unmanned aerial vehicles (UAVs) are platforms that have been increasingly used over the last decade to collect data for forest insect pest and disease (FIPD) monitoring. These machines provide flexibility, cost efficiency, and a high temporal and spatial resolution of remotely sensed data. The purpose of this review is to summarize recent contributions and to identify knowledge gaps in UAV remote sensing for FIPD monitoring. A systematic review was performed using the preferred reporting items for systematic reviews and meta-analyses (PRISMA) protocol. We reviewed the full text of 49 studies published between 2015 and 2021. The parameters examined were the taxonomic characteristics, the type of UAV and sensor, data collection and pre-processing, processing and analytical methods, and software used. We found that the number of papers on this topic has increased in recent years, with most studies located in China and Europe. The main FIPDs studied were pine wilt disease (PWD) and bark beetles (BB), using UAV multirotor architectures. Among the sensor types, multispectral and red–green–blue (RGB) bands were preferred for the monitoring tasks. Regarding the analytical methods, random forest (RF) and deep learning (DL) classifiers were the most frequently applied in UAV imagery processing. This paper discusses the advantages and limitations associated with the use of UAVs and the processing methods for FIPDs, and research gaps and challenges are presented.
6
UAV-Based Characterization of Tree-Attributes and Multispectral Indices in an Uneven-Aged Mixed Conifer-Broadleaf Forest. Remote Sens 2022. [DOI: 10.3390/rs14122775] [Indexed: 02/01/2023]
Abstract
Unmanned aerial vehicles (UAVs) have contributed considerably to forest monitoring. However, gaps in knowledge still remain, particularly for natural forests. Species diversity, stand heterogeneity, and the irregular spatial arrangement of trees provide unique opportunities to improve our perspective of forest stands and the ecological processes that occur therein. In this study, we calculated individual tree metrics, including several multispectral indices, in order to discern the spectral reflectance of a natural stand as a pioneer area in Mexican forests. Using data obtained with a DJI 4 UAV and the free software environments OpenDroneMap and QGIS, we calculated tree height, crown area, number of trees, and multispectral indices. Digital photogrammetric procedures, such as ForestTools and the Structure from Motion and Multi-View Stereo algorithms, yielded results that improved stand mapping and the estimation of stand attributes. Automated tree detection and quantification were limited by the presence of overlapping crowns but compensated for by the novel stand density mapping and estimates of crown attributes. Height estimation was in line with expectations (R2 = 0.91, RMSE = 0.36) and is therefore a useful parameter with which to complement forest inventories. The diverse spectral indices applied yielded differential results regarding potential vegetation activity and were found to be complementary to each other. However, seasonal monitoring and careful estimation of photosynthetic activity are recommended in order to determine the seasonality of the plant response. This research contributes to the monitoring of natural forest stands and, coupled with accurate in situ measurements, could refine forest productivity parameters as a strategy for validating results. The metrics are reliable and rapid to obtain and could serve as model inputs in modern inventories. Nevertheless, increased efforts in the configuration of new technologies and algorithms are required, including full consideration of the costs implied by their adoption.
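Individual-tree detection of the kind performed with ForestTools typically finds local maxima in a canopy height model (CHM). A simplified fixed-window sketch is shown below; ForestTools itself uses a variable window whose size grows with tree height, so this is an assumption-laden approximation, not the package's algorithm:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_tree_tops(chm: np.ndarray, window: int = 5, min_height: float = 2.0):
    """Return (row, col) positions of local maxima in a canopy height model.

    A pixel is a tree top if it equals the maximum of its window-sized
    neighborhood and exceeds a minimum height (to reject ground noise).
    """
    local_max = maximum_filter(chm, size=window) == chm
    candidates = local_max & (chm >= min_height)
    return list(zip(*np.nonzero(candidates)))
```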
7
What Is the Effect of Quantitative Inversion of Photosynthetic Pigment Content in Populus euphratica Oliv. Individual Tree Canopy Based on Multispectral UAV Images? Forests 2022. [DOI: 10.3390/f13040542] [Indexed: 12/23/2022]
Abstract
It is highly necessary to apply unmanned aerial vehicle (UAV) remote sensing technology to forest health assessment. To demonstrate the feasibility of quantitative inversion of photosynthetic pigment content (PPC) in Populus euphratica Oliv. individual tree canopies (PeITC) using multispectral UAV images, in this study a Parrot Sequoia+ multispectral UAV system was used to collect images of Populus euphratica (Populus euphratica Oliv.) sample plots in Daliyabuyi Oasis from 2019 to 2020, and the canopy PPCs of five Populus euphratica sample trees per plot were measured in six plots. The Populus euphratica crown regions were extracted with the grey wolf optimizer-Otsu (GWO-OTSU) multithreshold segmentation algorithm from the normalized difference vegetation index (NDVI) images of the sample plots obtained after preprocessing, and the PeITCs were segmented with a multiresolution segmentation algorithm. The mean values of 27 spectral indices in the PeITCs were calculated for each plot, the optimal model was constructed for quantitative estimation of the PPCs in the PeITCs, and the inversion results were compared and verified against GF-6 and ZY1-02D satellite imagery. The results were as follows. (1) The average canopy chlorophyll content (Chl) was 2.007 mg/g and the mean canopy carotenoid content (Car) was 0.703 mg/g. The coefficients of variation (CV) of both were essentially the same, and both showed strong variability. The measured PPCs of the PeITCs in Daliyabuyi Oasis were generally low. The average chlorophyll and carotenoid contents in the PeITCs in June were more than twice those in August, while the mean ratio between them was significantly lower in June than in August. The measured PPCs showed no obvious spatial distribution pattern, which nevertheless supports the rationality of the sample selection in this study. (2) NDVI was the most effective index for highlighting vegetation across all quadrats in the study area. Based on the GWO-OTSU multithreshold segmentation method, the canopy area of Populus euphratica could be quickly and effectively extracted from the quadrat NDVI map. The best segmentation of the PeITCs was obtained with the multiresolution segmentation method at a segmentation scale of 120, a shape index of 0.7, and a compactness index of 0.5. Compared with the manual vectorization method of visual interpretation, the root mean square error (RMSE) and Pearson correlation coefficient (R) of the mean NDVI values in the PeITCs obtained by the two methods were 0.038 and 0.951, respectively. (3) Only 12 of the 27 spectral indices were significantly correlated with Chl and Car at the 0.02 significance level. The characteristics of the calibration and validation sets were essentially consistent with those of the entire set. The classification and regression tree-decision tree (CART-DT) model performed best in estimating the PPCs in the PeITCs: when estimating Car, the calibration coefficient of determination (R2C) was 0.843, the calibration root mean square error (RMSEC) was 0.084, the calibration residual prediction deviation (RPDC) was 2.525, the validation coefficient of determination (R2V) was 0.670, the validation root mean square error (RMSEV) was 0.251, and the validation residual prediction deviation (RPDV) was 1.741. (4) A qualitative comparison of spectral reflectance and NDVI values between GF-6 multispectral imagery and the Parrot Sequoia+ multispectral images over the 172 PeITCs demonstrated the reliability of the Parrot Sequoia+ images. Comparing the relative health of five PeITCs as judged by field visual inspection, measured SPAD values, predicted Chl (Chlpre), the red edge value calculated from ZY1-02D (ZY1-02Dred edge), and the Carotenoid Reflectance Index 2 value calculated from ZY1-02D (ZY1-02DCRI2) further supported the validity of the inversion results to a certain extent. These results indicate that multispectral UAV images can be used for quantitative inversion of the PPC in PeITCs, which could provide an indicator for the construction of a Populus euphratica individual-tree health evaluation indicator system based on UAV remote sensing technology.
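The Otsu criterion underlying the GWO-OTSU step selects thresholds that maximize between-class variance. A single-level exhaustive sketch is shown below; the paper's variant instead searches multiple thresholds with the grey wolf optimizer, so this illustrates only the objective being optimized:

```python
import numpy as np

def otsu_threshold(values: np.ndarray, bins: int = 256) -> float:
    """Single-level Otsu threshold: maximize between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()          # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()        # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0   # class means
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t
```

Multilevel Otsu generalizes this to several thresholds, which makes the exhaustive search combinatorial — hence metaheuristics such as the grey wolf optimizer.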
8
Automated Parts-Based Model for Recognizing Human–Object Interactions from Aerial Imagery with Fully Convolutional Network. Remote Sens 2022. [DOI: 10.3390/rs14061492] [Indexed: 01/27/2023]
Abstract
Advanced aerial images have led to the development of improved human–object interaction (HOI) recognition methods for use in surveillance, security, and public monitoring systems. Despite the ever-increasing rate of research being conducted in the field of HOI, the existing challenges of occlusion, scale variation, fast motion, and illumination variation continue to attract more researchers. In particular, accurate identification of human body parts, the involved objects, and robust features is the key to effective HOI recognition systems. However, identifying different human body parts and extracting their features is a tedious and rather ineffective task. Based on the assumption that only a few body parts are usually involved in a particular interaction, this article proposes a novel parts-based model for recognizing complex human–object interactions in videos and images captured by ground and aerial cameras. Gamma correction and non-local means denoising techniques are used to pre-process the video frames, and Felzenszwalb's algorithm is utilized for image segmentation. After segmentation, twelve human body parts are detected and five of them are shortlisted based on their involvement in the interactions. Four kinds of features are extracted and concatenated into a large feature vector, which is optimized using the t-distributed stochastic neighbor embedding (t-SNE) technique. Finally, the interactions are classified using a fully convolutional network (FCN). The proposed system was validated on ground and aerial videos from the VIRAT Video, YouTube Aerial, and SYSU 3D HOI datasets, achieving average accuracies of 82.55%, 86.63%, and 91.68%, respectively.
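The feature-compression step described above can be sketched with scikit-learn's t-SNE on stand-in feature vectors; the sample count and feature dimensionality here are illustrative assumptions, not the paper's:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# toy stand-in for a large concatenated feature vector: 60 samples, 50-D
features = rng.normal(size=(60, 50))

# project to a compact 2-D representation
tsne = TSNE(n_components=2, perplexity=10, init="pca", random_state=0)
embedded = tsne.fit_transform(features)
```

One design caveat: t-SNE is non-parametric, so it cannot embed unseen samples with a learned mapping; deployed pipelines either fit it on the pooled feature set or substitute a parametric reducer.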
9
A Fast and Robust Algorithm with Reinforcement Learning for Large UAV Cluster Mission Planning. Remote Sens 2022. [DOI: 10.3390/rs14061304] [Indexed: 01/27/2023]
Abstract
Large Unmanned Aerial Vehicle (UAV) clusters, containing hundreds of UAVs, are widely used in the modern world, and mission planning is the core of large UAV cluster collaborative systems. In this paper, we propose a mission planning method that introduces the Simple Attention Model (SAM) into Dynamic Information Reinforcement Learning (DIRL), named DIRL-SAM. To reduce the computational complexity of the original attention model, we derive the SAM with a lightweight interactive model to rapidly extract high-dimensional features of the cluster information. In DIRL, dynamic training conditions are considered to simulate different mission environments. Meanwhile, data expansion in DIRL guarantees the convergence of the model in these dynamic environments, which improves the robustness of the algorithm. Finally, simulation experiment results show that the proposed method can adaptively provide feasible mission planning schemes with second-level solution speed and exhibits excellent generalization performance in large-scale cluster planning problems.
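The building block that SAM simplifies is standard scaled dot-product attention; a minimal NumPy sketch of that underlying mechanism is given below (not of SAM itself, whose lightweight interactive form is specific to the paper):

```python
import numpy as np

def scaled_dot_product_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray):
    """softmax(Q K^T / sqrt(d)) V — standard scaled dot-product attention."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # weighted combination of values
```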