1
Cheng H, Zhu J, Wang S, Yan K, Wang H. A Study on Predicting the Deviation of Jet Trajectory Falling Point under the Influence of Random Wind. Sensors (Basel). 2024;24:3463. PMID: 38894255; PMCID: PMC11174715; DOI: 10.3390/s24113463.
Abstract
Random wind is one of the main external factors affecting the fire-extinguishing accuracy of sprinkler systems, so it needs to be analyzed and studied; however, little practical research has examined its impact on the falling points of sprinkler jets. To address this issue, this paper constructs a new random wind acquisition system and proposes a Random Forest (RF) method for predicting jet trajectory falling points under the influence of random wind, comparing it with the commonly used Support Vector Machine (SVM) prediction model. At a distance of 50 m, the proposed method reduces the prediction error in the x direction from 2.11 m to 1.53 m, the error in the y direction from 0.64 m to 0.60 m, and the total mean absolute error (MAE) from 31.3 to 23.5. The falling points of jet trajectories at different distances under random wind were also predicted, demonstrating the feasibility of the proposed method in practical applications. The experimental results show that the proposed system and method can effectively mitigate the influence of random wind on the falling points of a jet trajectory. In summary, the image acquisition system and error prediction method proposed in this article have many potential applications in fire extinguishing.
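The error comparison reported above can be illustrated with a toy mean-absolute-error calculation; the falling-point values below are invented for illustration only, not taken from the paper's dataset:

```python
def mean_absolute_error(y_true, y_pred):
    """MAE between measured and predicted falling-point coordinates."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical x-coordinates (metres) of falling points at a 50 m spray distance.
actual_x = [50.2, 49.8, 51.0, 48.9]
svm_x    = [52.1, 47.5, 53.2, 50.8]  # stand-in SVM predictions
rf_x     = [51.3, 49.0, 52.1, 49.8]  # stand-in Random Forest predictions

mae_svm = mean_absolute_error(actual_x, svm_x)
mae_rf  = mean_absolute_error(actual_x, rf_x)
print(f"SVM MAE_x = {mae_svm:.2f} m, RF MAE_x = {mae_rf:.2f} m")
```

In the paper this comparison is done per axis (x and y) and as a total MAE; the sketch shows only the x direction.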
Affiliation(s)
- Hengyu Cheng: School of Mechanical and Electrical Engineering, China University of Mining and Technology, Xuzhou 221006, China
- Jinsong Zhu: School of Mechanical and Electrical Engineering, China University of Mining and Technology, Xuzhou 221006, China; China Academy of Safety Science and Technology, Beijing 100012, China
- Sining Wang: School of Mechanical and Electrical Engineering, China University of Mining and Technology, Xuzhou 221006, China
- Ke Yan: School of Mechanical and Electrical Engineering, China University of Mining and Technology, Xuzhou 221006, China
- Haojie Wang: School of Mechanical and Electrical Engineering, China University of Mining and Technology, Xuzhou 221006, China
2
Saleh A, Zulkifley MA, Harun HH, Gaudreault F, Davison I, Spraggon M. Forest fire surveillance systems: A review of deep learning methods. Heliyon. 2024;10:e23127. PMID: 38163175; PMCID: PMC10754902; DOI: 10.1016/j.heliyon.2023.e23127.
Abstract
This review critically examines existing state-of-the-art forest fire detection systems based on deep learning methods. Forest fire incidents bring significant negative impacts to the economy, environment, and society, and one crucial mitigation measure is an effective detection system that can automatically notify the relevant parties of a forest fire as early as possible. This review examines in detail 37 research articles, published between January 2018 and 2023, that implemented deep learning (DL) models for forest fire detection. An in-depth analysis identifies the quantity and type of data used, including image and video datasets, as well as data augmentation methods and deep model architectures. The paper is structured into five subsections, each focusing on a specific application of DL to forest fire detection: 1) classification, 2) detection, 3) detection and classification, 4) segmentation, and 5) segmentation and classification. To compare model performance, the methods were evaluated using comprehensive metrics such as accuracy, mean average precision (mAP), F1-score, and mean pixel accuracy (MPA). The findings show that DL models for forest fire surveillance have yielded favourable outcomes, with the majority of studies achieving accuracy rates exceeding 90%. To further enhance these models, future research can explore optimal fine-tuning of hyper-parameters, integrate various satellite data, implement generative data augmentation techniques, and refine DL model architectures. In conclusion, this paper highlights the potential of deep learning methods to enhance forest fire detection, which is crucial for forest fire management and mitigation.
Affiliation(s)
- Azlan Saleh: Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia (UKM), 43600 UKM, Bangi, Selangor, Malaysia
- Mohd Asyraf Zulkifley: Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia (UKM), 43600 UKM, Bangi, Selangor, Malaysia
- Hazimah Haspi Harun: Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia (UKM), 43600 UKM, Bangi, Selangor, Malaysia
- Francis Gaudreault: Rabdan Academy, 65, Al Inshirah, Al Sa'adah, Abu Dhabi, 22401, PO Box: 114646, United Arab Emirates
- Ian Davison: Rabdan Academy, 65, Al Inshirah, Al Sa'adah, Abu Dhabi, 22401, PO Box: 114646, United Arab Emirates
- Martin Spraggon: Rabdan Academy, 65, Al Inshirah, Al Sa'adah, Abu Dhabi, 22401, PO Box: 114646, United Arab Emirates
3
Mardani K, Vretos N, Daras P. Transformer-Based Fire Detection in Videos. Sensors (Basel). 2023;23:3035. PMID: 36991746; PMCID: PMC10051572; DOI: 10.3390/s23063035.
Abstract
Fire detection in videos is a valuable feature of surveillance systems, as it can prevent hazardous situations. Effectively confronting this significant task requires a model that is both accurate and fast. In this work, a transformer-based network for detecting fire in videos is proposed. It is an encoder-decoder architecture that consumes the frame under examination to compute attention scores, which denote the parts of the input frame most relevant to the expected fire detection output. As the experimental results show, the model can recognize fire in video frames and specify its exact location in the image plane in real time, in the form of a segmentation mask. The proposed methodology has been trained and evaluated on two computer vision tasks: full-frame classification (fire/no fire in a frame) and fire localization. Compared with state-of-the-art models, the proposed method achieves outstanding results in both tasks, with 97% accuracy, a processing speed of 20.4 fps, and a 0.02 false positive rate for fire localization, and 97% F-score and recall in the full-frame classification task.
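The attention scores mentioned above can be sketched with a minimal scaled dot-product computation; this is a generic transformer building block, not the paper's exact architecture, and the toy query/key vectors are invented:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_scores(query, keys):
    """Scaled dot-product attention weights between one query vector and a
    set of key vectors, the core operation of transformer encoder-decoder
    blocks: higher weight means a more relevant input region."""
    d = len(query)
    logits = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    return softmax(logits)

# Toy example: three image-patch keys; the query attends most strongly to
# the key it is best aligned with (e.g. a fire-like patch).
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
weights = attention_scores(query, keys)
```

The weights sum to one, and the first key (most similar to the query) receives the largest weight.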
4
Mukhiddinov M, Abdusalomov AB, Cho J. A Wildfire Smoke Detection System Using Unmanned Aerial Vehicle Images Based on the Optimized YOLOv5. Sensors (Basel). 2022;22:9384. PMID: 36502081; PMCID: PMC9740073; DOI: 10.3390/s22239384.
Abstract
Wildfire is one of the most significant dangers and most serious natural catastrophes, endangering forest resources, animal life, and the human economy. Recent years have witnessed a rise in wildfire incidents, driven mainly by persistent human interference with the natural environment and global warming. Early detection of fire ignition from initial smoke can help firefighters react to blazes before they become difficult to handle. Previous deep-learning approaches for wildfire smoke detection have been hampered by small or untrustworthy datasets, making it challenging to extrapolate their performance to real-world scenarios. In this study, we propose an early wildfire smoke detection system using unmanned aerial vehicle (UAV) images based on an improved YOLOv5. First, we curated a 6000-image wildfire dataset from existing UAV images. Second, we optimized anchor box clustering using the K-means++ technique to reduce classification errors. Third, we improved the network's backbone with a spatial pyramid pooling fast-plus layer to concentrate on small wildfire smoke regions, and applied a bidirectional feature pyramid network to obtain more accessible and faster multi-scale feature fusion. Finally, network pruning and transfer learning were implemented to refine the network architecture, improve detection speed, and correctly identify small-scale wildfire smoke areas. The experimental results show that the proposed method achieved an average precision of 73.6% and outperformed other one- and two-stage object detectors on a custom image dataset.
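The K-means++ anchor clustering step can be sketched as below. For brevity this sketch clusters (width, height) pairs with Euclidean distance, whereas YOLO-style pipelines often use a 1 - IoU distance; the box sizes are invented examples:

```python
import random

def kmeans_pp_init(points, k, rng):
    """K-means++ seeding: first centre chosen uniformly, each further centre
    sampled with probability proportional to its squared distance from the
    nearest existing centre."""
    centres = [rng.choice(points)]
    while len(centres) < k:
        d2 = [min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centres)
              for p in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centres.append(p)
                break
    return centres

def kmeans(points, k, iters=10, seed=0):
    """Lloyd iterations on top of K-means++ seeding; returns k anchor sizes."""
    rng = random.Random(seed)
    centres = kmeans_pp_init(points, k, rng)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centres[j][0]) ** 2 + (p[1] - centres[j][1]) ** 2)
            clusters[i].append(p)
        centres = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centres[i] for i, c in enumerate(clusters)]
    return centres

# Hypothetical smoke bounding-box (width, height) pairs in pixels.
boxes = [(10, 12), (12, 11), (48, 50), (52, 47), (100, 95), (98, 102)]
anchors = kmeans(boxes, k=3)
```

The resulting anchors act as prior box shapes for the detector, so boxes of typical smoke sizes need less regression.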
5
FFireNet: Deep Learning Based Forest Fire Classification and Detection in Smart Cities. Symmetry (Basel). 2022. DOI: 10.3390/sym14102155.
Abstract
Forests are a vital natural resource that directly influences the ecosystem. Recently, forest fires have become a serious issue due to natural and man-made climate effects. For early forest fire detection, an artificial intelligence-based method for smart city applications is presented to help avoid major disasters. This research reviews vision-based forest fire localization and classification methods and makes use of a forest fire detection dataset posing the classification problem of discriminating fire from no-fire images. It proposes a deep learning method named FFireNet that leverages the pre-trained convolutional base of the MobileNetV2 model and adds fully connected layers to solve the new task, namely forest fire recognition, by classifying images based on extracted features. The performance of the proposed solution in classifying fire and no-fire images was evaluated using different metrics and compared with other CNN models. The results show that the proposed approach achieves 98.42% accuracy, a 1.58% error rate, 99.47% recall, and 97.42% precision. These outcomes are promising for the forest fire classification problem on this forest fire detection dataset.
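The four metrics quoted above all derive from a binary confusion matrix. A minimal sketch (the counts below are illustrative, not the paper's confusion matrix):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, error rate, recall, and precision from a binary
    fire / no-fire confusion matrix."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    return {
        "accuracy": accuracy,
        "error_rate": 1.0 - accuracy,          # error rate is 1 - accuracy
        "recall": tp / (tp + fn),              # fraction of real fires caught
        "precision": tp / (tp + fp),           # fraction of alarms that are real
    }

# Illustrative counts only.
m = classification_metrics(tp=188, fp=5, fn=1, tn=186)
print({k: round(v, 4) for k, v in m.items()})
```

High recall with slightly lower precision, as FFireNet reports, means very few missed fires at the cost of a few false alarms.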
6
Yar H, Hussain T, Agarwal M, Khan ZA, Gupta SK, Baik SW. Optimized Dual Fire Attention Network and Medium-Scale Fire Classification Benchmark. IEEE Transactions on Image Processing. 2022;31:6331-6343. PMID: 36129860; DOI: 10.1109/tip.2022.3207006.
Abstract
Vision-based fire detection systems have been significantly improved by deep models; however, higher numbers of false alarms and a slow inference speed still hinder their practical applicability in real-world scenarios. For a balanced trade-off between computational cost and accuracy, we introduce dual fire attention network (DFAN) to achieve effective yet efficient fire detection. The first attention mechanism highlights the most important channels from the features of an existing backbone model, yielding significantly emphasized feature maps. Then, a modified spatial attention mechanism is employed to capture spatial details and enhance the discrimination potential of fire and non-fire objects. We further optimize the DFAN for real-world applications by discarding a significant number of extra parameters using a meta-heuristic approach, which yields around 50% higher FPS values. Finally, we contribute a medium-scale challenging fire classification dataset by considering extremely diverse, highly similar fire/non-fire images and imbalanced classes, among many other complexities. The proposed dataset advances the traditional fire detection datasets by considering multiple classes to answer the following question: what is on fire? We perform experiments on four widely used fire detection datasets, and the DFAN provides the best results compared to 21 state-of-the-art methods. Consequently, our research provides a baseline for fire detection over edge devices with higher accuracy and better FPS values, and the proposed dataset extension provides indoor fire classes and a greater number of outdoor fire classes; these contributions can be used in significant future research. Our codes and dataset will be publicly available at https://github.com/tanveer-hussain/DFAN.
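The first attention mechanism described above reweights backbone channels by importance. A simplified squeeze-and-excitation-style stand-in (not the DFAN's exact block; the feature maps are toy values) looks like this:

```python
import math

def channel_attention(feature_maps):
    """Simplified channel attention: global-average-pool each channel,
    turn the pooled activations into softmax weights over channels, and
    rescale every channel's map by its weight, emphasizing informative
    channels (e.g. fire-coloured responses)."""
    pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
              for ch in feature_maps]
    m = max(pooled)
    exps = [math.exp(p - m) for p in pooled]
    s = sum(exps)
    weights = [e / s for e in exps]
    return [[[v * w for v in row] for row in ch]
            for ch, w in zip(feature_maps, weights)]

# Two toy 2x2 channels: the second has stronger activations and therefore
# receives the larger attention weight.
fmap = [
    [[1.0, 1.0], [1.0, 1.0]],   # low-activation channel
    [[3.0, 3.0], [3.0, 3.0]],   # high-activation channel
]
out = channel_attention(fmap)
```

In the DFAN this step is followed by a modified spatial attention mechanism; the sketch covers only the channel-weighting idea.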
7
Assessing the Impact of the Loss Function and Encoder Architecture for Fire Aerial Images Segmentation Using Deeplabv3+. Remote Sensing. 2022. DOI: 10.3390/rs14092023.
Abstract
Wildfire early detection and prevention have become a priority. Detection using Internet of Things (IoT) sensors, however, is expensive in practical situations. The majority of present wildfire detection research focuses on segmentation and detection, with machine learning models deploying appropriate image processing techniques to enhance detection outputs. As a result, the time necessary for data processing is drastically reduced, since the time required rises exponentially with the size of the captured pictures. In a real-time fire emergency, it is critical to detect the fire pixels and warn the firefighters as soon as possible so the problem can be handled more quickly. The present study addresses this challenge by implementing an on-site detection system that detects fire pixels in real time. The proposed approach uses Deeplabv3+, a deep learning architecture that is an enhanced version of an existing model, fine-tuned through various experimental trials that improved its performance. Two public aerial datasets, the Corsican dataset and FLAME, and one private dataset, Firefront Gestosa, were used for experimental trials with different backbones. In conclusion, the selected model, trained with ResNet-50 and Dice loss, attains a global accuracy of 98.70%, a mean accuracy of 89.54%, a mean IoU of 86.38%, a weighted IoU of 97.51%, and a mean BF score of 93.86%.
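The Dice loss that performed best in these trials compares the overlap between predicted fire probabilities and the ground-truth mask. A minimal soft-Dice sketch over flattened per-pixel values (the example masks are invented):

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened per-pixel fire probabilities:
    1 - (2 * intersection / (|pred| + |target|)). 0 means perfect overlap,
    values near 1 mean no overlap; eps guards against division by zero."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

perfect = dice_loss([1.0, 0.0, 1.0], [1.0, 0.0, 1.0])  # exact overlap -> 0.0
worst = dice_loss([1.0, 1.0, 0.0], [0.0, 0.0, 1.0])    # no overlap -> near 1.0
```

Because Dice normalizes by the sizes of both regions, it copes better with the heavy class imbalance of small fire regions than plain pixel-wise cross-entropy.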
8
An Q, Chen X, Zhang J, Shi R, Yang Y, Huang W. A Robust Fire Detection Model via Convolution Neural Networks for Intelligent Robot Vision Sensing. Sensors (Basel). 2022;22:2929. PMID: 35458913; PMCID: PMC9025736; DOI: 10.3390/s22082929.
Abstract
Accurate fire identification can help to control fires. Traditional fire detection methods are mainly based on temperature or smoke detectors, which are susceptible to damage or interference from the outside environment. Meanwhile, most current deep learning methods are less discriminative with respect to dynamic fire and have lower detection precision when a fire changes. We therefore propose a dynamic convolution YOLOv5 fire detection method using video sequences. Our method first uses the K-means++ algorithm to optimize anchor box clustering, which significantly reduces the classification error rate. Then, dynamic convolution is introduced into the convolution layers of YOLOv5. Finally, the network heads of YOLOv5's neck and head are pruned to improve detection speed. Experimental results verify that the proposed dynamic convolution YOLOv5 method outperforms the baseline YOLOv5 in recall, precision, and F1-score. In particular, compared with three other deep learning methods, the precision of the proposed algorithm is improved by 13.7%, 10.8%, and 6.1%, respectively, while the F1-score is improved by 15.8%, 12%, and 3.8%, respectively. The method is applicable not only to short-range indoor fire identification but also to long-range outdoor fire detection.
Affiliation(s)
- Qing An: School of Artificial Intelligence, Wuchang University of Technology, Wuhan 430223, China
- Xijiang Chen: School of Safety Science and Emergency Management, Wuhan University of Technology, Wuhan 430079, China
- Junqian Zhang: School of Safety Science and Emergency Management, Wuhan University of Technology, Wuhan 430079, China
- Ruizhe Shi: School of Safety Science and Emergency Management, Wuhan University of Technology, Wuhan 430079, China
- Yuanjun Yang: School of Artificial Intelligence, Wuchang University of Technology, Wuhan 430223, China
- Wei Huang: School of Artificial Intelligence, Wuchang University of Technology, Wuhan 430223, China
9
Wang S, Han Y, Chen J, He X, Zhang Z, Liu X, Zhang K. Weed Density Extraction Based on Few-Shot Learning Through UAV Remote Sensing RGB and Multispectral Images in Ecological Irrigation Area. Frontiers in Plant Science. 2022;12:735230. PMID: 35399196; PMCID: PMC8987725; DOI: 10.3389/fpls.2021.735230.
Abstract
With the development of ecological irrigation areas, a higher level of detection and control for weeds is currently required. This article proposes an improved transfer neural network, based on bionic optimization, to detect weed density and crop growth, using the pre-trained AlexNet network for transfer learning. Because the learning rate of the newly added layers is difficult to tune by hand, the weight and bias learning rates of the new fully connected layer are set with particle swarm optimization (PSO) and the bat algorithm (BA) to find the optimal combination on a small dataset. Red-green-blue (RGB) and 5-band multispectral images of three kinds of weeds and three kinds of crops were collected and prepared for the convolutional neural network (CNN) through cutting, rotating, and other operations, and six classes were learned. At the same time, a self-constructed CNN based on model-agnostic meta-learning (MAML) is proposed to enable efficient learning from small samples, and its accuracy is verified on the test set. The neural networks optimized by the two bionic algorithms are compared with the MAML-based CNN and with a histogram of oriented gradients + support vector machine (HOG + SVM) baseline. The experimental results show that tuning the learning rate with BA works best: its accuracy reaches 99.39% on RGB images, 99.53% on multispectral images, and 96.02% for a 6-shot small sample. The purpose of the proposed classification is to calculate the growth of various plants (including weeds and crops) in farmland. Plant densities can be accurately calculated through the density calculation formula and algorithm proposed in this article, which provides a basis for variable-rate herbicide application across different farmlands and ultimately promotes a healthy cycle in the ecological irrigation district.
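The learning-rate search described above can be sketched with a minimal particle swarm optimization. The objective function, bounds, and PSO constants below are invented stand-ins (a real run would evaluate validation loss after training with each candidate learning rate):

```python
import math
import random

def pso_search(objective, bounds, n_particles=8, iters=30, seed=1):
    """Minimal 1-D particle swarm optimization: each particle tracks its
    personal best; all particles are also pulled toward the global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest, pbest_val = xs[:], [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # inertia + cognitive pull (own best) + social pull (swarm best)
            vs[i] = 0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i]) + 1.5 * r2 * (gbest - xs[i])
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            val = objective(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = xs[i], val
                if val < gbest_val:
                    gbest, gbest_val = xs[i], val
    return gbest, gbest_val

# Stand-in "validation loss" with a minimum near lr = 1e-3, purely illustrative.
loss = lambda lr: (math.log10(lr) + 3.0) ** 2
best_lr, best_loss = pso_search(loss, bounds=(1e-5, 1e-1))
```

The bat algorithm the paper prefers follows the same population-based pattern with loudness and pulse-rate parameters added.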
Affiliation(s)
- Shubo Wang: College of Engineering, China Agricultural University, Beijing, China; Centre for Chemicals Application Technology, College of Science, China Agricultural University, Beijing, China
- Yu Han: State Key Laboratory of Hydroscience and Engineering, Tsinghua University, Beijing, China; College of Water Resources and Civil Engineering, China Agricultural University, Beijing, China
- Jian Chen: College of Engineering, China Agricultural University, Beijing, China
- Xiongkui He: Centre for Chemicals Application Technology, College of Science, China Agricultural University, Beijing, China
- Zichao Zhang: College of Engineering, China Agricultural University, Beijing, China; Key Laboratory of Urban Land Resources Monitoring and Simulation, Ministry of Natural Resources, Shenzhen, China
- Xuzan Liu: College of Engineering, China Agricultural University, Beijing, China
- Kai Zhang: College of Engineering, China Agricultural University, Beijing, China
10
Ghali R, Akhloufi MA, Mseddi WS. Deep Learning and Transformer Approaches for UAV-Based Wildfire Detection and Segmentation. Sensors (Basel). 2022;22:1977. PMID: 35271126; PMCID: PMC8914964; DOI: 10.3390/s22051977.
Abstract
Wildfires are a worldwide natural disaster causing significant economic damage and loss of life. Experts predict that wildfires will increase in the coming years, mainly due to climate change. Early detection and prediction of fire spread can help reduce affected areas and improve firefighting. Numerous systems have been developed to detect fire; recently, unmanned aerial vehicles have been employed to tackle this problem thanks to their high flexibility, low cost, and ability to cover wide areas day or night. However, they are still limited by challenging problems such as small fire size, background complexity, and image degradation. To deal with these limitations, we adapted and optimized deep learning methods to detect wildfire at an early stage. A novel deep ensemble learning method, combining EfficientNet-B5 and DenseNet-201 models, is proposed to identify and classify wildfire in aerial images. In addition, two vision transformers (TransUNet and TransFire) and a deep convolutional model (EfficientSeg) were employed to segment wildfire regions and determine the precise fire areas. The results are promising and show the efficiency of deep learning and vision transformers for wildfire classification and segmentation. The proposed classification model obtained an accuracy of 85.12%, outperforming many state-of-the-art works and proving its ability to classify wildfire even in small fire areas. The best semantic segmentation models achieved F1-scores of 99.9% (TransUNet) and 99.82% (TransFire), superior to recently published models. More specifically, we demonstrated the ability of these models to extract the finer details of wildfire from aerial images and to overcome current model limitations such as background complexity and small wildfire areas.
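A common way to combine two backbones such as EfficientNet-B5 and DenseNet-201 is to fuse their class-probability outputs. The sketch below shows simple weighted averaging; the fusion weight and probability values are assumptions for illustration, not the paper's exact combination rule:

```python
def ensemble_predict(probs_a, probs_b, weight_a=0.5):
    """Fuse class probabilities from two models by weighted averaging and
    return (argmax class index, fused distribution)."""
    fused = [weight_a * a + (1 - weight_a) * b for a, b in zip(probs_a, probs_b)]
    return max(range(len(fused)), key=fused.__getitem__), fused

# Two models disagree on a borderline aerial image; averaging settles it.
# Model A: [fire, no-fire] = [0.45, 0.55]; Model B: [0.70, 0.30].
cls, fused = ensemble_predict([0.45, 0.55], [0.70, 0.30])
# fused is roughly [0.575, 0.425] -> class 0 ("fire")
```

Averaging probabilities tends to reduce variance: a confident correct model can outvote a hesitant wrong one, which is one motivation for ensembling on hard small-fire images.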
Affiliation(s)
- Rafik Ghali: Perception, Robotics and Intelligent Machines Research Group (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9, Canada
- Moulay A. Akhloufi: Perception, Robotics and Intelligent Machines Research Group (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1A 3E9, Canada
- Wided Souidene Mseddi: SERCOM Laboratory, Ecole Polytechnique de Tunisie, Université de Carthage, BP 743, La Marsa 2078, Tunisia
11
Real-Time Detection of Full-Scale Forest Fire Smoke Based on Deep Convolution Neural Network. Remote Sensing. 2022. DOI: 10.3390/rs14030536.
Abstract
To reduce the losses caused by forest fires, it is very important to detect forest fire smoke in real time so that early and timely warnings can be issued. Machine vision and image processing technology are widely used for detecting forest fire smoke, but most traditional image detection algorithms require manual extraction of image features and thus are not real-time. This paper evaluates the effectiveness of deep convolutional neural networks (CNNs) for real-time forest fire smoke detection. The target detection algorithms evaluated include EfficientDet (Scalable and Efficient Object Detection), Faster R-CNN (Towards Real-Time Object Detection with Region Proposal Networks), YOLOv3 (You Only Look Once v3), and SSD (Single Shot MultiBox Detector). YOLOv3 achieved a detection speed of up to 27 FPS, making it a real-time smoke detector. Comparing these algorithms with existing forest fire smoke detection algorithms shows that deep convolutional neural networks yield better smoke detection accuracy. In particular, EfficientDet achieves an average detection accuracy of 95.7%, the best real-time forest fire smoke detection among the evaluated algorithms.
12
Multi UAV Coverage Path Planning in Urban Environments. Sensors (Basel). 2021;21:7365. PMID: 34770670; PMCID: PMC8611648; DOI: 10.3390/s21217365.
Abstract
Coverage path planning (CPP) is a field of study whose objective is to find a path that covers every point of a certain area of interest. Recently, Unmanned Aerial Vehicles (UAVs) have become widely used in applications such as surveillance, terrain coverage, mapping, natural disaster tracking, and transport. The aim of this paper is to design efficient, collision-avoidance-capable coverage path planning algorithms for single- or multi-UAV systems in cluttered urban environments. Two algorithms are developed and explored: one plans paths to cover a target zone delimited by a given perimeter, with predefined coverage height and bandwidth, using a boustrophedon flight pattern; the other follows a set of predefined viewpoints, calculating a smooth path that ensures the UAVs pass over the objectives. Both algorithms support a scalable number of UAVs, which fly in a triangular, deformable leader-follower formation with the leader at the front. With an even number of UAVs there is no physical leader at the front of the formation, and a virtual leader is used to plan the followers' paths. The presented algorithms also have collision avoidance capabilities, powered by the Fast Marching Square algorithm. They are tested in various simulated urban and cluttered environments, where they prove capable of providing safe and smooth paths for the UAV formation.
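The boustrophedon pattern mentioned above sweeps the target zone in alternating parallel passes. A minimal waypoint-generation sketch for a rectangular zone (the zone dimensions and bandwidth are illustrative; the paper's version handles arbitrary perimeters and formations):

```python
def boustrophedon_path(width, height, bandwidth):
    """Back-and-forth sweep waypoints over a width x height rectangle:
    parallel passes separated by the sensor-footprint bandwidth, with the
    flight direction alternating on every pass (like mowing a lawn)."""
    waypoints = []
    y = bandwidth / 2.0          # centre the first pass inside the footprint
    left_to_right = True
    while y < height:
        row = [(0.0, y), (width, y)]
        if not left_to_right:
            row.reverse()        # alternate direction each pass
        waypoints.extend(row)
        y += bandwidth
        left_to_right = not left_to_right
    return waypoints

path = boustrophedon_path(width=100.0, height=40.0, bandwidth=10.0)
# 4 passes at y = 5, 15, 25, 35 -> 8 waypoints
```

Spacing passes one bandwidth apart guarantees full coverage with minimal overlap; a smaller bandwidth trades flight time for redundancy.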
13
Assessing the Impact of the Loss Function, Architecture and Image Type for Deep Learning-Based Wildfire Segmentation. Applied Sciences (Basel). 2021. DOI: 10.3390/app11157046.
Abstract
Wildfires stand as one of the most relevant natural disasters worldwide, all the more so due to climate change and its impact at various societal and environmental levels. A significant amount of research has addressed this issue, deploying a wide variety of technologies and following a multi-disciplinary approach. Notably, computer vision has played a fundamental role: it can extract and combine information from several imaging modalities for fire detection, characterization, and wildfire spread forecasting. In recent years, Deep Learning (DL)-based fire segmentation has shown very promising results. However, it is currently unclear whether the architecture of a model, its loss function, or the image type employed (visible, infrared, or fused) has the greatest impact on segmentation results. In the present work, we evaluate different combinations of state-of-the-art (SOTA) DL architectures, loss functions, and image types to identify the parameters most relevant to improving segmentation, benchmark them to identify the top performers, and compare them to traditional fire segmentation techniques. Finally, we evaluate whether adding attention modules to the best-performing architecture can further improve segmentation. To the best of our knowledge, this is the first work to evaluate the impact of architecture, loss function, and image type on the performance of DL-based wildfire segmentation models.
14
Deep Learning in Forestry Using UAV-Acquired RGB Data: A Practical Review. Remote Sensing. 2021. DOI: 10.3390/rs13142837.
Abstract
Forests are the planet's main CO2-filtering agent as well as important economic, environmental and social assets. Climate change is exerting increased stress on them, creating a need for improved research methodologies to study their health, composition and evolution. Traditionally, information about forests has been collected through expensive, labour-intensive field inventories, but in recent years unmanned aerial vehicles (UAVs) have become very popular as a simple and inexpensive way to gather high-resolution data over large forested areas. Alongside this trend, deep learning (DL) has been gaining attention in forestry as a way to incorporate the knowledge of forestry experts into automatic software pipelines tackling problems such as tree detection and tree health/species classification. Among the many sensors that UAVs can carry, RGB cameras are fast and cost-effective and allow straightforward data interpretation, which has produced a large increase in the amount of UAV-acquired RGB data available for forest studies. In this review, we focus on studies that use DL and UAV-gathered RGB images to solve practical forestry research problems. We summarize the existing studies, provide a detailed analysis of their strengths paired with a critical assessment of common methodological problems, and include other information, such as available public data and code resources, that we believe can be useful for researchers who want to start working in this area. We structure our discussion around three main families of forestry problems: (1) individual tree detection, (2) tree species classification, and (3) forest anomaly detection (forest fires and insect infestation).
Collapse
|
15
|
A Collaborative Region Detection and Grading Framework for Forest Fire Smoke Using Weakly Supervised Fine Segmentation and Lightweight Faster-RCNN. FORESTS 2021. [DOI: 10.3390/f12060768] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Forest fires are serious disasters that affect countries all over the world. With the progress of image processing, numerous image-based fire surveillance systems have been installed in forests. The rapid and accurate detection and grading of fire smoke can provide useful information that helps humans quickly control fires and reduce forest losses. Currently, convolutional neural networks (CNN) have yielded excellent performance in image recognition. Previous studies mostly focused on CNN-based image classification for fire detection. However, research on CNN-based region detection and grading of fire remains scarce, because locating and segmenting fire regions from image-level annotations, rather than inaccessible pixel-level labels, is a challenging task. This paper presents a novel collaborative region detection and grading framework for fire smoke using weakly supervised fine segmentation and a lightweight Faster R-CNN. The multi-task framework can simultaneously implement early-stage alarms, region detection, classification, and grading of fire smoke. To provide accurate segmentation at the image level, we propose a weakly supervised fine segmentation method consisting of a segmentation network and a decision network. We aggregate image-level information, instead of expensive pixel-level labels, from all training images into the segmentation network, which simultaneously locates and segments fire smoke regions. To train the segmentation network using only image-level annotations, we propose a two-stage weakly supervised learning strategy, in which a novel weakly supervised loss roughly detects the fire smoke region and a new region-refining segmentation algorithm then identifies this region accurately. A decision network incorporating a residual spatial attention module is utilized to predict the category of forest fire smoke.
To reduce the complexity of the Faster R-CNN, we introduce a knowledge distillation technique to compress the model's structure. To grade forest fire smoke, we use a 3-input/1-output fuzzy system to evaluate the severity level. We evaluated the proposed approach on a purpose-built fire smoke dataset comprising five scenes with different fire smoke levels. The proposed method exhibited competitive performance compared to state-of-the-art methods.
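The knowledge distillation step mentioned above can be sketched with a generic Hinton-style distillation loss. This is a minimal numpy illustration of the general technique, not the paper's actual compression of Faster R-CNN; the temperature `T` and blend weight `alpha` are illustrative choices.

```python
import numpy as np

def softmax(z, T=1.0):
    # temperature-scaled softmax; higher T softens the distribution
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL divergence to the teacher's
    softened outputs (generic knowledge distillation, illustrative only)."""
    p_t = softmax(teacher_logits, T)   # teacher's soft targets
    p_s = softmax(student_logits, T)   # student's soft predictions
    # KL(teacher || student), scaled by T^2 as is conventional
    kd = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1) * T * T
    # standard cross-entropy against the hard labels
    p_hard = softmax(student_logits, 1.0)
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * kd + (1 - alpha) * ce))
```

A student that matches the teacher incurs only the hard-label term; a student that contradicts both the teacher and the label is penalized by both terms.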
Collapse
|
16
|
Low-Altitude Remote Sensing Opium Poppy Image Detection Based on Modified YOLOv3. REMOTE SENSING 2021. [DOI: 10.3390/rs13112130] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The opium poppy is a special medicinal plant whose cultivation requires legal approval and strict supervision; unauthorized cultivation is forbidden. Low-altitude inspection for illegal poppy cultivation by unmanned aerial vehicle offers the advantages of time savings and high efficiency. However, the large volume of collected inspection images must be manually screened and analyzed, a process that not only consumes considerable manpower and material resources but is also prone to omissions and errors. To address this problem, this paper proposes an inspection method that adds a larger-scale detection box to the original YOLOv3 algorithm to improve small-target detection accuracy. Specifically, ResNeXt group convolution was utilized to reduce the number of model parameters, and an ASPP module was added before the small-scale detection box to improve the model's ability to extract local features and capture contextual information. Test results on a self-created dataset showed that the mAP (mean average precision) of the Global Multiscale-YOLOv3 model was 0.44% higher than that of the YOLOv3 (MobileNet) algorithm, while the total number of parameters of the proposed model was only 13.75% of that of the original YOLOv3 model and 35.04% of that of the lightweight YOLOv3 (MobileNet) network. Overall, the Global Multiscale-YOLOv3 model achieves higher recognition accuracy with fewer parameters, providing technical support for rapid and accurate image processing in low-altitude remote sensing poppy inspection.
Collapse
|
17
|
Unmanned Aerial Vehicles for Wildland Fires: Sensing, Perception, Cooperation and Assistance. DRONES 2021. [DOI: 10.3390/drones5010015] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Wildfires represent a significant natural risk causing economic losses, human deaths and environmental damage. In recent years, the world has seen an increase in fire intensity and frequency. Research has been conducted towards the development of dedicated solutions for wildland fire assistance and fighting. Systems have been proposed for the remote detection and tracking of fires; these systems have shown improvements in efficient data collection and fire characterization within small-scale environments. However, wildland fires cover large areas, making some of the proposed ground-based systems unsuitable for optimal coverage. To tackle this limitation, unmanned aerial vehicles (UAVs) and unmanned aerial systems (UASs) have been proposed. UAVs have proven useful due to their maneuverability, allowing for the implementation of remote sensing, allocation strategies and task planning. They can provide a low-cost alternative for the prevention, detection and real-time support of firefighting. In this paper, previous work on the use of UAVs for wildland fires is reviewed, considering onboard sensor instruments, fire perception algorithms and coordination strategies. In addition, recent frameworks proposing the use of both aerial vehicles and unmanned ground vehicles (UGVs) for a more efficient, larger-scale wildland firefighting strategy are presented.
Collapse
|
18
|
Wildfire-Detection Method Using DenseNet and CycleGAN Data Augmentation-Based Remote Camera Imagery. REMOTE SENSING 2020. [DOI: 10.3390/rs12223715] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
To minimize the damage caused by wildfires, a deep learning-based wildfire-detection technology that extracts features and patterns from surveillance camera images was developed. However, many studies on deep learning-based wildfire-image classification have highlighted the imbalance between wildfire-image data and forest-image data, which degrades model performance. In this study, wildfire images were generated using a cycle-consistent generative adversarial network (CycleGAN) to eliminate this data imbalance. In addition, a densely-connected-convolutional-networks-based (DenseNet-based) framework was proposed and its performance was compared with pre-trained models. When trained on a set augmented with GAN-generated images, the proposed DenseNet-based model achieved the best results among the compared models, with an accuracy of 98.27% and an F1 score of 98.16 on the test dataset. Finally, the trained model was applied to high-quality drone images of wildfires. The experimental results showed that the proposed framework achieves high wildfire-detection accuracy.
Collapse
|
19
|
Barmpoutis P, Papaioannou P, Dimitropoulos K, Grammalidis N. A Review on Early Forest Fire Detection Systems Using Optical Remote Sensing. SENSORS 2020; 20:s20226442. [PMID: 33187292 PMCID: PMC7697165 DOI: 10.3390/s20226442] [Citation(s) in RCA: 48] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Revised: 11/07/2020] [Accepted: 11/10/2020] [Indexed: 11/16/2022]
Abstract
The environmental challenges the world faces nowadays have never been greater or more complex. Global areas covered by forests and urban woodlands are threatened by natural disasters that have increased dramatically during the last decades, in terms of both frequency and magnitude. Large-scale forest fires are one of the most harmful natural hazards affecting climate change and life around the world. Thus, to minimize their impacts on people and nature, the adoption of well-planned, closely coordinated and effective prevention, early warning, and response approaches is necessary. This paper presents an overview of the optical remote sensing technologies used in early fire warning systems and provides an extensive survey of both flame and smoke detection algorithms employed by each technology. Three types of systems are identified, namely terrestrial, airborne, and spaceborne-based systems, and various models aiming to detect fire occurrences with high accuracy in challenging environments are studied. Finally, the strengths and weaknesses of fire detection systems based on optical remote sensing are discussed, aiming to contribute to future research projects for the development of early warning fire systems.
Collapse
|
20
|
Early Fire Detection Based on Aerial 360-Degree Sensors, Deep Convolution Neural Networks and Exploitation of Fire Dynamic Textures. REMOTE SENSING 2020. [DOI: 10.3390/rs12193177] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The environmental challenges the world faces have never been greater or more complex. Global areas that are covered by forests and urban woodlands are threatened by large-scale forest fires that have increased dramatically during the last decades in Europe and worldwide, in terms of both frequency and magnitude. To this end, rapid advances in remote sensing systems, including ground-based, unmanned aerial vehicle-based and satellite-based systems, have been adopted for effective forest fire surveillance. In this paper, the recently introduced 360-degree sensor cameras are proposed for early fire detection, making it possible to obtain unlimited field of view captures, which reduce the number of required sensors and the computational cost and make the system more efficient. More specifically, once optical 360-degree raw data are obtained using an RGB 360-degree camera mounted on an unmanned aerial vehicle, we convert the equirectangular projection format images to stereographic images. Then, two DeepLab V3+ networks are applied to perform flame and smoke segmentation, respectively. Subsequently, a novel post-validation adaptive method is proposed that exploits the environmental appearance of each test image and reduces the false-positive rate. For evaluating the performance of the proposed system, a dataset, namely the “Fire detection 360-degree dataset”, consisting of 150 unlimited field of view images that contain both synthetic and real fire, was created. Experimental results demonstrate the great potential of the proposed system, which achieved an F-score fire detection rate of 94.6%. This indicates that the proposed method could significantly contribute to early fire detection.
Collapse
|
21
|
Jeong M, Park M, Nam J, Ko BC. Light-Weight Student LSTM for Real-Time Wildfire Smoke Detection. SENSORS 2020; 20:s20195508. [PMID: 32993003 PMCID: PMC7582303 DOI: 10.3390/s20195508] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Revised: 09/21/2020] [Accepted: 09/24/2020] [Indexed: 11/28/2022]
Abstract
As the need for wildfire detection increases, research combining low-cost cameras and deep learning for wildfire smoke detection is growing. Camera-based wildfire smoke detection is inexpensive, allows for quick detection, and lets smoke be confirmed by the naked eye. However, because such a surveillance system must rely only on visual characteristics, it often erroneously detects fog and clouds as smoke. In this study, a combination of a You-Only-Look-Once detector and a long short-term memory (LSTM) classifier is applied to improve wildfire smoke detection performance by reflecting the spatial and temporal characteristics of wildfire smoke. Because the heavy LSTM model must be lightened for real-time smoke detection, we propose a new method for applying the teacher–student framework to a deep LSTM. Through this method, a shallow student LSTM is designed that reduces the number of layers and cells constituting the LSTM model while maintaining the original deep LSTM's performance. As the experimental results indicate, our proposed method achieves up to an 8.4-fold decrease in the number of parameters and a faster processing time than the teacher LSTM, while maintaining a detection performance similar to that of the deep LSTM, compared against several state-of-the-art methods on a wildfire benchmark dataset.
Collapse
|
22
|
Yang Z, Bu L, Wang T, Yuan P, Jineng O. Indoor Video Flame Detection Based on Lightweight Convolutional Neural Network. PATTERN RECOGNITION AND IMAGE ANALYSIS 2020. [DOI: 10.1134/s1054661820030293] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
23
|
Li S, Yan Q, Liu P. An Efficient Fire Detection Method Based on Multiscale Feature Extraction, Implicit Deep Supervision and Channel Attention Mechanism. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; PP:8467-8475. [PMID: 32813654 DOI: 10.1109/tip.2020.3016431] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Recent progress in vision-based fire detection is driven by convolutional neural networks. However, existing methods fail to achieve a good tradeoff among accuracy, model size, and speed. In this paper, we propose an accurate fire detection method that achieves a better balance of these aspects. Specifically, a multiscale feature extraction mechanism is employed to capture richer spatial details, which enhances the ability to discriminate fire from fire-like objects. Then, an implicit deep supervision mechanism is utilized to enhance the interaction among information flows through dense skip connections. Finally, a channel attention mechanism is employed to selectively emphasize the contributions of different feature maps. Experimental results demonstrate that our method achieves 95.3% accuracy, outperforming the suboptimal method by 2.5%. Moreover, our method is 3.76% faster on the GPU, and its model is 63.64% smaller, than the suboptimal method.
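The channel attention idea above can be illustrated with a squeeze-and-excitation-style sketch in numpy. This is a generic illustration of channel reweighting, not the paper's exact module; the weight shapes and reduction ratio are assumptions.

```python
import numpy as np

def channel_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation-style channel attention (generic sketch):
    global-average-pool each channel, pass the descriptor through a
    two-layer bottleneck, and rescale the channels by the learned gates."""
    # feature_maps: (C, H, W); w1: (C//r, C); w2: (C, C//r)
    squeeze = feature_maps.mean(axis=(1, 2))         # (C,) channel descriptor
    hidden = np.maximum(0.0, w1 @ squeeze)           # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gates in (0, 1)
    return feature_maps * gates[:, None, None]       # reweight each channel
```

Because the gates lie strictly in (0, 1), each channel's activations are attenuated in proportion to its estimated usefulness, which is the selective-emphasis effect described in the abstract.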
Collapse
|
24
|
Abstract
Unmanned Aerial Vehicle (UAV) imagery has been gaining a lot of momentum lately. Indeed, information gathered from a bird's-eye point of view is particularly relevant for numerous applications, from agriculture to surveillance services. We herewith study visual saliency to verify whether there are tangible differences between this imagery and more conventional contents. We first describe typical and UAV contents based on their human saliency maps in a high-dimensional space encompassing saliency map statistics, distribution characteristics, and other specifically designed features. Thanks to a large amount of eye-tracking data collected on UAV videos, we highlight the differences between typical and UAV videos, but more importantly within UAV sequences. We then design a process to extract new visual attention biases in UAV imagery, leading to the definition of a new dictionary of visual biases. Finally, we conduct a benchmark on two different datasets, whose results confirm that the 20 defined biases are relevant as a low-complexity saliency prediction system.
Collapse
|
25
|
Computationally Efficient Wildfire Detection Method Using a Deep Convolutional Network Pruned via Fourier Analysis. SENSORS 2020; 20:s20102891. [PMID: 32443739 PMCID: PMC7287837 DOI: 10.3390/s20102891] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Revised: 05/14/2020] [Accepted: 05/15/2020] [Indexed: 11/23/2022]
Abstract
In this paper, we propose a deep convolutional neural network for camera-based wildfire detection. We train the neural network via transfer learning and use a window-based analysis strategy to increase the fire detection rate. To achieve computational efficiency, we calculate the frequency response of the kernels in the convolutional and dense layers and eliminate those filters with low-energy impulse responses. Moreover, to reduce storage requirements for edge devices, we compare the convolutional kernels in the Fourier domain and discard similar filters using the cosine similarity measure in the frequency domain. We test the performance of the neural network on a variety of wildfire video clips; the pruned system performs as well as the regular network in daytime wildfire detection, and it also works well on several nighttime wildfire video clips.
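The two pruning criteria described above, dropping low-energy filters and de-duplicating similar ones in the Fourier domain, can be sketched as follows. This is a minimal numpy illustration under assumed thresholds (`energy_frac`, `sim_thresh` are illustrative, not values from the paper).

```python
import numpy as np

def prune_kernels(kernels, energy_frac=0.1, sim_thresh=0.95):
    """Sketch of Fourier-domain pruning: (1) drop kernels whose
    frequency-response energy is low; (2) greedily drop near-duplicates
    by cosine similarity of their magnitude spectra."""
    # kernels: (N, k, k) convolutional filters
    spectra = np.abs(np.fft.fft2(kernels))            # magnitude responses
    energy = (spectra ** 2).sum(axis=(1, 2))
    keep = np.where(energy >= energy_frac * energy.max())[0]
    # normalize flattened spectra for cosine similarity
    flat = spectra.reshape(len(kernels), -1)
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12)
    kept = []
    for i in keep:
        if all(float(flat[i] @ flat[j]) < sim_thresh for j in kept):
            kept.append(int(i))
    return kept
```

With one informative filter, an exact duplicate of it, and a near-zero filter, the duplicate is removed by the similarity test and the near-zero filter by the energy test, leaving only the first.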
Collapse
|
26
|
Abstract
Unmanned Aerial Systems, hereafter referred to as UAS, are of great use in hazard events such as wildfires due to their ability to provide high-resolution video imagery over areas deemed too dangerous for manned aircraft and ground crews. This aerial perspective allows for the identification of ground-based hazards such as spot fires and fire lines, and for communicating this information to firefighting crews. Current technology relies on visual interpretation of UAS imagery, with little to no computer-assisted automatic detection. With the help of big labeled data and the significant increase in computing power, deep learning has seen great success in detecting objects with fixed patterns, such as people and vehicles. However, little has been done for objects with amorphous and irregular shapes, such as spot fires. Additional challenges arise when data are collected via UAS as high-resolution aerial images or videos; a viable solution must provide reasonable accuracy with low delays. In this paper, we examined 4K (3840 × 2160) videos collected by UAS from a controlled burn and created a set of labeled video sets to be shared for public use. We introduce a coarse-to-fine framework to auto-detect wildfires that are sparse, small, and irregularly shaped. The coarse detector adaptively selects the sub-regions that are likely to contain the objects of interest, while the fine detector scrutinizes only the details of those sub-regions rather than the entire 4K frame. The proposed two-phase learning greatly reduces the time overhead and is capable of maintaining high accuracy. Compared against the real-time one-stage object detection backbone YOLOv3, the proposed method improved the mean average precision (mAP) from 0.29 to 0.67, with an average inference speed of 7.44 frames per second. Limitations and future work are discussed with regard to the design and the experimental results.
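The coarse-to-fine control flow described above reduces to a simple gating pattern: a cheap scorer decides which tiles of the frame deserve the expensive detector. The sketch below is illustrative; the function names, threshold, and tile representation are assumptions, not the paper's API.

```python
def coarse_to_fine(frame_tiles, coarse_score, fine_detect, score_thresh=0.5):
    """Gate an expensive fine detector behind a cheap coarse scorer.
    frame_tiles: iterable of tiles cut from the 4K frame.
    coarse_score: tile -> float, a fast likelihood that the tile holds fire.
    fine_detect: tile -> list of detections, run only on promising tiles."""
    detections = []
    for idx, tile in enumerate(frame_tiles):
        if coarse_score(tile) >= score_thresh:   # cheap first pass
            # expensive second pass only on the selected sub-regions
            detections.extend((idx, box) for box in fine_detect(tile))
    return detections
```

Because the fine detector sees only the gated tiles, the per-frame cost scales with the number of promising sub-regions rather than the full 4K area, which is the source of the reported speedup.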
Collapse
|
27
|
Xu Z, Wang Q, Li D, Hu M, Yao N, Zhai G. Estimating Departure Time Using Thermal Camera and Heat Traces Tracking Technique. SENSORS 2020; 20:s20030782. [PMID: 32023963 PMCID: PMC7038398 DOI: 10.3390/s20030782] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Revised: 01/24/2020] [Accepted: 01/24/2020] [Indexed: 12/18/2022]
Abstract
Advances in science and technology are playing an increasingly important role in solving difficult cases. Thermal cameras can help the police crack difficult cases by capturing heat traces on the ground left by perpetrators, which cannot be spotted by the naked eye. The purpose of this study is therefore to establish a thermalfoot model, using a thermal imaging system, to estimate departure time. To this end, we use a thermal camera to acquire the thermal sequence left on the floor and convert it into a heat signal via an image processing algorithm. We establish the thermalfoot model based on the observation that, following Newton's Law of Cooling, the residual temperature decreases exponentially with elapsed time since departure. The correlation coefficients of the 107 thermalfoot models derived from the corresponding 107 heat signals are essentially all above 0.99. In a validation experiment, a residual analysis shows that the residuals between estimated and ground-truth departure times fall almost entirely within −150 s to +150 s. The accuracy of the thermalfoot model in estimating the departure time at the one-third, one-half, two-thirds, three-fourths, four-fifths, and five-sixths capture time points is 71.96%, 50.47%, 42.06%, 31.78%, 21.70%, and 11.21%, respectively. Comparison experiments with two subjective evaluation methods (subjective 1: directly estimating the departure time from the obtained local curves; subjective 2: using auxiliary means such as a ruler to estimate the departure time from the obtained local curves) further demonstrate the effectiveness of the thermalfoot model for inversely detecting the departure time.
Experimental results also demonstrate that the thermalfoot model performs well at recovering the departure time within a short window after someone leaves, whereas its accuracy drops to only about 15% when a long time has elapsed since departure. The influence of outliers, ROI (Region of Interest) selection, ROI size, different capture time points and environmental temperature on the model's performance can be explored in future work. Overall, the thermalfoot model can help the police solve crimes to some extent, which in turn brings more guarantees for people's health, social security, and stability.
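The Newton's-Law-of-Cooling model underlying the thermalfoot approach, T(t) = T_env + (T0 − T_env)·e^(−kt), can be fitted and inverted as below. This is a minimal numpy sketch of the general exponential-decay idea, not the paper's estimator; the least-squares log-linearization is an assumed fitting choice.

```python
import numpy as np

def fit_cooling(times, temps, t_env):
    """Fit T(t) = t_env + (T0 - t_env) * exp(-k t) by linear least squares
    on the log of the residual temperature (Newton's Law of Cooling)."""
    y = np.log(np.asarray(temps, float) - t_env)   # log residual temperature
    A = np.vstack([np.ones_like(y), -np.asarray(times, float)]).T
    (log_dT0, k), *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.exp(log_dT0) + t_env), float(k)   # (T0, cooling constant)

def departure_time(temp_now, T0, k, t_env):
    """Invert the fitted model: elapsed time since the heat trace was left."""
    return float(np.log((T0 - t_env) / (temp_now - t_env)) / k)
```

On noise-free synthetic data the fit recovers the true parameters exactly, and the inversion returns the elapsed time; in practice sensor noise limits this, consistent with the accuracy decay the abstract reports for long windows.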
Collapse
Affiliation(s)
- Ziyi Xu
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai 200241, China; (Z.X.); (Q.W.)
- School of Statistics, East China Normal University, Shanghai 200241, China
| | - Quchao Wang
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai 200241, China; (Z.X.); (Q.W.)
- School of Mathematical Sciences, East China Normal University, Shanghai 200241, China
| | - Duo Li
- Hangzhou HIKVISION Digital Technology Co., Ltd., Hangzhou 310051, China;
- Institute of Image Communication and Information Processing, Shanghai Jiao Tong University, Shanghai 200240, China;
| | - Menghan Hu
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai 200241, China; (Z.X.); (Q.W.)
- Key Laboratory of Artificial Intelligence, Ministry of Education, Shanghai 200240, China
- Correspondence: ; Tel.: +86-021-54345196
| | - Nan Yao
- Shanghai Jianglai Data Technology Co., Ltd, Shanghai 200241, China;
| | - Guangtao Zhai
- Institute of Image Communication and Information Processing, Shanghai Jiao Tong University, Shanghai 200240, China;
| |
Collapse
|
28
|
Real-Time Detection of Ground Objects Based on Unmanned Aerial Vehicle Remote Sensing with Deep Learning: Application in Excavator Detection for Pipeline Safety. REMOTE SENSING 2020. [DOI: 10.3390/rs12010182] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
Abstract
Unmanned aerial vehicle (UAV) remote sensing and deep learning provide a practical approach to object detection. However, most current approaches for processing UAV remote-sensing data cannot carry out object detection in real time for emergencies such as firefighting. This study proposes a new approach integrating UAV remote sensing and deep learning for the real-time detection of ground objects. Excavators, which commonly threaten pipeline safety, are selected as the target object. A widely used deep-learning algorithm, You Only Look Once V3, is first used to train the excavator detection model on a workstation; the model is then deployed on an embedded board carried by a UAV. The recall rate of the trained excavator detection model is 99.4%, demonstrating very high accuracy. A UAV-based excavator detection system (UAV-ED) is then constructed for operational application. UAV-ED is composed of a UAV Control Module, a UAV Module, and a Warning Module. A UAV experiment with different scenarios was conducted to evaluate the performance of UAV-ED. The whole process, from the UAV observing an excavator to the Warning Module (350 km away from the testing area) receiving the detection results, lasted only about 1.15 s. Thus, the UAV-ED system performs well and would benefit the management of pipeline safety.
Collapse
|
29
|
Tam WC, Fu EY, Peacock R, Reneke P, Wang J, Li J, Cleary T. Generating Synthetic Sensor Data to Facilitate Machine Learning Paradigm for Prediction of Building Fire Hazard. FIRE TECHNOLOGY 2020; 0:10.1007/s10694-020-01022-9. [PMID: 34429561 PMCID: PMC8381752 DOI: 10.1007/s10694-020-01022-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2020] [Accepted: 07/13/2020] [Indexed: 06/13/2023]
Abstract
Using the zone fire model CFAST as the simulation engine, time series data for building sensors, such as heat detectors, smoke detectors, and other targets at arbitrary locations in multi-room compartments with different geometric configurations, can be obtained. An automated process for creating input files and summarizing model results, CData, is being developed as a companion to CFAST. An example case demonstrates the use of CData to generate synthetic data for a wide range of fire scenarios. Three machine learning algorithms, support vector machine (SVM), decision tree (DT), and random forest (RF), are used to develop classification models that predict the location of a fire based on temperature data within a compartment. Results show that DT and RF perform excellently at predicting fire location, achieving model accuracies between 93% and 96%. For SVM, model performance is sensitive to the size of the training data. An additional study shows that results obtained from DT and RF can be used to examine the importance of each input feature. This paper contributes a learning-by-synthesis approach that facilitates the use of machine learning to enhance situational awareness for firefighting in buildings.
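The learning-by-synthesis idea, simulate sensor traces, then learn a fire-location classifier from them, can be illustrated with a toy stand-in. The generator below is not CFAST (it is a crude exponential-rise model), and the classifier is a trivial hottest-room baseline rather than the paper's DT/RF; all names and constants are illustrative.

```python
import numpy as np

def synth_scenario(fire_room, n_rooms=3, n_steps=60, rng=None):
    """Toy stand-in for zone-model output (illustrative, not CFAST):
    temperature rises fastest in the fire room, attenuated elsewhere."""
    rng = rng if rng is not None else np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, n_steps)
    temps = np.empty((n_rooms, n_steps))
    for r in range(n_rooms):
        gain = 300.0 if r == fire_room else 60.0   # rise above 20 °C ambient
        temps[r] = 20.0 + gain * (1 - np.exp(-3 * t)) + rng.normal(0, 2, n_steps)
    return temps

def predict_fire_room(temps):
    # simplest possible classifier: the hottest compartment on average wins
    return int(np.argmax(temps.mean(axis=1)))
```

A real pipeline would replace `predict_fire_room` with a DT/RF trained on many such synthetic scenarios; the point of the sketch is only the synthesize-then-classify loop.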
Collapse
Affiliation(s)
- Wai Cheong Tam
- National Institute of Standards and Technology, Gaithersburg, MD, USA
| | - Eugene Yujun Fu
- Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
| | - Richard Peacock
- National Institute of Standards and Technology, Gaithersburg, MD, USA
| | - Paul Reneke
- National Institute of Standards and Technology, Gaithersburg, MD, USA
| | - Jun Wang
- Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
| | - Jiajia Li
- Department of Industrial Design, Guangdong University of Technology, China
| | - Thomas Cleary
- National Institute of Standards and Technology, Gaithersburg, MD, USA
| |
Collapse
|
30
|
Post-Disaster Building Database Updating Using Automated Deep Learning: An Integration of Pre-Disaster OpenStreetMap and Multi-Temporal Satellite Data. REMOTE SENSING 2019. [DOI: 10.3390/rs11202427] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
First responders and recovery planners need accurate and quickly derived information about the status of buildings, as well as newly built ones, both to help victims and to make decisions about reconstruction after a disaster. Deep learning and, in particular, convolutional neural network (CNN)-based approaches have recently become state-of-the-art methods to extract information from remote sensing images, in particular for image-based structural damage assessment. However, they are predominantly based on manually extracted training samples. In the present study, we use pre-disaster OpenStreetMap building data to automatically generate training samples to train the proposed deep learning approach after the co-registration of the map and the satellite images. The proposed deep learning framework is based on the U-net design with residual connections, which has been shown to be an effective method to increase the efficiency of CNN-based models. The ResUnet is followed by a Conditional Random Field (CRF) implementation to further refine the results. Experimental analysis was carried out on selected very high resolution (VHR) satellite images representing various scenarios after the 2013 Super Typhoon Haiyan in both the damage and the recovery phases in Tacloban, the Philippines. The results show the robustness of the proposed ResUnet-CRF framework in updating the building map after a disaster for both damage and recovery situations, producing an overall F1-score of 84.2%.
Collapse
|
31
|
False Positive Decremented Research for Fire and Smoke Detection in Surveillance Camera using Spatial and Temporal Features Based on Deep Learning. ELECTRONICS 2019. [DOI: 10.3390/electronics8101167] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Fire must be extinguished early, as it leads to economic losses and the loss of precious lives. Vision-based methods face many algorithmic difficulties due to the atypical nature of fire flames and smoke. In this study, we introduce a novel smoke detection algorithm that reduces false positives using spatial and temporal features based on deep learning, applied to factory-installed surveillance cameras. First, we calculate the global frame similarity and mean square error (MSE) to detect the movement of flames and smoke in the input camera feeds. Second, we extract flame and smoke candidate areas using a deep learning algorithm (Faster Region-based Convolutional Network (R-CNN)). Third, the final flame and smoke areas are decided by local spatial and temporal information: frame difference, color, similarity, wavelet transform, coefficient of variation, and MSE. The proposed algorithm thus combines global and local frame features, which represent object information well, to reduce false positives. Experimental results show that false positive detections were reduced by about 99.9% while maintaining smoke and fire detection performance, confirming that the proposed method suppresses false detections effectively.
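The first stage above, a global frame-difference MSE as a cheap motion cue, can be sketched directly. This is a minimal numpy illustration; the threshold value is an assumption, and the paper additionally uses frame similarity, color, and wavelet features.

```python
import numpy as np

def frame_mse(prev_frame, cur_frame):
    """Global mean squared error between consecutive grayscale frames,
    used as a cheap cue for moving flame/smoke candidates."""
    diff = cur_frame.astype(float) - prev_frame.astype(float)
    return float(np.mean(diff ** 2))

def is_moving(prev_frame, cur_frame, thresh=5.0):
    # flag frames whose global change exceeds a tuned threshold
    return frame_mse(prev_frame, cur_frame) > thresh
```

Frames that pass this gate would then be handed to the Faster R-CNN stage for candidate-region extraction.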
|
32
|
A Review on IoT Deep Learning UAV Systems for Autonomous Obstacle Detection and Collision Avoidance. REMOTE SENSING 2019. [DOI: 10.3390/rs11182144] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Advances in Unmanned Aerial Vehicles (UAVs), also known as drones, offer unprecedented opportunities to boost a wide array of large-scale Internet of Things (IoT) applications. Nevertheless, UAV platforms still face important limitations, mainly related to autonomy and weight, that impact their remote sensing capabilities when capturing and processing the data required for autonomous and robust real-time obstacle detection and avoidance systems. In this regard, Deep Learning (DL) techniques have arisen as a promising alternative for improving real-time obstacle detection and collision avoidance in highly autonomous UAVs. This article reviews the most recent developments in DL Unmanned Aerial Systems (UASs) and provides a detailed explanation of the main DL techniques. Moreover, the latest DL-UAV communication architectures are studied and their most common hardware is analyzed. Furthermore, this article enumerates the most relevant open challenges for current DL-UAV solutions, allowing future researchers to define a roadmap for devising a new generation of affordable autonomous DL-UAV IoT solutions.
|
33
|
A Deep Learning Approach on Building Detection from Unmanned Aerial Vehicle-Based Images in Riverbank Monitoring. SENSORS 2018; 18:s18113921. [PMID: 30441771 PMCID: PMC6264059 DOI: 10.3390/s18113921] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/01/2018] [Revised: 11/07/2018] [Accepted: 11/12/2018] [Indexed: 12/03/2022]
Abstract
Buildings along riverbanks are likely to be affected by rising water levels, so the acquisition of accurate building information is important not only for riverbank environmental protection but also for dealing with emergencies such as flooding. UAV-based photographs are flexible and cloud-free compared to satellite images and can provide very high-resolution imagery down to the centimeter level; however, quickly and accurately detecting and extracting buildings from UAV images remains challenging because such images usually contain many details and distortions. In this paper, a deep learning (DL)-based approach is proposed for extracting building information more accurately, in which the SegNet architecture performs semantic segmentation after being trained on a completely labeled UAV image dataset covering multi-dimensional urban settlement appearances along a riverbank area in Chongqing. The experimental results show excellent performance in detecting buildings at untrained locations, with an average overall accuracy of more than 90%. To verify the generality and advantage of the proposed method, the procedure was further evaluated by training and testing on two additional open standard datasets containing a variety of building patterns and styles; the final overall accuracies of building extraction exceeded 93% and 95%, respectively.
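The overall accuracy reported above is, for a semantic segmentation map, simply the fraction of pixels whose predicted class matches the reference label. A minimal sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def overall_accuracy(pred, truth):
    """Pixel-wise overall accuracy of a segmentation result: the fraction
    of pixels whose predicted class equals the reference class."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    if pred.shape != truth.shape:
        raise ValueError("prediction and reference must have the same shape")
    return float(np.mean(pred == truth))
```

Note that for class-imbalanced scenes (buildings covering a small fraction of pixels) overall accuracy can look high even for poor building masks, which is why per-class metrics are often reported alongside it.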
|
34
|
Low-Altitude Remote Sensing Based on Convolutional Neural Network for Weed Classification in Ecological Irrigation Area. IFAC-PAPERSONLINE 2018. [DOI: 10.1016/j.ifacol.2018.08.180] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|