1
Catargiu C, Cleju N, Ciocoiu IB. A Comparative Performance Evaluation of YOLO-Type Detectors on a New Open Fire and Smoke Dataset. Sensors (Basel) 2024; 24:5597. [PMID: 39275508] [PMCID: PMC11398105] [DOI: 10.3390/s24175597] [Received: 07/26/2024] [Revised: 08/22/2024] [Accepted: 08/28/2024] [Indexed: 09/16/2024]
Abstract
The paper introduces FireAndSmoke, a new open dataset comprising over 22,000 images and 93,000 distinct instances compiled from 1200 YouTube videos and public Internet resources. The scenes include separate and combined fire and smoke scenarios, as well as a curated set of difficult cases representing real-life circumstances in which specific image patches may be erroneously detected as fire or smoke. The dataset was constructed from both static pictures and video sequences, covering day/night, indoor/outdoor, urban/industrial/forest, low/high-resolution, and single/multiple-instance cases. A rigorous selection, preprocessing, and labeling procedure was applied, adhering to the findability, accessibility, interoperability, and reusability (FAIR) specifications described in the literature. The YOLO family of object detectors was compared in terms of class-wise Precision, Recall, Mean Average Precision (mAP), and speed. Experimental results indicate the recently introduced YOLOv10 model as the top performer, with 89% accuracy and an mAP@50 above 91%.
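The mAP@50 figure reported above scores a prediction as correct when its Intersection-over-Union (IoU) with an unmatched ground-truth box reaches 0.5. As a generic illustration of that matching rule (not the authors' evaluation code), a minimal greedy matcher might look like:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall_at_50(preds, gts):
    """Greedy one-to-one matching at the IoU >= 0.5 threshold behind mAP@50.
    preds: boxes sorted by descending confidence; gts: ground-truth boxes."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, 0.5
        for i, g in enumerate(gts):
            v = iou(p, g)
            if i not in matched and v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    # Unmatched predictions are false positives; unmatched GTs false negatives.
    return tp / len(preds), tp / len(gts)
```

Full mAP@50 additionally sweeps the confidence threshold and averages precision over recall levels per class; the snippet stops at a single operating point.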
Affiliation(s)
- Constantin Catargiu
- Faculty of Electronics, Telecommunications and Information Technology, Gheorghe Asachi Technical University of Iasi, Bd. Carol I 11A, 700506 Iasi, Romania
- Nicolae Cleju
- Faculty of Electronics, Telecommunications and Information Technology, Gheorghe Asachi Technical University of Iasi, Bd. Carol I 11A, 700506 Iasi, Romania
- Iulian B Ciocoiu
- Faculty of Electronics, Telecommunications and Information Technology, Gheorghe Asachi Technical University of Iasi, Bd. Carol I 11A, 700506 Iasi, Romania
2
Buriboev AS, Rakhmanov K, Soqiyev T, Choi AJ. Improving Fire Detection Accuracy through Enhanced Convolutional Neural Networks and Contour Techniques. Sensors (Basel) 2024; 24:5184. [PMID: 39204881] [PMCID: PMC11360108] [DOI: 10.3390/s24165184] [Received: 06/20/2024] [Revised: 07/30/2024] [Accepted: 08/09/2024] [Indexed: 09/04/2024]
Abstract
In this study, a novel method combining contour analysis with a deep CNN is applied to fire detection. The method relies on two main algorithms: one detects the color properties of fire, and the other analyzes its shape through contour detection. To overcome the disadvantages of previous methods, we generated a new labeled dataset consisting of small fire instances and complex scenarios. We enriched the dataset by selecting regions of interest (ROIs) covering small fires and complex environment traits, extracted through color characteristics and contour analysis, to better train the model on these more intricate features. Experimental results showed that our improved CNN model outperformed other networks: accuracy, precision, recall, and F1 score were 99.4%, 99.3%, 99.4%, and 99.5%, respectively, an improvement over the previous CNN model in every metric. Our approach also beats many other state-of-the-art methods: dilated CNNs (98.1% accuracy), Faster R-CNN (97.8% accuracy), and ResNet (94.3%). These results suggest that the approach can benefit a variety of safety and security applications, from home and business to industrial and outdoor settings.
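The abstract does not spell out the color algorithm; as a hedged sketch, the classic RGB flame heuristic used in much of this literature (red channel dominant, green above blue, red above a brightness floor — the threshold below is an illustrative choice, not the paper's value) can be written as:

```python
def is_fire_pixel(r, g, b, r_min=190):
    """Classic RGB rule for flame-coloured pixels: red dominates, green
    exceeds blue, and red sits above a brightness floor. r_min is an
    illustrative threshold, not a value taken from the paper."""
    return r > r_min and r >= g > b

def fire_mask(image):
    """Binary mask of candidate fire pixels for an image given as a list of
    rows of (r, g, b) tuples; contours would then be traced on this mask."""
    return [[1 if is_fire_pixel(*px) else 0 for px in row] for row in image]
```

In a real pipeline the mask would feed a contour-extraction step (e.g. border following) whose shapes are passed to the CNN; that stage is omitted here.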
Affiliation(s)
- Abror Shavkatovich Buriboev
- School of Computing, Department of AI-Software, Gachon University, Seongnam-si 13306, Republic of Korea
- Department of Infocommunication Engineering, Tashkent University of Information Technologies, Tashkent 100084, Uzbekistan
- Khoshim Rakhmanov
- Department of Digital and Educational Technologies, Samarkand Branch of Tashkent University of Information Technologies, Samarkand 140100, Uzbekistan
- Temur Soqiyev
- Digital Technologies and Artificial Intelligence Research Institute, Tashkent 100125, Uzbekistan
- Andrew Jaeyong Choi
- School of Computing, Department of AI-Software, Gachon University, Seongnam-si 13306, Republic of Korea
3
Kim Y, Abebe AM, Kim J, Hong S, An K, Shim J, Baek J. Deep learning-based elaiosome detection in milk thistle seed for efficient high-throughput phenotyping. Frontiers in Plant Science 2024; 15:1395558. [PMID: 39129764] [PMCID: PMC11310567] [DOI: 10.3389/fpls.2024.1395558] [Received: 03/04/2024] [Accepted: 07/04/2024] [Indexed: 08/13/2024]
Abstract
Milk thistle, Silybum marianum (L.), is a well-known medicinal plant used for the treatment of liver diseases due to its high content of silymarin. Its seeds bear an elaiosome, a fleshy structure attached to the seed that is believed to be a rich source of many metabolites, including silymarin. Segmenting elaiosomes by image analysis alone is difficult, which makes it impossible to quantify elaiosome phenotypes. This study proposes a new approach for semi-automated detection and segmentation of elaiosomes in milk thistle seed using the Detectron2 deep learning framework. One hundred manually labeled images were used to train the initial elaiosome detection model. This model was used to predict elaiosomes in new datasets; the precise predictions were manually selected and used as new labeled images for retraining. Such semi-automatic labeling, i.e., using the prediction results of the previous stage to retrain the model, produced sufficient labeled data: a total of 6,000 labeled images were finally used to train Detectron2 for elaiosome detection, attaining a promising result. The results demonstrate the effectiveness of Detectron2 in detecting milk thistle seed elaiosomes with an accuracy of 99.9%. The proposed method automatically detects and segments the elaiosome from the milk thistle seed. The predicted elaiosome masks were used to measure elaiosome area as one seed phenotypic trait, alongside other seed morphological traits, via image-based high-throughput phenotyping in ImageJ. Enabling high-throughput phenotyping of elaiosome and other seed morphological traits will be useful for breeding milk thistle cultivars with desirable traits.
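The semi-automatic labeling loop described above (predict on unlabeled data, keep confident predictions, fold them into the training set, retrain) can be sketched independently of Detectron2. In this sketch the "model" is a toy 1-D threshold classifier and the confidence margin is an assumption standing in for the authors' manual selection of precise predictions:

```python
def train_threshold(xs, ys):
    """Toy stand-in for model training: place the decision threshold midway
    between the class means of a 1-D two-class problem."""
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def self_train(labeled_x, labeled_y, unlabeled, margin=2.0, rounds=3):
    """Schematic pseudo-labeling: predict on the unlabeled pool, keep only
    confident predictions (far from the boundary), add them to the training
    set with their predicted labels, and retrain."""
    xs, ys = list(labeled_x), list(labeled_y)
    pool = list(unlabeled)
    for _ in range(rounds):
        t = train_threshold(xs, ys)
        confident = [x for x in pool if abs(x - t) >= margin]
        pool = [x for x in pool if abs(x - t) < margin]
        xs += confident
        ys += [1 if x > t else 0 for x in confident]
        if not pool:
            break
    return train_threshold(xs, ys), len(xs)
```

The structure, not the toy classifier, is the point: each round enlarges the labeled set exactly as the paper grows 100 seed images into 6,000.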
Affiliation(s)
- Younguk Kim
- Gene Engineering Division, National Institute of Agricultural Sciences, Rural Development Administration, Jeonju, Republic of Korea
- Alebel Mekuriaw Abebe
- Gene Engineering Division, National Institute of Agricultural Sciences, Rural Development Administration, Jeonju, Republic of Korea
- Jaeyoung Kim
- Gene Engineering Division, National Institute of Agricultural Sciences, Rural Development Administration, Jeonju, Republic of Korea
- Suyoung Hong
- Genomics Division, National Institute of Agricultural Sciences, Rural Development Administration, Jeonju, Republic of Korea
- Jeongho Baek
- Gene Engineering Division, National Institute of Agricultural Sciences, Rural Development Administration, Jeonju, Republic of Korea
4
Zhang Z, Tan L, Robert TLK. An Improved Fire and Smoke Detection Method Based on YOLOv8n for Smart Factories. Sensors (Basel) 2024; 24:4786. [PMID: 39123833] [PMCID: PMC11314977] [DOI: 10.3390/s24154786] [Received: 06/24/2024] [Revised: 07/22/2024] [Accepted: 07/23/2024] [Indexed: 08/12/2024]
Abstract
Factories play a crucial role in economic and social development, yet factory fires greatly threaten both human lives and property. Previous deep learning studies of fire detection mostly focused on wildfires and ignored fires in factories. In addition, many studies detect fire only, leaving smoke, the important derivative of a fire disaster, undetected. To better help smart factories monitor fire disasters, this paper proposes an improved fire and smoke detection method based on YOLOv8n. To ensure the quality of the algorithm and training process, a self-made dataset of more than 5000 images and their corresponding labels was created. Nine advanced algorithms were then selected and tested on the dataset; YOLOv8n exhibited the best detection results in terms of accuracy and detection speed. ConvNeXt V2 is then inserted into the backbone to enhance inter-channel feature competition. RepBlock and SimConv replace the original Conv to improve computational ability and memory bandwidth. For the loss function, CIoU is replaced by MPDIoU to ensure efficient and accurate bounding-box regression. Ablation tests show that the improved algorithm performs better on all four accuracy metrics: precision, recall, F1, and mAP@50. Compared with the original model, whose four metrics are approximately 90%, the modified algorithm achieves above 95%; mAP@50 in particular reaches 95.6%, an improvement of approximately 4.5 percentage points. Although complexity increases, the requirements of real-time fire and smoke monitoring are still satisfied.
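MPDIoU, following its published formulation, subtracts from the IoU the squared distances between the corresponding top-left and bottom-right corners of the two boxes, normalized by the squared image diagonal. A minimal sketch (the YOLOv8n integration itself is not shown in the abstract):

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def mpdiou(pred, gt, img_w, img_h):
    """MPDIoU = IoU - d_tl^2/(w^2+h^2) - d_br^2/(w^2+h^2), where d_tl and
    d_br are the top-left and bottom-right corner distances and w, h are
    the input image width and height."""
    d_tl = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2
    d_br = (pred[2] - gt[2]) ** 2 + (pred[3] - gt[3]) ** 2
    norm = img_w ** 2 + img_h ** 2
    return iou(pred, gt) - d_tl / norm - d_br / norm
```

The training loss is then `1 - mpdiou(pred, gt, w, h)`; unlike plain IoU, the corner terms keep supplying a gradient even when the boxes do not overlap.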
Affiliation(s)
- Tiong Lee Kong Robert
- School of Civil & Environmental Engineering, Nanyang Technological University, Singapore 639798, Singapore; (Z.Z.); (L.T.)
5
Carletti V, Greco A, Saggese A, Vento B. A Smart Visual Sensor for Smoke Detection Based on Deep Neural Networks. Sensors (Basel) 2024; 24:4519. [PMID: 39065916] [PMCID: PMC11280520] [DOI: 10.3390/s24144519] [Received: 05/28/2024] [Revised: 06/20/2024] [Accepted: 07/11/2024] [Indexed: 07/28/2024]
Abstract
The automatic detection of smoke in the video streams of traditional surveillance cameras is an increasingly interesting problem for the scientific community, given the need to catch fires at the very early stages. Adopting a smart visual sensor, namely a computer vision algorithm running in real time, overcomes the limitations of standard physical sensors. Nevertheless, this is a very challenging problem, due to the strong similarity of smoke to other environmental elements such as clouds, fog, and dust. In addition, the data available for training deep neural networks are limited and not fully representative of real environments. Within this context, we propose a new method for smoke detection based on the combination of motion and appearance analysis with a modern convolutional neural network (CNN). We also propose a new dataset, the MIVIA Smoke Detection Dataset (MIVIA-SDD), publicly available for research purposes; it consists of 129 videos covering about 28 h of recordings. The proposed hybrid method, trained and evaluated on this dataset, proved very effective, achieving a 94% smoke recognition rate together with a substantially lower false positive rate than fully deep learning-based approaches (14% vs. 100%). The proposed combination of motion and appearance analysis with deep CNNs can therefore be further investigated to improve the precision of fire detection approaches.
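The motion half of the proposed motion-plus-appearance pipeline can be illustrated with plain frame differencing; this is a generic sketch with an assumed intensity threshold, not the MIVIA implementation, and the appearance-CNN stage is omitted:

```python
def motion_mask(prev, curr, thresh=25):
    """Per-pixel frame differencing on two grayscale frames given as lists
    of rows of 0-255 ints: 1 where intensity changed more than `thresh`."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def moving_fraction(mask):
    """Share of pixels flagged as moving; regions with a high share become
    candidates to hand to an appearance classifier."""
    flat = [v for row in mask for v in row]
    return sum(flat) / len(flat)
```

Gating the CNN on such motion candidates is one plausible way a hybrid method suppresses static smoke look-alikes (clouds, fog) that fool appearance-only detectors.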
Affiliation(s)
- Vincenzo Carletti
- Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, 84084 Fisciano, Italy; (V.C.); (A.S.)
- Antonio Greco
- Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, 84084 Fisciano, Italy; (V.C.); (A.S.)
- Alessia Saggese
- Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, 84084 Fisciano, Italy; (V.C.); (A.S.)
- Bruno Vento
- Department of Electrical Engineering and Information Technology (DIETI), University of Napoli Federico II, 80138 Napoli, Italy;
6
Rahman A, Debnath T, Kundu D, Khan MSI, Aishi AA, Sazzad S, Sayduzzaman M, Band SS. Machine learning and deep learning-based approach in smart healthcare: Recent advances, applications, challenges and opportunities. AIMS Public Health 2024; 11:58-109. [PMID: 38617415] [PMCID: PMC11007421] [DOI: 10.3934/publichealth.2024004] [Received: 08/19/2023] [Accepted: 12/18/2023] [Indexed: 04/16/2024]
Abstract
In recent years, machine learning (ML) and deep learning (DL) have been the leading approaches to solving various challenges in intelligent healthcare applications, such as disease prediction, drug discovery, and medical image analysis. Given the current progress in both fields, they hold promising potential to support healthcare. This study offers an exhaustive survey of ML and DL for the healthcare system, concentrating on vital state-of-the-art features, integration benefits, applications, prospects, and future guidelines. To conduct the research, we searched the most prominent journal and conference databases using distinct keywords to discover relevant scholarly results. First, we concisely present the most current, cutting-edge progress in ML- and DL-based analysis for smart healthcare. Next, we cover the advancement of various services combining ML and DL, including ML-healthcare, DL-healthcare, and ML-DL-healthcare. We then survey ML- and DL-based applications in the healthcare industry. Finally, we highlight the research challenges and offer recommendations for further studies based on our observations.
Affiliation(s)
- Anichur Rahman
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350, Bangladesh
- Department of CSE, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Tanoy Debnath
- Department of CSE, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Department of CSE, Green University of Bangladesh, 220/D, Begum Rokeya Sarani, Dhaka-1207, Bangladesh
- Dipanjali Kundu
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350, Bangladesh
- Md. Saikat Islam Khan
- Department of CSE, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Airin Afroj Aishi
- Department of Computing and Information System, Daffodil International University, Savar, Dhaka, Bangladesh
- Sadia Sazzad
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350, Bangladesh
- Mohammad Sayduzzaman
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350, Bangladesh
- Shahab S. Band
- Department of Information Management, International Graduate School of Artificial Intelligence, National Yunlin University of Science and Technology, Taiwan
7
Gubbi MR, Assis F, Chrispin J, Bell MAL. Deep learning in vivo catheter tip locations for photoacoustic-guided cardiac interventions. Journal of Biomedical Optics 2024; 29:S11505. [PMID: 38076439] [PMCID: PMC10704189] [DOI: 10.1117/1.jbo.29.s1.s11505] [Received: 06/23/2023] [Revised: 09/27/2023] [Accepted: 10/23/2023] [Indexed: 12/18/2023]
Abstract
Significance: Interventional cardiac procedures often require ionizing radiation to guide cardiac catheters to the heart. To reduce the associated risks of ionizing radiation, photoacoustic imaging can potentially be combined with robotic visual servoing, with initial demonstrations requiring segmentation of catheter tips. However, typical segmentation algorithms applied to conventional image formation methods are susceptible to problematic reflection artifacts, which compromise the required detectability and localization of the catheter tip. Aim: We describe a convolutional neural network and the associated customizations required to successfully detect and localize in vivo photoacoustic signals from a catheter tip received by a phased array transducer, which is a common transducer for transthoracic cardiac imaging applications. Approach: We trained a network with simulated photoacoustic channel data to identify point sources, which appropriately model photoacoustic signals from the tip of an optical fiber inserted in a cardiac catheter. The network was validated with an independent simulated dataset, then tested on data from the tips of cardiac catheters housing optical fibers and inserted into ex vivo and in vivo swine hearts. Results: When validated with simulated data, the network achieved an F1 score of 98.3% and Euclidean errors (mean ± one standard deviation) of 1.02 ± 0.84 mm for target depths of 20 to 100 mm. When tested on ex vivo and in vivo data, the network achieved F1 scores as large as 100.0%. In addition, for target depths of 40 to 90 mm in the ex vivo and in vivo data, up to 86.7% of axial and 100.0% of lateral position errors were lower than the axial and lateral resolution, respectively, of the phased array transducer. Conclusions: These results demonstrate the promise of the proposed method to identify photoacoustic sources in future interventional cardiology and cardiac electrophysiology applications.
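Scoring point-source detections with an F1 score implies matching predicted tip positions to ground-truth positions within some distance tolerance; the gating below is a generic sketch, with the tolerance an illustrative stand-in for the resolution-based criteria used in the paper:

```python
import math

def match_points(preds, gts, tol):
    """Greedy matching of predicted to true source positions: a prediction
    is a true positive if it lies within `tol` of an unmatched ground truth."""
    used, tp = set(), 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in used and math.dist(p, g) <= tol:
                used.add(i)
                tp += 1
                break
    return tp

def f1_score(preds, gts, tol):
    """F1 over distance-gated matches; unmatched predictions are false
    positives and unmatched ground truths false negatives."""
    tp = match_points(preds, gts, tol)
    prec = tp / len(preds) if preds else 0.0
    rec = tp / len(gts) if gts else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```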
Affiliation(s)
- Mardava R. Gubbi
- Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States
- Fabrizio Assis
- Johns Hopkins Medical Institutions, Division of Cardiology, Baltimore, Maryland, United States
- Jonathan Chrispin
- Johns Hopkins Medical Institutions, Division of Cardiology, Baltimore, Maryland, United States
- Muyinatu A. Lediju Bell
- Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
8
Malebary SJ. Early Fire Detection Using Long Short-Term Memory-Based Instance Segmentation and Internet of Things for Disaster Management. Sensors (Basel) 2023; 23:9043. [PMID: 38005432] [PMCID: PMC10675321] [DOI: 10.3390/s23229043] [Received: 10/19/2023] [Revised: 11/02/2023] [Accepted: 11/06/2023] [Indexed: 11/26/2023]
Abstract
Fire outbreaks continue to cause damage despite improvements in fire-detection tools and algorithms. As the human population and global warming continue to rise, fires have emerged as a significant worldwide issue; these factors may contribute to the greenhouse effect and climatic changes, among other detrimental consequences. It remains challenging to implement a well-performing, optimized approach that is sufficiently accurate with tractable complexity and a low false alarm rate; small fires and fires seen from a long distance also challenge previously proposed techniques. In this study, we propose a novel hybrid model, called IS-CNN-LSTM, based on convolutional neural networks (CNNs), to detect fire and analyze its intensity. The proposed 57-layer CNN model comprises 21 convolutional layers, 24 rectified linear unit (ReLU) layers, 6 pooling layers, 3 fully connected layers, 2 dropout layers, and a softmax layer. It performs instance segmentation to distinguish between fire and non-fire events. To reduce the model's complexity, we also propose a key-frame extraction algorithm. The model uses Internet of Things (IoT) devices to alert the relevant person after calculating the severity of the fire. It is tested on a publicly available dataset of fire and normal videos. The achieved 95.25% classification accuracy, 0.09% false positive rate (FPR), 0.65% false negative rate (FNR), and prediction time of 0.08 s validate the proposed system.
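A key-frame extraction step of the kind proposed, keeping a frame only when it changes enough relative to the last kept frame, can be sketched as follows; the mean-absolute-difference score and threshold are illustrative assumptions, not the paper's algorithm:

```python
def key_frames(frames, thresh):
    """Indices of key frames in a sequence of grayscale frames (lists of
    pixel rows). A frame is kept when its mean absolute pixel difference
    from the last kept frame exceeds `thresh`."""
    def diff(a, b):
        vals = [abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
        return sum(vals) / len(vals)

    kept = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        if diff(frames[kept[-1]], frames[i]) > thresh:
            kept.append(i)
    return kept
```

Only the kept frames would be passed to the 57-layer segmentation model, which is how such a step trims per-video compute.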
Affiliation(s)
- Sharaf J Malebary
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, P.O. Box 344, Rabigh 21911, Saudi Arabia
9
Saydirasulovich SN, Mukhiddinov M, Djuraev O, Abdusalomov A, Cho YI. An Improved Wildfire Smoke Detection Based on YOLOv8 and UAV Images. Sensors (Basel) 2023; 23:8374. [PMID: 37896467] [PMCID: PMC10610991] [DOI: 10.3390/s23208374] [Received: 08/31/2023] [Revised: 09/21/2023] [Accepted: 10/09/2023] [Indexed: 10/29/2023]
Abstract
Forest fires rank among the costliest and deadliest natural disasters globally, and identifying the smoke they generate is pivotal for the prompt suppression of developing fires. Nevertheless, existing techniques for detecting forest fire smoke face persistent issues, including a slow identification rate, suboptimal detection accuracy, and difficulty distinguishing smoke originating from small sources. This study presents an enhanced YOLOv8 model customized to unmanned aerial vehicle (UAV) images to address these challenges and attain heightened detection accuracy. First, the research incorporates Wise-IoU (WIoU) v3 as the bounding-box regression loss, supplemented by a gradient allocation strategy that prioritizes samples of common quality and thereby enhances the model's capacity for precise localization. Second, the conventional convolution within the intermediate neck layer is substituted with the Ghost Shuffle Convolution mechanism, reducing model parameters and expediting the convergence rate. Third, recognizing the challenge of capturing the salient features of forest fire smoke in intricate wooded settings, the study introduces the BiFormer attention mechanism, which directs the model's attention towards the feature intricacies of forest fire smoke while suppressing irrelevant, non-target background information. The experimental findings highlight the enhanced YOLOv8 model's effectiveness in smoke detection, with an average precision (AP) of 79.4%, a notable 3.3% improvement over the baseline. Its performance extends to average precision small (APS) and average precision large (APL), registering robust values of 71.3% and 92.6%, respectively.
Affiliation(s)
- Mukhriddin Mukhiddinov
- Department of Communication and Digital Technologies, University of Management and Future Technologies, Tashkent 100208, Uzbekistan; (M.M.); (O.D.)
- Oybek Djuraev
- Department of Communication and Digital Technologies, University of Management and Future Technologies, Tashkent 100208, Uzbekistan; (M.M.); (O.D.)
- Akmalbek Abdusalomov
- Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea;
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Young-Im Cho
- Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea;
10
Avazov K, Jamil MK, Muminov B, Abdusalomov AB, Cho YI. Fire Detection and Notification Method in Ship Areas Using Deep Learning and Computer Vision Approaches. Sensors (Basel) 2023; 23:7078. [PMID: 37631614] [PMCID: PMC10458310] [DOI: 10.3390/s23167078] [Received: 07/03/2023] [Revised: 08/02/2023] [Accepted: 08/07/2023] [Indexed: 08/27/2023]
Abstract
Fire incidents onboard ships have extensive and severe impacts on the safety of the crew, the cargo, the environment, finances, reputation, and more. Timely detection of fires is therefore essential for quick response and effective mitigation. This paper presents a fire detection technique based on YOLOv7 (You Only Look Once version 7), incorporating improved deep learning algorithms. The YOLOv7 architecture, with an improved E-ELAN (extended efficient layer aggregation network) as its backbone, serves as the basis of our fire detection system; its enhanced feature fusion technique makes it superior to its predecessors. To train the model, we collected 4622 images of various ship scenarios and performed data augmentation techniques such as rotation, horizontal and vertical flips, and scaling. Through rigorous evaluation, our model showcases enhanced fire recognition capabilities that improve maritime safety, achieving an accuracy of 93% in detecting fires and thereby helping to minimize catastrophic incidents. Objects visually similar to fire may lead to false predictions and detections, but this can be controlled by expanding the dataset. The model can be utilized as a real-time fire detector in challenging environments and for small-object detection. Experimental results proved that the proposed method can be used successfully for the protection of ships and for monitoring fires in ship port areas. Finally, we compared the performance of our method with recently reported fire-detection approaches using widely adopted performance metrics to test the fire classification results achieved.
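Of the augmentations listed, the flips are the ones that must also transform the box labels; a minimal horizontal-flip sketch (the (x1, y1, x2, y2) box format is an assumption, not taken from the paper) looks like:

```python
def hflip(image, boxes, width):
    """Horizontally flip an image (list of pixel rows) together with its
    (x1, y1, x2, y2) bounding boxes. x-coordinates are mirrored about the
    image width; y-coordinates are unchanged."""
    flipped = [row[::-1] for row in image]
    fboxes = [(width - x2, y1, width - x1, y2)
              for (x1, y1, x2, y2) in boxes]
    return flipped, fboxes
```

Swapping `width - x2` and `width - x1` keeps x1 < x2 in the flipped box, a detail augmentation code must get right or training targets become invalid.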
Affiliation(s)
- Kuldoshbay Avazov
- Department of Computer Engineering, Gachon University, Seongnam-si 461-701, Republic of Korea; (K.A.)
- Muhammad Kafeel Jamil
- Department of Computer Engineering, Gachon University, Seongnam-si 461-701, Republic of Korea; (K.A.)
- Bahodir Muminov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Young-Im Cho
- Department of Computer Engineering, Gachon University, Seongnam-si 461-701, Republic of Korea; (K.A.)
11
Carta F, Zidda C, Putzu M, Loru D, Anedda M, Giusto D. Advancements in Forest Fire Prevention: A Comprehensive Survey. Sensors (Basel) 2023; 23:6635. [PMID: 37514928] [PMCID: PMC10386475] [DOI: 10.3390/s23146635] [Received: 06/16/2023] [Revised: 07/17/2023] [Accepted: 07/21/2023] [Indexed: 07/30/2023]
Abstract
Nowadays, the challenges related to technological and environmental development are becoming increasingly complex. Among environmentally significant issues, wildfires pose a serious threat to the global ecosystem: the damage inflicted upon forests is manifold, leading not only to the destruction of terrestrial ecosystems but also to climate change. Consequently, reducing their impact on both people and nature requires effective approaches for prevention, early warning, and well-coordinated intervention. This document analyzes the evolution of the technologies used in the detection, monitoring, and prevention of forest fires from past years to the present, highlighting the strengths, limitations, and future developments in this field. Understanding this technological evolution is essential to formulating more effective strategies for mitigating and preventing wildfires.
Affiliation(s)
- Francesco Carta
- CNIT UdR, Department of Electrical and Electronic Engineering, University of Cagliari, 09123 Cagliari, Italy
- Chiara Zidda
- CNIT UdR, Department of Electrical and Electronic Engineering, University of Cagliari, 09123 Cagliari, Italy
- Martina Putzu
- CNIT UdR, Department of Electrical and Electronic Engineering, University of Cagliari, 09123 Cagliari, Italy
- Daniele Loru
- CNIT UdR, Department of Electrical and Electronic Engineering, University of Cagliari, 09123 Cagliari, Italy
- Matteo Anedda
- CNIT UdR, Department of Electrical and Electronic Engineering, University of Cagliari, 09123 Cagliari, Italy
- Daniele Giusto
- CNIT UdR, Department of Electrical and Electronic Engineering, University of Cagliari, 09123 Cagliari, Italy
12
Safarov F, Akhmedov F, Abdusalomov AB, Nasimov R, Cho YI. Real-Time Deep Learning-Based Drowsiness Detection: Leveraging Computer-Vision and Eye-Blink Analyses for Enhanced Road Safety. Sensors (Basel) 2023; 23:6459. [PMID: 37514754] [PMCID: PMC10384496] [DOI: 10.3390/s23146459] [Received: 06/11/2023] [Revised: 07/05/2023] [Accepted: 07/13/2023] [Indexed: 07/30/2023]
Abstract
Drowsy driving can significantly affect driving performance and overall road safety; statistically, the main causes are drivers' decreased alertness and attention. The combination of deep learning and computer-vision algorithms has proven to be one of the most effective approaches for drowsiness detection: robust and accurate systems can be developed by leveraging deep learning to learn complex patterns from visual data, thanks to its ability to learn automatically from raw inputs and extract features. This study applies eye-blink-based drowsiness detection, which analyzes eye-blink patterns. We used custom data for model training and obtained experimental results for different candidates. Eye-blink and mouth-region coordinates were obtained from facial landmarks, and the eye-blink rate and changes in mouth shape were analyzed with computer-vision techniques by measuring eye landmarks with real-time fluctuation representations. A real-time experimental analysis confirmed a correlation between yawning and closed eyes, classified as drowsy. The overall performance of the drowsiness detection model was 95.8% accuracy for drowsy-eye detection, 97% for open-eye detection, 0.84% for yawning detection, 0.98% for right-sided falling, and 100% for left-sided falling. Furthermore, the proposed method allows a real-time eye-rate analysis, in which a threshold separates the eye into two classes, the "Open" and "Closed" states.
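Landmark-based blink analysis is commonly implemented via the eye aspect ratio (EAR) over six eye landmarks; the abstract does not name EAR, so the following is a sketch of that standard measure, with an assumed closed-eye threshold rather than the paper's calibrated one:

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|) over the six eye landmarks:
    p1/p4 are the horizontal corners, p2/p6 and p3/p5 the vertical pairs.
    The ratio collapses towards zero as the eyelid closes."""
    v1 = math.dist(p2, p6)
    v2 = math.dist(p3, p5)
    h = math.dist(p1, p4)
    return (v1 + v2) / (2 * h)

def is_closed(ear, thresh=0.2):
    """Threshold separating the 'Open' and 'Closed' states; 0.2 is a
    common illustrative choice, not the study's value."""
    return ear < thresh
```

A blink is then a short run of "Closed" frames; sustained low EAR, especially combined with a yawn cue from the mouth landmarks, is what such systems classify as drowsy.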
Collapse
Affiliation(s)
- Furkat Safarov
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-si 461701, Republic of Korea; (F.S.); (F.A.)
| | - Farkhod Akhmedov
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-si 461701, Republic of Korea; (F.S.); (F.A.)
| | | | - Rashid Nasimov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan;
| | - Young Im Cho
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-si 461701, Republic of Korea; (F.S.); (F.A.)
| |
Collapse
|
13
|
Kim SY, Muminov A. Forest Fire Smoke Detection Based on Deep Learning Approaches and Unmanned Aerial Vehicle Images. SENSORS (BASEL, SWITZERLAND) 2023; 23:5702. [PMID: 37420867 PMCID: PMC10304711 DOI: 10.3390/s23125702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/17/2023] [Revised: 06/14/2023] [Accepted: 06/16/2023] [Indexed: 07/09/2023]
Abstract
Wildfire poses a significant threat and is considered a severe natural disaster that endangers forest resources, wildlife, and human livelihoods. Wildfire incidents have increased in recent times, driven both by human involvement with nature and by global warming. Rapidly identifying a fire from its early smoke can be crucial in combating this issue, as it allows firefighters to respond quickly and prevent the fire from spreading. We therefore proposed a refined version of the YOLOv7 model for detecting smoke from forest fires. First, we compiled a collection of 6500 UAV pictures of forest fire smoke. To enhance YOLOv7's feature extraction capabilities, we incorporated the CBAM attention mechanism. We then added an SPPF+ layer to the network's backbone to better concentrate on smaller wildfire smoke regions. Finally, decoupled heads were introduced into the YOLOv7 model to extract useful information from an array of data. A BiFPN was used to accelerate multi-scale feature fusion and acquire more specific features, with learnable weights introduced so that the network can prioritize the feature mappings that most significantly affect the results. Testing on our forest fire smoke dataset showed that the proposed approach detected forest fire smoke with an AP50 of 86.4%, 3.9 percentage points higher than previous single- and multiple-stage object detectors.
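The learnable-weight fusion described for the BiFPN can be sketched as the "fast normalized fusion" that BiFPN architectures typically use: each input feature is scaled by a ReLU-clamped weight, and the weights are normalized to sum to roughly one. Feature maps are flattened to plain lists here for clarity; the function name and eps value are illustrative, not taken from the paper.

```python
def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fuse same-sized feature vectors with non-negative, normalized
    learnable weights, so the network can favor the most informative scale."""
    w = [max(0.0, wi) for wi in weights]   # ReLU keeps each weight non-negative
    total = sum(w) + eps                   # eps avoids division by zero
    fused = [0.0] * len(features[0])
    for wi, feat in zip(w, features):
        fused = [f + (wi / total) * x for f, x in zip(fused, feat)]
    return fused
```

With equal weights this reduces to a plain average; during training the weights drift so that, for small smoke plumes, the finer-resolution feature map can dominate the fusion.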
Collapse
Affiliation(s)
- Soon-Young Kim
- Department of Physical Education, Gachon University, Seongnam 13120, Republic of Korea;
| | - Azamjon Muminov
- Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea
| |
Collapse
|
14
|
Norkobil Saydirasulovich S, Abdusalomov A, Jamil MK, Nasimov R, Kozhamzharova D, Cho YI. A YOLOv6-Based Improved Fire Detection Approach for Smart City Environments. SENSORS (BASEL, SWITZERLAND) 2023; 23:3161. [PMID: 36991872 PMCID: PMC10051218 DOI: 10.3390/s23063161] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/12/2023] [Revised: 03/10/2023] [Accepted: 03/11/2023] [Indexed: 06/19/2023]
Abstract
Authorities and policymakers in Korea have recently prioritized improving fire prevention and emergency response, and governments seek to enhance community safety by constructing automated fire detection and identification systems. This study examined the efficacy of YOLOv6, an object identification system running on an NVIDIA GPU platform, in identifying fire-related items. Using metrics such as object identification speed and accuracy, as well as time-sensitive real-world applications, we analyzed the influence of YOLOv6 on fire detection and identification efforts in Korea. We conducted trials on a fire dataset of 4000 photos collected through Google, YouTube, and other resources to evaluate the viability of YOLOv6 for fire recognition and detection tasks. According to the findings, YOLOv6's object identification performance was 0.98, with an average recall of 0.96 and a precision of 0.83; the system achieved an MAE of 0.302%. These findings suggest that YOLOv6 is an effective technique for detecting and identifying fire-related items in photos in Korea. Multi-class object recognition using random forests, k-nearest neighbors, support vector machines, logistic regression, naive Bayes, and XGBoost was performed on the SFSC data to evaluate the system's capacity to identify fire-related objects. For fire-related objects, XGBoost achieved the highest object identification accuracy, with values of 0.717 and 0.767, followed by random forest, with values of 0.468 and 0.510. Finally, we tested YOLOv6 in a simulated fire evacuation scenario to gauge its practicality in emergencies; it accurately identified fire-related items in real time within a response time of 0.66 s. Therefore, YOLOv6, paired with the XGBoost classifier for object identification, is a viable option for fire detection and recognition in Korea.
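The precision and recall figures quoted above come from true-positive, false-positive, and false-negative detection counts; a minimal sketch of that computation (the counts in the test below are made-up illustrations, not the paper's data):

```python
def detection_metrics(tp, fp, fn):
    """Precision and recall from true-positive, false-positive, and
    false-negative detection counts (guards against empty denominators)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```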
Collapse
Affiliation(s)
| | - Akmalbek Abdusalomov
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea
| | - Muhammad Kafeel Jamil
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea
| | - Rashid Nasimov
- Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
| | - Dinara Kozhamzharova
- Department of Information System, International Information Technology University, Almaty 050000, Kazakhstan
| | - Young-Im Cho
- Department of Computer Engineering, Gachon University, Sujeong-Gu, Seongnam-Si 461-701, Gyeonggi-Do, Republic of Korea
| |
Collapse
|
15
|
Lu Y, Fan X, Zhang Y, Wang Y, Jiang X. Machine Learning Models Using SHapley Additive exPlanation for Fire Risk Assessment Mode and Effects Analysis of Stadiums. SENSORS (BASEL, SWITZERLAND) 2023; 23:2151. [PMID: 36850757 PMCID: PMC9964004 DOI: 10.3390/s23042151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/14/2023] [Revised: 02/10/2023] [Accepted: 02/13/2023] [Indexed: 06/18/2023]
Abstract
Machine learning methods can establish complex nonlinear relationships between input and response variables for stadium fire risk assessment. However, the output of machine learning models is very difficult to interpret because of their complex "black box" structure, which hinders their application in stadium fire risk assessment. The SHapley Additive exPlanations (SHAP) method makes a faithful, interpretable local approximation to the predictions of any regression or classification model and assigns a significance value (SHAP value) to each input variable for a given prediction. In this study, we designed indicator attribute threshold intervals to classify and quantify data for the different fire risk categories, and then combined a random forest model with the SHAP strategy to establish a stadium fire risk assessment model. The main objective is to analyze the impact of each risk characteristic on four different risk assessment models and thereby uncover the complex nonlinear relationships between risk characteristics and stadium fire risk. This helps managers make appropriate, targeted fire safety management decisions before an incident occurs, reducing the incidence of fires. The experimental results show that the established interpretable random forest model achieves 83% accuracy, 86% precision, and 85% recall on the stadium fire risk test dataset. The study also shows that the limited amount of data makes it difficult to identify the decision boundaries for the Critical and Hazardous modes.
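The property SHAP relies on can be sketched directly: a single prediction decomposes into a base (expected) value plus one signed contribution per input feature, and ranking features by absolute contribution gives the usual SHAP summary ordering. The feature names and numbers in the test are hypothetical, not from the paper.

```python
def explain_prediction(base_value, shap_values):
    """Reconstruct a prediction from SHAP's additive decomposition and
    rank features by the magnitude of their contribution."""
    prediction = base_value + sum(shap_values.values())
    ranked = sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return prediction, ranked
```

In a risk-assessment setting, the ranked list is what a facility manager would read: it says which characteristics pushed a given stadium toward or away from a high-risk classification.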
Collapse
Affiliation(s)
- Ying Lu
- School of Resource and Environmental Engineering, Wuhan University of Science and Technology, Wuhan 430081, China
- Hubei Industrial Safety Engineering Technology Research Center, Wuhan 430081, China
| | - Xiaopeng Fan
- School of Resource and Environmental Engineering, Wuhan University of Science and Technology, Wuhan 430081, China
| | - Yi Zhang
- School of Resource and Environmental Engineering, Wuhan University of Science and Technology, Wuhan 430081, China
| | - Yong Wang
- School of Resource and Environmental Engineering, Wuhan University of Science and Technology, Wuhan 430081, China
| | - Xuepeng Jiang
- School of Resource and Environmental Engineering, Wuhan University of Science and Technology, Wuhan 430081, China
| |
Collapse
|