1
Mao W, Li G, Li X. Improved Re-Parameterized Convolution for Wildlife Detection in Neighboring Regions of Southwest China. Animals (Basel) 2024; 14:1152. PMID: 38672300; PMCID: PMC11047598; DOI: 10.3390/ani14081152.
Abstract
To autonomously detect wildlife images captured by camera traps on a platform with limited resources and address challenges such as filtering out photos that contain no target objects and classifying and localizing species in photos that do, we introduce a specialized wildlife object detector tailored for camera traps. This detector is developed using a dataset acquired by the Saola Working Group (SWG) through camera traps deployed in Vietnam and Laos. Utilizing the YOLOv6-N object detection algorithm as its foundation, the detector is enhanced by a tailored optimizer for improved model performance. We deliberately introduce asymmetric convolutional branches to enhance the feature characterization capability of the Backbone network. Additionally, we streamline the Neck and use CIoU loss to improve detection performance. For quantized deployment, we refine the RepOptimizer to train a pure VGG-style network. Experimental results demonstrate that our proposed method empowers the model to achieve 88.3% detection accuracy on the wildlife dataset in this paper. This accuracy is 3.1% higher than YOLOv6-N and surpasses YOLOv7-T and YOLOv8-N by 5.5% and 2.8%, respectively. The model maintains its detection performance even after quantization to INT8 precision, achieving an inference speed of only 6.15 ms per image on the NVIDIA Jetson Xavier NX device. The improvements we introduce excel in wildlife image recognition and object localization for camera-trap imagery, providing practical solutions to enhance wildlife monitoring and facilitate efficient data acquisition. Our current work represents a significant stride toward a fully automated, real-time, in-field animal observation system.
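The abstract's INT8 deployment step relies on standard quantization arithmetic. The paper's actual pipeline is not given here, but per-tensor symmetric quantization can be sketched as follows (function names are mine; pure-Python illustration, not the authors' code):

```python
# Minimal per-tensor symmetric INT8 quantization sketch (pure Python).
# Illustrative only: shows the round-trip scale/quantize/dequantize
# arithmetic, not the paper's actual deployment pipeline.

def int8_scale(values):
    """Scale so the largest magnitude maps to the INT8 limit 127."""
    return max(abs(v) for v in values) / 127.0

def quantize(values, scale):
    """Map floats to integers clamped to [-127, 127]."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.003, 0.9]   # hypothetical layer weights
s = int8_scale(weights)               # 1.27 / 127 = 0.01
q = quantize(weights, s)              # [52, -127, 0, 90]
recovered = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
```

The round-trip error is bounded by half the quantization step, which is why a well-scaled model can keep its detection accuracy after conversion.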
Affiliation(s)
- Gang Li
- School of Mathematics and Computer Science, Dali University, Dali 671003, China; (W.M.); (X.L.)
2
Zhang X, Xuan C, Ma Y, Liu H, Xue J. Lightweight model-based sheep face recognition via face image recording channel. J Anim Sci 2024; 102:skae066. PMID: 38477672; PMCID: PMC11277863; DOI: 10.1093/jas/skae066.
Abstract
The accurate identification of individual sheep is a crucial prerequisite for establishing digital sheep farms and precision livestock farming. Currently, deep learning technology provides an efficient and non-contact method for sheep identity recognition. In particular, convolutional neural networks can be used to learn features of sheep faces to determine their corresponding identities. However, existing sheep face recognition models face problems such as large model size and high computational cost, making it difficult to meet the requirements of practical applications. In response to these issues, we introduce a lightweight sheep face recognition model called YOLOv7-Sheep Face Recognition (YOLOv7-SFR). Considering the labor-intensive nature of manually capturing sheep face images, we developed a face image recording channel to streamline the process and improve efficiency. This study collected facial images of 50 Small-tailed Han sheep through the recording channel. The experimental sheep ranged in age from 1 to 3 yr, with an average weight of 63.1 kg. Data augmentation methods further enhanced the original images, resulting in a total of 22,000 sheep face images, from which a sheep face dataset was established. To achieve lightweight improvement and improve the performance of the recognition model, a variety of improvement strategies were adopted. Specifically, we introduced the shuffle attention module into the backbone and fused the Dyhead module with the model's detection head. By combining multiple attention mechanisms, we improved the model's ability to learn target features. Additionally, the traditional convolutions in the backbone and neck were replaced with depthwise separable convolutions. Finally, leveraging knowledge distillation, we further enhanced performance by employing You Only Look Once version 7 (YOLOv7) as the teacher model and YOLOv7-SFR as the student model.
The training results indicate that our proposed approach achieved the best performance on the sheep face dataset, with a mean average precision@0.5 of 96.9%. The model size and average recognition time were 11.3 MB and 3.6 ms, respectively. Compared to YOLOv7-tiny, YOLOv7-SFR showed a 2.1% improvement in mean average precision@0.5, along with a 5.8% reduction in model size and a 42.9% reduction in average recognition time. The research results are expected to drive the practical applications of sheep face recognition technology.
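The depthwise separable substitution described above trades one k×k convolution for a per-channel k×k convolution plus a 1×1 pointwise convolution. The resulting parameter saving can be sketched with simple arithmetic (illustrative only; the channel counts below are hypothetical, not taken from the paper):

```python
# Parameter counts for one conv layer: standard vs. depthwise separable.
# Illustrative arithmetic; bias terms omitted for simplicity.

def standard_conv_params(k, c_in, c_out):
    # one k x k x c_in filter per output channel
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # depthwise: one k x k filter per input channel;
    # pointwise: a 1 x 1 conv that mixes channels.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 64, 128)        # 73,728 parameters
dsc = depthwise_separable_params(3, 64, 128)  # 8,768 parameters
ratio = dsc / std                             # roughly 1/8 the parameters
```

Savings of this magnitude per replaced layer are what let models like YOLOv7-SFR shrink without a large accuracy penalty.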
Affiliation(s)
- Xiwen Zhang
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Inner Mongolia, Hohhot 010018, China
- Inner Mongolia Engineering Research Center for Intelligent Facilities in Prataculture and Livestock Breeding, Inner Mongolia, Hohhot 010018, China
- Chuanzhong Xuan
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Inner Mongolia, Hohhot 010018, China
- Inner Mongolia Engineering Research Center for Intelligent Facilities in Prataculture and Livestock Breeding, Inner Mongolia, Hohhot 010018, China
- Yanhua Ma
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Inner Mongolia, Hohhot 010018, China
- Haiyang Liu
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Inner Mongolia, Hohhot 010018, China
- Jing Xue
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Inner Mongolia, Hohhot 010018, China
3
Yang W, Liu T, Jiang P, Qi A, Deng L, Liu Z, He Y. A Forest Wildlife Detection Algorithm Based on Improved YOLOv5s. Animals (Basel) 2023; 13:3134. PMID: 37835740; PMCID: PMC10571878; DOI: 10.3390/ani13193134.
Abstract
A forest wildlife detection algorithm based on an improved YOLOv5s network model is proposed to advance forest wildlife monitoring and improve detection accuracy in complex forest environments. This research utilizes a dataset from the Hunan Hupingshan National Nature Reserve in China, to which data augmentation and expansion methods are applied to extensively train the proposed model. To enhance the feature extraction ability of the proposed model, a weighted channel stitching method based on channel attention is introduced. The Swin Transformer module is combined with a CNN network to add a self-attention mechanism, thus enlarging the perceptual field for feature extraction. Furthermore, a new loss function (DIOU_Loss) and an adaptive class suppression loss (L_BCE) are adopted to accelerate the model's convergence, reduce false detections among confusable categories, and increase accuracy. When comparing our improved algorithm with the original YOLOv5s network model under the same experimental conditions and dataset, significant improvements are observed; in particular, the mean average precision (mAP) increases from 72.6% to 89.4%, an improvement of 16.8 percentage points. Our improved algorithm also outperforms popular target detection algorithms, including YOLOv5s, YOLOv3, RetinaNet, and Faster R-CNN. Our proposed improvement measures effectively address the challenges posed by the low contrast between background and targets, as well as occlusion and overlap, in forest wildlife images captured by trap cameras. These measures provide practical solutions for enhanced forest wildlife protection and facilitate efficient data acquisition.
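The DIoU loss adopted above augments plain IoU with a normalized center-distance penalty, which speeds convergence for non-overlapping boxes. A minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) form (standard DIoU formula, not code from the paper):

```python
# DIoU loss sketch: loss = 1 - (IoU - d^2 / c^2), where d is the
# distance between box centers and c the diagonal of the smallest
# box enclosing both. Boxes are (x1, y1, x2, y2).

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def diou_loss(a, b):
    cx_a, cy_a = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    cx_b, cy_b = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    d2 = (cx_a - cx_b) ** 2 + (cy_a - cy_b) ** 2
    ex1, ey1 = min(a[0], b[0]), min(a[1], b[1])
    ex2, ey2 = max(a[2], b[2]), max(a[3], b[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return 1 - (iou(a, b) - d2 / c2)

box = (0, 0, 10, 10)
perfect = diou_loss(box, box)            # 0.0: identical boxes
far = diou_loss(box, (20, 20, 30, 30))   # > 1: disjoint and far apart
```

Unlike plain IoU loss, the distance term keeps a nonzero gradient even when predicted and ground-truth boxes do not overlap at all.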
Affiliation(s)
- Tianyu Liu
- College of Mechanical and Electrical Engineering, Hunan Agricultural University, Changsha 410128, China
4
Zhang X, Xuan C, Ma Y, Su H. A high-precision facial recognition method for small-tailed Han sheep based on an optimised Vision Transformer. Animal 2023; 17:100886. PMID: 37422932; DOI: 10.1016/j.animal.2023.100886.
Abstract
Accurate identification of individual animals plays a pivotal role in enhancing animal welfare and optimising farm production. Although Radio Frequency Identification technology has been widely applied in animal identification, this method still exhibits several limitations that make it difficult to meet current practical application requirements. In this study, we proposed ViT-Sheep, a sheep face recognition model based on the Vision Transformer (ViT) architecture, to facilitate precise animal management and enhance livestock welfare. Compared to convolutional neural networks (CNNs), the ViT offers competitive performance. The experimental procedure of this study consisted of three main steps. First, we collected face images of 160 experimental sheep to construct the sheep face image dataset. Second, we developed two sets of sheep face recognition models based on CNNs and the ViT, respectively. To enhance the ability to learn sheep face biological features, we proposed targeted improvement strategies for the sheep face recognition model. Specifically, we introduced the LayerScale module into the encoder of the ViT-Base-16 model and employed transfer learning to improve recognition accuracy. Finally, we compared the training results of the different recognition models and the ViT-Sheep model. The results demonstrated that our proposed method achieved the highest performance on the sheep face image dataset, with a recognition accuracy of 97.9%. This study demonstrates that the ViT can successfully perform sheep face recognition with good robustness. Furthermore, the findings of this research will promote the practical application of artificial intelligence animal recognition technology in sheep production.
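Two pieces of standard ViT machinery referenced above can be sketched in a few lines: the token-sequence length implied by 16×16 patches, and the LayerScale residual update x + γ⊙f(x), where a small learnable per-channel γ starts each block near the identity. Both are generic ViT/LayerScale arithmetic, not details taken from this paper (the input resolution below is the conventional 224, assumed):

```python
# (1) Token-sequence length for a ViT with square patches.
# (2) LayerScale residual update with per-channel gamma.
# Standard ViT arithmetic, shown for illustration only.

def vit_sequence_length(image_size, patch_size, with_cls_token=True):
    assert image_size % patch_size == 0, "image must divide into whole patches"
    n_patches = (image_size // patch_size) ** 2
    return n_patches + (1 if with_cls_token else 0)

def layerscale_residual(x, fx, gamma):
    # x: block input, fx: sub-layer output, gamma: learnable scales
    return [xi + g * fi for xi, fi, g in zip(x, fx, gamma)]

seq = vit_sequence_length(224, 16)   # 14 * 14 patches + [CLS] = 197
out = layerscale_residual([1.0, 2.0], [10.0, 10.0], [1e-4, 1e-4])
```

With γ initialized to ~1e-4, even a large sub-layer output barely perturbs the residual stream at the start of training, which is the stabilizing effect LayerScale is designed for.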
Affiliation(s)
- Xiwen Zhang
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Inner Mongolia, Hohhot 010018, China; Inner Mongolia Engineering Research Center for Intelligent Facilities in Prataculture and Livestock Breeding, Inner Mongolia, Hohhot 010018, China
- Chuanzhong Xuan
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Inner Mongolia, Hohhot 010018, China; Inner Mongolia Engineering Research Center for Intelligent Facilities in Prataculture and Livestock Breeding, Inner Mongolia, Hohhot 010018, China
- Yanhua Ma
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Inner Mongolia, Hohhot 010018, China
- He Su
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Inner Mongolia, Hohhot 010018, China
5
Lei J, Gao S, Rasool MA, Fan R, Jia Y, Lei G. Optimized Small Waterbird Detection Method Using Surveillance Videos Based on YOLOv7. Animals (Basel) 2023; 13:1929. PMID: 37370439; DOI: 10.3390/ani13121929.
Abstract
Waterbird monitoring is the foundation of conservation and management strategies in almost all types of wetland ecosystems. China's improved wetland protection infrastructure, which includes remote devices for collecting large quantities of acoustic and visual data on wildlife species, has increased the need for data filtration and analysis techniques. Object detection based on deep learning has emerged as a basic solution for big data analysis that has been tested in several application fields. However, these deep learning techniques have not yet been tested for small waterbird detection from real-time surveillance videos, which could address the challenge of monitoring waterbirds in real time. We propose an improved detection method, termed YOLOv7-waterbird, that adds an extra prediction head, a SimAM attention module, and sequential frames to YOLOv7 so that real-time video surveillance devices can identify attention regions and perform waterbird monitoring tasks. With the Waterbird Dataset, the mean average precision (mAP) value of YOLOv7-waterbird was 67.3%, approximately 5% higher than that of the baseline model. Furthermore, the improved method achieved a recall of 87.9% (precision = 85%) overall and 79.1% for small waterbirds (defined as objects smaller than 40 × 40 pixels), suggesting better performance for small object detection than the original method. This algorithm could be used by the administration of protected areas or other groups to monitor waterbirds with higher accuracy using existing surveillance cameras and can aid wildlife conservation to some extent.
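The recall/precision pairs reported above follow from the usual confusion-matrix definitions. A minimal sketch (the counts below are hypothetical, chosen only to illustrate the definitions, and are not the paper's data):

```python
# Precision and recall from detection counts.
# tp: correct detections, fp: spurious detections, fn: missed birds.
# The counts are hypothetical, for illustration only.

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

tp, fp, fn = 85, 15, 12
p = precision(tp, fp)   # 0.85: share of detections that were real birds
r = recall(tp, fn)      # ~0.876: share of real birds that were detected
```

Small-object recall is reported separately in the abstract because small birds dominate the false-negative count, so an overall recall figure alone would hide that weakness.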
Affiliation(s)
- Jialin Lei
- School of Ecology and Nature Conservation, Beijing Forestry University, Beijing 100083, China
- Shuhui Gao
- Birdsdata Technology (Beijing) Co., Ltd., Beijing 100083, China
- Rong Fan
- School of Ecology and Nature Conservation, Beijing Forestry University, Beijing 100083, China
- Yifei Jia
- School of Ecology and Nature Conservation, Beijing Forestry University, Beijing 100083, China
- Guangchun Lei
- School of Ecology and Nature Conservation, Beijing Forestry University, Beijing 100083, China
6
Zhang X, Xuan C, Xue J, Chen B, Ma Y. LSR-YOLO: A High-Precision, Lightweight Model for Sheep Face Recognition on the Mobile End. Animals (Basel) 2023; 13:1824. PMID: 37889716; PMCID: PMC10252084; DOI: 10.3390/ani13111824.
Abstract
The accurate identification of sheep is crucial for breeding, behavioral research, food quality tracking, and disease prevention on modern farms. Because traditional sheep-identification methods are time-consuming, expensive, and unreliable, relevant studies have built sheep face recognition models that recognize sheep from facial images. However, the existing sheep face recognition models face problems such as high computational costs, large model sizes, and weak practicality. In response to the above issues, this study proposes a lightweight sheep face recognition model named LSR-YOLO. Specifically, the ShuffleNetv2 module and Ghost module were used to replace the feature extraction modules in the backbone and neck of YOLOv5s to reduce floating-point operations per second (FLOPs) and parameters. In addition, the coordinate attention (CA) module was introduced into the backbone to suppress non-critical information and improve the feature extraction ability of the recognition model. We collected facial images of 63 small-tailed Han sheep to construct a sheep face dataset and further evaluate the proposed method. Compared to YOLOv5s, the FLOPs and parameters of LSR-YOLO decreased by 25.5% and 33.4%, respectively. LSR-YOLO achieved the best performance on the sheep face dataset, reaching an mAP@0.5 of 97.8% with a model size of only 9.5 MB. The experimental results show that LSR-YOLO has significant advantages in recognition accuracy and model size. Finally, we integrated LSR-YOLO into mobile devices and developed a recognition system to achieve real-time recognition. The results show that LSR-YOLO is an effective method for identifying sheep: it has high recognition accuracy and fast recognition speed, giving it high application value in mobile recognition and welfare breeding.
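The FLOPs reductions quoted above come from shrinking per-layer convolution cost. A common back-of-the-envelope estimate (a standard estimation convention with hypothetical layer sizes, not figures or code from the paper) shows how channel counts drive the cost that ShuffleNetv2/Ghost-style modules attack:

```python
# Rough FLOPs estimate for one conv layer: each output element needs
# k*k*c_in multiply-accumulates, counted here as 2 operations each.
# Common estimation convention; layer sizes below are hypothetical.

def conv_flops(k, c_in, c_out, h_out, w_out):
    return 2 * k * k * c_in * c_out * h_out * w_out

def percent_reduction(old, new):
    return 100 * (old - new) / old

base = conv_flops(3, 64, 128, 80, 80)   # a full 3x3 conv layer
half = conv_flops(3, 64, 64, 80, 80)    # same layer, half the filters
drop = percent_reduction(base, half)    # 50.0% fewer FLOPs
```

Because cost is linear in each channel count, replacing full convolutions with cheaper channel-mixing primitives compounds across the network into the kind of double-digit whole-model reductions reported for LSR-YOLO.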
Affiliation(s)
- Xiwen Zhang
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China; (X.Z.); (J.X.); (B.C.); (Y.M.)
- Inner Mongolia Engineering Research Center for Intelligent Facilities in Prataculture and Livestock Breeding, Hohhot 010018, China
- Chuanzhong Xuan
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China; (X.Z.); (J.X.); (B.C.); (Y.M.)
- Inner Mongolia Engineering Research Center for Intelligent Facilities in Prataculture and Livestock Breeding, Hohhot 010018, China
- Jing Xue
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China; (X.Z.); (J.X.); (B.C.); (Y.M.)
- Boyuan Chen
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China; (X.Z.); (J.X.); (B.C.); (Y.M.)
- Yanhua Ma
- College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China; (X.Z.); (J.X.); (B.C.); (Y.M.)
7
Binta Islam S, Valles D, Hibbitts TJ, Ryberg WA, Walkup DK, Forstner MRJ. Animal Species Recognition with Deep Convolutional Neural Networks from Ecological Camera Trap Images. Animals (Basel) 2023; 13:1526. PMID: 37174563; PMCID: PMC10177479; DOI: 10.3390/ani13091526.
Abstract
Accurate identification of animal species is necessary to understand biodiversity richness, monitor endangered species, and study the impact of climate change on species distribution within a specific region. Camera traps represent a passive monitoring technique that generates millions of ecological images. The vast number of images makes automated ecological analysis essential, given that manual assessment of large datasets is laborious, time-consuming, and expensive. Deep learning networks have advanced in the last few years to solve object and species identification tasks in the computer vision domain, providing state-of-the-art results. In our work, we trained and tested machine learning models to classify three animal groups (snakes, lizards, and toads) from camera trap images. We experimented with two pretrained models, VGG16 and ResNet50, and a self-trained convolutional neural network (CNN-1) with varying CNN layers and augmentation parameters. For multiclass classification, CNN-1 achieved 72% accuracy, whereas VGG16 reached 87% and ResNet50 attained 86% accuracy. These results demonstrate that the transfer learning approach outperforms the self-trained model. The models showed promising results in identifying species, even those made challenging by body size or surrounding vegetation.
Affiliation(s)
- Sazida Binta Islam
- Ingram School of Engineering, Texas State University, San Marcos, TX 78666, USA
- Damian Valles
- Ingram School of Engineering, Texas State University, San Marcos, TX 78666, USA
- Toby J Hibbitts
- Natural Resources Institute, Texas A&M University, College Station, TX 77843, USA
- Biodiversity Research and Teaching Collections, Texas A&M University, College Station, TX 77843, USA
- Wade A Ryberg
- Natural Resources Institute, Texas A&M University, College Station, TX 77843, USA
- Danielle K Walkup
- Natural Resources Institute, Texas A&M University, College Station, TX 77843, USA
8
Jia L, Tian Y, Zhang J. Neural architecture search based on packed samples for identifying animals in camera trap images. Neural Comput Appl 2023. DOI: 10.1007/s00521-023-08247-z.
9
Vélez J, McShea W, Shamon H, Castiblanco-Camacho PJ, Tabak MA, Chalmers C, Fergus P, Fieberg J. An evaluation of platforms for processing camera-trap data using artificial intelligence. Methods Ecol Evol 2022. DOI: 10.1111/2041-210x.14044.
Affiliation(s)
- Juliana Vélez
- Department of Fisheries, Wildlife and Conservation Biology, University of Minnesota, Saint Paul, Minnesota, USA
- Conservation Ecology Center, Smithsonian's National Zoo and Conservation Biology Institute, Front Royal, Virginia, USA
- William McShea
- Conservation Ecology Center, Smithsonian's National Zoo and Conservation Biology Institute, Front Royal, Virginia, USA
- Hila Shamon
- Conservation Ecology Center, Smithsonian's National Zoo and Conservation Biology Institute, Front Royal, Virginia, USA
- Carl Chalmers
- School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool, UK
- Paul Fergus
- School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool, UK
- John Fieberg
- Department of Fisheries, Wildlife and Conservation Biology, University of Minnesota, Saint Paul, Minnesota, USA
10
Instance segmentation and tracking of animals in wildlife videos: SWIFT - segmentation with filtering of tracklets. Ecol Inform 2022. DOI: 10.1016/j.ecoinf.2022.101794.
11
Kerry RG, Montalbo FJP, Das R, Patra S, Mahapatra GP, Maurya GK, Nayak V, Jena AB, Ukhurebor KE, Jena RC, Gouda S, Majhi S, Rout JR. An overview of remote monitoring methods in biodiversity conservation. Environ Sci Pollut Res Int 2022; 29:80179-80221. PMID: 36197618; PMCID: PMC9534007; DOI: 10.1007/s11356-022-23242-y.
Abstract
Conservation of biodiversity is critical for the coexistence of humans and the sustenance of other living organisms within the ecosystem. Identification and prioritization of specific regions to be conserved are impossible without proper information about the sites. Monitoring bodies such as the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) have reported that the total number of species now threatened with extinction is higher than ever before and that species are progressing toward extinction at an alarming rate. Moreover, the conceptualized global responses to these crises remain inadequate and entail drastic changes. Therefore, more sophisticated monitoring and conservation techniques are required that can cover a larger surface area within a stipulated time frame and gather a large pool of data. Hence, this study is an overview of remote monitoring methods in biodiversity conservation via a survey of evidence-based reviews and related studies, describing the application of selected technologies for biodiversity conservation and monitoring. Finally, the paper also describes transformative smart technologies such as artificial intelligence (AI) and machine learning algorithms that enhance the working efficiency of currently available techniques and will aid remote monitoring methods in biodiversity conservation.
Affiliation(s)
- Rout George Kerry
- Department of Biotechnology, Utkal University, Vani Vihar, Bhubaneswar, Odisha 751004 India
- Rajeswari Das
- Department of Soil Science and Agricultural Chemistry, School of Agriculture, GIET University, Gunupur, Rayagada, Odisha 765022 India
- Sushmita Patra
- Indian Council of Agricultural Research-Directorate of Foot and Mouth Disease-International Centre for Foot and Mouth Disease, Arugul, Bhubaneswar, Odisha 752050 India
- Ganesh Kumar Maurya
- Zoology Section, Mahila MahaVidyalya, Banaras Hindu University, Varanasi, 221005 India
- Vinayak Nayak
- Indian Council of Agricultural Research-Directorate of Foot and Mouth Disease-International Centre for Foot and Mouth Disease, Arugul, Bhubaneswar, Odisha 752050 India
- Atala Bihari Jena
- Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA 02115 USA
- Ram Chandra Jena
- Department of Pharmaceutical Sciences, Utkal University, Vani Vihar, Bhubaneswar, Odisha 751004 India
- Sushanto Gouda
- Department of Zoology, Mizoram University, Aizawl, 796009 India
- Sanatan Majhi
- Department of Biotechnology, Utkal University, Vani Vihar, Bhubaneswar, Odisha 751004 India
- Jyoti Ranjan Rout
- School of Biological Sciences, AIPH University, Bhubaneswar, Odisha 752101 India
12
Song S, Liu T, Wang H, Hasi B, Yuan C, Gao F, Shi H. Using Pruning-Based YOLO v3 Deep Learning Algorithm for Accurate Detection of Sheep Face. Animals (Basel) 2022; 12:1465. PMID: 35681929; PMCID: PMC9179321; DOI: 10.3390/ani12111465.
Abstract
Simple Summary
The identification of individual animals is an important step toward precision breeding, with a major role in both breeding and genetic management. The continuous development of computer vision and deep learning technologies provides new possibilities for building accurate breeding models, helping to achieve high productivity and precise management in precision agriculture. Here, we demonstrate that sheep faces can be recognized with the YOLOv3 target detection network. A model compression method based on the K-means clustering algorithm, combined with channel pruning and layer pruning, is applied to individual sheep identification. The results show that the proposed non-contact sheep face recognition method can identify sheep quickly and accurately.
Abstract
Accurate identification of sheep is important for achieving precise animal management and welfare farming on large farms. In this study, a sheep face detection method based on YOLOv3 model pruning, abbreviated as YOLOv3-P, is proposed. The method is used to identify sheep in pastures, reduce stress, and achieve welfare farming. Specifically, we collected Sunit sheep face images from a pasture in Sunit Right Banner, Xilin Gol League, Inner Mongolia, and trained and compared YOLOv3, YOLOv4, Faster R-CNN, SSD, and other classical target recognition algorithms, ultimately choosing to optimize YOLOv3. Clustering the anchor boxes in YOLOv3 with the sheep face dataset increased the mAP from 95.3% to 96.4%, and compressing the model further increased the mAP from 96.4% to 97.2% while reducing the model to one quarter of its original size. In addition, we restructured the original dataset and performed a 10-fold cross-validation experiment, obtaining an mAP of 96.84%.
The results show that clustering the anchor boxes and compressing the model on this dataset is an effective method for identifying sheep. The method features low memory requirements, high recognition accuracy, and fast recognition speed, and has important applications in precision animal management and welfare farming.
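The anchor-clustering step above groups the dataset's box sizes with K-means so that the anchors match typical sheep-face shapes. A minimal sketch over (width, height) pairs, simplified to Euclidean distance with fixed initial centers (YOLO-style anchor clustering often uses 1 − IoU as the distance instead; the box sizes below are made up for illustration):

```python
# Minimal K-means over (w, h) box sizes to pick anchor shapes.
# Simplified: Euclidean distance and fixed initial centers; YOLO anchor
# clustering typically uses 1 - IoU as the distance. Data is made up.

def kmeans_anchors(boxes, centers, iters=10):
    for _ in range(iters):
        # assign each box to its nearest center
        clusters = [[] for _ in centers]
        for w, h in boxes:
            d = [(w - cw) ** 2 + (h - ch) ** 2 for cw, ch in centers]
            clusters[d.index(min(d))].append((w, h))
        # recompute each center as the mean of its cluster
        centers = [
            (sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers

boxes = [(10, 12), (11, 11), (50, 48), (52, 55)]   # two obvious size groups
anchors = kmeans_anchors(boxes, centers=[(5, 5), (60, 60)])
```

The returned centers become the anchor widths and heights, so predictions start from priors that already resemble the dataset's faces rather than generic COCO anchors.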
Affiliation(s)
- Shuang Song
- College of Computer and Information Engineering, Tianjin Agricultural University, Tianjin 300384, China; (S.S.); (C.Y.); (F.G.)
- Tonghai Liu
- College of Computer and Information Engineering, Tianjin Agricultural University, Tianjin 300384, China; (S.S.); (C.Y.); (F.G.)
- Correspondence: (T.L.); (H.S.); Tel.: +86-13920136245 (T.L.); +86-15049181288 (H.S.)
- Hai Wang
- Institute of Grassland Research, Chinese Academy of Agricultural Sciences, Hohhot 010010, China; (H.W.); (B.H.)
- Bagen Hasi
- Institute of Grassland Research, Chinese Academy of Agricultural Sciences, Hohhot 010010, China; (H.W.); (B.H.)
- Chuangchuang Yuan
- College of Computer and Information Engineering, Tianjin Agricultural University, Tianjin 300384, China; (S.S.); (C.Y.); (F.G.)
- Fangyu Gao
- College of Computer and Information Engineering, Tianjin Agricultural University, Tianjin 300384, China; (S.S.); (C.Y.); (F.G.)
- Hongxiao Shi
- Institute of Grassland Research, Chinese Academy of Agricultural Sciences, Hohhot 010010, China; (H.W.); (B.H.)
- Correspondence: (T.L.); (H.S.); Tel.: +86-13920136245 (T.L.); +86-15049181288 (H.S.)
13
Abstract
Camera traps deployed in remote locations provide an effective method for ecologists to monitor and study wildlife in a non-invasive way. However, current camera traps suffer from two problems. First, the images are manually classified and counted, which is expensive. Second, due to manual coding, the results are often stale by the time they get to the ecologists. Using the Internet of Things (IoT) combined with deep learning represents a good solution for both these problems, as the images can be classified automatically and the results immediately made available to ecologists. This paper proposes an IoT architecture that uses deep learning on edge devices to convey animal classification results to a mobile app using the LoRaWAN low-power, wide-area network. The primary goal of the proposed approach is to reduce the cost of the wildlife monitoring process for ecologists and to provide real-time animal sightings data from the camera traps in the field. Camera trap image data consisting of 66,400 images were used to train the InceptionV3, MobileNetV2, ResNet18, EfficientNetB1, DenseNet121, and Xception neural network models. While performance of the trained models was statistically different (Kruskal–Wallis: accuracy H(5) = 22.34, p < 0.05; F1-score H(5) = 13.82, p = 0.0168), there was only a 3% difference in the F1-score between the worst (MobileNetV2) and the best model (Xception). Moreover, the models made similar errors (Adjusted Rand Index (ARI) > 0.88 and Adjusted Mutual Information (AMI) > 0.82). Subsequently, the best model, Xception (accuracy = 96.1%; F1-score = 0.87; F1-score = 0.97 with oversampling), was optimized and deployed on the Raspberry Pi, Google Coral, and Nvidia Jetson edge devices using both TensorFlow Lite and TensorRT frameworks. Optimizing the models to run on edge devices reduced the average macro F1-score to 0.7 and adversely affected the minority classes, reducing their F1-score to as low as 0.18.
Upon stress testing by processing 1000 images consecutively, the Jetson Nano, running a TensorRT model, outperformed the others with a latency of 0.276 s/image (s.d. = 0.002) while consuming an average current of 1665.21 mA. The Raspberry Pi consumed the least average current (838.99 mA) but with roughly ten times the latency, at 2.83 s/image (s.d. = 0.036). The Nano was the only reasonable option as an edge device because it could capture most animals whose maximum speeds are below 80 km/h, including goats, lions, and ostriches. While the proposed architecture is viable, unbalanced data remain a challenge, and the results can potentially be improved by using object detection to reduce imbalances and by exploring semi-supervised learning.
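The 80 km/h bound above follows from simple arithmetic: an animal is capturable only if it stays in the camera's detection zone at least as long as one end-to-end inference latency. A minimal sketch of that reasoning (the 6 m detection-zone width is an illustrative assumption, not a figure from the paper):

```python
# Hedged sketch: maximum animal ground speed a camera-trap pipeline can
# capture, assuming the animal must remain in the detection zone for one
# full inference latency. The zone width is an illustrative assumption.

def max_capturable_speed_kmh(latency_s: float, zone_width_m: float = 6.0) -> float:
    """Fastest ground speed (km/h) still guaranteed at least one processed frame."""
    return (zone_width_m / latency_s) * 3.6  # m/s -> km/h

jetson = max_capturable_speed_kmh(0.276)    # Jetson Nano + TensorRT latency
raspberry = max_capturable_speed_kmh(2.83)  # Raspberry Pi latency

print(f"Jetson Nano:  ~{jetson:.0f} km/h")
print(f"Raspberry Pi: ~{raspberry:.0f} km/h")
```

With these assumed numbers the Nano lands near the 80 km/h mark while the Raspberry Pi can only capture animals moving at walking pace, which matches the paper's conclusion that the Nano was the only reasonable edge device.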
14
Abstract
Observing and quantifying primate behavior in the wild is challenging. Human presence affects primate behavior, and habituation of new, especially terrestrial, individuals is a time-intensive process that carries ethical and health concerns, especially during the recent pandemic, when primates are at even greater risk than usual. As a result, wildlife researchers, including primatologists, have increasingly turned to new technologies to answer questions and provide important data related to primate conservation. Tools and methods should be chosen carefully to maximize and improve the data that will be used to answer the research questions. We review here the role of four indirect methods (camera traps, acoustic monitoring, drones, and portable field labs) and improvements in machine learning that offer rapid, reliable means of combing through the large datasets these methods generate. We describe key applications and limitations of each tool in primate conservation, and where we anticipate primate conservation technology moving in the coming years.
15
AI Enabled IoRT Framework for Rodent Activity Monitoring in a False Ceiling Environment. SENSORS 2021; 21:s21165326. [PMID: 34450767 PMCID: PMC8398580 DOI: 10.3390/s21165326] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Revised: 07/28/2021] [Accepted: 08/02/2021] [Indexed: 11/16/2022]
Abstract
Routine rodent inspection is essential to curbing rat-borne diseases and infrastructure damage within the built environment. Rodents find false ceilings a perfect spot to seek shelter and construct their habitats. However, manual false-ceiling inspection for rodents is laborious and risky. This work presents an AI-enabled IoRT framework for rodent activity monitoring inside a false ceiling using an in-house developed robot called “Falcon”. The IoRT serves as a bridge between the users and the robots, through which seamless information sharing takes place. Images shared by the robots are inspected with a Faster R-CNN ResNet-101 object detection algorithm, which automatically detects signs of rodents inside a false ceiling. The efficiency of the rodent activity detection algorithm was tested in a real-world false ceiling environment, and detection accuracy was evaluated with standard performance metrics. The experimental results indicate that the algorithm detects rodent signs and 3D-printed rodents with a good confidence level.
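The "good confidence level" above implies a post-processing step in which detector output is filtered by score. A generic sketch of that step follows; the class names, scores, boxes, and the 0.5 threshold are illustrative assumptions, not values from the paper:

```python
# Hedged sketch: filtering object-detector output by confidence score.
# A Faster R-CNN-style head emits (label, score, box) triples per image;
# the detections below are invented for illustration.

Detection = tuple[str, float, tuple[int, int, int, int]]

def filter_detections(dets: list[Detection], min_score: float = 0.5) -> list[Detection]:
    """Keep only detections at or above the confidence threshold."""
    return [d for d in dets if d[1] >= min_score]

raw = [
    ("rodent_dropping", 0.91, (14, 20, 40, 52)),
    ("gnaw_mark", 0.47, (80, 10, 120, 44)),       # below threshold, discarded
    ("3d_printed_rodent", 0.88, (5, 60, 90, 140)),
]
kept = filter_detections(raw)
print([d[0] for d in kept])  # -> ['rodent_dropping', '3d_printed_rodent']
```

Raising the threshold trades recall for precision, which matters here because a missed rodent sign delays pest intervention while a false alarm only costs a manual recheck.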
16
Brown LJ, Davy CM. Evaluation of spot patterns and carapace abnormalities of an Endangered freshwater turtle, Clemmys guttata, as a potential tool for population assignment. ENDANGER SPECIES RES 2021. [DOI: 10.3354/esr01120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
Many of the world’s contemporary species of turtle are extinct or threatened with extinction due to habitat loss, increases in anthropogenic sources of mortality, and poaching (illegal collection). The slow life-history strategy of most turtle species magnifies the effects of poaching because the loss of even a few mature individuals can impact population growth. Returning poached turtles to their population of origin, where possible, can mitigate these effects, but identifying the origin of these individuals can be challenging. We hypothesized that spot patterns might allow assignment of Endangered spotted turtles Clemmys guttata to their population of origin. We characterized and compared spot patterns from carapace photographs of 126 individuals from 10 sites. To explore other types of information these photographs might provide, we also documented carapacial scute abnormalities and quantified their association with genetic diversity and latitude. Spot pattern similarity was not higher within populations than among populations and did not accurately differentiate populations. Carapacial scute abnormalities occurred in 82% of turtles and were not correlated with estimates of neutral genetic diversity. Abnormalities were positively correlated with latitude, implicating thermal stress during the early stages of development in the generation of some scute deformities. However, this relationship became non-significant when line (scute seam) abnormalities were excluded from the data, suggesting a different primary cause for the more severe scute deformities. Further research should continue to investigate the drivers of these deformities, as monitoring shifts in the frequency of scute deformities may provide relevant information for conservation and recovery of endangered turtles.
Affiliation(s)
- LJ Brown
- Wildlife Research and Monitoring Section, Ontario Ministry of Natural Resources and Forestry, Trent University, Peterborough, Ontario K9L 0G2, Canada
- CM Davy
- Wildlife Research and Monitoring Section, Ontario Ministry of Natural Resources and Forestry, Trent University, Peterborough, Ontario K9L 0G2, Canada
- Department of Biology, Trent University, Peterborough, Ontario K9L 0G2, Canada
17
New Online Resource on the 3Rs Principles of Animal Research for Wildlife Biologists, Ecologists, and Conservation Managers. CONSERVATION 2021. [DOI: 10.3390/conservation1020009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
The Earth’s biodiversity is in crisis. Without radical action to conserve habitats, the current rate of species extinction is predicted to accelerate even further. Efficient species conservation requires planning, management, and continuous biodiversity monitoring through wildlife research. Conservation biology was built on the utilitarian principle, where the well-being of species, populations, and ecosystems is given priority over the well-being of individual animals. However, this tenet has been increasingly under discussion and it has been argued that wildlife researchers need to safeguard the welfare of the individual animals traditionally subjected to invasive or lethal research procedures. The 3Rs principles of animal use (Replacement, Reduction, and Refinement) have become the cornerstone of ethical scientific conduct that could minimize the potential negative impact of research practices. One of the obvious strategies to implement the 3Rs in wildlife studies is to use non-invasive or non-lethal research methods. However, in contrast to toxicological or pharmacological research on laboratory animal models, up to now no 3Rs databases or online resources designed specifically for wildlife biologists, ecologists, and conservation managers have been available. To aid the implementation of the 3Rs principles into research on wildlife, I developed an online resource whose structure is outlined in this paper. The website contains a curated database of peer-reviewed articles that have implemented non-invasive or non-lethal research methods that could be used as a guideline for future studies.
18
Schindler F, Steinhage V. Identification of animals and recognition of their actions in wildlife videos using deep learning techniques. ECOL INFORM 2021. [DOI: 10.1016/j.ecoinf.2021.101215] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
19
UIOT-FMT: A Universal Format for Collection and Aggregation of Data from Smart Devices. SENSORS 2020; 20:s20226662. [PMID: 33233751 PMCID: PMC7699945 DOI: 10.3390/s20226662] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Revised: 11/09/2020] [Accepted: 11/14/2020] [Indexed: 11/16/2022]
Abstract
Information Technology (IT) has become an essential part of our lives, and with the emergence of the Internet of Things (IoT), technology now encompasses most of the things humans rely on daily. As IT becomes more relevant to daily life, the need for IT to serve public emergency services grows in importance. However, because the IoT is still in its infancy, a common data format is needed to best support policing in a technologically driven society. This paper discusses the plausibility of creating a universal format for use in carrying out public services, such as emergency response by the police and routine law enforcement. We discuss what police require in the line of duty and how smart devices can be used to satisfy those needs. A data formatting framework is developed and demonstrated, with the goal of showing what can be done to unify data from smart city sensors.
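The unification idea can be illustrated with a small sketch: heterogeneous device payloads are mapped into one common record schema before aggregation. The field names and the two device formats here are illustrative assumptions, not the UIOT-FMT specification itself:

```python
# Hedged sketch: normalizing two hypothetical smart-device payloads into
# one common record, in the spirit of a universal IoT data format.
from dataclasses import dataclass

@dataclass
class UnifiedRecord:
    device_id: str
    timestamp: int      # Unix epoch seconds
    sensor_type: str
    value: float

def from_camera(payload: dict) -> UnifiedRecord:
    # Hypothetical camera-trap payload: {"cam": ..., "ts": ..., "motion": ...}
    return UnifiedRecord(payload["cam"], payload["ts"], "motion", payload["motion"])

def from_noise_sensor(payload: dict) -> UnifiedRecord:
    # Hypothetical city noise sensor: {"id": ..., "time": ..., "db": ...}
    return UnifiedRecord(payload["id"], payload["time"], "noise_db", payload["db"])

records = [
    from_camera({"cam": "C1", "ts": 1700000000, "motion": 1.0}),
    from_noise_sensor({"id": "N7", "time": 1700000005, "db": 62.5}),
]
print([r.sensor_type for r in records])  # -> ['motion', 'noise_db']
```

Once every device speaks the same schema, downstream consumers such as a police dispatch service can query one aggregated stream instead of writing a parser per vendor.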