1
Shermeister B, Mor D, Levy O. Leveraging camera traps and artificial intelligence to explore thermoregulation behaviour. J Anim Ecol 2024. PMID: 39039745. DOI: 10.1111/1365-2656.14139.
Abstract
Behavioural thermoregulation has critical ecological and physiological consequences that profoundly influence individual fitness and species distributions, particularly in the context of climate change. However, field monitoring of this behaviour remains labour-intensive and time-consuming. With the rise of camera-based surveys and artificial intelligence (AI) approaches in computer vision, we should try to build better tools for characterizing animals' behavioural thermoregulation. In this study, we developed a deep learning framework to automate the detection and classification of thermoregulation behaviour. We used a lizard species, the rough-tail rock agama (Laudakia vulgaris), as a model for thermoregulation. We colour-marked the lizards and curated a diverse dataset of images captured by trail cameras under semi-natural conditions. We then trained an object-detection model to identify lizards and image classification models to determine their microclimate usage (activity in sun or shade), which may indicate thermoregulation preferences. We evaluated the performance of each model and analysed how the classification of thermoregulating lizards performed under different solar conditions (sun or shade), times of day, and marking colours. Our framework's models achieved high scores on several performance metrics. The behavioural thermoregulation classification model performed significantly better on sun-basking lizards and achieved its highest classification accuracy on white-marked lizards. Moreover, the lizards' hours of activity and microclimate choices (sun- versus shade-seeking behaviour), as generated by our framework, aligned closely with manually annotated data. Our study underscores the potential of AI for effectively tracking behavioural thermoregulation, offering a promising new direction for camera trap studies. This approach can reduce the labour and time associated with ecological data collection and analysis and help us gain a deeper understanding of species' thermal preferences and of the risks that climate change poses to their behaviour.
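As a rough illustration of the detect-then-classify pipeline the abstract describes, the sketch below chains a pretrained object detector with a binary microclimate classifier. The paper's trained weights and label set are not reproduced here, so the torchvision models, the `MICROCLIMATES` labels, and the score threshold are illustrative stand-ins.

```python
import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

# Stand-in models: the paper's trained weights are not public, so a COCO
# pretrained detector and an untrained binary head act as placeholders.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

classifier = torchvision.models.resnet18(weights="DEFAULT")
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 2)  # sun vs. shade
classifier.eval()

MICROCLIMATES = ["shade", "sun"]  # hypothetical label order

def classify_thermoregulation(image_path, score_thresh=0.8):
    """Detect animals in a trail-camera frame, then classify each crop."""
    img = Image.open(image_path).convert("RGB")
    tensor = F.to_tensor(img)
    results = []
    with torch.no_grad():
        dets = detector([tensor])[0]
        for box, score in zip(dets["boxes"], dets["scores"]):
            if score < score_thresh:
                continue
            x1, y1, x2, y2 = (int(v) for v in box)
            if x2 <= x1 or y2 <= y1:
                continue  # skip degenerate boxes after rounding
            crop = F.resize(tensor[:, y1:y2, x1:x2], [224, 224])
            label = MICROCLIMATES[int(classifier(crop.unsqueeze(0)).argmax())]
            results.append((box.tolist(), label))
    return results
```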
Affiliation(s)
- Ben Shermeister
- Faculty of Life Sciences, School of Zoology, Tel Aviv University, Tel Aviv, Israel
- Danny Mor
- Faculty of Life Sciences, School of Zoology, Tel Aviv University, Tel Aviv, Israel
- Ofir Levy
- Faculty of Life Sciences, School of Zoology, Tel Aviv University, Tel Aviv, Israel
2
Brickson L, Zhang L, Vollrath F, Douglas-Hamilton I, Titus AJ. Elephants and algorithms: a review of the current and future role of AI in elephant monitoring. J R Soc Interface 2023; 20:20230367. PMID: 37963556. PMCID: PMC10645515. DOI: 10.1098/rsif.2023.0367.
Abstract
Artificial intelligence (AI) and machine learning (ML) present revolutionary opportunities to enhance our understanding of animal behaviour and conservation strategies. Using elephants, a crucial species in the protected areas of Africa and Asia, as our focal point, we delve into the role of AI and ML in their conservation. Given the increasing amounts of data gathered from a variety of sensors such as cameras, microphones, geophones, drones, and satellites, the challenge lies in managing and interpreting these vast data streams. New AI and ML techniques offer solutions to streamline this process, helping us extract vital information that might otherwise be overlooked. This paper focuses on the different AI-driven monitoring methods and their potential for improving elephant conservation. Collaborative efforts between AI experts and ecological researchers are essential in leveraging these innovative technologies for enhanced wildlife conservation, setting a precedent for numerous other species.
Affiliation(s)
- Fritz Vollrath
- Save the Elephants, Nairobi, Kenya
- Department of Biology, University of Oxford, Oxford, UK
- Alexander J. Titus
- Colossal Biosciences, Dallas, TX, USA
- Information Sciences Institute, University of Southern California, Los Angeles, USA
3
Binta Islam S, Valles D, Hibbitts TJ, Ryberg WA, Walkup DK, Forstner MRJ. Animal Species Recognition with Deep Convolutional Neural Networks from Ecological Camera Trap Images. Animals (Basel) 2023; 13:1526. PMID: 37174563. PMCID: PMC10177479. DOI: 10.3390/ani13091526.
Abstract
Accurate identification of animal species is necessary to understand biodiversity richness, monitor endangered species, and study the impact of climate change on species distribution within a specific region. Camera traps represent a passive monitoring technique that generates millions of ecological images. This sheer volume makes automated analysis essential, given that manual assessment of large datasets is laborious, time-consuming, and expensive. Deep learning networks have advanced in recent years to solve object and species identification tasks in the computer vision domain, providing state-of-the-art results. In our work, we trained and tested machine learning models to classify three animal groups (snakes, lizards, and toads) from camera trap images. We experimented with two pretrained models, VGG16 and ResNet50, and a self-trained convolutional neural network (CNN-1) with varying CNN layers and augmentation parameters. For multiclass classification, CNN-1 achieved 72% accuracy, whereas VGG16 reached 87% and ResNet50 attained 86%. These results demonstrate that the transfer learning approach outperforms the self-trained model. The models showed promising results in identifying species, even those that are difficult to spot owing to small body size or surrounding vegetation.
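To make the transfer-learning comparison concrete, here is a minimal Keras sketch of the approach the abstract describes: a frozen ImageNet backbone (VGG16 or ResNet50) topped with a small classification head for the three classes. The head architecture, input size, and optimizer settings are assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # snakes, lizards, toads

def transfer_model(backbone="vgg16", input_shape=(224, 224, 3)):
    """Frozen pretrained backbone plus a small trainable classification head."""
    if backbone == "vgg16":
        base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                           input_shape=input_shape)
    else:
        base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                              input_shape=input_shape)
    base.trainable = False  # transfer learning: reuse ImageNet features as-is
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),  # illustrative regularization choice
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = transfer_model("resnet50")
model.summary()
```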
Affiliation(s)
- Sazida Binta Islam
- Ingram School of Engineering, Texas State University, San Marcos, TX 78666, USA
- Damian Valles
- Ingram School of Engineering, Texas State University, San Marcos, TX 78666, USA
- Toby J Hibbitts
- Natural Resources Institute, Texas A&M University, College Station, TX 77843, USA
- Biodiversity Research and Teaching Collections, Texas A&M University, College Station, TX 77843, USA
- Wade A Ryberg
- Natural Resources Institute, Texas A&M University, College Station, TX 77843, USA
- Danielle K Walkup
- Natural Resources Institute, Texas A&M University, College Station, TX 77843, USA
4
Leorna S, Brinkman T. Human vs. machine: Detecting wildlife in camera trap images. Ecol Inform 2022. DOI: 10.1016/j.ecoinf.2022.101876.
5
Liu X, Wang D, Li Y, Guan X, Qin C. Detection of Green Asparagus Using Improved Mask R-CNN for Automatic Harvesting. Sensors (Basel) 2022; 22:9270. PMID: 36501972. PMCID: PMC9741112. DOI: 10.3390/s22239270.
Abstract
Advancements in deep learning and computer vision have led to numerous effective solutions to challenging problems in agricultural automation. To improve detection precision in the autonomous harvesting of green asparagus, in this article we propose the DA-Mask RCNN model, which exploits depth information in the region proposal network. First, a deep residual network and a feature pyramid network are combined to form the backbone. Second, the DA-Mask RCNN model adds a depth filter to aid the softmax function in anchor classification. The region proposals are then further processed by the detection head unit. The training and test images were mainly acquired from different regions in the Yangtze River basin. During capture, various weather and illumination conditions were taken into account, including sunny weather, sunny but overshadowed conditions, cloudy weather, and daytime as well as nighttime greenhouse conditions. Performance, comparison, and ablation experiments were carried out on the five constructed datasets to verify the effectiveness of the proposed model, with precision, recall, and F1-score used to evaluate the different approaches. The overall experimental results demonstrate that the proposed DA-Mask RCNN model achieves a better balance of precision and speed than existing algorithms.
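The abstract does not spell out how the depth filter gates anchors, so the sketch below is only a guess at the general idea: discard region proposals whose depth statistics fall outside the harvester's plausible working range. The function name, the median-depth criterion, and the range bounds are all hypothetical.

```python
import numpy as np

def depth_filter_proposals(boxes, scores, depth_map, near=0.3, far=1.5):
    """Hypothetical depth gate for an RPN: keep proposals whose median depth
    lies within the working range [near, far] metres; the paper's actual
    filter design may differ substantially.

    boxes     : (N, 4) array of [x1, y1, x2, y2] proposals in pixels
    scores    : (N,) array of objectness scores
    depth_map : (H, W) per-pixel depth in metres (e.g. from an RGB-D camera)
    """
    keep = []
    for i, (x1, y1, x2, y2) in enumerate(boxes.astype(int)):
        region = depth_map[y1:y2, x1:x2]
        if region.size > 0 and near <= np.median(region) <= far:
            keep.append(i)
    return boxes[keep], scores[keep]
```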
Affiliation(s)
- Xiangpeng Liu
- College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, Shanghai 201418, China
- Danning Wang
- College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, Shanghai 201418, China
- Yani Li
- School of Engineering and Telecommunications, University of New South Wales, Sydney 2052, Australia
- Xiqiang Guan
- College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, Shanghai 201418, China
- Chengjin Qin
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
6
Aktas K, Kiisk M, Giammanco A, Anbarjafari G, Mägi M. A Comparison of Neural Networks and Center of Gravity in Muon Hit Position Estimation. Entropy (Basel) 2022; 24:1659. PMID: 36421514. PMCID: PMC9689399. DOI: 10.3390/e24111659.
Abstract
The performance of cosmic-ray tomography systems is largely determined by their tracking accuracy. With conventional scintillation detector technology, good precision can be achieved with a small pitch between the elements of the detector array. Improving the resolution, however, means increasing the number of read-out channels, which in turn increases the complexity and cost of the tracking detectors. As an alternative, a scintillation plate detector coupled with multiple silicon photomultipliers can serve as a technically simple solution. In this paper, we present a comparison between two deep-learning-based methods and a conventional Center of Gravity (CoG) algorithm for calculating cosmic-ray muon hit positions on the plate detector from the photomultiplier signals. We generated a dataset of muon hits on a detector plate using the Monte Carlo simulation toolkit GEANT4 and demonstrate that both deep-learning-based methods outperform the conventional CoG algorithm by a significant margin. Our proposed algorithm, a fully connected network, produces an average error of 0.72 mm, measured as the Euclidean distance between actual and predicted hit coordinates, a substantial improvement over CoG, which yields 1.41 mm on the same dataset. Additionally, we investigated the effects of different sensor configurations on performance.
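For reference, the CoG baseline is just a signal-weighted average of the sensor positions. A minimal sketch, with plate dimensions and sensor layout invented for the example:

```python
import numpy as np

def cog_hit_position(signals, sensor_xy):
    """Center-of-gravity estimate: the signal-weighted mean of sensor positions.

    signals   : (N,) photomultiplier amplitudes
    sensor_xy : (N, 2) sensor coordinates on the plate [mm]
    """
    signals = np.asarray(signals, dtype=float)
    weights = signals / signals.sum()
    return weights @ np.asarray(sensor_xy, dtype=float)

# Hypothetical layout: four corner SiPMs on a 100 mm x 100 mm plate.
# A hit near the top-right corner yields the strongest signal there.
sensors = [(0, 0), (100, 0), (0, 100), (100, 100)]
print(cog_hit_position([0.1, 0.3, 0.2, 0.9], sensors))  # -> [80.0, 73.33]
```

The deep-learning alternatives in the paper replace this closed-form estimate with a network that learns the nonlinear mapping from the same amplitudes to hit coordinates.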
Affiliation(s)
- Kadir Aktas
- iCV Research Lab., Institute of Technology, University of Tartu, 51009 Tartu, Estonia
- Madis Kiisk
- Institute of Physics, University of Tartu, 51009 Tartu, Estonia
- GScan Ltd., Mäealuse 2/1, 12618 Tallinn, Estonia
- Andrea Giammanco
- Centre for Cosmology, Particle Physics and Phenomenology (CP3), Université Catholique de Louvain, B-1348 Louvain-la-Neuve, Belgium
- Gholamreza Anbarjafari
- iCV Research Lab., Institute of Technology, University of Tartu, 51009 Tartu, Estonia
- Higher Education Institute, Yildiz Technical University, Istanbul 34349, Turkey
- Märt Mägi
- GScan Ltd., Mäealuse 2/1, 12618 Tallinn, Estonia
7
Animal Detection and Classification from Camera Trap Images Using Different Mainstream Object Detection Architectures. Animals (Basel) 2022; 12:1976. PMID: 35953964. PMCID: PMC9367452. DOI: 10.3390/ani12151976.
Simple Summary
The imagery captured by camera traps provides important information for wildlife research and conservation. Deep learning technology can assist ecologists in automatically identifying and processing imagery captured from camera traps, improving research capabilities and efficiency. Many general deep learning architectures have been proposed, but few have been evaluated for applicability in real camera trap scenarios. Our study constructed the Northeast Tiger and Leopard National Park wildlife dataset (NTLNP dataset) for the first time and compared the real-world performance of three currently mainstream object detection models. We hope this study provides a reference on the applicability of AI techniques in real wild scenarios and truly helps ecologists conduct wildlife conservation, management, and research more effectively.
Abstract
Camera traps are widely used in wildlife surveys and biodiversity monitoring. Because they are trigger-based, they can accumulate large volumes of images or videos. Previous work has proposed applying deep learning techniques to automatically identify wildlife in camera trap imagery, which can significantly reduce manual work and speed up analysis. However, few studies have validated and compared the applicability of different object detection models in real field monitoring scenarios. In this study, we first constructed a wildlife image dataset of the Northeast Tiger and Leopard National Park (NTLNP dataset). We then evaluated the recognition performance of three currently mainstream object detection architectures and compared the performance of training models on day and night data separately versus together. We selected the YOLOv5 series (anchor-based, one-stage), Cascade R-CNN with an HRNet32 feature extractor (anchor-based, two-stage), and FCOS with ResNet50 and ResNet101 feature extractors (anchor-free, one-stage). The experimental results showed that models trained jointly on day and night data performed well: on average, our models reached 0.98 mAP (mean average precision) in animal image detection and 88% accuracy in animal video classification, with the one-stage YOLOv5m achieving the best recognition accuracy. With the help of AI technology, ecologists can extract information from masses of imagery quickly and efficiently, saving substantial time.
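As a small illustration of the kind of pipeline such an evaluation rests on, the snippet below runs a pretrained YOLOv5 model over camera-trap frames via `torch.hub`. The NTLNP-trained weights are not public here, so the stock COCO checkpoint, the file names, and the confidence threshold are stand-ins.

```python
import torch

# Community YOLOv5 via torch.hub; the COCO-pretrained checkpoint stands in
# for the paper's NTLNP-trained weights.
model = torch.hub.load("ultralytics/yolov5", "yolov5m", pretrained=True)
model.conf = 0.5  # minimum confidence for a detection to be kept

frames = ["trap_frame_001.jpg", "trap_frame_002.jpg"]  # hypothetical files
results = model(frames)

# Per-frame detections as a DataFrame: box corners, confidence, class name.
df = results.pandas().xyxy[0]
print(df[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```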
8
Aktas K, Ignjatovic V, Ilic D, Marjanovic M, Anbarjafari G. Deep convolutional neural networks for detection of abnormalities in chest X-rays trained on the very large dataset. Signal Image Video Process 2022; 17:1035-1041. PMID: 35873389. PMCID: PMC9296894. DOI: 10.1007/s11760-022-02309-w.
Abstract
One of the main challenges in the current pandemic is the detection of coronavirus. Conventional techniques (RT-PCR) have limitations such as long response times and limited accessibility. X-ray machines, on the other hand, are widely available and already digitized in health systems, so their use is faster and more accessible. In this research, we therefore evaluate how well deep CNNs classify normal versus pathological chest X-rays. Compared with previous research, we trained our network on the largest number of images, 103,468 in total, spanning five classes: COPD signs, COVID, normal, others, and pneumonia. We achieved 97% accuracy on the COVID class and 81% accuracy overall. Additionally, we achieved 84% accuracy for binary categorization into normal (78%) and abnormal (88%).
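The gap between the 97% COVID figure and the 81% overall figure is the usual one between a per-class score and whole-test-set accuracy. A small sketch of how both are read off a confusion matrix, with made-up labels standing in for the paper's test split (here the per-class score is computed as recall, one common convention):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

CLASSES = ["COPD signs", "COVID", "normal", "others", "pneumonia"]

# Hypothetical ground-truth labels and model predictions (class indices).
y_true = np.array([0, 1, 1, 2, 2, 2, 3, 4, 4, 1])
y_pred = np.array([0, 1, 1, 2, 3, 2, 3, 4, 2, 1])

cm = confusion_matrix(y_true, y_pred)
overall = np.trace(cm) / cm.sum()           # fraction correct over all samples
per_class = cm.diagonal() / cm.sum(axis=1)  # recall for each class

for name, acc in zip(CLASSES, per_class):
    print(f"{name}: {acc:.2f}")
print(f"overall: {overall:.2f}")
```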
Affiliation(s)
- Kadir Aktas
- iCV Research Lab, Institute of Technology, University of Tartu, 51009 Tartu, Estonia
- iVCV OÜ, 51011 Tartu, Estonia
- Dragan Ilic
- Singidunum University, 11010 Belgrade, Serbia
- Gholamreza Anbarjafari
- iCV Research Lab, Institute of Technology, University of Tartu, 51009 Tartu, Estonia
- iVCV OÜ, 51011 Tartu, Estonia
- PwC Advisory, Helsinki, Finland
- Yildiz Technical University, Istanbul, Turkey
9
Efficient Data-Driven Crop Pest Identification Based on Edge Distance-Entropy for Sustainable Agriculture. Sustainability 2022. DOI: 10.3390/su14137825.
Abstract
Agricultural activity is invariably accompanied by pests and diseases, which cause great losses in crop production. Intelligent algorithms based on deep learning have made progress in pest control, but their reliance on large volumes of training data consumes substantial resources, which works against the sustainable development of smart agriculture. This paper approaches the problem from the data side, aiming to identify efficient data and so resolve the data dilemma. We propose an Edge Distance-Entropy data evaluation method for selecting efficient crop pest data, reducing data consumption by 5% to 15% compared with existing methods. The experimental results demonstrate that the method can select efficient crop pest data, matching full-data performance with only about 60% of the data, and that it achieves state-of-the-art results compared with other data evaluation methods. This work addresses the heavy data dependence of existing intelligent pest control algorithms and has practical significance for the sustainable development of modern smart agriculture.
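The abstract does not define the Edge Distance-Entropy score, so the sketch below shows only the generic uncertainty-sampling idea it builds on: rank a pool of samples by the entropy of the model's softmax outputs and keep the most informative fraction. The function names and the entropy-only criterion are assumptions; the paper's score also involves a distance term.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of softmax outputs; higher means less certain, and
    uncertain samples are typically the more informative ones to keep."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -(probs * np.log(probs)).sum(axis=1)

def select_efficient_subset(probs, fraction=0.6):
    """Keep the most informative `fraction` of the pool by entropy rank."""
    scores = predictive_entropy(probs)
    k = int(len(scores) * fraction)
    return np.argsort(scores)[::-1][:k]  # indices of the top-k samples

# Toy pool: softmax outputs of a 3-class pest classifier on 5 images.
pool = np.array([[0.90, 0.05, 0.05],
                 [0.40, 0.35, 0.25],
                 [0.34, 0.33, 0.33],
                 [0.80, 0.10, 0.10],
                 [0.50, 0.30, 0.20]])
print(select_efficient_subset(pool, fraction=0.6))  # -> [2 1 4]
```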
10
Fang C, Zheng H, Yang J, Deng H, Zhang T. Study on Poultry Pose Estimation Based on Multi-Parts Detection. Animals (Basel) 2022; 12:1322. PMID: 35625168. PMCID: PMC9137532. DOI: 10.3390/ani12101322.
Simple Summary
Poultry farming is an important part of China's agricultural system. Automatic estimation of poultry posture can help to analyze the movement, behavior, and even health of poultry. In this study, a poultry pose-estimation system was designed that automatically estimates the pose of a single broiler chicken using a multi-part detection method. The experimental results show that this method obtains good pose-estimation results for a single broiler chicken with respect to precision, recall, and F1 score. The system designed in this study provides a new tool for poultry pose and behavior researchers.
Abstract
Poultry pose estimation is a prerequisite for evaluating abnormal behavior and predicting disease in poultry, and accurate pose estimation enables producers to manage their flocks better. Because chickens are reared in groups, automatic pose recognition remains a sticking point for accurate monitoring on large-scale farms. To this end, this paper uses a deep neural network (DNN) and computer vision to estimate the posture of a single broiler chicken. We compared its pose detection results against the Single Shot MultiBox Detector (SSD), You Only Look Once (YOLOv3), RetinaNet, and Faster R-CNN algorithms. Preliminary tests show that the proposed method achieves a precision of 0.9218 ± 0.0048 (95% confidence; SD 0.0128) and a recall of 0.8996 ± 0.0099 (95% confidence; SD 0.0266). Successfully estimating the pose of broiler chickens can facilitate the detection of abnormal poultry behavior, and the method can be further improved to increase the overall verification success rate.
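The abstract describes pose estimation assembled from per-part detections. The paper's association logic is not given, so the sketch below shows only the simplest possible version: take the highest-scoring detection for each body part and use its box centre as that part's keypoint. The part list and data layout are invented for illustration.

```python
PARTS = ["head", "neck", "body", "tail"]  # illustrative part set

def assemble_pose(detections):
    """Build a single-bird pose from per-part detections.

    detections: {part: [(x1, y1, x2, y2, score), ...]}
    Returns {part: (cx, cy)}, using the highest-scoring box per part;
    a simplification of whatever association step the paper actually uses.
    """
    pose = {}
    for part in PARTS:
        boxes = detections.get(part, [])
        if not boxes:
            continue  # part occluded or missed by the detector
        x1, y1, x2, y2, _ = max(boxes, key=lambda b: b[4])
        pose[part] = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    return pose

print(assemble_pose({"head": [(10, 5, 30, 25, 0.92)],
                     "body": [(20, 20, 80, 70, 0.88), (15, 25, 75, 65, 0.40)]}))
# -> {'head': (20.0, 15.0), 'body': (50.0, 45.0)}
```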
Affiliation(s)
- Cheng Fang
- College of Engineering, South China Agricultural University, 483 Wushan Road, Guangzhou 510642, China
- Haikun Zheng
- College of Engineering, South China Agricultural University, 483 Wushan Road, Guangzhou 510642, China
- Jikang Yang
- College of Engineering, South China Agricultural University, 483 Wushan Road, Guangzhou 510642, China
- Hongfeng Deng
- College of Engineering, South China Agricultural University, 483 Wushan Road, Guangzhou 510642, China
- Tiemin Zhang
- College of Engineering, South China Agricultural University, 483 Wushan Road, Guangzhou 510642, China
- National Engineering Research Center for Breeding Swine Industry, Guangzhou 510642, China
- Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou 510642, China
11
Towards Automated Detection and Localization of Red Deer Cervus elaphus Using Passive Acoustic Sensors during the Rut. Remote Sensing 2022. DOI: 10.3390/rs14102464.
Abstract
Passive acoustic sensors have the potential to become a valuable complementary component of red deer Cervus elaphus monitoring, providing deeper insight into the behavior of stags during the rutting period. Automation of data acquisition and processing is crucial for the adoption and wider uptake of acoustic monitoring. We therefore propose and demonstrate an automated data processing workflow for red deer call detection and localization. A unique dataset of red deer calls during the September 2021 rut was collected with four GPS time-synchronized microphones. Five supervised machine learning algorithms were tested and compared for the detection of rutting calls, with the support-vector-machine-based approach demonstrating the best performance: 96.46% detection accuracy. For sound source localization, a hyperbolic localization approach was applied. A novel approach based on cross-correlation and spectral feature similarity was proposed for assessing sound delays across multiple microphones, resulting in a median localization error of 16 m and thus providing a solution for automated sound source localization, the main challenge in automating the data processing workflow. The automated approach outperformed manual sound delay assessment by a human expert, whose median localization error was 43 m. Artificial sound recordings with known locations in the pilot territory were used for localization performance testing.
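The core of such delay assessment is estimating the time difference of arrival (TDOA) of the same call at two microphones. A minimal sketch using plain cross-correlation; the paper's method additionally incorporates spectral feature similarity, and the sampling rate and signals here are synthetic:

```python
import numpy as np

def tdoa_cross_correlation(sig_a, sig_b, fs):
    """Estimate the arrival-time difference of a call at two microphones
    from the peak of their full cross-correlation (in seconds)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)  # lag in samples
    return lag / fs

# Synthetic check: the same decaying pulse, delayed by 120 samples at 8 kHz.
fs = 8000
pulse = np.exp(-np.linspace(0, 4, 200)) * np.sin(np.linspace(0, 60, 200))
a = np.concatenate([np.zeros(50), pulse, np.zeros(400)])
b = np.concatenate([np.zeros(170), pulse, np.zeros(280)])
print(tdoa_cross_correlation(a, b, fs))  # -> -0.015 s: b hears the call later
```

Feeding such pairwise delays from GPS time-synchronized microphones into a hyperbolic (multilateration) solver then yields the sound source position.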