1. Hou G, Li R, Tian M, Ding J, Zhang X, Yang B, Chen C, Huang R, Yin Y. Improving Efficiency: Automatic Intelligent Weighing System as a Replacement for Manual Pig Weighing. Animals (Basel) 2024; 14:1614. PMID: 38891661; PMCID: PMC11171250; DOI: 10.3390/ani14111614. Received 30 Apr 2024; accepted 27 May 2024.
Abstract
To verify the accuracy of the automatic intelligent weighing system (AIWS), we weighed 106 pen-housed growing-finishing pigs using both the manual method and AIWS. Accuracy was evaluated using the mean absolute error (MAE), mean absolute percent error (MAPE), and root mean square error (RMSE). In the growth experiment, manual weighing was conducted every two weeks while AIWS-predicted weights were recorded daily, and growth curves were then fitted to both series. The MAE, MAPE, and RMSE values for 60 to 120 kg pigs were 3.48 kg, 3.71%, and 4.43 kg, respectively. The correlation coefficient r between the AIWS and manual methods was 0.9410, R2 was 0.8854, and the correlation was highly significant (p < 0.001). In growth curve fitting, the AIWS method yielded lower AIC and BIC values than the manual method, and the Logistic model fitted to AIWS data was the best-fit model. The age and body weight at the inflection point of the best-fit model were 164.46 d and 93.45 kg, respectively, and the maximum growth rate was 831.66 g/d. In summary, AIWS can accurately predict pigs' body weights in actual production and gives a better fit to the growth curves of growing-finishing pigs. This study suggests that AIWS can feasibly replace manual weighing for 50 to 120 kg live pigs in large-scale farming.
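The error metrics above have standard definitions, and for a Logistic growth curve the inflection-point quantities follow in closed form; the sketch below (function names ours) illustrates both.

```python
import math

def mae(y_true, y_pred):
    # mean absolute error, in the same units as the weights (kg)
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    # mean absolute percent error (%)
    return 100 * sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # root mean square error (kg)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def logistic_weight(t, A, k, t0):
    # Logistic growth curve: at the inflection age t0 the weight is A/2
    # and the growth rate peaks at A * k / 4
    return A / (1 + math.exp(-k * (t - t0)))
```

As a consistency check (not values from the paper's fit): the reported inflection weight of 93.45 kg implies an asymptotic weight A of about 186.9 kg, and the reported maximum growth rate of 831.66 g/d then implies k = 4 × 0.83166 / 186.9 ≈ 0.0178 per day.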
Affiliation(s)
- Gaifeng Hou, Rui Li, Mingzhou Tian, Jing Ding, Ruilin Huang, Yulong Yin: CAS Key Laboratory of Agro-Ecological Processes in Subtropical Region, Hunan Provincial Key Laboratory of Animal Nutritional Physiology and Metabolic Process, Hunan Research Center of Livestock and Poultry Sciences, South Central Experimental Station of Animal Nutrition and Feed Science in the Ministry of Agriculture, National Engineering Laboratory for Poultry Breeding Pollution Control and Resource Technology, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China
- Xingfu Zhang: College of Computer Science and Technology, Heilongjiang Institute of Technology, Harbin 150050, China; Beijing Focused Loong Technology Co., Ltd., Beijing 100086, China
- Bin Yang: Key Laboratory of Visual Perception and Artificial Intelligence of Hunan Province, College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
- Chunyu Chen: College of Information and Communication, Harbin Engineering University, Harbin 150001, China
2. Voogt AM, Schrijver RS, Temürhan M, Bongers JH, Sijm DTHM. Opportunities for Regulatory Authorities to Assess Animal-Based Measures at the Slaughterhouse Using Sensor Technology and Artificial Intelligence: A Review. Animals (Basel) 2023; 13:3028. PMID: 37835634; PMCID: PMC10571985; DOI: 10.3390/ani13193028. Received 16 Aug 2023; accepted 20 Sep 2023.
Abstract
Animal-based measures (ABMs) are the preferred way to assess animal welfare. However, manual scoring of ABMs during meat inspection is very time-consuming. Automatic scoring using sensor technology and artificial intelligence (AI) may offer a solution. Based on review papers, an overview was compiled of ABMs recorded at the slaughterhouse for poultry, pigs, and cattle, together with applications of sensor technology to measure the identified ABMs. Relevant legislation and work instructions of the Dutch Regulatory Authority (RA) were also screened for applied ABMs. Applications of sensor technology in a research setting, on farm, or at the slaughterhouse were reported for 10 of the 37 ABMs identified for poultry, 4 of 32 for cattle, and 13 of 41 for pigs. Several applications relate to aspects of meat inspection. However, under European law meat inspection must be performed by an official veterinarian, with exceptions only for the post mortem inspection of poultry. The examples in this study show that there are opportunities for the RA to use sensor technology to support inspection and to gain more insight into animal welfare risks. The lack of external validation for multiple commercially available systems remains a point of attention.
Affiliation(s)
- Annika M. Voogt: Office for Risk Assessment & Research (BuRO), Netherlands Food and Consumer Product Safety Authority (NVWA), P.O. Box 43006, 3540 AA Utrecht, The Netherlands
3. Liu Z, Hua J, Xue H, Tian H, Chen Y, Liu H. Body Weight Estimation for Pigs Based on 3D Hybrid Filter and Convolutional Neural Network. Sensors (Basel) 2023; 23:7730. PMID: 37765787; PMCID: PMC10537768; DOI: 10.3390/s23187730. Received 3 Aug 2023; accepted 4 Sep 2023.
Abstract
The measurement of pig weight holds significant importance for producers, as it plays a crucial role in managing pig growth, health, and marketing, thereby supporting informed decisions on scientific feeding practices. The conventional manual weighing approach, however, is not only inefficient and time-consuming but can also induce heightened stress in pigs. This research introduces a hybrid 3D point cloud denoising approach for precise pig weight estimation. By integrating statistical filtering and DBSCAN clustering, we mitigate weight estimation bias and overcome limitations in feature extraction. The convex hull technique restricts the dataset to the pig's back, while voxel down-sampling enhances real-time efficiency. Our model combines pig back parameters with a convolutional neural network (CNN) for accurate weight estimation. Experimental analysis indicates that the mean absolute error (MAE), mean absolute percent error (MAPE), and root mean square error (RMSE) of the proposed weight estimation model are 12.45 kg, 5.36%, and 12.91 kg, respectively. In contrast to currently available 2D- and 3D-based weight estimation methods, the suggested approach offers simpler equipment configuration and reduced data processing complexity without compromising estimation accuracy. Consequently, the proposed method presents an effective monitoring solution for precise pig feeding management, reducing labor requirements and improving welfare in pig breeding.
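The statistical-filtering stage of such a denoising pipeline can be sketched with a brute-force k-nearest-neighbour filter. This is a generic illustration, not the authors' implementation: the function and parameter names (k, std_ratio) and their defaults are our assumptions, and a real pipeline would delegate this to a point cloud library.

```python
import math
from statistics import mean, stdev

def knn_mean_dists(points, k):
    # for each point, the mean distance to its k nearest neighbours (brute force)
    out = []
    for i, p in enumerate(points):
        d = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        out.append(mean(d[:k]))
    return out

def statistical_outlier_filter(points, k=8, std_ratio=2.0):
    # keep points whose mean kNN distance is within mean + std_ratio * std;
    # points further out are treated as sensor noise and dropped
    d = knn_mean_dists(points, k)
    thresh = mean(d) + std_ratio * stdev(d)
    return [p for p, di in zip(points, d) if di <= thresh]
```

Points whose mean distance to their k nearest neighbours exceeds the population mean by more than std_ratio standard deviations are discarded before the clustering and convex-hull steps.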
Affiliation(s)
- Zihao Liu: College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China
- Jingyi Hua: Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China; College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
- Hongxiang Xue: College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China
- Haonan Tian: Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China; College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
- Yang Chen: College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
- Haowei Liu: College of Engineering, Nanjing Agricultural University, Nanjing 210031, China
4. Kühnemund A, Götz S, Recke G. Automatic Detection of Group Recumbency in Pigs via AI-Supported Camera Systems. Animals (Basel) 2023; 13:2205. PMID: 37444003; DOI: 10.3390/ani13132205. Received 14 Apr 2023; accepted 29 Jun 2023.
Abstract
The resting behavior of rearing pigs provides information about their perception of the current temperature. A pen that is too cold or too warm can impair the well-being of the animals as well as their physical development. Previous studies that automatically recorded animal behavior often relied on body posture. However, this method is error-prone because hidden animals (so-called false positives) strongly influence the results. In the present study, a method was developed for the automated identification of time periods in which all pigs are lying down, using video recordings from an AI-supported camera system. We used velocity data (measured by the camera) of pigs in the pen to identify these periods. To determine the threshold value for images with the highest probability of containing only recumbent pigs, a dataset of 9634 images with velocity values was used. The resulting velocity threshold (0.0006020622 m/s) yielded an accuracy of 94.1%. Analysis of the testing dataset confirmed that recumbent pigs were correctly identified from velocity values derived from video recordings, an advance from the previous manual detection method toward automated detection.
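Applying such a velocity threshold to a per-image velocity series reduces to grouping consecutive below-threshold frames into periods. The sketch below is a minimal illustration: only the threshold value comes from the study; the function and variable names are ours.

```python
REST_THRESHOLD_M_S = 0.0006020622  # velocity threshold reported in the study

def lying_periods(velocities, threshold=REST_THRESHOLD_M_S):
    # group consecutive below-threshold frames into (start, end) index pairs,
    # i.e. the periods in which all pigs are most likely recumbent
    periods, start = [], None
    for i, v in enumerate(velocities):
        if v < threshold and start is None:
            start = i                       # a resting period begins
        elif v >= threshold and start is not None:
            periods.append((start, i - 1))  # movement detected: close the period
            start = None
    if start is not None:
        periods.append((start, len(velocities) - 1))
    return periods
```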
Affiliation(s)
- Alexander Kühnemund, Guido Recke: Hochschule Osnabrück, Fachbereich Landwirtschaftliche Betriebswirtschaftslehre, Oldenburger Landstraße 24, 49090 Osnabrück, Germany
- Sven Götz: VetVise GmbH, Bünteweg 2, 30559 Hannover, Germany
5. Marshall K, Poole J, Oyieng E, Ouma E, Kugonza DR. A farmer-friendly tool for estimation of weights of pigs kept by smallholder farmers in Uganda. Trop Anim Health Prod 2023; 55:219. PMID: 37219661; DOI: 10.1007/s11250-023-03561-z. Received 4 Nov 2022; accepted 29 Mar 2023.
Abstract
Pig keeping is important to the livelihoods of many rural Ugandans. Pigs are typically sold based on live weight or a carcass weight derived from it; however, this weight is commonly estimated because of the lack of access to scales. Here, we explore the development of a weigh band for more accurate weight determination and potentially increased farmer bargaining power on sale price. Pig weights and various body measurements (heart girth, height, and length) were collected on 764 pigs of different ages, sexes, and breed types from 157 smallholder pig-keeping households in Central and Western Uganda. Mixed-effects linear regression analyses, with household as a random effect and each body measurement as a fixed effect, were performed to determine the best single predictor of the cube root of weight (a transformation of weight for normality) for 749 pigs ranging between 0 and 125 kg. The most predictive single body measurement was heart girth, where weight in kg = (0.4011 + heart girth in cm × 0.0381)^3. This model was found to be most suitable for pigs between 5 and 110 kg, notably more accurate than farmers' estimates, but still with somewhat broad confidence intervals (for example, ±11.5 kg for pigs with a predicted weight of 51.3 kg). We intend to pilot test a weigh band based on this model before deciding whether it is suitable for wider scaling.
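The published prediction equation translates directly into code; the coefficients below are the paper's, while the function name is ours.

```python
def pig_weight_from_girth(girth_cm):
    # cube-root-scale linear model from the Uganda weigh-band study:
    # weight (kg) = (0.4011 + 0.0381 * heart girth in cm) ** 3
    # reported as most suitable for pigs between 5 and 110 kg
    return (0.4011 + 0.0381 * girth_cm) ** 3
```

For a heart girth of 100 cm this gives about 74.7 kg, and predicted weight increases monotonically with girth.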
Affiliation(s)
- Karen Marshall, Jane Poole, Edwin Oyieng: International Livestock Research Institute, P.O. Box 30709-00100, Nairobi, Kenya
- Emily Ouma: International Livestock Research Institute, c/o Bioversity International, P.O. Box 24384, Kampala, Uganda
- Donald R Kugonza: School of Agricultural Sciences, College of Agricultural and Environmental Sciences, Makerere University, P.O. Box 7062, Kampala, Uganda
6. Cominotte A, Fernandes A, Dórea J, Rosa G, Torres R, Pereira G, Baldassini W, Machado Neto O. Use of Biometric Images to Predict Body Weight and Hot Carcass Weight of Nellore Cattle. Animals (Basel) 2023; 13:1679. PMID: 37238109; DOI: 10.3390/ani13101679. Received 21 Mar 2023; accepted 11 May 2023.
Abstract
The objective of this study was to evaluate different methods of predicting body weight (BW) and hot carcass weight (HCW) from biometric measurements obtained through three-dimensional images of Nellore cattle. We collected BW and HCW from 1350 male Nellore cattle (bulls and steers) across four experiments. Three-dimensional images of each animal were obtained using the Kinect® model 1473 sensor (Microsoft Corporation, Redmond, WA, USA). Models were compared based on root mean square error of prediction (RMSEP) and concordance correlation coefficient (CCC). The predictive quality of the four approaches, multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), partial least squares (PLS), and artificial neural network (ANN), was affected not only by the conditions (set) but also by the objective (BW vs. HCW). The most stable approach for BW was the ANN (Set 1: RMSEP = 19.68, CCC = 0.73; Set 2: RMSEP = 27.22, CCC = 0.66; Set 3: RMSEP = 27.23, CCC = 0.70; Set 4: RMSEP = 33.74, CCC = 0.74), which showed good predictive quality regardless of the set analyzed. However, for HCW the models obtained by LASSO and PLS showed greater quality across the different sets. Overall, three-dimensional images were able to predict BW and HCW in Nellore cattle.
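Lin's concordance correlation coefficient used in these comparisons has a short closed form; a minimal sketch (population-variance form, function name ours):

```python
from statistics import mean

def ccc(x, y):
    # Lin's concordance correlation coefficient between observed and
    # predicted weights: 2*cov(x, y) / (var(x) + var(y) + (mean gap)^2).
    # 1 means perfect agreement, 0 no agreement, -1 perfect reversal.
    mx, my = mean(x), mean(y)
    sx = sum((a - mx) ** 2 for a in x) / len(x)
    sy = sum((b - my) ** 2 for b in y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return 2 * sxy / (sx + sy + (mx - my) ** 2)
```

Unlike Pearson's r, the CCC also penalizes location and scale shifts, which is why it is preferred for agreement between predicted and observed weights.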
Affiliation(s)
- Alexandre Cominotte: Department of Animal Science, University of Wisconsin, Madison, WI 53706, USA; School of Agricultural and Veterinarian Sciences, São Paulo State University, Jaboticabal 14884-900, SP, Brazil
- Arthur Fernandes, João Dórea: Department of Animal Science, University of Wisconsin, Madison, WI 53706, USA
- Guilherme Rosa: Department of Animal Science, University of Wisconsin, Madison, WI 53706, USA; Department of Biostatistics and Medical Informatics, University of Wisconsin, Madison, WI 53706, USA
- Rodrigo Torres, Guilherme Pereira: School of Veterinary and Animal Science, São Paulo State University, Botucatu 18618-681, SP, Brazil
- Welder Baldassini, Otávio Machado Neto: School of Agricultural and Veterinarian Sciences, São Paulo State University, Jaboticabal 14884-900, SP, Brazil; School of Veterinary and Animal Science, São Paulo State University, Botucatu 18618-681, SP, Brazil
7. Kadlec R, Indest S, Castro K, Waqar S, Campos LM, Amorim ST, Bi Y, Hanigan MD, Morota G. Automated acquisition of top-view dairy cow depth image data using an RGB-D sensor camera. Transl Anim Sci 2022; 6:txac163. PMID: 36601061; PMCID: PMC9801406; DOI: 10.1093/tas/txac163. Received 9 Sep 2022; accepted 7 Dec 2022.
Abstract
Animal dimensions are essential indicators for monitoring growth rate, diet efficiency, and health status. A computer vision system is a recently emerging precision livestock farming technology that overcomes previously unresolved challenges pertaining to labor and cost. Depth sensor cameras can estimate the depth or height of an animal in addition to two-dimensional information, and collecting top-view depth images is common in evaluating body mass or conformational traits in livestock species. However, the depth image acquisition process often involves manual intervention to control a camera from a laptop, or the detailed steps for automated data collection are not documented; open-source image data acquisition implementations are also rarely available. The objectives of this study were to 1) investigate the utility of automated top-view dairy cow depth data collection using picture- and video-based methods, 2) evaluate the performance of an infrared cut lens, and 3) make the source code available. Both methods can automatically perform animal detection, trigger recording, capture depth data, and terminate recording for individual animals. The picture-based method takes only a predetermined number of images, whereas the video-based method records a sequence of frames as a video; for the picture-based method, we evaluated 3- and 10-picture approaches. The depth sensor camera was mounted 2.75 m above the ground over a walk-through scale between the milking parlor and the free-stall barn. A total of 150 Holstein and 100 Jersey cows were evaluated, with a pixel location set up as a point of interest at which the depth was monitored. More than 89% of cows were successfully captured using both the picture- and video-based methods, and the success rates further improved to 92% and 98%, respectively, when combined with an infrared cut lens. Although both the 10-picture method and the video-based method yielded accurate depth data on cows, the former was more efficient in terms of data storage. The current study demonstrates automated depth data collection frameworks and a Python implementation available to the community, which can help facilitate the deployment of computer vision systems for dairy cows.
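The picture-based trigger logic described above (monitor the depth at one point-of-interest pixel, start capturing when an animal walks under the camera, stop after a predetermined number of frames) can be sketched as follows. This is our illustration, not the study's released code: the trigger depth and all names are assumptions; only the 2.75 m mounting height comes from the abstract.

```python
BACKGROUND_DEPTH_M = 2.75  # camera height over the walk-through scale (from the study)
TRIGGER_DEPTH_M = 2.0      # hypothetical: an animal's back is well above the floor

def capture_on_trigger(depth_at_poi_stream, n_pictures=10):
    """Picture-based acquisition sketch: watch one point-of-interest pixel,
    begin capturing once the measured depth drops below the trigger value,
    and terminate after a predetermined number of frames."""
    captured = []
    recording = False
    for frame_idx, depth in enumerate(depth_at_poi_stream):
        if not recording and depth < TRIGGER_DEPTH_M:
            recording = True          # animal detected: trigger recording
        if recording:
            captured.append(frame_idx)
            if len(captured) >= n_pictures:
                break                 # predetermined number of images taken
    return captured
```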
Affiliation(s)
- Robert Kadlec, Sam Indest, Kayla Castro, Shayan Waqar: Department of Industrial and Systems Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA
- Leticia M Campos, Sabrina T Amorim, Ye Bi, Mark D Hanigan: School of Animal Sciences, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA
8. Caffarini JG, Bresolin T, Dorea JRR. Predicting ribeye area and circularity in live calves through 3D image analyses of body surface. J Anim Sci 2022; 100:skac242. PMID: 35852484; PMCID: PMC9495505; DOI: 10.1093/jas/skac242. Received 21 Apr 2022; accepted 19 Jul 2022.
Abstract
The use of sexed semen at dairy farms has improved heifer replacement over the last decade by allowing greater control over the number of retained females and enabling the selection of dams with superior genetics. Alternatively, beef semen can be used in genetically inferior dairy cows to produce crossbred (beef x dairy) animals that can be sold at a higher price. Although crossbreeding became profitable for dairy farmers, meat cuts from beef x dairy crosses often lack quality and shape uniformity. Technologies for quickly predicting carcass traits for animal grouping before harvest may improve meat cut uniformity in crossbred cattle. Our objective was to develop a deep learning approach for predicting ribeye area and circularity of live animals from 3D body surface images using two neural networks: 1) a nested Pyramid Scene Parsing Network (nPSPNet) for extracting features and 2) a convolutional neural network (CNN) for estimating ribeye area and circularity from these features. A group of 56 calves were imaged using an Intel RealSense D435 camera. A total of 327 depth images were captured from 30 calves and labeled with masks outlining the calf body to train the nPSPNet for feature extraction. An additional 42,536 depth images were taken from the remaining 26 calves, along with three ultrasound images collected for each calf from the 12/13th ribs. The ultrasound images (three per calf) were manually segmented to calculate the average ribeye area and circularity and then paired with the depth images for CNN training. We implemented a nested cross-validation approach, in which all images for one calf were removed (leave-one-out, LOO) and the remaining calves were further divided into training (70%) and validation (30%) sets within each LOO iteration. The proposed model predicted ribeye area with an average coefficient of determination (R2) of 0.74 and a mean absolute error of prediction (MAEP) of 7.3%, and ribeye circularity with an average R2 of 0.87 and an MAEP of 2.4%. Our results indicate that computer vision systems could be used to predict ribeye area and circularity in live animals, allowing optimal management decisions toward smart animal grouping in beef x dairy crosses and purebreds.
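The nested cross-validation scheme (hold out every image of one calf, then split the remaining calves 70/30 into training and validation) can be sketched as a split generator; the names and seeding below are ours, not the authors' code.

```python
import random

def leave_one_calf_out_splits(calf_ids, val_frac=0.3, seed=0):
    # Nested cross-validation as described: each iteration holds out one
    # calf entirely (LOO test set) and splits the remaining calves into
    # training (70%) and validation (30%) sets.
    rng = random.Random(seed)
    for test_calf in calf_ids:
        rest = [c for c in calf_ids if c != test_calf]
        rng.shuffle(rest)
        n_val = max(1, int(round(val_frac * len(rest))))
        yield test_calf, rest[n_val:], rest[:n_val]
```

Splitting by calf rather than by image prevents near-duplicate frames of the same animal from leaking between training and test sets.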
Affiliation(s)
- Joseph G Caffarini: Department of Neurology, University of Wisconsin-Madison, Madison, WI 53703, USA; Department of Animal and Dairy Sciences, University of Wisconsin-Madison, Madison, WI 53703, USA
- Tiago Bresolin: Department of Animal and Dairy Sciences, University of Wisconsin-Madison, Madison, WI 53703, USA
9. Camacho-Pérez E, Chay-Canul AJ, Garcia-Guendulain JM, Rodríguez-Abreo O. Towards the Estimation of Body Weight in Sheep Using Metaheuristic Algorithms from Biometric Parameters in Microsystems. Micromachines 2022; 13:1325. PMID: 36014248; PMCID: PMC9415317; DOI: 10.3390/mi13081325. Received 7 Jun 2022; accepted 12 Aug 2022.
Abstract
The body weight (BW) of sheep is an important indicator for producers; genetic management, nutrition, and health activities can all benefit from weight monitoring. This article presents a polynomial model with an adjustable degree for estimating the weight of sheep from the biometric parameters of the animal. Computer vision tools were used to measure these parameters, with a margin of error of less than 5%. A polynomial model is then proposed in which each biometric variable is accompanied by a coefficient and an unknown exponent, and two metaheuristic algorithms determine the values of these constants. The first is the most widely used algorithm, the Genetic Algorithm (GA). The second, the Cuckoo Search Algorithm (CSA), performed similarly to the GA, indicating that the GA result is not a local optimum caused by poor parameter selection. The results show a Root-Mean-Squared Error (RMSE) of 7.68% for the GA and 7.55% for the CSA, demonstrating the feasibility of the mathematical model for estimating weight from biometric parameters. Both the proposed mathematical model and the estimation of the biometric parameters can be easily adapted to an embedded microsystem.
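A genetic search over such coefficient-exponent pairs can be sketched on a deliberately simplified one-variable model w = c * x ** e. The GA operators, population size, and model form below are our assumptions for illustration, not the paper's configuration.

```python
import random

def fitness(params, data):
    # RMSE of the hypothetical one-variable model w = c * x ** e
    # (the paper pairs a coefficient and an exponent with each biometric variable)
    c, e = params
    se = sum((c * x ** e - w) ** 2 for x, w in data)
    return (se / len(data)) ** 0.5

def genetic_search(data, pop=40, gens=60, seed=1):
    rng = random.Random(seed)
    population = [(rng.uniform(0, 2), rng.uniform(0, 3)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: fitness(p, data))
        elite = population[: pop // 4]      # selection: keep the best quarter
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)  # crossover: blend parents
            child = (child[0] + rng.gauss(0, 0.05),
                     child[1] + rng.gauss(0, 0.05))         # mutation: gaussian jitter
            children.append(child)
        population = elite + children       # elitism keeps the best solution so far
    return min(population, key=lambda p: fitness(p, data))
```

Swapping the search loop for cuckoo-style Lévy-flight updates, as the paper does with the CSA, changes only how candidate parameter pairs are proposed, not the fitness function.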
Affiliation(s)
- Enrique Camacho-Pérez: Tecnológico Nacional de México/Instituto Tecnológico Superior Progreso, Progreso 97320, Mexico; Red de Investigación OAC Optimización, Automatización y Control, El Marques 76240, Mexico
- Alfonso Juventino Chay-Canul: División Académica de Ciencias Agropecuarias, Universidad Juárez Autónoma de Tabasco, km 25, Carretera Villahermosa-Teapa, R/A La Huasteca, Colonia Centro Tabasco 86280, Mexico
- Juan Manuel Garcia-Guendulain, Omar Rodríguez-Abreo: Red de Investigación OAC Optimización, Automatización y Control, El Marques 76240, Mexico; Industrial Technologies Division, Universidad Politécnica de Querétaro, El Marques 76240, Mexico
10. Jacobs M, Remus A, Gaillard C, Menendez HM, Tedeschi LO, Neethirajan S, Ellis JL. ASAS-NANP symposium: mathematical modeling in animal nutrition: limitations and potential next steps for modeling and modelers in the animal sciences. J Anim Sci 2022; 100:skac132. PMID: 35419602; PMCID: PMC9171330; DOI: 10.1093/jas/skac132. Received 11 Feb 2022; accepted 8 Apr 2022.
Abstract
The field of animal science, and especially animal nutrition, relies heavily on modeling to accomplish its day-to-day objectives. New data streams ("big data") and the exponential increase in computing power have allowed the appearance of "new" modeling methodologies, under the umbrella of artificial intelligence (AI). However, many of these modeling methodologies have been around for decades. According to Gartner, technological innovation follows five distinct phases: technology trigger, peak of inflated expectations, trough of disillusionment, slope of enlightenment, and plateau of productivity. The appearance of AI certainly elicited much hype within agriculture, leading to overpromised plug-and-play solutions in a field heavily dependent on custom solutions. The threat of failure becomes real when a disruptive innovation is advertised as sustainable. This does not mean that we need to abandon AI models. What is most necessary is to demystify the field and place less emphasis on the technology and more on business application. As AI becomes increasingly powerful and applications start to diverge, new research fields are introduced, and opportunities arise to combine "old" and "new" modeling technologies into hybrids. However, sustainable application is still many years away, and companies and universities alike do well to remain at the forefront. This requires investment in hardware, software, and analytical talent. It also requires a strong connection to the outside world to test what does and does not work in practice, and a close view of when the field of agriculture is ready to take its next big steps. Other research fields, such as engineering and automotive, have shown that the application power of AI can be far-reaching, but only if a realistic view of models as a whole is maintained. In this review, we share our view on the current and future limitations of modeling and potential next steps for modelers in the animal sciences.
First, we discuss the inherent dependencies and limitations of modeling as a human process. Then, we highlight how models, fueled by AI, can play an enhanced sustainable role in the animal sciences ecosystem. Lastly, we provide recommendations for future animal scientists on how to support themselves, the farmers, and their field, considering the opportunities and challenges the technological innovation brings.
Affiliation(s)
- Marc Jacobs: FR Analytics B.V., 7642 AP Wierden, The Netherlands
- Aline Remus: Sherbrooke Research and Development Centre, Sherbrooke, QC J1M 1Z3, Canada
- Hector M Menendez: Department of Animal Science, South Dakota State University, Rapid City, SD 57702, USA
- Luis O Tedeschi: Department of Animal Science, Texas A&M University, College Station, TX 77843-2471, USA
- Suresh Neethirajan: Farmworx, Adaptation Physiology, Animal Sciences Group, Wageningen University, 6700 AH, The Netherlands
- Jennifer L Ellis: Department of Animal Biosciences, University of Guelph, Guelph, ON N1G 2W1, Canada
Collapse
|
11
|
Ramirez BC, Hayes MD, Condotta ICFS, Leonard SM. Impact of housing environment and management on pre-/post-weaning piglet productivity. J Anim Sci 2022; 100:6609155. [PMID: 35708591 PMCID: PMC9202573 DOI: 10.1093/jas/skac142] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Received: 04/01/2022] [Accepted: 04/11/2022] [Indexed: 11/24/2022] Open
Abstract
The complex environment surrounding young pigs reared in intensive housing systems directly influences their productivity and livelihood. Much of the seminal literature utilized housing and husbandry practices that have since drastically evolved through advances in genetic potential, nutrition, health, and technology. This review focuses on the environmental interactions and responses of pigs during the first 8 wk of life, separated into pre-weaning (creep areas) and post-weaning (nursery or wean-finish) phases. Further, a perspective on instrumentation and precision technologies for animal-based (physiological and behavioral) and environmental measures documents current approaches and future possibilities. A warm microclimate for piglets during the early days of life, especially the first 12 h, is critical. While caretaker interventions can mitigate the extent of hypothermia, low birth weight remains a dominant risk factor for mortality. Post-weaning, pigs' thermoregulatory capabilities have improved, but subsequent transportation, nutritional, and social stressors heighten the need for a warm, low-draft environment with proper flooring. A better understanding of the individual environmental factors that affect young pigs, as well as the creation of comprehensive environment indices or improved non-contact sensing technology, is needed to better evaluate and manage piglet environments. Such enhanced understanding and evaluation of pig–environment interactions could lead to innovative environmental control and husbandry interventions that foster healthy and productive pigs.
Affiliation(s)
- Brett C Ramirez, Department of Agricultural and Biosystems Engineering, Iowa State University, Ames, IA 50011, USA
- Morgan D Hayes, Department of Biosystems and Agricultural Engineering, University of Kentucky, Lexington, KY 40546, USA
- Isabella C F S Condotta, Department of Animal Sciences, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Suzanne M Leonard, Department of Animal Science, North Carolina State University, Raleigh, NC 27695, USA

12
Menendez HM, Brennan JR, Gaillard C, Ehlert K, Quintana J, Neethirajan S, Remus A, Jacobs M, Teixeira IAMA, Turner BL, Tedeschi LO. ASAS-NANP SYMPOSIUM: Mathematical Modeling in Animal Nutrition: Opportunities and Challenges of Confined and Extensive Precision Livestock Production. J Anim Sci 2022; 100:6577180. [PMID: 35511692 PMCID: PMC9171331 DOI: 10.1093/jas/skac160] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Received: 02/22/2022] [Accepted: 04/28/2022] [Indexed: 11/18/2022] Open
Abstract
Modern animal scientists, industry, and managers have never faced a more complex world. Precision livestock technologies have altered management in confined operations to meet production, environmental, and consumer goals. Applications of precision technologies have been limited in extensive systems, such as rangelands, due to lack of infrastructure, electrical power, communication, and durability. However, advancements in technology have helped to overcome many of these challenges. Investment in precision technologies is growing within the livestock sector, creating a need to assess the opportunities and challenges associated with implementation to enhance livestock production systems. In this review, precision livestock farming and digital livestock farming are explained in the context of a logical and iterative five-step process for successfully integrating precision livestock measurement and management tools, emphasizing the need for precision system models (PSMs). This five-step process acts as a guide to realize the anticipated benefits of precision technologies and avoid unintended consequences. A synthesis of precision livestock and modeling examples and key case studies helps highlight past challenges and current opportunities within confined and extensive systems. Successfully developing PSMs requires selecting the model(s) that align with the desired management goals and precision technology capabilities. It is therefore imperative to consider the entire system to ensure that precision technology integration achieves the desired goals while remaining economically and managerially sustainable. Achieving long-term success with precision technology will require the next generation of animal scientists to acquire additional skills to keep up with the rapid pace of technological innovation. Building workforce capacity and synergistic relationships between research, industry, and managers will be critical.
As precision technology adoption continues in more challenging and harsh extensive systems, confined operations are likely to benefit from the required advances in precision technology and PSMs, ultimately strengthening the benefits of precision technology for achieving short- and long-term goals.
Affiliation(s)
- H M Menendez, Department of Animal Science, South Dakota State University, 711 N. Creek Drive, Rapid City, South Dakota 57702, USA
- J R Brennan, Department of Animal Science, South Dakota State University, 711 N. Creek Drive, Rapid City, South Dakota 57702, USA
- C Gaillard, Institut Agro, PEGASE, INRAE, 35590 Saint Gilles, France
- K Ehlert, Department of Natural Resource Management, South Dakota State University, 711 N. Creek Drive, Rapid City, South Dakota 57702, USA
- J Quintana, Department of Animal Science, South Dakota State University, 711 N. Creek Drive, Rapid City, South Dakota 57702, USA
- Suresh Neethirajan, Farmworx, Adaptation Physiology, Animal Sciences Group, Wageningen University, 6700 AH, The Netherlands
- A Remus, Sherbrooke Research and Development Centre, 2000 College Street, Sherbrooke, QC J1M 1Z3, Canada
- M Jacobs, FR Analytics B.V., 7642 AP Wierden, The Netherlands
- I A M A Teixeira, Department of Animal, Veterinary, and Food Sciences, University of Idaho, Twin Falls, ID 83301, USA
- B L Turner, Department of Agriculture, Agribusiness, and Environmental Science, and King Ranch® Institute for Ranch Management, Texas A&M University-Kingsville, 700 University Blvd MSC 228, Kingsville, TX 78363, USA
- L O Tedeschi, Department of Animal Science, Texas A&M University, College Station, TX 77843-2471, USA

13
Assessing the Feasibility of Using Kinect 3D Images to Predict Light Lamb Carcasses Composition from Leg Volume. Animals (Basel) 2021; 11:ani11123595. [PMID: 34944370 PMCID: PMC8698004 DOI: 10.3390/ani11123595] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 11/15/2021] [Revised: 12/15/2021] [Accepted: 12/16/2021] [Indexed: 01/04/2023] Open
Abstract
This study aimed to evaluate the accuracy of leg volume obtained with the Microsoft Kinect sensor for predicting the composition of light lamb carcasses. The trial was performed on carcasses of twenty-two male lambs (17.6 ± 1.8 kg body weight). The carcasses were split into eight cuts, divided into three groups according to their commercial value: high-value, medium-value, and low-value. Linear, area, and volume measurements of the leg were obtained to predict carcass and cut composition. The leg volume was acquired by two different methodologies: 3D image reconstruction using a Microsoft Kinect sensor and the Archimedes principle. The correlation between these two leg measurements was significant (r = 0.815, p < 0.01). The models including leg Kinect 3D sensor volume predicted the weight of the medium-value and leg cuts well (R2 of 0.763 and 0.829, respectively). Furthermore, the model including Kinect leg volume explained 85% of the variation in carcass muscle. These results confirm that leg volume obtained with the Kinect 3D sensor can estimate cut and carcass traits of light lamb carcasses with good accuracy.
14
Kaewtapee C, Thepparak S, Rakangtong C, Bunchasak C, Supratak A. Objective scoring of footpad dermatitis in broilers using video image segmentation and a deep learning approach: camera-based scoring system. Br Poult Sci 2021; 63:427-433. [PMID: 34870524 DOI: 10.1080/00071668.2021.2013439] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 11/02/2022]
Abstract
1. Footpad dermatitis (FPD) can be used as an important indicator of animal welfare and for economic evaluation; however, human scoring is subjective, biased, and labour intensive. This paper proposed a novel deep learning approach that can automatically determine the severity of FPD from images of chickens' feet. 2. The approach first determined the areas of the FPD lesion, the normal parts of each foot, and the background using a deep segmentation model. The proportion of FPD for a chicken's two feet was calculated by dividing the number of FPD pixels by the number of feet pixels. The proportion was then categorised using a five-point FPD score. The approach was evaluated on 244 images of left and right footpads using five-fold cross-validation. These images were collected at a commercial slaughter plant and scored by trained observers. 3. The results showed that this approach achieved an overall accuracy and a macro F1-score of 0.82. The per-class F1-scores across all FPD scores (0 to 4) were similar (0.85, 0.80, 0.80, 0.80, and 0.87, respectively), demonstrating that the approach performed equally well for all score classes. 4. The results suggested that image segmentation and a deep learning approach can automate the scoring of FPD from chicken foot images, helping to minimise the subjective bias inherent in manual scoring.
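The pixel-ratio scoring step described above is straightforward to implement once a segmentation mask is available. The sketch below is a minimal illustration, assuming a mask coded as 0 = background, 1 = healthy foot, 2 = FPD lesion; the score cut-offs are hypothetical, since the paper does not publish its bin edges.

```python
import numpy as np

# Hypothetical lower bounds of the lesion proportion for scores 0-4
# (assumed for illustration; the paper does not publish its cut-offs).
SCORE_BINS = np.array([0.0, 0.05, 0.15, 0.30, 0.50])

def fpd_score(mask: np.ndarray) -> int:
    """Return a 0-4 FPD score from a segmentation mask.

    mask: 2D integer array, 0 = background, 1 = healthy foot, 2 = lesion.
    """
    feet_pixels = np.count_nonzero(mask > 0)      # lesion + healthy foot
    if feet_pixels == 0:
        raise ValueError("no foot pixels in mask")
    lesion_pixels = np.count_nonzero(mask == 2)
    proportion = lesion_pixels / feet_pixels      # FPD pixels / feet pixels
    # Highest score whose lower bound the proportion has reached
    return int(np.searchsorted(SCORE_BINS, proportion, side="right") - 1)

# Toy mask: 100 foot pixels, 20 of which are lesion -> proportion 0.20 -> score 2
mask = np.zeros((10, 20), dtype=int)
mask[:, :10] = 1     # 100 foot pixels
mask[:5, :4] = 2     # 20 of them relabelled as lesion
print(fpd_score(mask))   # -> 2
```

In the paper this mask comes from a deep segmentation model; the scoring logic afterward is exactly this kind of pixel counting and binning.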
Affiliation(s)
- C Kaewtapee, Department of Animal Science, Faculty of Agriculture, Kasetsart University, 50 Ngam Wong Wan Rd., Latyao, Chatuchak, Bangkok 10900, Thailand
- S Thepparak, Department of Animal Science, Faculty of Agriculture, Kasetsart University, 50 Ngam Wong Wan Rd., Latyao, Chatuchak, Bangkok 10900, Thailand
- C Rakangtong, Department of Animal Science, Faculty of Agriculture, Kasetsart University, 50 Ngam Wong Wan Rd., Latyao, Chatuchak, Bangkok 10900, Thailand
- C Bunchasak, Department of Animal Science, Faculty of Agriculture, Kasetsart University, 50 Ngam Wong Wan Rd., Latyao, Chatuchak, Bangkok 10900, Thailand
- A Supratak, Computer Science Academic Group, Faculty of Information and Communication Technology, Mahidol University, 999 Phuttamonthon 4 Road, Salaya, Nakhon Pathom 73170, Thailand

15
Okayama T, Kubota Y, Toyoda A, Kohari D, Noguchi G. Estimating body weight of pigs from posture analysis using a depth camera. Anim Sci J 2021; 92:e13626. [PMID: 34472660 DOI: 10.1111/asj.13626] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 06/25/2021] [Revised: 07/30/2021] [Accepted: 08/04/2021] [Indexed: 11/26/2022]
Abstract
A noninvasive method for estimating the body weight (BW) of a pig while accounting for its posture, using a low-cost depth camera (Kinect v2), was proposed. A total of 150 pigs were used, and 738 depth images (point clouds) were obtained from them. The pig "volume" was calculated from the pig point cloud and was found to correlate very highly with BW. To evaluate the posture of a pig quantitatively, seven posture angles were calculated based on the "spine" extracted from the point cloud. We found that the posture angles representing the height of the head position correlated with the accuracy of volume-based BW estimation. Based on this finding, we proposed an "adjusted volume," corrected for the relationship between the posture angles and the estimation error. The BW of pigs was estimated using a simple regression model with the "adjusted volume," yielding a MAPE of 4.87% and an RMSPE of 6.13%. The accuracy of the proposed model was similar to that of volume-based estimation models in other studies that used only data with appropriate pig posture for BW estimation.
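The final estimation step above, a simple regression of BW on the posture-adjusted volume evaluated with MAPE and RMSPE, can be sketched in a few lines. The data below are synthetic and illustrative only, not the paper's measurements, and the linear relation is an assumption for the demo.

```python
import numpy as np

def fit_linear(x, y):
    """Least-squares fit of y = a*x + b."""
    a, b = np.polyfit(x, y, 1)
    return a, b

def mape(y, yhat):
    """Mean absolute percentage error (%)."""
    return float(np.mean(np.abs((y - yhat) / y)) * 100)

def rmspe(y, yhat):
    """Root mean square percentage error (%)."""
    return float(np.sqrt(np.mean(((y - yhat) / y) ** 2)) * 100)

# Synthetic example: "adjusted volume" (arbitrary units) vs. body weight (kg)
rng = np.random.default_rng(0)
volume = rng.uniform(50, 150, size=30)
bw = 0.9 * volume + 5 + rng.normal(0, 1.5, size=30)   # assumed linear relation

a, b = fit_linear(volume, bw)
pred = a * volume + b
print(f"MAPE = {mape(bw, pred):.2f}%, RMSPE = {rmspe(bw, pred):.2f}%")
```

RMSPE penalizes large relative errors more heavily than MAPE, which is why the paper reports both (4.87% and 6.13%).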
Affiliation(s)
- Tsuyoshi Okayama, College of Agriculture, Ibaraki University, Ami, Ibaraki, Japan; United Graduate School of Agricultural Science, Tokyo University of Agriculture and Technology, Fuchu, Japan; Ibaraki University Cooperation between Agriculture and Medical Science (IUCAM), Ami, Ibaraki, Japan
- Yoshifumi Kubota, Central Research Institute for Feed and Livestock, ZEN-NOH, Tsukuba, Japan
- Atsushi Toyoda, College of Agriculture, Ibaraki University, Ami, Ibaraki, Japan; United Graduate School of Agricultural Science, Tokyo University of Agriculture and Technology, Fuchu, Japan; Ibaraki University Cooperation between Agriculture and Medical Science (IUCAM), Ami, Ibaraki, Japan
- Daisuke Kohari, College of Agriculture, Ibaraki University, Ami, Ibaraki, Japan; United Graduate School of Agricultural Science, Tokyo University of Agriculture and Technology, Fuchu, Japan; Ibaraki University Cooperation between Agriculture and Medical Science (IUCAM), Ami, Ibaraki, Japan
- Go Noguchi, Central Research Institute for Feed and Livestock, ZEN-NOH, Tsukuba, Japan

16
Gómez Y, Stygar AH, Boumans IJMM, Bokkers EAM, Pedersen LJ, Niemi JK, Pastell M, Manteca X, Llonch P. A Systematic Review on Validated Precision Livestock Farming Technologies for Pig Production and Its Potential to Assess Animal Welfare. Front Vet Sci 2021; 8:660565. [PMID: 34055949 PMCID: PMC8160240 DOI: 10.3389/fvets.2021.660565] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Received: 01/29/2021] [Accepted: 04/19/2021] [Indexed: 11/13/2022] Open
Abstract
Several precision livestock farming (PLF) technologies, conceived to optimize farming processes, have been developed to detect physical and behavioral changes in animals continuously and in real time. The aim of this review was to explore the capacity of existing PLF technologies to contribute to the assessment of pig welfare. In a web search for commercially available PLF for pigs, 83 technologies were identified. A literature search was conducted, following systematic review guidelines (PRISMA), to identify studies on the validation of sensor technologies for assessing animal-based welfare indicators. Two validation levels were defined: internal (evaluated on the same population used to build the system) and external (evaluated on a population different from the one used to build the system). From 2,463 articles found, 111 were selected that validated some PLF applicable to the assessment of animal-based welfare indicators of pigs (7% classified as external validation and 93% as internal). From our list of commercially available PLF technologies, only 5% had been externally validated. The most often validated technologies were vision-based solutions (n = 45), followed by load-cells (n = 28; feeders and drinkers, force plates, and scales), accelerometers (n = 14), microphones (n = 14), thermal cameras (n = 10), photoelectric sensors (n = 5), radio-frequency identification (RFID) for tracking (n = 2), infrared thermometers (n = 1), and a pyrometer (n = 1). Externally validated technologies were photoelectric sensors (n = 2), thermal cameras (n = 2), a microphone (n = 1), load-cells (n = 1), RFID (n = 1), and a pyrometer (n = 1). Measured traits included activity and posture-related behavior, feeding and drinking, other behavior, physical condition, and health. In conclusion, existing PLF technologies are potential tools for on-farm animal welfare assessment in pig production.
However, validation studies are lacking for a substantial percentage of market-available tools, and research and development in particular need to focus on identifying the feature candidates of the measures (e.g., deviations from the diurnal pattern, threshold levels) that are valid signals of either negative or positive animal welfare. An important gap identified is the lack of technologies to assess affective states (both positive and negative).
Affiliation(s)
- Yaneth Gómez, Department of Animal and Food Science, Universitat Autònoma de Barcelona, Barcelona, Spain
- Anna H. Stygar, Bioeconomy and Environment, Natural Resources Institute Finland (Luke), Helsinki, Finland
- Iris J. M. M. Boumans, Animal Production Systems Group, Wageningen University and Research, Wageningen, Netherlands
- Eddie A. M. Bokkers, Animal Production Systems Group, Wageningen University and Research, Wageningen, Netherlands
- Jarkko K. Niemi, Bioeconomy and Environment, Natural Resources Institute Finland (Luke), Helsinki, Finland
- Matti Pastell, Production Systems, Natural Resources Institute Finland (Luke), Helsinki, Finland
- Xavier Manteca, Department of Animal and Food Science, Universitat Autònoma de Barcelona, Barcelona, Spain
- Pol Llonch, Department of Animal and Food Science, Universitat Autònoma de Barcelona, Barcelona, Spain

17
Zhang J, Zhuang Y, Ji H, Teng G. Pig Weight and Body Size Estimation Using a Multiple Output Regression Convolutional Neural Network: A Fast and Fully Automatic Method. Sensors 2021; 21:s21093218. [PMID: 34066410 PMCID: PMC8124602 DOI: 10.3390/s21093218] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Received: 03/01/2021] [Revised: 04/23/2021] [Accepted: 04/27/2021] [Indexed: 11/30/2022]
Abstract
Pig weight and body size are important indicators for producers. With the increasing scale of pig farms, it is increasingly difficult for farmers to obtain pig weight and body size quickly and automatically. To address this problem, we focused on a multiple-output regression convolutional neural network (CNN) to estimate pig weight and body size. DenseNet201, ResNet152 V2, Xception, and MobileNet V2 were modified into multiple-output regression CNNs and trained on modeling data. By comparing the estimation performance of each model on test data, the modified Xception was selected as the optimal estimation model. Based on pig height, body shape, and contour, the mean absolute errors (MAE) of the model in estimating body weight (BW), shoulder width (SW), shoulder height (SH), hip width (HW), hip height (HH), and body length (BL) were 1.16 kg, 0.33 cm, 1.23 cm, 0.38 cm, 0.66 cm, and 0.75 cm, respectively. The coefficient of determination (R2) between the estimated and measured results ranged from 0.9879 to 0.9973. Combined with the LabVIEW software development platform, this method can estimate pig weight and body size accurately, quickly, and automatically. This work contributes to the automatic management of pig farms.
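The core idea of multiple-output regression, one set of inputs predicting several continuous traits at once, can be illustrated without a deep learning framework. The sketch below fits a linear multiple-output regression by ordinary least squares on synthetic "image features"; it is a toy stand-in for the modified-Xception CNN, not a reproduction of it, and all data are fabricated for the demo.

```python
import numpy as np

# Synthetic "image features" (e.g., height, shape, contour descriptors) for 200 pigs
rng = np.random.default_rng(42)
n, n_features, n_traits = 200, 5, 6          # 6 traits: BW, SW, SH, HW, HH, BL
X = rng.normal(size=(n, n_features))
W_true = rng.normal(size=(n_features, n_traits))
Y = X @ W_true + rng.normal(scale=0.1, size=(n, n_traits))

# One least-squares solve yields all six output columns simultaneously;
# a multiple-output CNN ends the same way: a shared trunk feeding a
# linear regression head with one unit per trait.
X1 = np.hstack([X, np.ones((n, 1))])          # add intercept column
W, *_ = np.linalg.lstsq(X1, Y, rcond=None)
pred = X1 @ W

mae_per_trait = np.mean(np.abs(Y - pred), axis=0)
print("MAE per trait:", np.round(mae_per_trait, 3))
```

The design choice the paper makes, sharing one feature extractor across all traits, pays off because the traits are correlated; the single shared solve here mirrors that shared-trunk structure in miniature.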
Affiliation(s)
- Jianlong Zhang, College of Water Resources & Civil Engineering, China Agricultural University, Beijing 100083, China; Key Laboratory of Agricultural Engineering in Structure and Environment, Ministry of Agriculture and Rural Affairs, Beijing 100083, China
- Yanrong Zhuang, College of Water Resources & Civil Engineering, China Agricultural University, Beijing 100083, China; Key Laboratory of Agricultural Engineering in Structure and Environment, Ministry of Agriculture and Rural Affairs, Beijing 100083, China
- Hengyi Ji, College of Water Resources & Civil Engineering, China Agricultural University, Beijing 100083, China; Key Laboratory of Agricultural Engineering in Structure and Environment, Ministry of Agriculture and Rural Affairs, Beijing 100083, China
- Guanghui Teng, College of Water Resources & Civil Engineering, China Agricultural University, Beijing 100083, China; Key Laboratory of Agricultural Engineering in Structure and Environment, Ministry of Agriculture and Rural Affairs, Beijing 100083, China; Beijing Engineering Research Center on Animal Healthy Environment, Beijing 100083, China

18
Weight and volume estimation of poultry and products based on computer vision systems: a review. Poult Sci 2021; 100:101072. [PMID: 33752071 PMCID: PMC8010860 DOI: 10.1016/j.psj.2021.101072] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Received: 07/17/2020] [Revised: 01/14/2021] [Accepted: 02/04/2021] [Indexed: 01/10/2023] Open
Abstract
The appearance, size, and weight of poultry meat and eggs are essential to production economics and vital to the poultry sector. These external characteristics influence market price and consumers' preference and choice. With technological developments, vision systems are increasingly applied and valued in the agricultural sector. Computer vision has become a promising tool for the real-time automation of poultry weighing and processing systems. Owing to its noninvasive and nonintrusive nature and its capacity to provide a wide range of information, computer vision can be applied to size, mass, and volume determination and to the sorting and grading of poultry products. This review article gives a detailed summary of current advances in measuring the external characteristics of poultry products with computer vision systems. An overview of computer vision systems is discussed and summarized, and a comprehensive presentation of their application in assessing poultry meat and eggs is provided, covering weight and volume estimation, sorting, and classification. Finally, the challenges and potential future trends in size, weight, and volume estimation of poultry products are reported.
19
Yu H, Lee K, Morota G. Forecasting dynamic body weight of nonrestrained pigs from images using an RGB-D sensor camera. Transl Anim Sci 2021; 5:txab006. [PMID: 33659861 PMCID: PMC7906448 DOI: 10.1093/tas/txab006] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Received: 09/16/2020] [Accepted: 01/12/2021] [Indexed: 11/13/2022] Open
Abstract
Average daily gain is an indicator of the growth rate, feed efficiency, and current health status of livestock species, including pigs. Continuous monitoring of daily gain in pigs helps producers optimize growth performance while ensuring animal welfare and sustainability, for example by reducing stress reactions and feed waste. Computer vision has been used to predict live body weight from video images without direct handling of the pig. In most studies, videos were taken while pigs were immobilized at a weighing station or feeding area to facilitate data collection. An alternative approach is to capture videos while pigs move freely within their own housing environment, which can be applied directly in a production system because no special imaging station needs to be established. The objective of this study was to establish a computer vision system that collects RGB-D videos, capturing top-view red, green, and blue (RGB) and depth images of nonrestrained, growing pigs to predict their body weight over time. Over a period of 38 d, eight growers were video recorded for approximately 3 min/d, at six frames per second, and manually weighed using an electronic scale. An image-processing pipeline in Python using OpenCV was developed to process the images. Specifically, each pig within the RGB frame was segmented by a thresholding algorithm, and the contour of the pig was identified to extract its length and width. The height of a pig was estimated from the depth images captured by the infrared depth sensor. Quality control included removing pigs that were touching the fence and sitting, as well as those showing extremely distorted shape or motion blur owing to their frequent movement.
Fitting all of the morphological image descriptors simultaneously in linear mixed models yielded prediction coefficients of determination of 0.72-0.98, 0.65-0.95, 0.51-0.94, and 0.49-0.93 for 1-, 2-, 3-, and 4-d ahead forecasting, respectively, of body weight in time series cross-validation. Based on the results, we conclude that our RGB-D sensor-based imaging system coupled with the Python image-processing pipeline could potentially provide an effective approach to predict the live body weight of nonrestrained pigs from images.
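The segmentation and measurement steps of the pipeline above (threshold the frame, locate the pig, take length and width from the silhouette extent, and height from the depth sensor) can be sketched with NumPy alone; the actual study used OpenCV contours plus the quality-control filters described. The depth values and the 0.2 m threshold below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def measure_pig(depth, floor_depth, min_height=0.2):
    """Extract length and width (pixels) and height (depth units) from a
    top-view depth frame.

    depth: 2D array of camera-to-surface distances; the pig is closer to
    the camera than the floor, so its pixels have smaller depth values.
    """
    mask = depth < (floor_depth - min_height)      # pixels well above the floor
    if not mask.any():
        return None                                # no pig in this frame
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    length = int(rows[-1] - rows[0] + 1)           # extent along image rows
    width = int(cols[-1] - cols[0] + 1)            # extent along image columns
    height = float(floor_depth - depth[mask].min())
    return length, width, height

# Toy frame: floor 2.0 m from the camera, a 40x12-pixel "pig" 0.5 m tall
frame = np.full((60, 80), 2.0)
frame[10:50, 30:42] = 1.5
print(measure_pig(frame, floor_depth=2.0))   # -> (40, 12, 0.5)
```

The bounding-box extent only equals body length when the pig is axis-aligned, which is one reason the study's quality control discards distorted or mid-motion frames.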
Affiliation(s)
- Haipeng Yu, Department of Animal and Poultry Sciences, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA
- Kiho Lee, Division of Animal Sciences, University of Missouri, Columbia, MO, USA
- Gota Morota, Department of Animal and Poultry Sciences, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA; Center for Advanced Innovation in Agriculture, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA

20
Application of depth sensor to estimate body mass and morphometric assessment in Nellore heifers. Livest Sci 2021. [DOI: 10.1016/j.livsci.2021.104442] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 01/28/2023]
21
Wang Z, Shadpour S, Chan E, Rotondo V, Wood KM, Tulpan D. ASAS-NANP SYMPOSIUM: Applications of machine learning for livestock body weight prediction from digital images. J Anim Sci 2021; 99:6149204. [PMID: 33626149 PMCID: PMC7904040 DOI: 10.1093/jas/skab022] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Received: 11/23/2020] [Accepted: 01/25/2021] [Indexed: 01/01/2023] Open
Abstract
Monitoring, recording, and predicting livestock body weight (BW) allows for timely intervention in diets and health, greater efficiency in genetic selection, and identification of optimal times to market animals, because animals that have already reached slaughter weight represent a burden for the feedlot. There are currently two main approaches (direct and indirect) to measuring BW in livestock. Direct approaches include partial-weight or full-weight industrial scales placed in designated locations on large farms that measure livestock weight passively or dynamically. While these devices are very accurate, the costs of acquisition, the operation size they are intended for, and the repeated calibration and maintenance demanded by placement in high-temperature-variability and corrosive environments put them beyond the affordability and sustainability limits of small and medium-sized farms, and even of commercial operators. As a more affordable alternative to direct weighing, indirect approaches have been developed based on observed or inferred relationships between biometric and morphometric measurements of livestock and their BW. Initial indirect approaches involved manual measurements of animals using measuring tapes and tubes, and regression equations correlating such measurements with BW. While these approaches have good BW prediction accuracies, they are time-consuming, require trained and skilled farm laborers, and can be stressful for both animals and handlers, especially when repeated daily. With the concomitant advancement of contactless electro-optical sensors (e.g., 2D, 3D, and infrared cameras), computer vision (CV) technologies, and artificial intelligence fields such as machine learning (ML) and deep learning (DL), 2D and 3D images have started to be used as biometric and morphometric proxies for BW estimation.
This manuscript provides a review of CV-based and ML/DL-based BW prediction methods and discusses their strengths, weaknesses, and industry applicability potential.
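As a concrete instance of the "measuring tape plus regression equation" approach described above, one widely circulated farm rule of thumb estimates pig weight from heart girth and body length as weight (lb) ≈ girth² × length / 400, with measurements in inches. The sketch below implements that folk formula for illustration; it is not taken from this review, and its constant varies with breed and body condition.

```python
def pig_weight_kg(heart_girth_in: float, body_length_in: float) -> float:
    """Classic tape-measure estimate: weight (lb) = girth^2 * length / 400.

    heart_girth_in: circumference just behind the front legs, in inches.
    body_length_in: between the ears to the base of the tail, in inches.
    The 400 divisor is a folk constant, not a fitted parameter.
    """
    weight_lb = heart_girth_in ** 2 * body_length_in / 400.0
    return weight_lb * 0.45359237          # pounds -> kilograms

# A pig with a 40 in girth and 40 in length: 40*40*40/400 = 160 lb ~= 72.6 kg
print(round(pig_weight_kg(40, 40), 1))   # -> 72.6
```

The image-based methods the review surveys effectively replace the tape measurements in such equations with girth and length proxies extracted from 2D or 3D images.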
Affiliation(s)
- Zhuoyi Wang, Department of Animal Biosciences, Centre for Genetic Improvement of Livestock, University of Guelph, Guelph, Ontario, Canada
- Saeed Shadpour, Department of Animal Biosciences, Centre for Genetic Improvement of Livestock, University of Guelph, Guelph, Ontario, Canada
- Esther Chan, Department of Animal Biosciences, Centre for Genetic Improvement of Livestock, University of Guelph, Guelph, Ontario, Canada
- Vanessa Rotondo, Department of Animal Biosciences, University of Guelph, Guelph, Ontario, Canada
- Katharine M Wood, Department of Animal Biosciences, University of Guelph, Guelph, Ontario, Canada
- Dan Tulpan, Department of Animal Biosciences, Centre for Genetic Improvement of Livestock, University of Guelph, Guelph, Ontario, Canada

23
Fernandes AFA, Dórea JRR, Rosa GJDM. Image Analysis and Computer Vision Applications in Animal Sciences: An Overview. Front Vet Sci 2020; 7:551269. [PMID: 33195522 PMCID: PMC7609414 DOI: 10.3389/fvets.2020.551269] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Received: 04/28/2020] [Accepted: 09/15/2020] [Indexed: 11/13/2022] Open
Abstract
Computer Vision, Digital Image Processing, and Digital Image Analysis can be viewed as an amalgam of terms that very often are used to describe similar processes. Most of this confusion arises because these are interconnected fields that emerged with the development of digital image acquisition. Thus, there is a need to understand the connection between these fields, how a digital image is formed, and the differences among the many sensors available, each best suited for different applications. From the advent of the charge-coupled devices demarking the birth of digital imaging, the field has advanced quite fast. Sensors have evolved from grayscale to color, with increasingly higher resolution and better performance. Many other sensors have also appeared, such as infrared cameras, stereo imaging, time-of-flight sensors, satellite, and hyperspectral imaging. There are also images generated by other signals, such as sound (ultrasound scanners and sonars) and radiation (standard x-ray and computed tomography), which are widely used to produce medical images. In animal and veterinary sciences, these sensors have been used in many applications, mostly under experimental conditions, with only a few applications developed on commercial farms so far. Such applications can range from the assessment of beef cut composition to live animal identification, tracking, behavior monitoring, and measurement of phenotypes of interest, such as body weight, condition score, and gait. Computer vision systems (CVS) have the potential to be used in precision livestock farming and high-throughput phenotyping applications. We believe that the constant measurement of traits through CVS can reduce management costs and optimize decision-making in livestock operations, in addition to opening new possibilities in selective breeding. Applications of CVS are currently a growing research area, and there are already commercial products available.
However, there are still challenges that demand research for the successful development of autonomous solutions capable of delivering critical information. This review intends to present significant developments that have been made in CVS applications in animal and veterinary sciences and to highlight areas in which further research is still needed before full deployment of CVS in breeding programs and commercial farms.
Affiliation(s)
- Guilherme Jordão de Magalhães Rosa
- Department of Animal and Dairy Sciences, University of Wisconsin-Madison, Madison, WI, United States; Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, WI, United States
24
Fernandes AFA, Dórea JRR, Valente BD, Fitzgerald R, Herring W, Rosa GJM. Comparison of data analytics strategies in computer vision systems to predict pig body composition traits from 3D images. J Anim Sci 2020; 98:skaa250. [PMID: 32770242 PMCID: PMC7447136 DOI: 10.1093/jas/skaa250] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 07/31/2020] [Indexed: 12/17/2022] Open
Abstract
Computer vision systems (CVS) have been shown to be a powerful tool for the measurement of live pig body weight (BW) with no animal stress. With advances in precision farming, it is now possible to evaluate the growth performance of individual pigs more accurately. However, important traits such as muscle and fat deposition can still be evaluated only via ultrasound, computed tomography, or dual-energy x-ray absorptiometry. Therefore, the objectives of this study were: 1) to develop a CVS for prediction of live BW, muscle depth (MD), and back fat (BF) from top-view 3D images of finishing pigs and 2) to compare the predictive ability of different approaches, such as traditional multiple linear regression, partial least squares, and machine learning techniques, including elastic networks, artificial neural networks, and deep learning (DL). A dataset containing over 12,000 images from 557 finishing pigs (average BW of 120 ± 12 kg) was split into training and testing sets using a 5-fold cross-validation (CV) technique, so that 80% and 20% of the dataset were used for training and testing in each fold. Several image features, such as volume, area, length, widths, heights, polar image descriptors, and polar Fourier transforms, were extracted from the images and used as predictor variables in the different approaches evaluated. In addition, DL image encoders that take raw 3D images as input were also tested. This latter method achieved the best overall performance, with the lowest mean absolute scaled error (MASE) and root mean square error for all traits, and the highest predictive squared correlation (R2). The median predicted MASE achieved by this method was 2.69, 5.02, and 13.56, with R2 of 0.86, 0.50, and 0.45, for BW, MD, and BF, respectively. In conclusion, it was demonstrated that it is possible to successfully predict BW, MD, and BF via CVS in a fully automated setting using 3D images collected under farm conditions.
Moreover, DL algorithms simplified and optimized the data analytics workflow, with raw 3D images used as direct inputs, without requiring prior image processing.
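The MASE and R2 reported above can be computed as follows. This is a generic sketch of the two metrics, not the authors' code; the sample arrays and the previous-value naive forecast used for scaling are illustrative assumptions.

```python
import numpy as np

def mase(y_true, y_pred, y_train):
    """Mean absolute scaled error: prediction MAE divided by the MAE of a
    naive previous-value forecast on the training data."""
    mae = np.mean(np.abs(y_true - y_pred))
    naive_mae = np.mean(np.abs(np.diff(y_train)))
    return mae / naive_mae

def r_squared(y_true, y_pred):
    """Predictive squared correlation: squared Pearson correlation
    between observed and predicted values."""
    return float(np.corrcoef(y_true, y_pred)[0, 1] ** 2)
```

A MASE below 1 means the model beats the naive forecast on average; the paper's value of 2.69 for BW is interpreted relative to the other approaches compared.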
Affiliation(s)
- Arthur F A Fernandes
- Department of Animal and Dairy Sciences, University of Wisconsin-Madison, Madison, WI
- João R R Dórea
- Department of Animal and Dairy Sciences, University of Wisconsin-Madison, Madison, WI
- Guilherme J M Rosa
- Department of Animal and Dairy Sciences, University of Wisconsin-Madison, Madison, WI
25
Abstract
BACKGROUND: The shape of the pig scapula is complex and important for sow robustness and health. To better understand the relationship between the 3D shape of the scapula and functional traits, it is necessary to build a model that explains most of the morphological variation between animals. This requires point correspondence, i.e., a map that identifies which points represent the same piece of tissue across individuals. The objective of this study was to further develop an automated computational pipeline for the segmentation of computed tomography (CT) scans to incorporate 3D modelling of the scapula, and to develop a genetic prediction model for 3D morphology. RESULTS: The surface voxels of the scapula were identified in 2143 CT-scanned pigs, and point correspondence was established by predicting the coordinates of 1234 semi-landmarks on each animal, using the coherent point drift algorithm. A subsequent principal component analysis showed that the first 10 principal components covered more than 80% of the total variation in the 3D shape of the scapula. Using principal component scores as phenotypes in a genetic model, estimates of heritability ranged from 0.4 to 0.8 (with standard errors from 0.07 to 0.08). To validate the entire computational pipeline, a statistical model was trained to predict scapula shape from marker genotype data. The mean prediction reliability averaged over the whole scapula was 0.18 (standard deviation = 0.05), with higher reliability in convex than in concave regions. CONCLUSIONS: Estimates of heritability of the principal components were high and indicated that the computational pipeline that processes CT data into principal component phenotypes was associated with little error. Furthermore, we showed that it is possible to predict the 3D shape of the scapula from marker genotype data.
Taken together, these results show that the proposed computational pipeline closes the gap between a point cloud representing the shape of an animal and its underlying genetic components.
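The principal-component step described above can be sketched as follows. This is a minimal illustration: a small random matrix stands in for the paper's 2143-animal by 1234-semi-landmark coordinate matrix, and the 80% threshold mirrors the variance coverage reported, but none of this is the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in: 50 animals x (10 landmarks * 3 coordinates),
# in place of the paper's 2143 animals x 1234 semi-landmarks.
X = rng.normal(size=(50, 30))

Xc = X - X.mean(axis=0)                    # center each coordinate
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)        # variance ratio per component
cum = np.cumsum(explained)
k = int(np.searchsorted(cum, 0.80)) + 1    # components covering >= 80%
scores = Xc @ Vt[:k].T                     # per-animal PC scores (phenotypes)
```

The `scores` columns are what the study uses as phenotypes in the genetic model, one heritability estimate per retained component.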
Affiliation(s)
- Øyvind Nordbø
- Norsvin SA, Storhamargata 44, 2317, Hamar, Norway.
- Geno SA, Storhamargata 44, 2317, Hamar, Norway.
26
Psota ET, Schmidt T, Mote B, Pérez LC. Long-Term Tracking of Group-Housed Livestock Using Keypoint Detection and MAP Estimation for Individual Animal Identification. SENSORS 2020; 20:s20133670. [PMID: 32630011 PMCID: PMC7374513 DOI: 10.3390/s20133670] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Revised: 06/08/2020] [Accepted: 06/16/2020] [Indexed: 02/05/2023]
Abstract
Tracking individual animals in a group setting is a challenging task for computer vision and animal science researchers. When the objective is months of uninterrupted tracking and the targeted animals lack discernible differences in their physical characteristics, this task introduces significant challenges. To address these challenges, a probabilistic tracking-by-detection method is proposed. The tracking method uses, as input, visible keypoints of individual animals provided by a fully-convolutional detector. Individual animals are also equipped with ear tags that are used by a classification network to assign unique identification to instances. The fixed cardinality of the targets is leveraged to create a continuous set of tracks, and the forward-backward algorithm is used to assign ear-tag identification probabilities to each detected instance. Tracking achieves real-time performance on consumer-grade hardware, in part because it does not rely on complex, costly, graph-based optimizations. A publicly available, human-annotated dataset is introduced to evaluate tracking performance. This dataset contains 15 half-hour-long videos of pigs with various ages/sizes, facility environments, and activity levels. Results demonstrate that the proposed method achieves an average precision and recall greater than 95% across the entire dataset. Analysis of the error events reveals the environmental conditions and social interactions that are most likely to cause errors in real-world deployments.
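The forward-backward step can be illustrated with a textbook HMM smoother, with hidden states playing the role of animal identities and observations the role of per-frame ear-tag classifications. This is a generic sketch, not the authors' implementation; the transition and emission matrices below are hypothetical.

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Posterior state probabilities for an HMM via the forward-backward
    algorithm. pi: initial distribution (N,), A: transition matrix (N, N),
    B: emission matrix (N, M), obs: list of observation indices."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()                         # scale for stability
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]   # forward pass
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])  # backward pass
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)     # posterior per frame
```

With noiseless emissions and no identity switches, the posterior collapses onto the observed identity; with noisy ear-tag readings, the smoother pools evidence across frames to resolve each detection's identity.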
Affiliation(s)
- Eric T. Psota
- Department of Electrical and Computer Engineering, University of Nebraska–Lincoln, Lincoln, NE 68505, USA;
- Ty Schmidt
- Department of Animal Science, University of Nebraska–Lincoln, Lincoln, NE 68588, USA; (T.S.); (B.M.)
- Benny Mote
- Department of Animal Science, University of Nebraska–Lincoln, Lincoln, NE 68588, USA; (T.S.); (B.M.)
- Lance C. Pérez
- Department of Electrical and Computer Engineering, University of Nebraska–Lincoln, Lincoln, NE 68505, USA;
27
Cominotte A, Fernandes A, Dorea J, Rosa G, Ladeira M, van Cleef E, Pereira G, Baldassini W, Machado Neto O. Automated computer vision system to predict body weight and average daily gain in beef cattle during growing and finishing phases. Livest Sci 2020. [DOI: 10.1016/j.livsci.2019.103904] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
28
Huang L, Guo H, Rao Q, Hou Z, Li S, Qiu S, Fan X, Wang H. Body Dimension Measurements of Qinchuan Cattle with Transfer Learning from LiDAR Sensing. SENSORS 2019; 19:s19225046. [PMID: 31752400 PMCID: PMC6891291 DOI: 10.3390/s19225046] [Citation(s) in RCA: 42] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/07/2019] [Revised: 11/10/2019] [Accepted: 11/18/2019] [Indexed: 02/07/2023]
Abstract
Because manually measuring the bodies of Qinchuan cattle is time-consuming and stressful for both the animals and farmers, the demand for the automatic measurement of body dimensions has become more and more urgent. It is necessary to explore automatic measurement with deep learning to improve breeding efficiency and promote the development of the industry. In this paper, a novel approach to measuring the body dimensions of live Qinchuan cattle based on transfer learning is proposed. A deep Kd-network was trained with classical three-dimensional (3D) point cloud datasets (PCD) from the ShapeNet datasets. After a series of processing steps on the PCD sensed by a light detection and ranging (LiDAR) sensor, the cattle silhouettes could be extracted, which, after augmentation, could be applied as an input layer to the Kd-network. The output of a convolutional layer of the trained deep model was then used to pre-train the fully connected network. The TrAdaBoost algorithm was employed to transfer the pre-trained convolutional layers and fully connected network of the deep model. In classifying and recognizing the PCD of the cattle silhouettes, the average accuracy after training with transfer learning reached up to 93.6%. On the basis of silhouette extraction, candidate regions of the feature surface shape could be extracted using mean curvature and Gaussian curvature. After computation of the FPFH (fast point feature histogram) of the surface shape, the center of the feature surface could be recognized and the body dimensions of the cattle could finally be calculated. The experimental results showed that the comprehensive error of the body dimensions was close to 2%, which provides a feasible approach to non-contact measurement of the bodies of large-physique livestock without any human intervention.
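The surface-shape step above rests on mean and Gaussian curvature, both of which follow directly from the two principal curvatures at a surface point. A minimal sketch follows; the sign-based shape labels are the standard differential-geometry convention, not necessarily the authors' exact selection criterion.

```python
def curvatures(k1, k2):
    """Mean curvature H and Gaussian curvature K from the principal
    curvatures k1 and k2 at a surface point."""
    return (k1 + k2) / 2.0, k1 * k2

def surface_type(k1, k2, eps=1e-9):
    """Coarse local shape label from the sign of the Gaussian curvature."""
    _, k = curvatures(k1, k2)
    if k > eps:
        return "elliptic"    # bowl- or dome-like point
    if k < -eps:
        return "hyperbolic"  # saddle-like point
    return "parabolic"       # ridge-, valley-, or plane-like point
```

Thresholding H and K in this way is a common first filter for locating distinctive surface patches before a descriptor such as FPFH is computed.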
Affiliation(s)
- Lvwen Huang
- College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China; (H.G.); (Q.R.); (Z.H.); (S.Q.)
- Key Laboratory of Agricultural Internet of Things, Ministry of Agriculture and Rural Affairs, Yangling, Xianyang 712100, China
- Correspondence: (L.H.); (S.L.); Tel.: +86-137-0922-3117 (L.H.); +86-137-5997-2183 (S.L.)
- Han Guo
- College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China; (H.G.); (Q.R.); (Z.H.); (S.Q.)
- Qinqin Rao
- College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China; (H.G.); (Q.R.); (Z.H.); (S.Q.)
- Zixia Hou
- College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China; (H.G.); (Q.R.); (Z.H.); (S.Q.)
- Shuqin Li
- College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China; (H.G.); (Q.R.); (Z.H.); (S.Q.)
- Key Laboratory of Agricultural Internet of Things, Ministry of Agriculture and Rural Affairs, Yangling, Xianyang 712100, China
- Correspondence: (L.H.); (S.L.); Tel.: +86-137-0922-3117 (L.H.); +86-137-5997-2183 (S.L.)
- Shicheng Qiu
- College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China; (H.G.); (Q.R.); (Z.H.); (S.Q.)
- Xinyun Fan
- College of Computer Science, Wuhan University, Wuhan 430072, China;
- Hongyan Wang
- Western E-commerce Co., Ltd., Yinchuan 750004, China;
29
Multi-Pig Part Detection and Association with a Fully-Convolutional Network. SENSORS 2019; 19:s19040852. [PMID: 30791377 PMCID: PMC6413214 DOI: 10.3390/s19040852] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/27/2018] [Revised: 01/15/2019] [Accepted: 02/16/2019] [Indexed: 01/06/2023]
Abstract
Computer vision systems have the potential to provide automated, non-invasive monitoring of livestock animals; however, the lack of public datasets with well-defined targets and evaluation metrics presents a significant challenge for researchers. Consequently, existing solutions often focus on achieving task-specific objectives using relatively small, private datasets. This work introduces a new dataset and method for instance-level detection of multiple pigs in group-housed environments. The method uses a single fully-convolutional neural network to detect the location and orientation of each animal, where both body part locations and pairwise associations are represented in the image space. Accompanying this method is a new dataset containing 2000 annotated images with 24,842 individually annotated pigs from 17 different locations. The proposed method achieves over 99% precision and over 96% recall when detecting pigs in environments previously seen by the network during training. To evaluate the robustness of the trained network, it is also tested on environments and lighting conditions unseen in the training set, where it achieves 91% precision and 67% recall. The dataset is publicly available for download.
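The precision and recall figures quoted above are the standard detection metrics computed from matched and unmatched detections. A minimal sketch with hypothetical counts:

```python
def precision_recall(tp, fp, fn):
    """Detection precision and recall from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical counts for illustration: 990 correctly detected pigs,
# 10 spurious detections, and 40 missed pigs.
p, r = precision_recall(990, 10, 40)
```

Here precision answers "what fraction of detections are real pigs?" while recall answers "what fraction of pigs were detected?", which is why the unseen-environment test can hold high precision (91%) while recall drops (67%).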