1. Jiang S, Zhang G, Shen Z, Zhong P, Tan J, Liu J. Pig Weight Estimation Method Based on a Framework Combining Mask R-CNN and Ensemble Regression Model. Animals (Basel) 2024; 14:2122. PMID: 39061584; PMCID: PMC11273399; DOI: 10.3390/ani14142122.
Abstract
Using computer vision technology to estimate pig live weight is an important way to promote pig welfare. Two key issues, however, affect vision-based weight estimation: uneven illumination, which blurs the extracted pig contour, and bending of the pig body, which distorts the body measurements. To address the first, Mask R-CNN was used to extract the pig contour, and the resulting mask was converted into a binary image from which a more accurate contour could be obtained. To address the second, the body length, hip width, and camera-to-back distance were corrected using XGBoost and actual measured values. We then analyzed the rationality of the extracted features and used three feature-combination strategies to predict pig weight. In total, 1505 back images of 39 pigs captured with an Azure Kinect DK were used in the numerical experiments. XGBoost achieved the highest prediction accuracy, with an MAE of 0.389, an RMSE of 0.576, a MAPE of 0.318%, and an R2 of 0.995. We also recommend the Mask R-CNN + RFR method because it attains fairly high precision under every strategy. The experimental results show that the proposed method performs very well in live-weight estimation of pigs.
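The error metrics reported above (MAE, RMSE, MAPE, R2) follow their standard definitions; as a hedged illustration (the paper's own implementation is not shown), they can be computed as:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Standard regression error metrics: MAE, RMSE, MAPE (percent), R^2."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))                      # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))               # root mean square error
    mape = np.mean(np.abs(err) / y_true) * 100.0    # mean absolute percentage error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                      # coefficient of determination
    return mae, rmse, mape, r2
```

Note that the paper reports MAE/RMSE without units; whether they are in kilograms or normalized is not stated in the abstract.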
Affiliation(s)
- Sheng Jiang
- College of Science, China Agricultural University, Beijing 100083, China; (S.J.); (Z.S.); (P.Z.)
- National Innovation Center for Digital Fishery, China Agricultural University, Beijing 100083, China;
- Guoxu Zhang
- National Innovation Center for Digital Fishery, China Agricultural University, Beijing 100083, China;
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Zhencai Shen
- College of Science, China Agricultural University, Beijing 100083, China; (S.J.); (Z.S.); (P.Z.)
- National Innovation Center for Digital Fishery, China Agricultural University, Beijing 100083, China;
- Key Laboratory of Agricultural Information Acquisition, Ministry of Agriculture, Beijing 100083, China
- Beijing Engineering and Technology Research Center for Internet of Things in Agriculture, Beijing 100083, China
- Ping Zhong
- College of Science, China Agricultural University, Beijing 100083, China; (S.J.); (Z.S.); (P.Z.)
- National Innovation Center for Digital Fishery, China Agricultural University, Beijing 100083, China;
- Junyan Tan
- College of Science, China Agricultural University, Beijing 100083, China; (S.J.); (Z.S.); (P.Z.)
- National Innovation Center for Digital Fishery, China Agricultural University, Beijing 100083, China;
- Jianfeng Liu
- College of Animal Science and Technology, China Agricultural University, Beijing 100083, China
2. Hou G, Li R, Tian M, Ding J, Zhang X, Yang B, Chen C, Huang R, Yin Y. Improving Efficiency: Automatic Intelligent Weighing System as a Replacement for Manual Pig Weighing. Animals (Basel) 2024; 14:1614. PMID: 38891661; PMCID: PMC11171250; DOI: 10.3390/ani14111614.
Abstract
To verify the accuracy of the automatic intelligent weighing system (AIWS), we weighed 106 pen-housed growing-finishing pigs using both manual weighing and AIWS. Accuracy was evaluated using MAE, MAPE, and RMSE. In the growth experiment, manual weighing was conducted every two weeks while AIWS-predicted weights were recorded daily, and growth curves were then fitted to both series. The MAE, MAPE, and RMSE for 60 to 120 kg pigs were 3.48 kg, 3.71%, and 4.43 kg, respectively. The correlation coefficient r between the AIWS and manual methods was 0.9410 (R2 = 0.8854), a highly significant correlation (p < 0.001). In growth-curve fitting, the AIWS method yielded lower AIC and BIC values than the manual method, and the Logistic model fitted to the AIWS data was the best-fit model, with an inflection point at 164.46 d of age and 93.45 kg body weight and a maximum growth rate of 831.66 g/d. In summary, AIWS can accurately predict pigs' body weights in actual production and fits growth curves of growing-finishing pigs well. The study suggests that AIWS can feasibly replace manual weighing for 50 to 120 kg live pigs in large-scale farming.
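The reported inflection point (164.46 d, 93.45 kg) and maximum growth rate (831.66 g/d) are consistent with the three-parameter logistic curve, whose weight at inflection is half the mature weight A and whose maximum slope is kA/4. The sketch below reconstructs plausible curve parameters from those reported values; the paper's actual fitted parameterization is not given, so A and k here are derived assumptions, not figures from the study:

```python
import math

def logistic_weight(t, A, k, t0):
    """Three-parameter logistic growth curve: mature weight A, rate k, inflection age t0."""
    return A / (1.0 + math.exp(-k * (t - t0)))

# Back out A and k from the reported inflection point and maximum growth rate
# (hypothetical reconstruction for illustration only).
w_inflect = 93.45       # kg, body weight at the inflection point
t_inflect = 164.46      # d, age at the inflection point
max_rate = 0.83166      # kg/d (831.66 g/d)

A = 2.0 * w_inflect     # logistic inflection weight is A/2
k = 4.0 * max_rate / A  # logistic maximum slope is k*A/4
```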
Affiliation(s)
- Gaifeng Hou
- CAS Key Laboratory of Agro-Ecological Processes in Subtropical Region, Hunan Provincial Key Laboratory of Animal Nutritional Physiology and Metabolic Process, Hunan Research Center of Livestock and Poultry Sciences, South Central Experimental Station of Animal Nutrition and Feed Science in the Ministry of Agriculture, National Engineering Laboratory for Poultry Breeding Pollution Control and Resource Technology, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China; (G.H.); (R.L.); (M.T.); (J.D.)
- Rui Li
- CAS Key Laboratory of Agro-Ecological Processes in Subtropical Region, Hunan Provincial Key Laboratory of Animal Nutritional Physiology and Metabolic Process, Hunan Research Center of Livestock and Poultry Sciences, South Central Experimental Station of Animal Nutrition and Feed Science in the Ministry of Agriculture, National Engineering Laboratory for Poultry Breeding Pollution Control and Resource Technology, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China; (G.H.); (R.L.); (M.T.); (J.D.)
- Mingzhou Tian
- CAS Key Laboratory of Agro-Ecological Processes in Subtropical Region, Hunan Provincial Key Laboratory of Animal Nutritional Physiology and Metabolic Process, Hunan Research Center of Livestock and Poultry Sciences, South Central Experimental Station of Animal Nutrition and Feed Science in the Ministry of Agriculture, National Engineering Laboratory for Poultry Breeding Pollution Control and Resource Technology, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China; (G.H.); (R.L.); (M.T.); (J.D.)
- Jing Ding
- CAS Key Laboratory of Agro-Ecological Processes in Subtropical Region, Hunan Provincial Key Laboratory of Animal Nutritional Physiology and Metabolic Process, Hunan Research Center of Livestock and Poultry Sciences, South Central Experimental Station of Animal Nutrition and Feed Science in the Ministry of Agriculture, National Engineering Laboratory for Poultry Breeding Pollution Control and Resource Technology, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China; (G.H.); (R.L.); (M.T.); (J.D.)
- Xingfu Zhang
- College of Computer Science and Technology, Heilongjiang Institute of Technology, Harbin 150050, China;
- Beijing Focused Loong Technology Co., Ltd., Beijing 100086, China
- Bin Yang
- Key Laboratory of Visual Perception and Artificial Intelligence of Hunan Province, College of Electrical and Information Engineering, Hunan University, Changsha 410082, China;
- Chunyu Chen
- College of Information and Communication, Harbin Engineering University, Harbin 150001, China;
- Ruilin Huang
- CAS Key Laboratory of Agro-Ecological Processes in Subtropical Region, Hunan Provincial Key Laboratory of Animal Nutritional Physiology and Metabolic Process, Hunan Research Center of Livestock and Poultry Sciences, South Central Experimental Station of Animal Nutrition and Feed Science in the Ministry of Agriculture, National Engineering Laboratory for Poultry Breeding Pollution Control and Resource Technology, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China; (G.H.); (R.L.); (M.T.); (J.D.)
- Yulong Yin
- CAS Key Laboratory of Agro-Ecological Processes in Subtropical Region, Hunan Provincial Key Laboratory of Animal Nutritional Physiology and Metabolic Process, Hunan Research Center of Livestock and Poultry Sciences, South Central Experimental Station of Animal Nutrition and Feed Science in the Ministry of Agriculture, National Engineering Laboratory for Poultry Breeding Pollution Control and Resource Technology, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China; (G.H.); (R.L.); (M.T.); (J.D.)
3. Liu Y, Zhou J, Bian Y, Wang T, Xue H, Liu L. Estimation of Weight and Body Measurement Model for Pigs Based on Back Point Cloud Data. Animals (Basel) 2024; 14:1046. PMID: 38612285; PMCID: PMC11010847; DOI: 10.3390/ani14071046.
Abstract
Pig farming is a crucial sector of global animal husbandry. Weight and body-dimension data reflect pigs' growth and development status and serve as vital metrics for assessing their progress. At present, pig weight and body dimensions are predominantly measured manually, which poses challenges such as difficulty herding animals, stress responses in pigs, and the control of zoonotic diseases. To address these issues, this study proposes a non-contact weight-estimation and body-measurement model based on point cloud data from pig backs. A depth camera installed above a weighbridge acquired 3D point cloud data from 258 Yorkshire-Landrace crossbred sows, of which 200 were used for model development. Their point clouds were filtered and denoised, and a K-means clustering segmentation algorithm was then employed to extract the point cloud corresponding to the pigs' backs. A convolutional neural network with multi-head attention (MACNN) was established for weight prediction, with RGB information added as an additional feature, and back body-size measurements were also taken during data processing. The remaining 58 sows were used for model evaluation. Compared with manual measurements, the weight estimates had a mean absolute error of 11.552 kg, a mean relative error of 4.812%, and a root mean square error of 11.181 kg. For the MACNN, incorporating RGB information as an additional feature reduced the RMSE by 2.469 kg, the MAPE by 0.8%, and the MAE by 1.032 kg. Measurements of shoulder width, abdominal width, and hip width yielded mean relative errors of 3.144%, 3.798%, and 3.820%, respectively. In conclusion, the multi-head-attention CNN with RGB information as an additional feature proved accurate and reliable for weight estimation and body-dimension measurement.
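The K-means segmentation step can be illustrated with a deliberately simplified sketch: clustering depth points on height alone to split pig-back points from the floor. The two-cluster setup, synthetic heights, and single height feature are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def kmeans_1d(z, k=2, iters=50):
    """Tiny k-means on a single feature (here: point height above the floor)."""
    centers = np.linspace(z.min(), z.max(), k)   # deterministic, spread-out init
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(np.abs(z[:, None] - centers[None, :]), axis=1)
        # recompute each center as the mean of its assigned points
        new = np.array([z[labels == j].mean() if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Synthetic depth scene: floor points near 0 m, pig-back points about 0.6 m higher.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0.0, 0.02, 500), rng.normal(0.6, 0.05, 300)])
labels, centers = kmeans_1d(z)
back = z[labels == np.argmax(centers)]   # keep the higher cluster as the back
```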
Affiliation(s)
- Longshen Liu
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China; (Y.L.)
4. Wang S, Jiang H, Qiao Y, Jiang S. A Method for Obtaining 3D Point Cloud Data by Combining 2D Image Segmentation and Depth Information of Pigs. Animals (Basel) 2023; 13:2472. PMID: 37570282; PMCID: PMC10417003; DOI: 10.3390/ani13152472.
Abstract
This paper proposes a method for automatic pig detection and segmentation using RGB-D data for precision livestock farming. The method combines an enhanced YOLOv5s model with the Res2Net bottleneck structure, improving fine-grained feature extraction and ultimately the precision of pig detection and segmentation in 2D images. It also makes acquiring 3D point cloud data of pigs simpler and more efficient: the pig mask obtained from 2D detection and segmentation is combined with the corresponding depth information. To evaluate the method, two datasets were constructed: the first consists of 5400 images captured in various pig pens under diverse lighting conditions, and the second was obtained from the UK (the Edinburgh pig behaviour dataset). The improved YOLOv5s_Res2Net achieved a mAP@0.5:0.95 of 89.6% and 84.8% for pig detection and segmentation, respectively, on our dataset, and 93.4% and 89.4% on the Edinburgh pig behaviour dataset. This approach provides valuable support for pig management, welfare assessment, and accurate weight estimation.
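Combining a 2D segmentation mask with depth information to obtain a 3D point cloud amounts to pinhole back-projection of the masked depth pixels. A minimal sketch, assuming the camera intrinsics fx, fy, cx, cy are known from calibration (the paper's exact procedure is not shown in the abstract):

```python
import numpy as np

def mask_depth_to_points(mask, depth, fx, fy, cx, cy):
    """Back-project masked depth pixels to 3D camera coordinates (pinhole model)."""
    v, u = np.nonzero(mask)          # pixel rows (v) and columns (u) inside the pig mask
    z = depth[v, u]
    valid = z > 0                    # drop pixels with missing depth readings
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx            # back-project along the camera x axis
    y = (v - cy) * z / fy            # back-project along the camera y axis
    return np.stack([x, y, z], axis=1)   # (N, 3) point cloud in camera coordinates
```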
Affiliation(s)
- Shunli Wang
- College of Information Science and Engineering, Shandong Agricultural University, Tai’an 271018, China; (S.W.); (H.J.)
- Honghua Jiang
- College of Information Science and Engineering, Shandong Agricultural University, Tai’an 271018, China; (S.W.); (H.J.)
- Yongliang Qiao
- Australian Institute for Machine Learning (AIML), The University of Adelaide, Adelaide, SA 5005, Australia
- Shuzhen Jiang
- Key Laboratory of Efficient Utilisation of Non-Grain Feed Resources (Co-Construction by Ministry and Province), Ministry of Agriculture and Rural Affairs, Department of Animal Science and Technology, Shandong Agricultural University, Tai’an 271018, China;
5. Zhou H, Li Q, Xie Q. Individual Pig Identification Using Back Surface Point Clouds in 3D Vision. Sensors (Basel) 2023; 23:5156. PMID: 37299883; DOI: 10.3390/s23115156.
Abstract
Individual identification of pigs is the basis of precision livestock farming (PLF), providing the prerequisite for personalized feeding, disease monitoring, growth monitoring, and behavior identification. Pig face recognition suffers from the difficulty of collecting face samples and from images being easily affected by the environment and body dirt. We therefore propose a method for individual pig identification using three-dimensional (3D) point clouds of the pig's back surface. First, a point cloud segmentation model based on the PointNet++ algorithm segments the pig's back point cloud from the complex background and passes it to the recognition stage. Then, an individual recognition model based on the improved PointNet++LGG algorithm was constructed by enlarging the adaptive global sampling radius, deepening the network structure, and increasing the number of features, so that higher-dimensional features can be extracted to accurately distinguish individuals with similar body sizes. In total, 10,574 3D point cloud images of ten pigs were collected to construct the dataset. The individual identification model based on the PointNet++LGG algorithm reached an accuracy of 95.26%, which is 2.18%, 16.76%, and 17.19% higher than the PointNet, PointNet++SSG, and PointNet++MSG models, respectively. Individual pig identification based on 3D point clouds of the back surface is thus effective; the approach is easy to integrate with functions such as body condition assessment and behavior recognition and is conducive to the development of precision livestock farming.
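Sampling in PointNet++-style networks starts from farthest point sampling, which picks well-spread centroids for the set-abstraction layers. A minimal numpy sketch of that sampling step alone (the grouping radius, network depth, and feature layers of PointNet++LGG are not reproduced here):

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Pick m well-spread points: the sampling step used by PointNet++ set abstraction."""
    chosen = [0]                                          # start from an arbitrary point
    dist = np.linalg.norm(points - points[0], axis=1)     # distance to the chosen set
    for _ in range(m - 1):
        nxt = int(np.argmax(dist))                        # farthest point from the set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)
```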
Affiliation(s)
- Hong Zhou
- College of Engineering, Heilongjiang Bayi Agricultural University, Daqing 163319, China
- Qingda Li
- College of Engineering, Heilongjiang Bayi Agricultural University, Daqing 163319, China
- Qiuju Xie
- College of Electrical and Information, Northeast Agricultural University, Harbin 150030, China
- Key Laboratory of Swine Facilities Engineering, Ministry of Agriculture, Harbin 150030, China
6. Liu J, Xiao D, Liu Y, Huang Y. A Pig Mass Estimation Model Based on Deep Learning without Constraint. Animals (Basel) 2023; 13:1376. PMID: 37106939; PMCID: PMC10135044; DOI: 10.3390/ani13081376.
Abstract
Body mass is an essential indicator of pigs' growth and health. Contactless body mass estimation methods based on computer vision have recently gained attention thanks to their potential to improve animal welfare and ensure breeders' safety. Nonetheless, current methods require pigs to be restrained in a confinement pen, and no study has addressed an unconstrained environment. In this study, we develop a deep learning pig mass estimation model capable of estimating body mass without constraints. The model comprises a Mask R-CNN-based pig instance segmentation algorithm, a Keypoint R-CNN-based keypoint detection algorithm, and an improved ResNet-based mass estimation algorithm that incorporates multi-branch convolution, depthwise convolution, and an inverted bottleneck to improve accuracy. We constructed a dataset from images and body mass records of 117 pigs. The model achieved an RMSE of 3.52 kg on the test set, lower than estimation algorithms with plain ResNet or ConvNeXt backbones, with an average estimation speed of 0.339 s per frame. The model can evaluate pig body condition in real time to support grading and adjustment of breeding plans, and has broad application prospects.
Affiliation(s)
- Junbin Liu
- College of Mathematics Informatics, South China Agricultural University, Guangzhou 510642, China
- Deqin Xiao
- College of Mathematics Informatics, South China Agricultural University, Guangzhou 510642, China
- Youfu Liu
- College of Mathematics Informatics, South China Agricultural University, Guangzhou 510642, China
- Yigui Huang
- College of Mathematics Informatics, South China Agricultural University, Guangzhou 510642, China
7. Qin Q, Dai D, Zhang C, Zhao C, Liu Z, Xu X, Lan M, Wang Z, Zhang Y, Su R, Wang R, Wang Z, Zhao Y, Li J, Liu Z. Identification of body size characteristic points based on the Mask R-CNN and correlation with body weight in Ujumqin sheep. Front Vet Sci 2022; 9:995724. DOI: 10.3389/fvets.2022.995724.
Abstract
Body size measurements not only reflect physical fitness, carcass structure, growth condition, and the developmental relationships among tissues and organs, but are also critical indicators of growth and development in sheep. Computer vision-based body size identification is a non-contact, stress-free method. In this study, we analyzed eight body size traits (height at wither, body slanting length, chest depth, chest circumference, shank circumference, hip height, shoulder width, and rump width) and the body weight of 332 Ujumqin sheep. Significant correlations (P < 0.05) were obtained among all traits; except for shoulder width, rump width, and shank circumference, all correlations were positive, and the effect of sex was highly significant. Stepwise regression of body size on body weight identified the main predictors in order of importance: chest circumference, body slanting length, rump width, hip height, height at wither, and shoulder width for rams; and body slanting length, chest circumference, rump width, hip height, height at wither, and shoulder width for ewes. Prediction equations for body weight were then constructed for each sex, with an accuracy of 83.9% for rams and 79.4% for ewes. Combined with Mask R-CNN and machine vision methods, recognition models for the important body size parameters of Ujumqin sheep were constructed. The prediction errors for body slanting length, height at wither, hip height, and chest circumference were about 5%; the chest depth error was 9.63%; and the shoulder width, rump width, and shank circumference errors were 14.95%, 12.05%, and 19.71%, respectively. The results show that the proposed method is effective and has great potential for precision management.
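Stepwise regression of body weight on body measurements can be sketched as greedy forward selection: repeatedly add the measurement that most reduces the residual sum of squares. The sketch below uses synthetic data rather than the Ujumqin sheep measurements, and a simple RSS criterion rather than the F-tests a statistics package would apply:

```python
import numpy as np

def forward_stepwise(X, y, n_select):
    """Greedy forward selection: at each step add the feature that most reduces RSS."""
    n, p = X.shape
    selected = []
    for _ in range(n_select):
        best_j, best_rss = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            cols = selected + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])   # intercept + candidate set
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ coef) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected
```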
8. Wang X, Wang W, Lu J, Wang H. HRST: An Improved HRNet for Detecting Joint Points of Pigs. Sensors (Basel) 2022; 22:7215. PMID: 36236311; PMCID: PMC9571911; DOI: 10.3390/s22197215.
Abstract
Body size is a vital evaluation indicator for pig growth monitoring and selective breeding, and the detection of joint points is critical for accurately estimating body size. However, most joint point detection methods focus on improving detection accuracy while neglecting detection speed and model size. In this study, we propose HRST, an HRNet with a Swin Transformer block, for detecting the joint points of pigs. It improves accuracy while significantly reducing model parameters by replacing the parameter-redundant fourth stage of HRNet with a Swin Transformer block. Joint point detection for multiple pigs follows two steps: CenterNet first detects pig posture (lying or standing), and HRST then detects joint points for the standing pigs. CenterNet achieved an average precision (AP) of 86.5%, and HRST achieved an AP of 77.4% at a real-time speed of 40 images per second. Compared with HRNet, HRST improved AP by 6.8% while reducing the number of model parameters by 72.8% and the computational cost by 41.7%. The study provides technical support for accurate and rapid detection of pig joint points, enabling contact-free body size estimation.
Affiliation(s)
- Xiaopin Wang
- Key Laboratory of Smart Farming for Agricultural Animals, Ministry of Agriculture and Rural Affairs, College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
- Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan 430070, China
- Wei Wang
- Key Laboratory of Smart Farming for Agricultural Animals, Ministry of Agriculture and Rural Affairs, College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
- Jisheng Lu
- Key Laboratory of Smart Farming for Agricultural Animals, Ministry of Agriculture and Rural Affairs, College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
- Haiyan Wang
- Key Laboratory of Smart Farming for Agricultural Animals, Ministry of Agriculture and Rural Affairs, College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
- Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan 430070, China
9. Gherissi DE, Lamraoui R, Chacha F, Gaouar SBS. Accuracy of image analysis for linear zoometric measurements in dromedary camels. Trop Anim Health Prod 2022; 54:232. PMID: 35857152; DOI: 10.1007/s11250-022-03242-3.
Abstract
This study was designed to verify the effectiveness of image analysis for body measurement in dromedary camels, with manual measurement as the reference method. Twenty-one linear body measurements were taken on 59 clinically normal adult Sahraoui dromedary camels (22 males and 37 females) using a measuring stick or vernier caliper (standard method). In parallel, image analysis of profile, front, and rear photographs was performed with Axiovision software. Comparisons of overall means, relative error, variance, Pearson's correlation coefficient, and coefficient of variation showed that image analysis was accurate relative to manual measurement. Image analysis also showed high accuracy (bias correction factor Cb ≈ 1) and precision (Pearson ρ ≈ 1), and its results were significantly correlated with those of the reference method (Lin's concordance correlation coefficient rccc ≈ 1). According to the Bland-Altman upper and lower limits of agreement, concordance was between 93.22% and 98.3%, and Passing-Bablok regression showed a good relationship between the two methods with no significant systematic or proportional bias. Image analysis for linear body measurements in dromedary camels thus agrees with manual measurement and can be considered a valid tool for studies of camel conformation traits.
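The Bland-Altman limits of agreement used above follow the conventional formula: mean difference between the two methods ± 1.96 standard deviations of the differences. A minimal sketch (the example values in the test are made up, not the camel data):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bland-Altman bias and 95% limits of agreement between two measurement methods."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()                   # mean difference (systematic offset)
    sd = d.std(ddof=1)                # sample SD of the differences
    return bias - 1.96 * sd, bias, bias + 1.96 * sd
```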
Affiliation(s)
- Djalel Eddine Gherissi
- Laboratory of Animal Productions, Biotechnologies and Health, Institute of Agronomic and Veterinary Sciences, University of Souk-Ahras, BP 41000, Souk Ahras, Algeria.
- Ramzi Lamraoui
- Laboratory of Animal Productions, Biotechnologies and Health, Institute of Agronomic and Veterinary Sciences, University of Souk-Ahras, BP 41000, Souk Ahras, Algeria
- Department of Biology of Living Organisms, Faculty of Natural and Life Sciences, University of Batna 2, Batna (05110), Algeria
- Faycel Chacha
- Laboratory of Animal Productions, Biotechnologies and Health, Institute of Agronomic and Veterinary Sciences, University of Souk-Ahras, BP 41000, Souk Ahras, Algeria
- Biotechnology Research Center, PO E73 .NU N° 03, Constantine, Algeria
- Semir Bechir Suheil Gaouar
- Applied Genetic in Agriculture, Ecology and Public Health (GenApAgiE), Faculty SNV/STU, University of Tlemcen, Tlemcen, Algeria
10. Identification of Body Size Determination Related Candidate Genes in Domestic Pig Using Genome-Wide Selection Signal Analysis. Animals (Basel) 2022; 12:1839. PMID: 35883386; PMCID: PMC9312078; DOI: 10.3390/ani12141839.
Abstract
This study aimed to identify genes related to pig body size by genome-wide selection signal analysis (GWSA). We scanned 50 pigs from four small-bodied populations (Diannan small-eared pig, Bama Xiang pig, Wuzhishan pig, and the Jeju black pig from South Korea) and 124 large-bodied pigs, using the pairwise fixation index (FST) and the π ratio (case/control) to screen candidate genomic regions and genes related to body size. From 47,339,509 high-quality SNPs obtained in 174 individuals, 280 candidate regions were obtained from the intersection of the top 1% signal windows of the two parameters, along with 187 genes (e.g., ADCK4, AMDHD2, ASPN, ASS1, and ATP6V0C). Candidate gene annotation showed that a series of genes (e.g., MSTN, LTBP4, PDPK1, PKMYT1, ASS1, and STAT6) was enriched in gene ontology terms, and molecular pathways such as the PI3K-Akt, HIF-1, and AMPK signaling pathways were verified to be related to body development. Overall, we identified a series of key genes potentially closely related to pig body size, further elucidating the genetic basis of body shape determination in pigs and providing a theoretical reference for molecular breeding.
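The FST / π-ratio screen reduces to a window-selection step: keep the windows in the top 1% of each statistic and intersect the two sets. A sketch of that selection step with placeholder per-window values (the real analysis computes these statistics from the SNP data, which is not reproduced here):

```python
import numpy as np

def top_windows(stat, frac=0.01):
    """Indices of sliding windows whose statistic falls in the top `frac` fraction."""
    cutoff = np.quantile(stat, 1.0 - frac)
    return set(np.flatnonzero(stat >= cutoff))

rng = np.random.default_rng(0)
fst = rng.random(1000)          # per-window F_ST (placeholder values)
pi_ratio = rng.random(1000)     # per-window pi ratio (placeholder values)

# Candidate regions: windows extreme under BOTH statistics.
candidates = top_windows(fst) & top_windows(pi_ratio)
```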
11. A GAN-Augmented Corrosion Prediction Model for Uncoated Steel Plates. Appl Sci (Basel) 2022; 12:4706. DOI: 10.3390/app12094706.
Abstract
The deterioration and damage of aging steel structures raise serious safety concerns. Corrosion is a major cause: it removes material from steel members, reducing the structure's stiffness and load-bearing capacity and bringing economic losses and safety hazards. For the maintenance and repair of steel structures, fast and accurate prediction of corrosion development plays a critical role in numerical simulation analysis, saving time and costs. In this research, we build a simulation system based on GAN data augmentation, with UNet as the generator and MobileNetV2 as the discriminator, to predict the corrosion behavior of uncoated steel structures over time and under different circumstances. The system can simulate three stages of corrosion based on a dataset collected from experiments and can predict the corrosion of steel plates at the next stage. Its discriminator can also classify the type of steel, the stage of corrosion, and the number of days of corrosion. In comparative experiments, the system demonstrates outstanding performance and outperforms the baseline model.