1. Bery S, Brown-Brandl TM, Jones BT, Rohrer GA, Sharma SR. Determining the Presence and Size of Shoulder Lesions in Sows Using Computer Vision. Animals (Basel) 2024; 14:131. [PMID: 38200862] [PMCID: PMC10777999] [DOI: 10.3390/ani14010131]
Abstract
Shoulder sores predominantly arise in breeding sows and often result in untimely culling. Reported prevalence rates vary significantly, spanning between 5% and 50% depending upon the type of crate flooring inside a farm, the animal's body condition, and any existing injury that causes lameness. These lesions represent not only a welfare concern but also an economic impact due to the labor needed for treatment and medication. The objective of this study was to evaluate the use of computer vision techniques in detecting and determining the size of shoulder lesions. A Microsoft Kinect V2 camera captured top-down depth and RGB images of sows in farrowing crates. The RGB images were collected at a resolution of 1920 × 1080. To ensure the best view of the lesions, images were selected with sows lying on their right or left sides with all legs extended. A total of 824 RGB images from 70 sows with lesions at various stages of development were identified and annotated. Three deep learning-based object detection models, YOLOv5, YOLOv8, and Faster R-CNN, pre-trained on the COCO and ImageNet datasets, were implemented to localize the lesion area. YOLOv5 was the best predictor, detecting lesions with an mAP@0.5 of 0.92. To estimate the lesion area, lesion pixel segmentation was carried out on the localized region using traditional image processing techniques such as Otsu's binarization and adaptive thresholding, alongside deep learning-based segmentation models built on the U-Net architecture. In conclusion, this study demonstrates the potential of computer vision techniques for effectively detecting and assessing the size of shoulder lesions in breeding sows, providing a promising avenue for improving sow welfare and reducing economic losses.
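The area-estimation step in this abstract rests on classical thresholding. As a rough illustration only (not the authors' implementation), Otsu's between-class-variance threshold and a pixel-count area estimate can be sketched in NumPy on a synthetic image:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    cum_count = np.cumsum(hist)
    cum_sum = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum_count[t - 1]          # pixels below t
        w1 = total - w0                # pixels at or above t
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_sum[t - 1] / w0
        mu1 = (cum_sum[-1] - cum_sum[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def lesion_area_pixels(gray):
    """Binarize with Otsu's threshold and count foreground pixels as the area."""
    t = otsu_threshold(gray)
    return int((gray >= t).sum())

# Synthetic stand-in for a cropped lesion region: dark background, bright 20x20 patch.
img = np.full((100, 100), 40, dtype=np.uint8)
img[10:30, 10:30] = 200
print(lesion_area_pixels(img))  # 400
```

A real pipeline would apply this inside the detector's bounding box and convert pixel counts to physical area using the depth image.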
Affiliation(s)
- Shubham Bery
- Department of Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
- Tami M. Brown-Brandl
- Department of Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
- Bradley T. Jones
- Genetics and Breeding Research Unit, USDA-ARS U.S. Meat Animal Research Center, Clay Center, NE 68933, USA
- Gary A. Rohrer
- Genetics and Breeding Research Unit, USDA-ARS U.S. Meat Animal Research Center, Clay Center, NE 68933, USA
- Sudhendu Raj Sharma
- Department of Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
2. Huang YZ, Han L, Yang X, Liu Y, Zhu BW, Dong XP. Enhanced batch sorting and rapid sensory analysis of Mackerel products using YOLOv5s algorithm and CBAM: Validation through TPA, colorimeter, and PLSR analysis. Food Chem X 2023; 19:100733. [PMID: 37434800] [PMCID: PMC10331289] [DOI: 10.1016/j.fochx.2023.100733]
Abstract
This study employed the YOLOv5s algorithm to establish a rapid quality identification model for Pacific chub mackerel (S. japonicus) and Spanish mackerel (S. niphonius). Data augmentation was conducted using copy-paste augmentation within the YOLOv5s network. Furthermore, a small object detection layer was integrated into the neck of the network structure, while the convolutional block attention module (CBAM) was incorporated into the convolutional module to optimize the model. The model's accuracy was assessed through sensory evaluation, texture profile analysis (TPA), and colorimeter analysis. The findings indicated that the enhanced model achieved an mAP@0.5 score of 0.966, surpassing the original version's score of 0.953. Moreover, the improved model had only 7.848 M parameters and an average detection time of 115 ms/image (image resolution 2400 × 3200). The sensory and physicochemical indicators reliably distinguished qualified from unqualified samples. The PLSR model exhibited R²X, R²Y, and Q² values of 0.977, 0.956, and 0.663, respectively.
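The mAP@0.5 figures quoted here come from matching predicted boxes to ground truth at an IoU threshold of 0.5. A simplified single-class sketch with hypothetical boxes (not the paper's evaluation code) shows the mechanics:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision_50(preds, gts):
    """AP at IoU 0.5 for one class. preds: (score, box) pairs; gts: list of boxes."""
    preds = sorted(preds, key=lambda p: -p[0])   # highest confidence first
    matched = set()
    tp = np.zeros(len(preds)); fp = np.zeros(len(preds))
    for i, (_, box) in enumerate(preds):
        best, best_j = 0.0, -1
        for j, g in enumerate(gts):
            if j in matched:
                continue
            o = iou(box, g)
            if o > best:
                best, best_j = o, j
        if best >= 0.5:
            tp[i] = 1; matched.add(best_j)
        else:
            fp[i] = 1
    rec = np.cumsum(tp) / len(gts)
    prec = np.cumsum(tp) / (np.cumsum(tp) + np.cumsum(fp))
    ap, prev_r = 0.0, 0.0
    for r, p in zip(rec, prec):                  # all-point accumulation
        ap += (r - prev_r) * p; prev_r = r
    return ap

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(0.9, (0, 0, 10, 10)), (0.8, (21, 21, 31, 31)), (0.3, (50, 50, 60, 60))]
print(round(average_precision_50(preds, gts), 3))  # 1.0
```

The full metric averages AP over classes (and over IoU thresholds for mAP@0.5:0.95).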
Affiliation(s)
- Yi-Zhen Huang
- Academy of Food Interdisciplinary Science, School of Food Science and Technology, Dalian Polytechnic University, Dalian 116034, Liaoning, China
- National Engineering Research Center of Seafood, Collaborative Innovation Center of Seafood Deep Processing, Liaoning Province Collaborative Innovation Center for Marine Food Deep Processing, Dalian 116034, Liaoning, China
- Lin Han
- Academy of Food Interdisciplinary Science, School of Food Science and Technology, Dalian Polytechnic University, Dalian 116034, Liaoning, China
- National Engineering Research Center of Seafood, Collaborative Innovation Center of Seafood Deep Processing, Liaoning Province Collaborative Innovation Center for Marine Food Deep Processing, Dalian 116034, Liaoning, China
- Xiaoqing Yang
- Academy of Food Interdisciplinary Science, School of Food Science and Technology, Dalian Polytechnic University, Dalian 116034, Liaoning, China
- National Engineering Research Center of Seafood, Collaborative Innovation Center of Seafood Deep Processing, Liaoning Province Collaborative Innovation Center for Marine Food Deep Processing, Dalian 116034, Liaoning, China
- Yu Liu
- Academy of Food Interdisciplinary Science, School of Food Science and Technology, Dalian Polytechnic University, Dalian 116034, Liaoning, China
- National Engineering Research Center of Seafood, Collaborative Innovation Center of Seafood Deep Processing, Liaoning Province Collaborative Innovation Center for Marine Food Deep Processing, Dalian 116034, Liaoning, China
- Bei-Wei Zhu
- Academy of Food Interdisciplinary Science, School of Food Science and Technology, Dalian Polytechnic University, Dalian 116034, Liaoning, China
- National Engineering Research Center of Seafood, Collaborative Innovation Center of Seafood Deep Processing, Liaoning Province Collaborative Innovation Center for Marine Food Deep Processing, Dalian 116034, Liaoning, China
- Xiu-Ping Dong
- Academy of Food Interdisciplinary Science, School of Food Science and Technology, Dalian Polytechnic University, Dalian 116034, Liaoning, China
- National Engineering Research Center of Seafood, Collaborative Innovation Center of Seafood Deep Processing, Liaoning Province Collaborative Innovation Center for Marine Food Deep Processing, Dalian 116034, Liaoning, China
3. Hasan HA, Saad FH, Ahmed S, Mohammed N, Farook TH, Dudley J. Experimental validation of computer-vision methods for the successful detection of endodontic treatment obturation and progression from noisy radiographs. Oral Radiol 2023; 39:683-698. [PMID: 37097541] [PMCID: PMC10504118] [DOI: 10.1007/s11282-023-00685-8]
Abstract
PURPOSE (1) To evaluate the effects of denoising and data balancing on deep learning to detect endodontic treatment outcomes from radiographs. (2) To develop and train a deep-learning model and classifier to predict obturation quality from radiomics. METHODS The study conformed to the STARD 2015 and MI-CLAIMS 2021 guidelines. 250 deidentified dental radiographs were collected and augmented to produce 2226 images. The dataset was classified according to endodontic treatment outcomes following a set of customized criteria, then denoised, balanced, and processed with the YOLOv5s, YOLOv5x, and YOLOv7 real-time deep-learning computer vision models. Diagnostic test parameters such as sensitivity (Sn), specificity (Sp), accuracy (Ac), precision, recall, mean average precision (mAP), and confidence were evaluated. RESULTS Overall accuracy for all the deep-learning models was above 85%. On imbalanced datasets, noise removal caused YOLOv5x's prediction accuracy to drop to 72%, while balancing combined with noise removal led to all three models performing at over 95% accuracy. mAP improved from 52 to 92% following balancing and denoising. CONCLUSION The current study of computer vision applied to radiomic datasets successfully classified endodontic treatment obturation and mishaps according to a custom progressive classification system and serves as a foundation for larger research on the subject matter.
Affiliation(s)
- Habib Al Hasan
- Department of Electrical and Computer Engineering, North South University, Dhaka, Bangladesh
- Farhan Hasin Saad
- Department of Electrical and Computer Engineering, North South University, Dhaka, Bangladesh
- Saif Ahmed
- Department of Electrical and Computer Engineering, North South University, Dhaka, Bangladesh
- Nabeel Mohammed
- Department of Electrical and Computer Engineering, North South University, Dhaka, Bangladesh
- Taseef Hasan Farook
- Adelaide Dental School, Faculty of Health and Medical Sciences, The University of Adelaide, Level 10, AHMS Building, Adelaide, South Australia 5000, Australia
- James Dudley
- Adelaide Dental School, Faculty of Health and Medical Sciences, The University of Adelaide, Level 10, AHMS Building, Adelaide, South Australia 5000, Australia
4. Lut M, Latib LA, Ayob MA, Rohaziat N. YOLOv5 Models Comparison of Under Extrusion Failure Detection in FDM 3D Printing. 2023 IEEE International Conference on Automatic Control and Intelligent Systems (I2CACIS), 2023. [DOI: 10.1109/i2cacis57635.2023.10193388]
Affiliation(s)
- Muhammad Lut
- Universiti Tun Hussein Onn Malaysia, Department of Electronic, Faculty of Electrical and Electronic Engineering, Johor, Malaysia
- Liwauddin Abd Latib
- Universiti Tun Hussein Onn Malaysia, Department of Electronic, Faculty of Electrical and Electronic Engineering, Johor, Malaysia
- Mohammad Afif Ayob
- Universiti Tun Hussein Onn Malaysia, Department of Electronic, Faculty of Electrical and Electronic Engineering, Johor, Malaysia
- Nurasyeera Rohaziat
- Universiti Tun Hussein Onn Malaysia, Department of Electronic, Faculty of Electrical and Electronic Engineering, Johor, Malaysia
5. Liu Q, Deng W, Pham DT, Hu J, Wang Y, Zhou Z. A Two-Stage Screw Detection Framework for Automatic Disassembly Using a Reflection Feature Regression Model. Micromachines (Basel) 2023; 14:946. [PMID: 37241570] [DOI: 10.3390/mi14050946]
Abstract
For remanufacturing to be more economically attractive, there is a need to develop automatic disassembly and automated visual detection methods. Screw removal is a common step in end-of-life product disassembly for remanufacturing. This paper presents a two-stage detection framework for structurally damaged screws and a linear regression model of reflection features that allows the detection framework to be conducted under uneven illumination conditions. The first stage employs reflection features to extract screws together with the reflection feature regression model. The second stage uses texture features to filter out false areas that have reflection features similar to those of screws. A self-optimisation strategy and weighted fusion are employed to connect the two stages. The detection framework was implemented on a robotic platform designed for disassembling electric vehicle batteries. This method allows screw removal to be conducted automatically in complex disassembly tasks, and the utilisation of the reflection feature and data learning provides new ideas for further research.
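The abstract describes the reflection feature regression model only at a high level. Assuming it maps an illumination statistic (e.g. mean image brightness) to a reflection-intensity threshold, an ordinary-least-squares sketch on synthetic data (variable names and the linear form are assumptions, not the paper's model) could look like:

```python
import numpy as np

# Hypothetical training pairs: mean scene brightness -> reflection-intensity threshold,
# generated here from a known linear law t = 0.8*b + 100 so the fit is checkable.
brightness = np.array([60.0, 90.0, 120.0, 150.0, 180.0])
threshold = 0.8 * brightness + 100.0

# Fit t = w*b + c by least squares (np.polyfit, degree 1).
w, c = np.polyfit(brightness, threshold, 1)
print(round(w, 3), round(c, 3))  # 0.8 100.0

def reflection_threshold(mean_brightness):
    """Predict a reflection threshold for the current illumination level."""
    return w * mean_brightness + c
```

At detection time, the predicted threshold would select candidate screw regions under the current (possibly uneven) illumination; the second, texture-based stage then filters false positives.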
Affiliation(s)
- Quan Liu
- School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
- Wupeng Deng
- School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
- Department of Mechanical Engineering, University of Birmingham, Birmingham B15 2TT, UK
- Duc Truong Pham
- Department of Mechanical Engineering, University of Birmingham, Birmingham B15 2TT, UK
- Jiwei Hu
- School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
- Yongjing Wang
- Department of Mechanical Engineering, University of Birmingham, Birmingham B15 2TT, UK
- Zude Zhou
- School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
6. Automatic Detection and Measurement of Renal Cysts in Ultrasound Images: A Deep Learning Approach. Healthcare (Basel) 2023; 11:484. [PMID: 36833018] [PMCID: PMC9956133] [DOI: 10.3390/healthcare11040484]
Abstract
Ultrasonography is widely used for the diagnosis of diseases in internal organs because it is nonradioactive, noninvasive, real-time, and inexpensive. In ultrasonography, a pair of measurement markers is placed at two points to measure organs and tumors, and the position and size of the target finding are measured on this basis. Among the measurement targets of abdominal ultrasonography, renal cysts occur in 20-50% of the population regardless of age. The frequency of measurement of renal cysts in ultrasound images is therefore high, and the benefit of automating the measurement would be correspondingly high. The aim of this study was to develop a deep learning model that can automatically detect renal cysts in ultrasound images and predict the appropriate positions of a pair of salient anatomical landmarks to measure their size. The deep learning model adopted fine-tuned YOLOv5 for detection of renal cysts and fine-tuned UNet++ for prediction of saliency maps representing the positions of the salient landmarks. Ultrasound images were input to YOLOv5, and the regions cropped from the input image inside the bounding boxes detected by YOLOv5 were input to UNet++. For comparison with human performance, three sonographers manually placed the salient landmarks on 100 unseen test images. The salient landmark positions annotated by a board-certified radiologist were used as the ground truth. We then evaluated and compared the accuracy of the sonographers and the deep learning model using precision-recall metrics and the measurement error. The evaluation results show that the precision and recall of our deep learning model for detection of renal cysts are comparable to those of standard radiologists; the positions of the salient landmarks were predicted with an accuracy close to that of the radiologists, and in a shorter time.
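The two-stage pipeline above (detector box, then a saliency map per landmark) can be sketched as below. The crop-and-argmax logic and the `landmark_pair` helper are illustrative assumptions on synthetic data, not the paper's model:

```python
import numpy as np

def crop(image, box):
    """Crop the region inside a detector bounding box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]

def landmark_pair(saliency):
    """Take the brightest peak of each of two saliency channels as the
    pair of measurement-marker positions, returned as (x, y)."""
    pts = []
    for ch in saliency:                     # saliency shape: (2, H, W)
        r, c = np.unravel_index(np.argmax(ch), ch.shape)
        pts.append((int(c), int(r)))
    return pts

# Synthetic stand-ins: a 64x64 image, a hypothetical detection box, and a
# 2-channel saliency map with one hot peak per landmark.
img = np.zeros((64, 64), dtype=np.uint8)
roi = crop(img, (8, 8, 40, 40))             # 32x32 region from the "detection"
sal = np.zeros((2, 32, 32))
sal[0, 5, 10] = 1.0                         # landmark 1 at (x=10, y=5)
sal[1, 25, 20] = 1.0                        # landmark 2 at (x=20, y=25)
print(roi.shape, landmark_pair(sal))        # (32, 32) [(10, 5), (20, 25)]
```

The distance between the two returned points, mapped back to image coordinates and scaled by the probe calibration, would give the cyst size.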
7. Yu M, Wan Q, Tian S, Hou Y, Wang Y, Zhao J. Equipment Identification and Localization Method Based on Improved YOLOv5s Model for Production Line. Sensors (Basel) 2022; 22:10011. [PMID: 36560377] [PMCID: PMC9785116] [DOI: 10.3390/s222410011]
Abstract
Intelligent video surveillance based on artificial intelligence, image processing, and other advanced technologies is a hot research topic in the upcoming era of Industry 5.0. Currently, low recognition accuracy and low localization precision of devices remain a problem in intelligent monitoring of production lines. This paper proposes a production line equipment recognition and localization method based on an improved YOLOv5s model. The proposed method achieves real-time detection and localization of production line equipment such as robotic arms and AGV carts by introducing a coordinate attention (CA) module into the YOLOv5s network architecture, applying the GSConv lightweight convolution and Slim-Neck methods in the neck layer, and adding a decoupled head structure to the detection layer. The experimental results show that the improved method achieves 93.6% precision, 85.6% recall, and 91.8% mAP@0.5, and testing on the Pascal VOC2007 public dataset shows that it effectively improves recognition accuracy. The research results can substantially improve the intelligence level of production lines and provide an important reference for manufacturing industries seeking intelligent and digital transformation.
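The coordinate attention idea referenced above pools features along height and width separately so the attention map keeps positional information. A loose NumPy sketch of that pooling-and-gating idea (the real CA module also passes the pooled descriptors through shared 1x1 convolutions, omitted here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention_sketch(feat):
    """Direction-aware gating in the spirit of coordinate attention:
    average-pool along width and along height separately, squash each pooled
    descriptor, and gate the feature map with their outer product."""
    h_pool = feat.mean(axis=2)                  # (C, H): pooled along width
    w_pool = feat.mean(axis=1)                  # (C, W): pooled along height
    gate = sigmoid(h_pool)[:, :, None] * sigmoid(w_pool)[:, None, :]  # (C, H, W)
    return feat * gate

feat = np.random.default_rng(0).normal(size=(8, 16, 16))  # (C, H, W) feature map
out = coordinate_attention_sketch(feat)
print(out.shape)  # (8, 16, 16)
```

Because the gate lies in (0, 1) per position, the module can only attenuate responses, emphasizing rows and columns where the pooled activations are strong.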
Affiliation(s)
- Ming Yu
- School of Computer and Information Engineering, Tianjin Chengjian University, Tianjin 300384, China
- Qian Wan
- School of Control and Mechanical Engineering, Tianjin Chengjian University, Tianjin 300384, China
- Songling Tian
- School of Control and Mechanical Engineering, Tianjin Chengjian University, Tianjin 300384, China
- Yanyan Hou
- School of Control and Mechanical Engineering, Tianjin Chengjian University, Tianjin 300384, China
- Yimiao Wang
- School of Control and Mechanical Engineering, Tianjin Chengjian University, Tianjin 300384, China
- Jian Zhao
- School of Control and Mechanical Engineering, Tianjin Chengjian University, Tianjin 300384, China
8. Ye Z, Guo Q, Wei J, Zhang J, Zhang H, Bian L, Guo S, Zheng X, Cao S. Recognition of terminal buds of densely-planted Chinese fir seedlings using improved YOLOv5 by integrating attention mechanism. Front Plant Sci 2022; 13:991929. [PMID: 36299793] [PMCID: PMC9589298] [DOI: 10.3389/fpls.2022.991929]
Abstract
Accurate and timely information on the number of densely-planted Chinese fir seedlings is essential for their scientific cultivation and intelligent management. However, in the later stage of cultivation, the overlapping of lateral branches among individuals is too severe to identify entire individuals in UAV images. At the same time, in high-density planting nurseries, the terminal bud of each seedling has the distinctive characteristic of growing upward, which can be used as an identification feature. Still, due to the small size and dense distribution of the terminal buds, existing recognition algorithms produce significant errors. Therefore, in this study, we proposed a model based on an improved network structure of the latest YOLOv5 algorithm for identifying the terminal buds of Chinese fir seedlings. Firstly, a micro-scale prediction head was added to the original prediction heads to enhance the model's ability to perceive small-sized terminal buds. Secondly, a multi-attention mechanism module composed of the Convolutional Block Attention Module (CBAM) and Efficient Channel Attention (ECA) was integrated into the neck of the network to further enhance the model's ability to focus on key target objects in complex backgrounds. Finally, methods including data augmentation, Test Time Augmentation (TTA), and Weighted Boxes Fusion (WBF) were used to improve the robustness and generalization of the model for the identification of terminal buds in different growth states. The results showed that, compared with the standard version of YOLOv5, the recognition accuracy of the improved YOLOv5 was significantly increased, with a precision of 95.55%, a recall of 95.84%, an F1-score of 96.54%, and an mAP of 94.63%. Under the same experimental conditions, compared with other current mainstream algorithms (YOLOv3, Faster R-CNN, and PP-YOLO), the average precision and F1-score of the improved YOLOv5 also increased by 9.51-28.19 and 15.92-32.94 percentage points, respectively. Overall, the improved YOLOv5 algorithm integrated with the attention network can accurately identify the terminal buds of densely-planted Chinese fir seedlings in UAV images and provide technical support for large-scale, automated counting and precision cultivation of Chinese fir seedlings.
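Weighted Boxes Fusion, used above alongside TTA, averages overlapping candidate boxes weighted by confidence rather than discarding all but one as NMS does. A simplified greedy sketch with hypothetical boxes (the published WBF algorithm differs in its score rescaling):

```python
import numpy as np

def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union

def weighted_boxes_fusion(boxes, scores, iou_thr=0.55):
    """Cluster boxes by IoU against each cluster's running fused box, then
    replace every cluster by its confidence-weighted average box."""
    order = np.argsort(scores)[::-1]             # process high-confidence first
    clusters = []                                # each cluster: list of (box, score)
    for i in order:
        b, s = np.asarray(boxes[i], float), scores[i]
        for cl in clusters:
            fused = np.average([x for x, _ in cl], axis=0,
                               weights=[w for _, w in cl])
            if iou(fused, b) >= iou_thr:
                cl.append((b, s))
                break
        else:
            clusters.append([(b, s)])
    fused_boxes = [np.average([x for x, _ in cl], axis=0,
                              weights=[w for _, w in cl]) for cl in clusters]
    fused_scores = [float(np.mean([w for _, w in cl])) for cl in clusters]
    return fused_boxes, fused_scores

boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (200, 200, 240, 240)]
scores = [0.9, 0.6, 0.8]
fb, fs = weighted_boxes_fusion(boxes, scores)
print(len(fb), [float(round(v, 1)) for v in fb[0]])  # 2 [10.8, 10.8, 50.8, 50.8]
```

The two overlapping boxes fuse into one box pulled toward the higher-confidence prediction; the distant box survives as its own cluster.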
Affiliation(s)
- Zhangxi Ye
- College of Forestry, Fujian Agriculture and Forestry University, Fuzhou, China
- Qian Guo
- College of Forestry, Fujian Agriculture and Forestry University, Fuzhou, China
- Jiahao Wei
- College of Forestry, Fujian Agriculture and Forestry University, Fuzhou, China
- Jian Zhang
- College of Forestry, Fujian Agriculture and Forestry University, Fuzhou, China
- Houxi Zhang
- College of Forestry, Fujian Agriculture and Forestry University, Fuzhou, China
- Liming Bian
- College of Forestry, Nanjing Forestry University, Nanjing, China
- Co-Innovation Center for Sustainable Forestry in Southern China, Nanjing Forestry University, Nanjing, China
- Key Laboratory of Forest Genetics & Biotechnology of the Ministry of Education, Nanjing Forestry University, Nanjing, China
- Shijie Guo
- College of Forestry, Fujian Agriculture and Forestry University, Fuzhou, China
- Key Laboratory of State Forestry and Grassland Administration for Soil and Water Conservation in Red Soil Region of South China, Fuzhou, China
- Cross-Strait Collaborative Innovation Center of Soil and Water Conservation, Fuzhou, China
- Xueyan Zheng
- Seed and Seedling Department, Yangkou State-owned Forest Farm, Nanping, China
- Shijiang Cao
- College of Forestry, Fujian Agriculture and Forestry University, Fuzhou, China
9. GAN-Based Image Dehazing for Intelligent Weld Shape Classification and Tracing Using Deep Learning. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12146860]
Abstract
Weld seam identification with industrial robots is a difficult task since it requires manual edge recognition and traditional image processing approaches, which take time. Furthermore, noises such as arc light, weld fumes, and different backgrounds have a significant impact on traditional weld seam identification. To solve these issues, deep learning-based object detection is used to distinguish distinct weld seam shapes in the presence of weld fumes, simulating real-world industrial welding settings. Genetic algorithm-based state-of-the-art object detection models such as Scaled YOLOv4 (You Only Look Once), YOLO DarkNet, and YOLOv5 are used in this work. To support actual welding, the aforementioned architecture is trained with 2286 real weld pieces made of mild steel and aluminum plates. To improve weld detection, the welding fumes are denoised using the generative adversarial network (GAN) and compared with dark channel prior (DCP) approach. Then, to discover the distinct weld seams, a contour detection method was applied, and an artificial neural network (ANN) was used to convert the pixel values into robot coordinates. Finally, distinct weld shape coordinates are provided to the TAL BRABO manipulator for tracing the shapes recognized using an eye-to-hand robotic camera setup. Peak signal-to-noise ratio, the structural similarity index, mean square error, and the naturalness image quality evaluator score are the dehazing metrics utilized for evaluation. For each test scenario, detection parameters such as precision, recall, mean average precision (mAP), loss, and inference speed values are compared. Weld shapes are recognized with 95% accuracy using YOLOv5 in both normal and post-fume removal settings. It was observed that the robot is able to trace the weld seam more precisely.
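Of the dehazing metrics named above, MSE and PSNR are straightforward to compute; a minimal NumPy sketch on synthetic images (SSIM and the naturalness image quality evaluator are omitted as they involve considerably more machinery):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the dehazed image is
    closer to the haze-free reference."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

# Synthetic reference and a copy with one pixel perturbed by 10 gray levels.
ref = np.full((8, 8), 100, dtype=np.uint8)
deg = ref.copy()
deg[0, 0] = 110
print(round(mse(ref, deg), 4), round(psnr(ref, deg), 2))  # 1.5625 46.19
```

In the dehazing comparison, these scores would be computed between each dehazed output (GAN or DCP) and the clean reference frame.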
10.
Abstract
Aggregate classification is a prerequisite for making concrete. Traditional aggregate identification methods have the disadvantages of low accuracy and slow speed. To solve these problems, a miniature aggregate detection and classification model based on the improved You Only Look Once (YOLO) algorithm, named YOLOv5-ytiny, is proposed in this study. Firstly, the C3 structure in YOLOv5 is replaced with our proposed CI structure. Then, the redundant part of the neck structure is pruned. Finally, the bounding box regression loss function is changed from GIoU to CIoU. The proposed YOLOv5-ytiny model was compared with other object detection algorithms such as YOLOv4, YOLOv4-tiny, and SSD. The experimental results demonstrate that the YOLOv5-ytiny model reaches 9.17 FPS, 60% higher than the original YOLOv5 algorithm, and reaches 99.6% mAP (mean average precision). Moreover, the YOLOv5-ytiny model has significant speed advantages on CPU-only computer devices. This method can not only accurately identify the aggregate but also obtain its relative position, making it effective for aggregate detection.
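The GIoU-to-CIoU switch mentioned above adds center-distance and aspect-ratio penalties on top of the IoU term. A sketch of the CIoU loss on axis-aligned boxes (illustrative, not the authors' implementation):

```python
import numpy as np

def ciou_loss(pred, gt):
    """Complete-IoU loss: 1 - IoU + normalized center distance + aspect-ratio term."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2-px1)*(py2-py1) + (gx2-gx1)*(gy2-gy1) - inter
    iou = inter / union
    # Squared distance between box centers.
    rho2 = ((px1+px2)/2 - (gx1+gx2)/2) ** 2 + ((py1+py2)/2 - (gy1+gy2)/2) ** 2
    # Squared diagonal of the smallest enclosing box.
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2
    # Aspect-ratio consistency term and its trade-off weight.
    v = (4 / np.pi ** 2) * (np.arctan((gx2-gx1)/(gy2-gy1))
                            - np.arctan((px2-px1)/(py2-py1))) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v

# A perfectly aligned prediction gives zero loss...
print(round(ciou_loss((0, 0, 10, 10), (0, 0, 10, 10)), 6))  # 0.0
# ...while a shifted prediction of the same shape is penalized.
print(ciou_loss((2, 2, 12, 12), (0, 0, 10, 10)) > 0)  # True
```

Unlike plain IoU, the loss stays informative even when boxes do not overlap, since the center-distance term still provides a gradient.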
11. Pang N, Liu Z, Lin Z, Chen X, Liu X, Pan M, Shi K, Xiao Y, Xu L. Fast identification and quantification of c-Fos protein using you-only-look-once-v5. Front Psychiatry 2022; 13:1011296. [PMID: 36213931] [PMCID: PMC9537349] [DOI: 10.3389/fpsyt.2022.1011296]
Abstract
In neuroscience, protein activity characterizes neuronal excitability in response to a diverse array of external stimuli and represents the cell state throughout the development of brain diseases. Importantly, it is necessary to characterize the proteins involved in disease progression, nuclear function determination, the effects of stimulation methods, and other aspects. Therefore, the quantification of protein activity is indispensable in neuroscience. Currently, ImageJ software and manual counting are two of the most commonly used methods to quantify proteins. To improve the efficiency of quantitative protein statistics, the you-only-look-once-v5 (YOLOv5) model was proposed. In this study, a c-Fos immunofluorescence image dataset was used as an example to verify the efficacy of the system for quantitative protein statistics. The results indicate that YOLOv5 was far less time-consuming than the other methods while maintaining high accuracy (time: ImageJ software: 80.12 ± 1.67 s; manual counting: 3.41 ± 0.25 s; YOLOv5: 0.0251 ± 0.0003 s; p < 0.0001, n = 83; simple linear regression: ImageJ software: Y = 1.013X + 0.776, R² = 0.837; manual counting: Y = 1.0X + 0, R² = 1; YOLOv5: Y = 0.9730X + 0.3821, R² = 0.933; n = 130). The findings suggest that the YOLOv5 algorithm provides a feasible method for quantitative statistical analysis of proteins and has good potential for application in detecting target proteins in neuroscience.
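The agreement statistics above (Y = aX + b with an R² score against manual counts) can be reproduced with a short least-squares fit; the count data here are hypothetical, not from the study:

```python
import numpy as np

def fit_and_r2(x, y):
    """Fit y = a*x + b by least squares and report (a, b, R^2)."""
    a, b = np.polyfit(x, y, 1)
    pred = a * x + b
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical per-image counts: manual (ground truth) vs. automated detections.
manual = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
auto = np.array([11.0, 19.0, 31.0, 39.0, 51.0])
a, b, r2 = fit_and_r2(manual, auto)
print(round(a, 3), round(b, 3), round(r2, 4))  # 1.0 0.2 0.9952
```

A slope near 1, intercept near 0, and R² near 1 would indicate the automated counter tracks manual counts closely, which is the comparison the abstract reports.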
Affiliation(s)
- Na Pang
- The College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zihao Liu
- Shenzhen Hospital of Guangzhou University of Chinese Medicine, Shenzhen, China
- Zhengrong Lin
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xiaoyan Chen
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xiufang Liu
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Min Pan
- Shenzhen Hospital of Guangzhou University of Chinese Medicine, Shenzhen, China
- Keke Shi
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yang Xiao
- National Innovation Center for Advanced Medical Devices, Shenzhen, China
- Lisheng Xu
- The College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Medical Image Computing, Ministry of Education, Shenyang, China