1. Wang X, Li N, Yin X, Xing L, Zheng Y. Classification of metastatic hepatic carcinoma and hepatocellular carcinoma lesions using contrast-enhanced CT based on EI-CNNet. Med Phys 2023;50:5630-5642. PMID: 36869656. DOI: 10.1002/mp.16340.
Abstract
BACKGROUND For hepatocellular carcinoma and metastatic hepatic carcinoma, imaging is one of the main diagnostic methods. In clinical practice, diagnosis relies mainly on experienced imaging physicians, which is inefficient and cannot meet the demand for rapid and accurate diagnosis. How to efficiently and accurately classify the two types of liver cancer from imaging is therefore an urgent problem. PURPOSE The purpose of this study was to use a deep learning classification model to help radiologists distinguish single metastatic hepatic carcinoma from hepatocellular carcinoma based on the enhanced features of contrast-enhanced CT (computed tomography) portal-phase images of the liver. METHODS In this retrospective study, 52 patients with metastatic hepatic carcinoma and 50 patients with hepatocellular carcinoma were selected from patients who underwent preoperative enhanced CT examinations from 2017 to 2020. A total of 565 CT slices from these patients were used to train and validate the classification network (EI-CNNet, training/validation: 452/113). First, the EI block was used to extract edge information from CT slices to enrich fine-grained information for classification. Then, the ROC (receiver operating characteristic) curve was used to evaluate the performance, accuracy, and recall of EI-CNNet. Finally, the classification results of EI-CNNet were compared with popular classification models. RESULTS Using 80% of the data for model training and 20% for validation, the average accuracy of this experiment was 98.2% ± 0.62 (mean ± standard deviation (SD)), the recall was 97.23% ± 2.77, the precision was 98.02% ± 2.07, the network had 11.83 MB of parameters, and the validation time was 9.83 s/sample. Classification accuracy was improved by 20.98% compared to the base CNN network, whose validation time was 10.38 s/sample. Among the other networks compared, InceptionV3 showed the best classification results but required more parameters and a validation time of 33 s/sample; the proposed method still improved classification accuracy by 6.51% over it. CONCLUSION EI-CNNet demonstrated promising diagnostic performance, has the potential to reduce the workload of radiologists, and may help determine in a timely manner whether a tumor is primary or metastatic, which might otherwise be missed or misjudged.
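The abstract does not specify how the EI block extracts edge information. As an illustrative sketch only, a plain 3×3 Sobel gradient magnitude (a standard edge extractor assumed here, not the paper's actual EI block) can be computed as follows:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map from 3x3 Sobel kernels (zero-padded)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    p = np.pad(img.astype(float), 1)
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = p[i:i + 3, j:j + 3]       # 3x3 neighborhood of pixel (i, j)
            gx[i, j] = (win * kx).sum()     # horizontal gradient
            gy[i, j] = (win * ky).sum()     # vertical gradient
    return np.hypot(gx, gy)                  # gradient magnitude
```

In a network of this kind, such an edge map would typically be fed in alongside the original slice to enrich fine-grained information, but that wiring is a guess rather than a detail taken from the paper.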
Affiliation(s)
- Xuehu Wang
- College of Electronic and Information Engineering, Hebei University, Baoding, China
- Research Center of Machine Vision Engineering & Technology of Hebei Province, Baoding, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Baoding, China
- Nie Li
- College of Electronic and Information Engineering, Hebei University, Baoding, China
- Research Center of Machine Vision Engineering & Technology of Hebei Province, Baoding, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Baoding, China
- Xiaoping Yin
- Affiliated Hospital of Hebei University, Baoding, China
- Lihong Xing
- CT/MRI room, Affiliated Hospital of Hebei University, Baoding, Hebei Province, China
- Yongchang Zheng
- Department of Liver Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College (CAMS & PUMC), Beijing, China
2. Song C, Peng B, Wang H, Zhou Y, Sun L, Suo X, Fan X. Maize seed appearance quality assessment based on improved Inception-ResNet. Front Plant Sci 2023;14:1249989. PMID: 37692413. PMCID: PMC10484107. DOI: 10.3389/fpls.2023.1249989.
Abstract
Current inspections of seed appearance quality are mainly performed manually, which is time-consuming, tedious, and subjective, and makes it difficult to meet the needs of practical applications. For rapid and accurate identification of seeds based on appearance quality, this study proposed a seed-quality evaluation method using an improved Inception-ResNet network on corn seeds of different qualities. First, images of multiple corn seeds were segmented to build a single-seed image database. Second, the standard convolution of the Inception-ResNet module was replaced by a depthwise separable convolution to reduce the number of model parameters and the computational complexity of the network. In addition, an attention mechanism was applied to improve the feature learning performance of the network model and extract the image information that best expresses appearance quality. Finally, a feature fusion strategy was used to fuse feature information at different levels to prevent the loss of important information. The results showed that the proposed method had decent comprehensive performance in detecting corn seed appearance quality, with an average detection accuracy of 96.03%, precision of 96.27%, recall of 96.03%, and F1 score of 96.15%, and an average detection time of about 2.44 seconds per image. This study realized rapid nondestructive detection of seeds and provides a theoretical basis and technical support for the construction of intelligent seed-sorting equipment.
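The parameter saving from replacing a standard convolution with a depthwise separable one, as the abstract describes, can be checked with simple arithmetic (generic formulas; the 128-to-256-channel layer below is a hypothetical example, not a layer from the paper):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def dw_separable_params(c_in, c_out, k):
    """Depthwise k x k (one filter per input channel) plus 1x1 pointwise."""
    return k * k * c_in + c_in * c_out

std = conv_params(128, 256, 3)          # 3*3*128*256 = 294,912 weights
sep = dw_separable_params(128, 256, 3)  # 1,152 + 32,768 = 33,920 weights
```

For this hypothetical layer the separable form needs roughly 8.7× fewer weights, which is the kind of reduction that motivates the substitution in the improved Inception-ResNet.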
Affiliation(s)
- Xiaofei Fan
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, China
3. Bao Z, Li W, Chen J, Chen H, John V, Xiao C, Chen Y. Predicting and Visualizing Citrus Color Transformation Using a Deep Mask-Guided Generative Network. Plant Phenomics 2023;5:0057. PMID: 37292188. PMCID: PMC10246884. DOI: 10.34133/plantphenomics.0057.
Abstract
Citrus rind color is a good indicator of fruit development, and methods to monitor and predict color transformation therefore help inform crop management practices and harvest schedules. This work presents a complete workflow to predict and visualize citrus color transformation in the orchard with high accuracy and fidelity. A total of 107 sample Navel oranges were observed during the color transformation period, resulting in a dataset of 7,535 citrus images. A framework is proposed that integrates visual saliency into deep learning and consists of a segmentation network, a deep mask-guided generative network, and a loss network with manually designed loss functions. Moreover, the fusion of image features and temporal information enables a single model to predict the rind color at different time intervals, thus effectively shrinking the number of model parameters. The semantic segmentation network of the framework achieves a mean intersection-over-union score of 0.9694, and the generative network obtains a peak signal-to-noise ratio of 30.01 and a mean local style loss score of 2.710, which indicate both high quality and similarity of the generated images and are also consistent with human perception. To ease real-world application, the model was ported to an Android-based application for mobile devices. The methods can be readily extended to other fruit crops with a color transformation period. The dataset and the source code are publicly available at GitHub.
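The peak signal-to-noise ratio of 30.01 reported for the generative network follows the standard PSNR definition over mean squared error; a minimal sketch for 8-bit images (standard formula, not code from the paper):

```python
import numpy as np

def psnr(ref, gen, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a generated image."""
    mse = np.mean((ref.astype(float) - gen.astype(float)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10 * np.log10(peak ** 2 / mse)
```

Higher values mean the generated image deviates less from the reference; values around 30 dB are generally considered good quality for natural images.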
Affiliation(s)
- Zehan Bao
- College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
- Weifu Li
- College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
- Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education, Wuhan, China
- Jun Chen
- College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
- Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education, Wuhan, China
- Hong Chen
- College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
- Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education, Wuhan, China
- Vijay John
- RIKEN, Guardian Robot Project, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, 619-0288 Kyoto, Japan
- Chi Xiao
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570100, China
- Yaohui Chen
- College of Engineering, Huazhong Agricultural University, 430070 Wuhan, China
4. Huang C, Zhang Z, Zhang X, Jiang L, Hua X, Ye J, Yang W, Song P, Zhu L. A Novel Intelligent System for Dynamic Observation of Cotton Verticillium Wilt. Plant Phenomics 2023;5:0013. PMID: 37040292. PMCID: PMC10076053. DOI: 10.34133/plantphenomics.0013.
Abstract
Verticillium wilt is one of the most critical cotton diseases and is widely distributed in cotton-producing countries. However, the conventional method of verticillium wilt investigation is still manual, with the disadvantages of subjectivity and low efficiency. In this research, an intelligent vision-based system was proposed to dynamically observe cotton verticillium wilt with high accuracy and high throughput. First, a 3-coordinate motion platform was designed with a movement range of 6,100 mm × 950 mm × 500 mm, and a specific control unit was adopted to achieve accurate movement and automatic imaging. Second, verticillium wilt recognition was established based on six deep learning models, of which the VarifocalNet (VFNet) model had the best performance, with a mean average precision (mAP) of 0.932. Meanwhile, deformable convolution, deformable region-of-interest pooling, and soft non-maximum suppression optimization methods were adopted to improve VFNet, and the mAP of the VFNet-Improved model increased by 1.8%. The precision-recall curves showed that VFNet-Improved was superior to VFNet for each category, with a larger improvement on the ill leaf category than on the fine leaf category. The regression results showed that system measurements based on VFNet-Improved achieved high consistency with manual measurements. Finally, user software was designed based on VFNet-Improved, and the dynamic observation results proved that this system can accurately investigate cotton verticillium wilt and quantify the prevalence rate of different resistant varieties. In conclusion, this study demonstrates a novel intelligent system for the dynamic observation of cotton verticillium wilt on the seedbed, which provides a feasible and effective tool for cotton breeding and disease resistance research.
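One of the optimizations applied to VFNet is soft non-maximum suppression, which decays the scores of detections overlapping the current best box instead of discarding them outright. A minimal Gaussian soft-NMS sketch (generic algorithm; the box coordinates and σ below are illustrative, not the paper's settings):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def soft_nms(boxes, scores, sigma=0.5):
    """Gaussian soft-NMS: repeatedly take the top-scoring box and decay the rest."""
    boxes, scores = list(boxes), list(scores)
    out = []
    while boxes:
        i = int(np.argmax(scores))
        out.append((boxes[i], scores[i]))
        top = boxes.pop(i)
        scores.pop(i)
        # decay each remaining score by exp(-IoU^2 / sigma) against the kept box
        scores = [s * np.exp(-iou(top, b) ** 2 / sigma)
                  for s, b in zip(scores, boxes)]
    return out
```

Compared with hard NMS, overlapping detections (e.g., adjacent diseased leaves) survive with reduced scores rather than being deleted, which is the behavior this optimization targets.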
Affiliation(s)
- Chenglong Huang
- College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
- Zhongfu Zhang
- College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
- Xiaojun Zhang
- College of Plant Science & Technology, Huazhong Agricultural University, Wuhan 430070, PR China
- Li Jiang
- College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
- Xiangdong Hua
- College of Engineering, Huazhong Agricultural University, Wuhan 430070, PR China
- Junli Ye
- College of Plant Science & Technology, Huazhong Agricultural University, Wuhan 430070, PR China
- Wanneng Yang
- College of Plant Science & Technology, Huazhong Agricultural University, Wuhan 430070, PR China
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan 430070, PR China
- Peng Song
- College of Plant Science & Technology, Huazhong Agricultural University, Wuhan 430070, PR China
- Longfu Zhu
- College of Plant Science & Technology, Huazhong Agricultural University, Wuhan 430070, PR China
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan 430070, PR China
5. Yadav PK, Burks T, Frederick Q, Qin J, Kim M, Ritenour MA. Citrus disease detection using convolution neural network generated features and Softmax classifier on hyperspectral image data. Front Plant Sci 2022;13:1043712. PMID: 36570926. PMCID: PMC9768035. DOI: 10.3389/fpls.2022.1043712.
Abstract
Identification and segregation of citrus fruit with diseases and peel blemishes are required to preserve market value. Previously developed machine vision approaches could only distinguish cankerous from non-cankerous citrus, whereas this research focused on detecting eight different peel conditions on citrus fruit using hyperspectral imaging (HSI) and an AI-based classification algorithm. The objectives of this paper were: (i) selecting the five most discriminating bands among 92 using PCA, (ii) training and testing a custom convolutional neural network (CNN) model for classification with the selected bands, and (iii) comparing the CNN's performance using the five PCA-selected bands against five randomly selected bands. A hyperspectral imaging system from earlier work was used to acquire reflectance images in the spectral region from 450 to 930 nm (92 spectral bands). Ruby Red grapefruits with normal and cankerous peel and five other common peel conditions, including greasy spot, insect damage, melanose, scab, and wind scar, were tested. A novel CNN based on the VGG-16 architecture was developed for feature extraction, with Softmax for classification. The PCA-based bands were found to be 666.15, 697.54, 702.77, 849.24 and 917.25 nm, which resulted in an average accuracy, sensitivity, and specificity of 99.84%, 99.84% and 99.98%, respectively. However, 10 trials of five randomly selected bands resulted in only slightly lower performance, with accuracy, sensitivity, and specificity of 98.87%, 98.43% and 99.88%, respectively. These results demonstrate that an AI-based algorithm can successfully classify eight different peel conditions. The findings reported herein can be used as a precursor to develop a machine vision-based, real-time peel condition classification system for citrus processing.
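Band selection via PCA, as in objective (i), is commonly done by ranking bands by the magnitude of their loadings on a leading principal component; a minimal sketch under that assumption (the abstract does not give the paper's exact selection criterion):

```python
import numpy as np

def top_bands_by_pca(X, n_bands):
    """Rank spectral bands by |loading| on the first principal component.

    X: (n_pixels, n_band) reflectance matrix; returns indices of the
    n_bands bands contributing most to the leading component.
    """
    Xc = X - X.mean(axis=0)                 # center each band
    cov = np.cov(Xc, rowvar=False)          # band-by-band covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    pc1 = vecs[:, np.argmax(vals)]          # leading eigenvector (PC1 loadings)
    order = np.argsort(-np.abs(pc1))        # bands by contribution, descending
    return order[:n_bands]
```

With 92 real bands one would typically inspect several leading components, not just PC1; this sketch keeps only the simplest variant.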
Affiliation(s)
- Pappu Kumar Yadav
- Department of Agricultural and Biological Engineering, University of Florida, Gainesville, FL, United States
- Thomas Burks
- Department of Agricultural and Biological Engineering, University of Florida, Gainesville, FL, United States
- Quentin Frederick
- Department of Agricultural and Biological Engineering, University of Florida, Gainesville, FL, United States
- Jianwei Qin
- USDA/ARS Environmental Microbial and Food Safety Laboratory, Beltsville Agricultural Research Center, Beltsville, MD, United States
- Moon Kim
- USDA/ARS Environmental Microbial and Food Safety Laboratory, Beltsville Agricultural Research Center, Beltsville, MD, United States
- Mark A. Ritenour
- Department of Horticultural Sciences, University of Florida, Fort Pierce, FL, United States
6. Lee S, Choi G, Park HC, Choi C. Automatic Classification Service System for Citrus Pest Recognition Based on Deep Learning. Sensors (Basel) 2022;22:8911. PMID: 36433508. PMCID: PMC9692507. DOI: 10.3390/s22228911.
Abstract
Plant diseases are a major cause of reduced agricultural output, leading to severe economic losses and an unstable food supply. The citrus plant is an economically important fruit crop grown and produced worldwide. However, citrus plants are easily affected by various factors, such as climate change, pests, and diseases, resulting in reduced yield and quality. Advances in computer vision in recent years have been widely applied to plant disease detection and classification, providing opportunities for early disease detection and resulting in improvements in agriculture. In particular, early and accurate detection of diseases in citrus, which is vulnerable to pests, is very important to prevent the spread of pests and reduce crop damage. Research on citrus pests and diseases is ongoing, but it is difficult to apply research results to cultivation owing to a lack of datasets and the limited range of pest types covered. In this study, we built a dataset by self-collecting a total of 20,000 citrus pest images, including fruits and leaves, from actual cultivation sites. The constructed dataset was used to train, validate, and test five transfer learning models. All models used in the experiment achieved an average accuracy of 97% or more and an average F1 score of 96% or more. We built a web application server using the EfficientNet-b0 model, which exhibited the best performance among the five models. The web application was tested on citrus pest images collected from websites in addition to the self-collected samples and prepared data, and both sets were classified correctly. The citrus pest automatic diagnosis web system using the model proposed in this study plays a useful auxiliary role in recognizing and classifying citrus diseases and can, in turn, help improve the overall quality of citrus fruits.
Affiliation(s)
- Saebom Lee
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, Republic of Korea
- Gyuho Choi
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, Republic of Korea
- Hyun-Cheol Park
- Department of AI Software, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, Republic of Korea
- Chang Choi
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, Republic of Korea
7. Niu Q, Liu J, Jin Y, Chen X, Zhu W, Yuan Q. Tobacco shred varieties classification using Multi-Scale-X-ResNet network and machine vision. Front Plant Sci 2022;13:962664. PMID: 36061766. PMCID: PMC9433752. DOI: 10.3389/fpls.2022.962664.
Abstract
The primary task in calculating the tobacco shred blending ratio is identifying the four tobacco shred types: expanded tobacco silk, cut stem, tobacco silk, and reconstituted tobacco shred. The classification precision directly affects the subsequent determination of tobacco shred components. However, the tobacco shred types, especially expanded tobacco silk and tobacco silk, have no apparent differences in macro-scale characteristics. Tobacco shreds are also small and irregularly shaped, which creates significant challenges for their recognition and classification based on machine vision. This study provides a complete solution to this problem, covering sample screening, image acquisition, image preprocessing, dataset construction, and type identification. A block threshold binarization method is used for image preprocessing; parameter settings and method performance were studied to obtain the maximum number of complete samples within an acceptable execution time. ResNet50 is used as the primary classification and recognition network structure. By adding a multi-scale structure and optimizing the number of blocks and the loss function, a new tobacco shred image classification method is proposed based on the MS-X-ResNet (Multi-Scale-X-ResNet) network. Specifically, the MS-ResNet network is obtained by fusing the multi-scale Stage 3 low-dimensional and Stage 4 high-dimensional features to reduce the overfitting risk. The number of blocks in Stages 1-4 is adjusted from the original 3:4:6:3 to 3:4:N:3 (A-ResNet) and 3:3:N:3 (B-ResNet) to obtain the X-ResNet network, which improves the model's classification performance at lower complexity. The focal loss function is selected to reduce the impact of differing identification difficulty across sample types and improve network performance. The experimental results show that the final classification accuracy of the network on a tobacco shred dataset is 96.56%, and recognition of a single tobacco shred image requires 103 ms, achieving high classification accuracy and efficiency. The image preprocessing and deep learning algorithms for tobacco shred classification and identification proposed in this study provide a new implementation approach for actual tobacco production and quality detection, as well as a new way to perform online real-time type identification of other agricultural products.
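The focal loss selected to handle samples of differing identification difficulty down-weights well-classified examples; a binary-case sketch of the standard formulation FL(p_t) = -α_t (1 - p_t)^γ log(p_t) (standard definition; the γ and α values are the common defaults, not necessarily the paper's settings):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive class; y: label in {0, 1}.
    The (1 - p_t)**gamma factor shrinks the loss of easy, well-classified
    samples so training focuses on hard ones.
    """
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With γ = 0 the expression reduces to weighted cross-entropy; increasing γ suppresses easy samples more strongly, which is the effect exploited for the hard-to-distinguish shred types.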
Affiliation(s)
- Qunfeng Niu
- School of Electrical Engineering, Henan University of Technology, Zhengzhou, China
- Jiangpeng Liu
- School of Electrical Engineering, Henan University of Technology, Zhengzhou, China
- Yi Jin
- Anyang Cigarette Factory, China Tobacco Henan Industrial Co., Ltd., Anyang, China
- Xia Chen
- Anyang Cigarette Factory, China Tobacco Henan Industrial Co., Ltd., Anyang, China
- Wenkui Zhu
- Zhengzhou Tobacco Research Institute of China National Tobacco Corporation (CNTC), Zhengzhou, China
- Qiang Yuan
- School of Electrical Engineering, Henan University of Technology, Zhengzhou, China
8. Chacon WDC, dos Santos Alves MJ, Monteiro AR, González SYG, Ayala Valencia G. Image analysis applied to control postharvest maturity of papayas (Carica papaya L.). J Food Process Pres 2022. DOI: 10.1111/jfpp.16999.
Affiliation(s)
- Germán Ayala Valencia
- Department of Chemical and Food Engineering, Federal University of Santa Catarina, Florianópolis, SC, Brazil
9. Gu J, Zhang Y, Yin Y, Wang R, Deng J, Zhang B. Surface Defect Detection of Cabbage Based on Curvature Features of 3D Point Cloud. Front Plant Sci 2022;13:942040. PMID: 35909747. PMCID: PMC9331920. DOI: 10.3389/fpls.2022.942040.
Abstract
The dents and cracks of cabbage caused by mechanical damage during transportation directly affect both commercial value and storage time. In this study, a method for surface defect detection of cabbage is proposed based on the curvature features of the 3D point cloud. First, the red-green-blue (RGB) images and depth images are collected using a RealSense D455 depth camera for 3D point cloud reconstruction. Next, the region of interest (ROI) is extracted by statistical filtering and a Euclidean clustering segmentation algorithm, and the 3D point cloud of the cabbage is segmented from background noise. The curvature features of the 3D point cloud are then calculated from normal vectors estimated by least-squares plane fitting. Finally, the curvature threshold is determined according to the curvature characteristic parameters, and the surface defect type and area can be detected. Flat-headed and round-headed cabbages were selected to test for the surface damage of dents and cracks. The test results show that the average detection accuracy of the proposed method is 96.25%, with an average detection accuracy of 93.3% for dents and 96.67% for cracks, suggesting high detection accuracy and good adaptability for various cabbages. This study provides important technical support for automatic and non-destructive detection of cabbage surface defects.
Affiliation(s)
- Jin Gu
- College of Engineering, China Agricultural University, Beijing, China
- Yawei Zhang
- College of Engineering, China Agricultural University, Beijing, China
- Yanxin Yin
- Research Center of Intelligent Equipment, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- National Research Center of Intelligent Equipment for Agriculture, Beijing, China
- Ruixue Wang
- Chinese Academy of Agricultural Mechanization Sciences Group Co., Ltd., Beijing, China
- Junwen Deng
- College of Engineering, China Agricultural University, Beijing, China
- Bin Zhang
- College of Engineering, China Agricultural University, Beijing, China
10. Wang C, Liu S, Wang Y, Xiong J, Zhang Z, Zhao B, Luo L, Lin G, He P. Application of Convolutional Neural Network-Based Detection Methods in Fresh Fruit Production: A Comprehensive Review. Front Plant Sci 2022;13:868745. PMID: 35651761. PMCID: PMC9149381. DOI: 10.3389/fpls.2022.868745.
Abstract
As one of the representative algorithms of deep learning, the convolutional neural network (CNN), with the advantages of local perception and parameter sharing, has developed rapidly. CNN-based detection technology has been widely used in computer vision, natural language processing, and other fields. Fresh fruit production is an important socioeconomic activity, and CNN-based deep learning detection technology has been successfully applied to its important links. To the best of our knowledge, this review is the first to cover the whole production process of fresh fruit. We first introduce the network architecture and implementation principle of CNNs and describe the training process of a CNN-based deep learning model in detail. A large number of articles were investigated that have made breakthroughs, using CNN-based deep learning detection technology, on challenges in important links of fresh fruit production, including fruit flower detection, fruit detection, fruit harvesting, and fruit grading. Object detection based on CNN deep learning is elaborated from data acquisition to model training, and different CNN-based detection methods are compared for each link of fresh fruit production. The investigation shows that improved CNN deep learning models can realize their full detection potential when combined with the characteristics of each link of fruit production. The results also imply that CNN-based detection may overcome the challenges posed by environmental issues, new area exploration, and multi-task execution in fresh fruit production in the future.
Affiliation(s)
- Chenglin Wang
- Faculty of Modern Agricultural Engineering, Kunming University of Science and Technology, Kunming, China
- School of Intelligent Manufacturing Engineering, Chongqing University of Arts and Sciences, Chongqing, China
- Suchun Liu
- School of Intelligent Manufacturing Engineering, Chongqing University of Arts and Sciences, Chongqing, China
- Yawei Wang
- School of Intelligent Manufacturing Engineering, Chongqing University of Arts and Sciences, Chongqing, China
- Juntao Xiong
- College of Mathematics and Informatics, South China Agricultural University, Guangzhou, China
- Zhaoguo Zhang
- Faculty of Modern Agricultural Engineering, Kunming University of Science and Technology, Kunming, China
- Bo Zhao
- Chinese Academy of Agricultural Mechanization Sciences, Beijing, China
- Lufeng Luo
- School of Mechatronic Engineering and Automation, Foshan University, Foshan, China
- Guichao Lin
- School of Mechanical and Electrical Engineering, Zhongkai University of Agriculture and Engineering, Guangzhou, China
- Peng He
- School of Electronic and Information Engineering, Taizhou University, Taizhou, China
11. Gao S, Kang H, An X, Cheng Y, Chen H, Chen Y, Li S. Non-destructive Storage Time Prediction of Newhall Navel Oranges Based on the Characteristics of Rind Oil Glands. Front Plant Sci 2022;13:811630. PMID: 35422823. PMCID: PMC9002176. DOI: 10.3389/fpls.2022.811630.
Abstract
A non-destructive and rapid way to estimate the storage time of citrus fruit is urgently needed for freshness control in the fruit market. As a feasibility study, this paper presents a non-destructive method for storage time prediction of Newhall navel oranges by investigating the characteristics of the rind oil glands. Through observation with a digital microscope, the oil glands were divided into three types, and the change in their proportions could indicate the rind status as well as the storage time. Images of the rind of the oranges were taken at intervals of 10 days for 40 days and used to train and test the proposed prediction models based on K-Nearest Neighbors (KNN) and deep learning algorithms, respectively. The KNN-based model demonstrated explicit features for storage time prediction based on the gland characteristics and reached a high accuracy of 93.0%, and the deep learning-based model attained an even higher accuracy of 96.0% owing to its strong adaptability and robustness. The workflow presented can be readily replicated to develop non-destructive, efficient, and accurate methods to predict the storage time of other types of citrus fruit with similar oil gland characteristics under different storage conditions.
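A KNN model over gland-type proportions, as in the abstract, reduces to majority voting among the nearest feature vectors; a minimal sketch (the three-proportion feature vectors and class labels below are hypothetical, not the paper's data):

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.linalg.norm(np.asarray(train_X, dtype=float) - np.asarray(x, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]             # indices of the k closest samples
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)  # most common label among neighbors
```

Here each training vector would hold the measured proportions of the three gland types for one rind image, and the label would be its storage-time bin; the actual feature definition and k used in the paper are not given in the abstract.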
Affiliation(s)
- Shumin Gao
- College of Engineering, Huazhong Agricultural University, Wuhan, China
- Key Laboratory of Agricultural Equipment in Mid-Lower Yangtze River, Ministry of Agriculture and Rural Affairs, Wuhan, China
- Hanwen Kang
- Department of Aerospace and Mechanical Engineering, Monash University, Clayton, VIC, Australia
- Xiaosong An
- College of Engineering, Huazhong Agricultural University, Wuhan, China
- Key Laboratory of Agricultural Equipment in Mid-Lower Yangtze River, Ministry of Agriculture and Rural Affairs, Wuhan, China
- Yunjiang Cheng
- College of Horticulture and Forestry Science, Huazhong Agricultural University, Wuhan, China
- National R&D Center for Citrus Preservation, Wuhan, China
- Hong Chen
- College of Engineering, Huazhong Agricultural University, Wuhan, China
- Key Laboratory of Agricultural Equipment in Mid-Lower Yangtze River, Ministry of Agriculture and Rural Affairs, Wuhan, China
- Yaohui Chen
- College of Engineering, Huazhong Agricultural University, Wuhan, China
- Key Laboratory of Agricultural Equipment in Mid-Lower Yangtze River, Ministry of Agriculture and Rural Affairs, Wuhan, China
- National R&D Center for Citrus Preservation, Wuhan, China
- Shanjun Li
- College of Engineering, Huazhong Agricultural University, Wuhan, China
- Key Laboratory of Agricultural Equipment in Mid-Lower Yangtze River, Ministry of Agriculture and Rural Affairs, Wuhan, China
- National R&D Center for Citrus Preservation, Wuhan, China