1
Yu H, Dong M, Zhao R, Zhang L, Sui Y. Research on precise phenotype identification and growth prediction of lettuce based on deep learning. Environmental Research 2024; 252:118845. [PMID: 38570128] [DOI: 10.1016/j.envres.2024.118845]
Abstract
In recent years, precision agriculture, driven by scientific monitoring, precise management, and efficient use of agricultural resources, has become the direction for future agricultural development. The precise identification and assessment of phenotypes, which serve as external representations of a crop's growth, development, and genetic characteristics, are crucial for the realization of precision agriculture. Applications built around phenotypic indices also provide significant technical support for optimizing crop cultivation management and advancing smart agriculture, contributing to the efficient and high-quality development of precision agriculture. This paper focuses on lettuce and employs common nutritional stress conditions during growth as experimental settings. By collecting RGB images throughout the lettuce's complete growth cycle, we developed a deep learning-based computational model to tackle key issues in lettuce growth and precisely identify and assess phenotypic indices. We found that some phenotypic indices, including custom ones defined in this study, are representative of the lettuce's growth status. By dynamically monitoring the changes in phenotypic traits during growth, we quantitatively analyzed the accumulation and evolution of phenotypic indices across different growth stages. On this basis, a predictive model for lettuce growth and development was trained. The model incorporates MSE, SSIM, and perceptual loss, significantly enhancing the predictive accuracy of the lettuce growth images and phenotypic indices. The model trained with the reconstructed loss function outperforms the original model, with the SSIM and PSNR improving by 1.33% and 10.32%, respectively. The model also demonstrates high accuracy in predicting lettuce phenotypic indices, with an average error of less than 0.55% for geometric indices and less than 1.7% for color and texture indices. Ultimately, it achieves intelligent monitoring and management throughout the lettuce's life cycle, providing technical support for high-quality and efficient lettuce production.
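For orientation, a minimal NumPy sketch of how the PSNR metric quoted above is computed and how a composite objective of this kind can be assembled; the loss weights and the SSIM/perceptual terms are placeholders, not the values used in the paper.

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between two images scaled to [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def composite_loss(mse_term, ssim_term, perceptual_term,
                   w_mse=1.0, w_ssim=0.5, w_perc=0.1):
    """Weighted sum of MSE, (1 - SSIM), and a perceptual term.
    The weights here are illustrative; the paper does not report its values in the abstract."""
    return w_mse * mse_term + w_ssim * (1.0 - ssim_term) + w_perc * perceptual_term

# toy usage with random stand-ins for a predicted and a ground-truth growth image
pred, truth = np.random.rand(64, 64), np.random.rand(64, 64)
print(psnr(pred, truth))
```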
Affiliation(s)
- Haiye Yu: College of Biological and Agricultural Engineering, Jilin University, Changchun 130022, Jilin, China
- Mo Dong: Mudanjiang Medical University, Mudanjiang 157000, Heilongjiang, China
- Ruohan Zhao: Mudanjiang Medical University, Mudanjiang 157000, Heilongjiang, China
- Lei Zhang: College of Biological and Agricultural Engineering, Jilin University, Changchun 130022, Jilin, China
- Yuanyuan Sui: College of Biological and Agricultural Engineering, Jilin University, Changchun 130022, Jilin, China
2
Kim JSG, Moon S, Park J, Kim T, Chung S. Development of a machine vision-based weight prediction system of butterhead lettuce (Lactuca sativa L.) using deep learning models for industrial plant factory. Frontiers in Plant Science 2024; 15:1365266. [PMID: 38903437] [PMCID: PMC11188371] [DOI: 10.3389/fpls.2024.1365266]
Abstract
Introduction Indoor agriculture, especially plant factories, is becoming essential because of the advantage of cultivating crops year-round to address global food shortages. Plant factories have been growing in scale as they are commercialized. Developing an on-site system that estimates the fresh weight of crops non-destructively for decision-making on harvest time is necessary to maximize yield and profits. However, a multi-layer growing environment with on-site workers is too confined and crowded to develop a high-performance system. This research developed a machine vision-based fresh weight estimation system to monitor crops from the transplant stage to harvest with less physical labor in an on-site industrial plant factory. Methods A linear motion guide with a camera rail moving in both the x-axis and y-axis directions was produced and mounted on a cultivating rack with a height under 35 cm to obtain consistent top-view images of the crops. A Raspberry Pi 4 controlled its operation to capture images automatically every hour. The fresh weight was manually measured eleven times over four months to serve as the ground-truth weight for the models. The acquired images were preprocessed and used to develop weight prediction models based on manual and automatic feature extraction. Results and discussion The performance of the models was compared, and the best among them was the automatic feature-extraction-based model using a convolutional neural network (CNN; ResNet18). The CNN-based model on automatic feature extraction from images performed much better than any of the manual feature-extraction-based models, with a coefficient of determination (R2) of 0.95 and a root mean square error (RMSE) of 8.06 g. However, another multilayer perceptron model (MLP_2) was more appropriate for on-site adoption, since it showed around nine times faster inference than the CNN with a slightly lower R2 (0.93). With this system, field workers in a confined indoor farming environment can measure the fresh weight of crops easily and non-destructively. In addition, it would help to decide when to harvest on the spot.
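A minimal sketch of the kind of CNN regression described above: a ResNet18 backbone whose classifier is replaced by a single-output head, assuming PyTorch and a recent torchvision; the input size and training targets are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class FreshWeightRegressor(nn.Module):
    """ResNet18 backbone with its classifier replaced by a 1-unit regression head."""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # pretrained weights optional
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):                       # x: (batch, 3, 224, 224) top-view RGB
        return self.backbone(x).squeeze(-1)     # predicted fresh weight, e.g. in grams

model = FreshWeightRegressor()
images = torch.randn(4, 3, 224, 224)
targets = torch.rand(4) * 200                   # hypothetical ground-truth weights (g)
loss = nn.MSELoss()(model(images), targets)
print(loss.item())
```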
Affiliation(s)
- Jung-Sun Gloria Kim: Department of Biosystems Engineering, Seoul National University, Seoul, Republic of Korea; Integrated Major in Global Smart Farm, Seoul National University, Seoul, Republic of Korea
- Seongje Moon: Department of Biosystems Engineering, Seoul National University, Seoul, Republic of Korea
- Junyoung Park: Department of Biosystems Engineering, Seoul National University, Seoul, Republic of Korea; Integrated Major in Global Smart Farm, Seoul National University, Seoul, Republic of Korea
- Taehyeong Kim: Department of Biosystems Engineering, Seoul National University, Seoul, Republic of Korea; Integrated Major in Global Smart Farm, Seoul National University, Seoul, Republic of Korea; Research Institute of Agriculture and Life Sciences, Seoul National University, Seoul, Republic of Korea
- Soo Chung: Department of Biosystems Engineering, Seoul National University, Seoul, Republic of Korea; Integrated Major in Global Smart Farm, Seoul National University, Seoul, Republic of Korea; Research Institute of Agriculture and Life Sciences, Seoul National University, Seoul, Republic of Korea
3
Ye Z, Tan X, Dai M, Chen X, Zhong Y, Zhang Y, Ruan Y, Kong D. A hyperspectral deep learning attention model for predicting lettuce chlorophyll content. Plant Methods 2024; 20:22. [PMID: 38310270] [PMCID: PMC10838441] [DOI: 10.1186/s13007-024-01148-9]
Abstract
BACKGROUND The phenotypic traits of leaves are the direct reflection of the agronomic traits in the growth process of leafy vegetables, which play a vital role in the selection of high-quality leafy vegetable varieties. Current image-based phenotypic trait extraction research mainly focuses on the morphological and structural traits of plants or leaves, and there are few studies on the phenotypes of physiological traits of leaves. This study developed a deep learning model aimed at predicting the total chlorophyll of greenhouse lettuce directly from the full spectrum of hyperspectral images. RESULTS A one-dimensional CNN-based deep learning model with a spectral attention module was used to estimate the total chlorophyll of greenhouse lettuce from the full spectrum of hyperspectral images. Experimental results demonstrate that the deep neural network with the spectral attention module outperformed existing standard approaches, including partial least squares regression (PLSR) and random forest (RF), with an average R2 of 0.746 and an average RMSE of 2.018. CONCLUSIONS This study unveils the capability of leveraging deep attention networks and hyperspectral imaging for estimating lettuce chlorophyll levels. This approach offers a convenient, non-destructive, and effective estimation method for the automatic monitoring and production management of leafy vegetables.
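One way such a one-dimensional CNN with a spectral attention module can be laid out, sketched in PyTorch; the band count, layer sizes, and squeeze-and-excitation-style gate are assumptions for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class SpectralAttention(nn.Module):
    """Squeeze-and-excitation-style gate that reweights each spectral band."""
    def __init__(self, n_bands):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(n_bands, n_bands // 4), nn.ReLU(),
            nn.Linear(n_bands // 4, n_bands), nn.Sigmoid())

    def forward(self, x):                   # x: (batch, 1, n_bands)
        weights = self.gate(x.mean(dim=1))  # (batch, n_bands)
        return x * weights.unsqueeze(1)

class ChlorophyllNet(nn.Module):
    def __init__(self, n_bands=204):        # band count is a placeholder value
        super().__init__()
        self.attention = SpectralAttention(n_bands)
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.head = nn.Linear(32, 1)         # total chlorophyll content

    def forward(self, x):
        x = self.attention(x)
        return self.head(self.features(x).flatten(1)).squeeze(-1)

spectra = torch.randn(8, 1, 204)             # 8 full-spectrum samples
print(ChlorophyllNet()(spectra).shape)       # torch.Size([8])
```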
Affiliation(s)
- Ziran Ye: Institute of Digital Agriculture, Zhejiang Academy of Agricultural Sciences, Hangzhou 310021, Zhejiang, China
- Xiangfeng Tan: Institute of Digital Agriculture, Zhejiang Academy of Agricultural Sciences, Hangzhou 310021, Zhejiang, China
- Mengdi Dai: Institute of Digital Agriculture, Zhejiang Academy of Agricultural Sciences, Hangzhou 310021, Zhejiang, China
- Xuting Chen: Institute of Digital Agriculture, Zhejiang Academy of Agricultural Sciences, Hangzhou 310021, Zhejiang, China
- Yuanxiang Zhong: Institute of Digital Agriculture, Zhejiang Academy of Agricultural Sciences, Hangzhou 310021, Zhejiang, China
- Yi Zhang: Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China
- Yunjie Ruan: Institute of Agricultural Bio-Environmental Engineering, College of Bio-systems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China; Academy of Rural Development, Zhejiang University, Hangzhou 310058, China
- Dedong Kong: Institute of Digital Agriculture, Zhejiang Academy of Agricultural Sciences, Hangzhou 310021, Zhejiang, China
4
Ye Z, Tan X, Dai M, Lin Y, Chen X, Nie P, Ruan Y, Kong D. Estimation of rice seedling growth traits with an end-to-end multi-objective deep learning framework. Frontiers in Plant Science 2023; 14:1165552. [PMID: 37332711] [PMCID: PMC10272763] [DOI: 10.3389/fpls.2023.1165552]
Abstract
In recent years, rice seedling raising factories have gradually been promoted in China. The seedlings bred in the factory need to be selected manually and then transplanted to the field. Growth-related traits such as height and biomass are important indicators for quantifying the growth of rice seedlings. The development of image-based plant phenotyping has received increasing attention; however, there is still room for improvement in plant phenotyping methods to meet the demand for rapid, robust, and low-cost extraction of phenotypic measurements from images in environmentally controlled plant factories. In this study, a method based on convolutional neural networks (CNNs) and digital images was applied to estimate the growth of rice seedlings in a controlled environment. Specifically, an end-to-end framework consisting of hybrid CNNs took color images, the scaling factor, and the image acquisition distance as input and directly predicted the shoot height (SH) and shoot fresh weight (SFW) after image segmentation. The results on the rice seedling dataset collected by different optical sensors demonstrated that the proposed model outperformed the compared random forest (RF) and regression CNN (RCNN) models. The model achieved R2 values of 0.980 and 0.717, and normalized root mean square error (NRMSE) values of 2.64% and 17.23%, respectively. The hybrid CNN method can learn the relationship between digital images and seedling growth traits, and promises to provide a convenient and flexible estimation tool for the non-destructive monitoring of seedling growth in controlled environments.
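A compact PyTorch sketch of a hybrid network that fuses CNN image features with the two scalar inputs (scaling factor and acquisition distance) before a two-output regression head; the layer sizes are illustrative, not the published framework.

```python
import torch
import torch.nn as nn

class HybridSeedlingNet(nn.Module):
    """CNN branch for segmented RGB images plus an MLP branch for scalar metadata."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())           # -> 32 image features
        self.scalars = nn.Sequential(nn.Linear(2, 16), nn.ReLU())
        self.head = nn.Linear(32 + 16, 2)                    # [shoot height, shoot fresh weight]

    def forward(self, img, scale_factor, distance):
        meta = torch.stack([scale_factor, distance], dim=1)
        fused = torch.cat([self.cnn(img), self.scalars(meta)], dim=1)
        return self.head(fused)

img = torch.randn(4, 3, 128, 128)                            # segmented seedling images
out = HybridSeedlingNet()(img, torch.rand(4), torch.rand(4))
print(out.shape)                                             # torch.Size([4, 2])
```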
Affiliation(s)
- Ziran Ye: Institute of Digital Agriculture, Zhejiang Academy of Agricultural Sciences, Hangzhou, China
- Xiangfeng Tan: Institute of Digital Agriculture, Zhejiang Academy of Agricultural Sciences, Hangzhou, China
- Mengdi Dai: Institute of Digital Agriculture, Zhejiang Academy of Agricultural Sciences, Hangzhou, China
- Yue Lin: Institute of Spatial Information for City Brain (ISICA), Hangzhou City University, Hangzhou, China
- Xuting Chen: Institute of Digital Agriculture, Zhejiang Academy of Agricultural Sciences, Hangzhou, China
- Pengcheng Nie: Institute of Agricultural Bio-Environmental Engineering, College of Bio-systems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Yunjie Ruan: Institute of Agricultural Bio-Environmental Engineering, College of Bio-systems Engineering and Food Science, Zhejiang University, Hangzhou, China; Academy of Rural Development, Zhejiang University, Hangzhou, China
- Dedong Kong: Institute of Digital Agriculture, Zhejiang Academy of Agricultural Sciences, Hangzhou, China
5
Abebe AM, Kim Y, Kim J, Kim SL, Baek J. Image-Based High-Throughput Phenotyping in Horticultural Crops. Plants (Basel, Switzerland) 2023; 12:2061. [PMID: 37653978] [PMCID: PMC10222289] [DOI: 10.3390/plants12102061]
Abstract
Plant phenotyping is the primary task of any plant breeding program, and accurate measurement of plant traits is essential to select genotypes with better quality, high yield, and climate resilience. The majority of currently used phenotyping techniques are destructive and time-consuming. Recently, the development of various sensors and imaging platforms for rapid and efficient quantitative measurement of plant traits has become the mainstream approach in plant phenotyping studies. Here, we reviewed the trends in image-based high-throughput phenotyping methods applied to horticultural crops. High-throughput phenotyping is carried out using various types of imaging platforms developed for indoor or field conditions. We highlighted the applications of different imaging platforms in the horticulture sector along with their advantages and limitations. Furthermore, the principles and applications of commonly used imaging techniques for high-throughput plant phenotyping, visible light (RGB) imaging, thermal imaging, chlorophyll fluorescence, hyperspectral imaging, and tomographic imaging, are discussed. High-throughput phenotyping has been widely used to phenotype various horticultural traits, including morphological, physiological, biochemical, and yield traits as well as biotic and abiotic stress responses. Moreover, high-throughput phenotyping with various optical sensors is expected to lead to the discovery of new phenotypic traits that remain to be explored. We summarized the applications of image analysis for the quantitative evaluation of various traits with several examples of horticultural crops from the literature. Finally, we outlined the current trends in high-throughput phenotyping of horticultural crops and highlighted future perspectives.
Affiliation(s)
- Jeongho Baek: Department of Agricultural Biotechnology, National Institute of Agricultural Science, Rural Development Administration, Jeonju 54874, Republic of Korea
6
Gan F, Liu H, Qin WG, Zhou SL. Application of artificial intelligence for automatic cataract staging based on anterior segment images: comparing automatic segmentation approaches to manual segmentation. Frontiers in Neuroscience 2023; 17:1182388. [PMID: 37152605] [PMCID: PMC10159175] [DOI: 10.3389/fnins.2023.1182388]
Abstract
Purpose Cataract is one of the leading causes of blindness worldwide, accounting for >50% of cases of blindness in low- and middle-income countries. In this study, two artificial intelligence (AI) diagnosis platforms are proposed for cortical cataract staging to achieve a precise diagnosis. Methods A total of 647 high-quality anterior segment images, covering the four stages of cataract, were collected into the dataset. They were divided randomly into a training set and a test set using a stratified random-allocation technique at a ratio of 8:2. Then, after automatic or manual segmentation of the lens area of the cataract, deep transform-learning (DTL) feature extraction, PCA dimensionality reduction, multi-feature fusion, fusion-feature selection, and classification-model establishment, the automatic- and manual-segmentation DTL platforms were developed. Finally, the accuracy, confusion matrix, and area under the receiver operating characteristic (ROC) curve (AUC) were used to evaluate the performance of the two platforms. Results In the automatic segmentation DTL platform, the accuracy of the model in the training and test sets was 94.59 and 84.50%, respectively. In the manual segmentation DTL platform, the accuracy of the model in the training and test sets was 97.48 and 90.00%, respectively. In the test set, the micro and macro average AUCs of the two platforms reached >95% and the AUC for each classification was >90%. The confusion matrix showed that all stages, except for the mature stage, had a high recognition rate. Conclusion Two AI diagnosis platforms were proposed for cortical cataract staging. The resulting automatic segmentation platform can stage cataracts more quickly, whereas the resulting manual segmentation platform can stage cataracts more accurately.
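A much-simplified scikit-learn sketch of the feature-processing stages listed above (DTL features, PCA reduction, fusion, classification), run on synthetic feature vectors; the feature dimensions, concatenation-based fusion, and logistic-regression classifier are placeholders rather than the platforms built in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# stand-ins for DTL feature vectors extracted from lens-region images by two backbones
feats_a = rng.normal(size=(647, 512))
feats_b = rng.normal(size=(647, 2048))
labels = rng.integers(0, 4, size=647)            # four cataract stages

# reduce each feature set with PCA, then fuse by concatenation
reduced_a = PCA(n_components=32).fit_transform(feats_a)
reduced_b = PCA(n_components=32).fit_transform(feats_b)
fused = np.hstack([reduced_a, reduced_b])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.2,
                                          stratify=labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```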
Affiliation(s)
- Fan Gan: Medical College of Nanchang University, Nanchang, China; Department of Ophthalmology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Hui Liu: Department of Ophthalmology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Wei-Guo Qin: Department of Cardiothoracic Surgery, The 908th Hospital of Chinese People’s Liberation Army Joint Logistic Support Force, Nanchang, China
- Shui-Lian Zhou (corresponding author): Department of Ophthalmology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
7
Lüling N, Reiser D, Straub J, Stana A, Griepentrog HW. Fruit Volume and Leaf-Area Determination of Cabbage by a Neural-Network-Based Instance Segmentation for Different Growth Stages. Sensors (Basel, Switzerland) 2022; 23:129. [PMID: 36616727] [PMCID: PMC9824424] [DOI: 10.3390/s23010129]
Abstract
Fruit volume and leaf area are important indicators for drawing conclusions about the growth condition of a plant. However, current methods of manually measuring morphological plant properties, such as fruit volume and leaf area, are time-consuming and mainly destructive. In this research, an image-based approach for the non-destructive determination of fruit volume and total leaf area over three growth stages is presented for cabbage (Brassica oleracea). For this purpose, a mask region-based convolutional neural network (Mask R-CNN) based on a ResNet-101 backbone was trained to segment the cabbage fruit from the leaves and assign it to the corresponding plant. Combining the segmentation results with depth information through a structure-from-motion approach, the leaf length of single leaves, as well as the fruit volume of individual plants, can be calculated. The results indicated that, even with a single RGB camera, the developed methods provided a mean accuracy of 87% for fruit volume and 90.9% for total leaf area over three growth stages at the individual-plant level.
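For orientation, a minimal inference sketch of the instance-segmentation step with torchvision's Mask R-CNN (assuming a recent torchvision); the default ResNet-50 FPN backbone and the three assumed classes stand in for the ResNet-101-based model trained in the paper.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# untrained network used only to show the input/output interface;
# the paper fine-tunes a Mask R-CNN with a ResNet-101 backbone on cabbage images
model = maskrcnn_resnet50_fpn(weights=None, num_classes=3)  # background, fruit, leaf (assumed)
model.eval()

image = torch.rand(3, 512, 512)          # stand-in RGB image with values in [0, 1]
with torch.no_grad():
    prediction = model([image])[0]       # dict with 'boxes', 'labels', 'scores', 'masks'
print(prediction["masks"].shape)         # (num_detections, 1, 512, 512)
```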
8
Moon T, Kim D, Kwon S, Ahn TI, Son JE. Non-Destructive Monitoring of Crop Fresh Weight and Leaf Area with a Simple Formula and a Convolutional Neural Network. Sensors (Basel, Switzerland) 2022; 22:7728. [PMID: 36298080] [PMCID: PMC9607460] [DOI: 10.3390/s22207728]
Abstract
Crop fresh weight and leaf area are considered key growth factors due to their direct relation to vegetative growth and carbon assimilation. Several methods to measure these parameters have been introduced; however, measuring them with the existing methods can be difficult. Therefore, a non-destructive measurement method with high versatility is essential. The objective of this study was to establish a non-destructive monitoring system for estimating the fresh weight and leaf area of trellised crops. The data were collected from a greenhouse with sweet peppers (Capsicum annuum var. annuum); the target growth factors were the crop fresh weight and leaf area. The crop fresh weight was estimated based on the total system weight and volumetric water content using a simple formula. The leaf area was estimated using top-view images of the crops and a convolutional neural network (ConvNet). The estimated crop fresh weight and leaf area exhibited average R2 values of 0.70 and 0.95, respectively. The simple calculation was able to avoid overfitting with fewer limitations compared with the previous study. ConvNet was able to analyze raw images and evaluate the leaf area without additional sensors and features. As the simple calculation and ConvNet could adequately estimate the target growth factors, the monitoring system can be used for data collection in practice owing to its versatility. Therefore, the proposed monitoring system can be widely applied for diverse data analyses.
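The abstract does not spell out the simple formula, but one plausible mass-balance reading, crop fresh weight as the total system weight minus the container, dry substrate, and the substrate water inferred from the volumetric water content, can be sketched as follows; every term and number here is an assumption for illustration only.

```python
WATER_DENSITY = 1000.0  # kg per cubic metre

def estimate_fresh_weight(total_kg, container_kg, dry_substrate_kg,
                          substrate_volume_m3, vwc):
    """Crop fresh weight from a load-cell reading and a volumetric water content (VWC) sensor.
    A generic mass-balance sketch, not the formula published in the paper."""
    substrate_water_kg = vwc * substrate_volume_m3 * WATER_DENSITY
    return total_kg - container_kg - dry_substrate_kg - substrate_water_kg

# hypothetical numbers: 9.2 kg on the scale, 1.5 kg container, 0.8 kg dry substrate,
# 0.01 m^3 of substrate at 40% VWC -> 4 kg of water
print(estimate_fresh_weight(9.2, 1.5, 0.8, 0.01, 0.40))  # ~2.9 kg of crop
```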
Affiliation(s)
- Taewon Moon: Research Institute of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea; Department of Agriculture, Forestry and Bioresources, Seoul National University, Seoul 08826, Korea
- Dongpil Kim: Department of Agriculture, Forestry and Bioresources, Seoul National University, Seoul 08826, Korea
- Sungmin Kwon: Department of Agriculture, Forestry and Bioresources, Seoul National University, Seoul 08826, Korea
- Tae In Ahn: Research Institute of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea; Department of Agriculture, Forestry and Bioresources, Seoul National University, Seoul 08826, Korea
- Jung Eek Son: Research Institute of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea; Department of Agriculture, Forestry and Bioresources, Seoul National University, Seoul 08826, Korea
9
Zhang Q, Zhang X, Wu Y, Li X. TMSCNet: A three-stage multi-branch self-correcting trait estimation network for RGB and depth images of lettuce. Frontiers in Plant Science 2022; 13:982562. [PMID: 36119576] [PMCID: PMC9470961] [DOI: 10.3389/fpls.2022.982562]
Abstract
Growth traits, such as fresh weight, diameter, and leaf area, are pivotal indicators of growth status and the basis for the quality evaluation of lettuce. Manually measuring these lettuce traits, which is time-consuming, laborious, and inefficient, is still the mainstream approach. In this study, a three-stage multi-branch self-correcting trait estimation network (TMSCNet) for RGB and depth images of lettuce was proposed. The TMSCNet consisted of five models, of which two master models were used to preliminarily estimate the fresh weight (FW), dry weight (DW), height (H), diameter (D), and leaf area (LA) of lettuce, and three auxiliary models realized the automatic correction of the preliminary estimation results. To compare the performance, typical convolutional neural networks (CNNs) widely adopted in botany research were used. The results showed that the estimated values of the TMSCNet fitted the measurements well, with coefficient of determination (R2) values of 0.9514, 0.9696, 0.9129, 0.8481, and 0.9495, normalized root mean square error (NRMSE) values of 15.63, 11.80, 11.40, 10.18, and 14.65%, and a normalized mean squared error (NMSE) value of 0.0826, which was superior to the compared methods. Compared with previous studies on the estimation of lettuce traits, the performance of the TMSCNet was still better. The proposed method not only fully considered the correlation between different traits, designing a novel self-correcting structure on this basis, but also covered more lettuce traits than previous studies. The results indicated that the TMSCNet is an effective method to estimate lettuce traits and can be extended to high-throughput situations. Code is available at https://github.com/lxsfight/TMSCNet.git.
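For reference, the evaluation metrics quoted above can be computed as in this short NumPy sketch; these are standard definitions, and the paper may normalize NRMSE by a different quantity (e.g., the range rather than the mean).

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def nrmse(y_true, y_pred):
    """RMSE normalized by the mean of the measurements, as a percentage."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / np.mean(y_true)

def nmse(y_true, y_pred):
    """Mean squared error normalized by the variance of the measurements."""
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

y_true = np.array([120.0, 150.0, 90.0, 200.0])   # e.g. measured fresh weights (g)
y_pred = np.array([118.0, 160.0, 85.0, 190.0])
print(r2(y_true, y_pred), nrmse(y_true, y_pred), nmse(y_true, y_pred))
```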
Affiliation(s)
- Qinjian Zhang: School of Mechanical Electrical Engineering, Beijing Information Science and Technology University, Beijing, China
- Xiangyan Zhang: School of Mechanical Electrical Engineering, Beijing Information Science and Technology University, Beijing, China
- Yalin Wu: Lushan Botanical Garden, Chinese Academy of Sciences, Jiujiang, China
- Xingshuai Li: School of Mechanical Electrical Engineering, Beijing Information Science and Technology University, Beijing, China
10
Lin Z, Fu R, Ren G, Zhong R, Ying Y, Lin T. Automatic monitoring of lettuce fresh weight by multi-modal fusion based deep learning. Frontiers in Plant Science 2022; 13:980581. [PMID: 36092436] [PMCID: PMC9458202] [DOI: 10.3389/fpls.2022.980581]
Abstract
Fresh weight is a widely used growth indicator for quantifying crop growth. Traditional fresh weight measurement methods are time-consuming, laborious, and destructive. Non-destructive measurement of crop fresh weight is urgently needed in plant factories with high environmental controllability. In this study, we proposed a multi-modal fusion-based deep learning model for automatic estimation of lettuce shoot fresh weight from RGB-D images. The model combined geometric traits from empirical feature extraction with deep neural features from a CNN. A lettuce leaf segmentation network based on U-Net was trained to extract the leaf boundary and geometric traits. A multi-branch regression network was used to estimate fresh weight by fusing color, depth, and geometric features. The leaf segmentation model reported reliable performance with a mIoU of 0.982 and an accuracy of 0.998. A total of 10 geometric traits were defined to describe the structure of the lettuce canopy from segmented images. The fresh weight estimation results showed that the proposed multi-modal fusion model significantly improved the accuracy of lettuce shoot fresh weight estimation in different growth periods compared with baseline models. The model yielded a root mean square error (RMSE) of 25.3 g and a coefficient of determination (R2) of 0.938 over the entire lettuce growth period. The experimental results demonstrated that the multi-modal fusion method could improve fresh weight estimation performance by leveraging the advantages of empirical geometric traits and deep neural features simultaneously.
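As an illustration of how geometric traits can be extracted from a leaf-segmentation mask before fusion, a short OpenCV sketch; the three traits computed here (projected area, enclosing diameter, perimeter) are examples, not the ten traits defined in the paper.

```python
import cv2
import numpy as np

def canopy_traits(mask):
    """Simple geometric traits from a binary canopy mask (255 = lettuce, 0 = background)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    canopy = max(contours, key=cv2.contourArea)          # largest connected region
    area_px = cv2.contourArea(canopy)
    (_, _), radius = cv2.minEnclosingCircle(canopy)
    perimeter_px = cv2.arcLength(canopy, closed=True)
    return {"area_px": area_px, "diameter_px": 2 * radius, "perimeter_px": perimeter_px}

# toy mask with a filled circle standing in for a segmented lettuce canopy
mask = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(mask, (128, 128), 60, 255, thickness=-1)
print(canopy_traits(mask))
```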
Affiliation(s)
- Zhixian Lin: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Rongmei Fu: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Guoqiang Ren: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Renhai Zhong: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Yibin Ying: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China; Key Laboratory of Intelligent Equipment and Robotics for Agriculture of Zhejiang Province, Hangzhou, China
- Tao Lin: College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China; Key Laboratory of Intelligent Equipment and Robotics for Agriculture of Zhejiang Province, Hangzhou, China
11
Gang MS, Kim HJ, Kim DW. Estimation of Greenhouse Lettuce Growth Indices Based on a Two-Stage CNN Using RGB-D Images. Sensors (Basel, Switzerland) 2022; 22:5499. [PMID: 35898004] [PMCID: PMC9331482] [DOI: 10.3390/s22155499]
Abstract
Growth indices can quantify crop productivity and help establish optimal environmental, nutritional, and irrigation control strategies. A convolutional neural network (CNN)-based model is presented for estimating various growth indices (i.e., fresh weight, dry weight, height, leaf area, and diameter) of four varieties of greenhouse lettuce using red, green, blue, and depth (RGB-D) data obtained with a stereo camera. Data from an online autonomous greenhouse challenge (Wageningen University, June 2021) were employed in this study. The data were collected using an Intel RealSense D415 camera. The developed model has a two-stage CNN architecture based on ResNet50V2 layers. The developed model provided coefficients of determination from 0.88 to 0.95, with normalized root mean square errors of 6.09%, 6.30%, 7.65%, 7.92%, and 5.62% for fresh weight, dry weight, height, diameter, and leaf area, respectively, on unknown lettuce images. Using the red, green, blue (RGB) and depth data in the CNN improved the estimation accuracy for all five lettuce growth indices owing to the ability of the stereo camera to extract height information on lettuce. The average time for processing each lettuce image using the developed CNN model, run on a Jetson SUB mini-PC with a Jetson Xavier NX, was 0.83 s, indicating the potential of the model for fast real-time sensing of lettuce growth indices.
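A common way to feed RGB-D data into an ImageNet-style backbone is to widen the first convolution to four channels, as in this PyTorch sketch (assuming a recent torchvision); it is only an analogous illustration and not the two-stage ResNet50V2 model described above.

```python
import torch
import torch.nn as nn
from torchvision import models

def rgbd_resnet(num_outputs=5):
    """ResNet-50 adapted to 4-channel RGB-D input with a multi-output regression head.
    Illustrative only; not the two-stage ResNet50V2 model described in the paper."""
    net = models.resnet50(weights=None)
    net.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
    net.fc = nn.Linear(net.fc.in_features, num_outputs)  # FW, DW, height, diameter, leaf area
    return net

rgbd = torch.randn(2, 4, 224, 224)    # RGB + depth stacked as a 4-channel tensor
print(rgbd_resnet()(rgbd).shape)      # torch.Size([2, 5])
```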
Affiliation(s)
- Min-Seok Gang: Department of Biosystems Engineering, College of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea; Integrated Major in Global Smart Farm, College of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea
- Hak-Jin Kim: Department of Biosystems Engineering, College of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea; Integrated Major in Global Smart Farm, College of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea; Research Institute of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea
- Dong-Wook Kim: Department of Biosystems Engineering, College of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Korea
12
Du J, Li B, Lu X, Yang X, Guo X, Zhao C. Quantitative phenotyping and evaluation for lettuce leaves of multiple semantic components. Plant Methods 2022; 18:54. [PMID: 35468831] [PMCID: PMC9036747] [DOI: 10.1186/s13007-022-00890-2]
Abstract
BACKGROUND Classification and phenotype identification of lettuce leaves urgently require fine quantification of their multi-semantic traits. Different components of lettuce leaves undertake specific physiological functions and can be quantitatively described and interpreted using their observable properties. In particular, the petiole and veins determine the mechanical support and material-transport performance of leaves, while other components may be closely related to photosynthesis. Currently, lettuce leaf phenotyping does not accurately differentiate leaf components, and there is no comparative evaluation of the front and back sides of the same lettuce leaf. In addition, a few traits of leaf components can be measured manually, but doing so is time-consuming, laborious, and inaccurate. Although several studies have addressed image-based phenotyping of leaves, there is still a lack of robust methods to automatically extract and validate multi-semantic traits of large-scale lettuce leaves. RESULTS In this study, we developed an automated phenotyping pipeline to recognize the components of detached lettuce leaves and calculate multi-semantic traits for phenotype identification. Six semantic segmentation models were constructed to extract leaf components from visible images of lettuce leaves. Then, a leaf normalization technique was used to rotate and scale leaves of different sizes into a "size-free" space for consistent leaf phenotyping. A novel lamina-based approach was also utilized to determine the petiole, first-order vein, and second-order veins. The proposed pipeline contributed 30 geometry-, 20 venation-, and 216 color-based traits to characterize each lettuce leaf. Eleven manually measured traits were evaluated and demonstrated high correlations with the computed results. Further, front- and back-side images of leaves were used to verify the accuracy of the proposed method and evaluate trait differences. CONCLUSIONS The proposed method provides an effective strategy for quantitative analysis of the fine structure and components of detached lettuce leaves. Geometry, color, and vein traits of the lettuce leaf and its components can be comprehensively utilized for phenotype identification and breeding of lettuce. This study provides valuable perspectives for developing automated high-throughput phenotyping applications for lettuce leaves and improving agronomic traits such as effective photosynthetic area and vein configuration.
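A rough sketch of the kind of "size-free" normalization described above: rotate a leaf mask so that its principal axis is vertical, crop, and resize to a fixed size. The principal-axis rotation and the 256-pixel target are assumptions for illustration, not the exact procedure in the paper.

```python
import cv2
import numpy as np

def normalize_leaf(mask, target_size=256):
    """Rotate a binary leaf mask so its principal axis is vertical, crop, and resize."""
    ys, xs = np.nonzero(mask)
    coords = np.column_stack([xs, ys]).astype(np.float64)
    center = coords.mean(axis=0)
    # principal axis from the eigenvectors of the coordinate covariance matrix
    eigvals, eigvecs = np.linalg.eigh(np.cov((coords - center).T))
    major = eigvecs[:, np.argmax(eigvals)]
    angle = np.degrees(np.arctan2(major[1], major[0])) - 90.0
    h, w = mask.shape
    rot = cv2.getRotationMatrix2D((float(center[0]), float(center[1])), float(angle), 1.0)
    rotated = cv2.warpAffine(mask, rot, (w, h), flags=cv2.INTER_NEAREST)
    x, y, bw, bh = cv2.boundingRect(rotated)
    cropped = rotated[y:y + bh, x:x + bw]
    return cv2.resize(cropped, (target_size, target_size), interpolation=cv2.INTER_NEAREST)

# toy tilted ellipse standing in for a segmented leaf
mask = np.zeros((300, 300), dtype=np.uint8)
cv2.ellipse(mask, (150, 150), (40, 110), 30, 0, 360, 255, -1)
print(normalize_leaf(mask).shape)   # (256, 256)
```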
Affiliation(s)
- Jianjun Du: Beijing Key Lab of Digital Plant, Research Center of Information Technology, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Bo Li: Beijing Key Laboratory of Agricultural Genetic Resources and Biotechnology, Beijing Agro-Biotechnology Research Center, Beijing, China
- Xianju Lu: Beijing Key Lab of Digital Plant, Research Center of Information Technology, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Xiaozeng Yang: Beijing Key Laboratory of Agricultural Genetic Resources and Biotechnology, Beijing Agro-Biotechnology Research Center, Beijing, China
- Xinyu Guo: Beijing Key Lab of Digital Plant, Research Center of Information Technology, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Chunjiang Zhao: Beijing Key Lab of Digital Plant, Research Center of Information Technology, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
13
Buxbaum N, Lieth JH, Earles M. Non-destructive Plant Biomass Monitoring With High Spatio-Temporal Resolution via Proximal RGB-D Imagery and End-to-End Deep Learning. Frontiers in Plant Science 2022; 13:758818. [PMID: 35498682] [PMCID: PMC9043900] [DOI: 10.3389/fpls.2022.758818]
Abstract
Plant breeders, scientists, and commercial producers commonly use growth rate as an integrated signal of crop productivity and stress. Plant growth monitoring is often done destructively, with growth rate estimated by harvesting plants at different growth stages and simply weighing each individual plant. Within plant breeding and research applications, and more recently in commercial applications, non-destructive growth monitoring is done using computer vision to segment plants from the image background, either in 2D or 3D, and to relate these image-based features to destructive biomass measurements. Recent advancements in machine learning have improved image-based localization and detection of plants, but such techniques are not well suited to making biomass predictions when there is significant self-occlusion or occlusion from neighboring plants, such as those encountered in leafy green production in controlled environment agriculture. To enable prediction of plant biomass under occluded growing conditions, we develop an end-to-end deep learning approach that directly predicts lettuce plant biomass from color and depth image data as provided by a low-cost and commercially available sensor. We test the performance of the proposed deep neural network for lettuce production, observing a mean prediction error of 7.3% on a comprehensive test dataset of 864 individuals and substantially outperforming previous work on plant biomass estimation. The modeling approach is robust to the busy and occluded scenes often found in commercial leafy green production and requires only measured mass values for training. We then demonstrate that this level of prediction accuracy allows for rapid, non-destructive detection of changes in biomass accumulation due to experimentally induced stress in as little as 2 days. Using this method, growers may observe and react to changes in plant-environment interactions in near real time. Moreover, we expect that such a sensitive technique for non-destructive biomass estimation will enable novel research and breeding for improved productivity and yield in response to stress.
Affiliation(s)
- Nicolas Buxbaum: Department of Biological and Agricultural Engineering, University of California, Davis, Davis, CA, United States
- Johann Heinrich Lieth: Department of Plant Sciences, University of California, Davis, Davis, CA, United States
- Mason Earles: Department of Biological and Agricultural Engineering, University of California, Davis, Davis, CA, United States; Department of Viticulture and Enology, University of California, Davis, Davis, CA, United States
14
Soetedjo A, Hendriarianti E. Plant Leaf Detection and Counting in a Greenhouse during Day and Nighttime Using a Raspberry Pi NoIR Camera. Sensors (Basel, Switzerland) 2021; 21:6659. [PMID: 34640979] [PMCID: PMC8512127] [DOI: 10.3390/s21196659]
Abstract
A non-destructive method using machine vision is an effective way to monitor plant growth. However, due to lighting changes and complicated backgrounds in outdoor environments, this becomes a challenging task. In this paper, a low-cost camera system using an NoIR (no infrared filter) camera and a Raspberry Pi module is employed to detect and count the leaves of Ramie plants in a greenhouse. An infrared camera captures images of the leaves during the day and at night for a precise evaluation. The infrared images allow Otsu thresholding to be used for efficient leaf detection. A combination of several thresholds is introduced to increase the detection performance. Two approaches, a static-image method and an image-sequence method, are proposed. A watershed algorithm is then employed to separate the leaves of a plant. The experimental results show that the proposed leaf detection using static images achieves a high recall, precision, and F1 score of 0.9310, 0.9053, and 0.9167, respectively, with an execution time of 551 ms. The strategy of using sequences of images increases these values to 0.9619, 0.9505, and 0.9530, respectively, with an execution time of 516.30 ms. The proposed leaf counting achieves a difference in count (DiC) and absolute DiC (ABS_DiC) of 2.02 and 2.23, respectively, with an execution time of 545.41 ms. Moreover, the proposed method is evaluated using benchmark image datasets, showing that the foreground–background dice (FBD), DiC, and ABS_DiC are all within the average values of the existing techniques. The results suggest that the proposed system provides a promising method for real-time implementation.
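A condensed OpenCV sketch of the detection-and-separation idea described above (Otsu thresholding followed by watershed on distance-transform markers); the marker threshold and other parameters are generic defaults, not the tuned pipeline from the paper.

```python
import cv2
import numpy as np

def count_leaves(gray):
    """Rough leaf count: Otsu threshold, then watershed on distance-transform markers."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    n_markers, markers = cv2.connectedComponents(sure_fg.astype(np.uint8))
    markers = markers + 1                       # keep 0 free for the watershed boundary
    markers[binary == 0] = 1                    # background label
    color = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.watershed(color, markers)
    return n_markers - 1                        # number of foreground marker regions

# toy grayscale frame with two bright blobs standing in for leaves
frame = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(frame, (70, 100), 30, 200, -1)
cv2.circle(frame, (140, 100), 30, 200, -1)
print(count_leaves(frame))                      # 2
```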
Affiliation(s)
- Aryuanto Soetedjo (corresponding author): Department of Electrical Engineering, National Institute of Technology (ITN), Malang 65145, East Java, Indonesia
- Evy Hendriarianti: Department of Environmental Engineering, National Institute of Technology (ITN), Malang 65145, East Java, Indonesia