1.
Sampurno RM, Liu Z, Abeyrathna RMRD, Ahamed T. Intrarow Uncut Weed Detection Using You-Only-Look-Once Instance Segmentation for Orchard Plantations. Sensors (Basel) 2024; 24:893. PMID: 38339611; PMCID: PMC10857644; DOI: 10.3390/s24030893. [Received: 01/01/2024; Revised: 01/21/2024; Accepted: 01/28/2024]
Abstract
Mechanical weed management is a drudging task that requires manpower and carries risks when conducted within the rows of orchards. Intrarow weeding must still be conducted by manual labor because the confined orchard structure, with its nets and poles, restricts the movement of riding mowers within the rows. Moreover, autonomous robotic weeders still face challenges in identifying uncut weeds because poles and tree canopies obstruct Global Navigation Satellite System (GNSS) signals. A properly designed intelligent vision system has the potential to achieve the desired outcome by enabling an autonomous weeder to operate in uncut sections. Therefore, the objective of this study was to develop a vision module, trained on a custom dataset with YOLO instance segmentation algorithms, to support autonomous robotic weeders in recognizing uncut weeds and obstacles (i.e., fruit tree trunks and fixed poles) within rows. The training dataset was acquired from a pear orchard at the Tsukuba Plant Innovation Research Center (T-PIRC) of the University of Tsukuba, Japan. In total, 5000 images were preprocessed and labeled for training and testing the YOLO models. Four edge-device-dedicated YOLO instance segmentation variants were utilized in this research (YOLOv5n-seg, YOLOv5s-seg, YOLOv8n-seg, and YOLOv8s-seg) for real-time application on an autonomous weeder. A comparison study evaluated all the YOLO models in terms of detection accuracy, model complexity, and inference speed. The smaller YOLOv5- and YOLOv8-based models were found to be more efficient than the larger models, and YOLOv8n-seg was selected as the vision module for the autonomous weeder. In the evaluation, YOLOv8n-seg had better segmentation accuracy than YOLOv5n-seg, while the latter had the fastest inference time. The performance of YOLOv8n-seg also remained acceptable when deployed on a resource-constrained device appropriate for robotic weeders. The results indicated that the proposed deep learning-based vision module provides the detection accuracy and inference speed required for object recognition on edge devices during intrarow weeding operations in orchards.
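The segmentation accuracy compared above is typically scored by intersection-over-union (IoU) between predicted and ground-truth masks. As a minimal, hypothetical sketch (not the study's evaluation code), with binary masks represented as sets of (row, col) pixel coordinates:

```python
def mask_iou(mask_a, mask_b):
    """IoU between two binary masks given as sets of (row, col) pixel coordinates."""
    intersection = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return intersection / union if union else 0.0
```

A predicted weed mask that overlaps a labeled mask on one of three total pixels would score 1/3; evaluation frameworks usually threshold such scores (e.g., at 0.5) when computing mAP.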
Affiliation(s)
- Rizky Mulya Sampurno: Graduate School of Science and Technology, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan; Department of Agricultural and Biosystem Engineering, Universitas Padjadjaran, Jatinangor, Sumedang 45363, Indonesia
- Zifu Liu: Graduate School of Science and Technology, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
- R. M. Rasika D. Abeyrathna: Graduate School of Science and Technology, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan; Department of Agricultural Engineering, University of Peradeniya, Kandy 20400, Sri Lanka
- Tofael Ahamed: Faculty of Life and Environmental Science, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
2.
Kurniawan KIA, Putra AS, Ishizaki R, Rani DS, Rahmah DM, Al Husna SN, Ahamed T, Noguchi R. Life cycle assessment of integrated microalgae oil production in Bojongsoang Wastewater Treatment Plant, Indonesia. Environ Sci Pollut Res Int 2024; 31:7902-7933. PMID: 38168854; DOI: 10.1007/s11356-023-31582-6. [Received: 01/02/2023; Accepted: 12/12/2023]
Abstract
This study aims to determine the eco-friendliness of microalgae-based renewable energy production in several scenarios based on life cycle assessment (LCA). The LCA provides critical data for sustainable decision-making and energy requirement analysis, including the net energy ratio (NER) and cumulative energy demand (CED). The Centrum voor Milieuwetenschappen Leiden (CML) IA-Baseline method was applied for the environmental impact assessment in SimaPro v9.3.0.3® software, alongside an energy analysis of biofuel production using native polyculture microalgae biomass at the Bojongsoang municipal wastewater treatment plant (WWTP) in Bandung, Indonesia. Three scenarios were analyzed: (1) the current scenario; (2) the algae scenario without waste heat and carbon dioxide (CO2); and (3) the algae scenario with waste heat and CO2. The waste heat and CO2 were obtained from an industrial zone near the WWTP. The results disclosed that the microalgae scenario with waste heat and CO2 utilization is the most promising, with the lowest environmental impact (-0.139 kg CO2eq/MJ), a positive energy balance of 1.23 MJ/m3 of wastewater (NER > 1), and lower CED values across various impact categories. This indicates that utilizing the waste heat and CO2 has a positive impact on energy efficiency. Based on the environmental impact, NER, and CED values, this study suggests that the microalgae scenario with waste heat and CO2 is the more feasible and sustainable option and could be implemented at the Bojongsoang WWTP.
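The NER reported above is simply recovered energy divided by invested energy, with NER > 1 marking a net-positive system. A minimal sketch; the per-cubic-meter split below is hypothetical, chosen only so that the balance matches the reported 1.23 MJ/m3:

```python
def net_energy_ratio(output_mj, input_mj):
    """NER > 1 means the pathway yields more energy than it consumes."""
    return output_mj / input_mj

# Hypothetical per-m3 inventory values (not from the paper's actual inventory)
output_mj, input_mj = 6.58, 5.35
ner = net_energy_ratio(output_mj, input_mj)
balance_mj = output_mj - input_mj  # net energy balance per m3 of wastewater
```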
Affiliation(s)
- Agusta Samodra Putra: Research Center for Sustainable Production System and Life Cycle Assessment, National Research and Innovation Agency, Puspiptek Area, Serpong, 15314, Indonesia
- Devitra Saka Rani: Research Organization for Energy and Manufacture, National Research and Innovation Agency, Puspiptek Area, Serpong, 15314, Indonesia
- Devi Maulida Rahmah: Faculty of Agricultural Industrial Technology, Universitas Padjadjaran, Sumedang, Indonesia
- Shabrina Nida Al Husna: Department of Microbiology, School of Life Sciences and Technology, Institut Teknologi Bandung, Jl. Ganesa No.10, Lb. Siliwangi, Kecamatan Coblong, Kota Bandung, Jawa Barat, 40132, Indonesia
- Tofael Ahamed: Faculty of Life and Environmental Sciences, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Ryozo Noguchi: Laboratory of Agricultural Systems Engineering, Division of Environmental Science and Technology, Graduate School of Agriculture, Kyoto University, Kyoto, 606-8502, Japan
3.
Apacionado BV, Ahamed T. Sooty Mold Detection on Citrus Tree Canopy Using Deep Learning Algorithms. Sensors (Basel) 2023; 23:8519. PMID: 37896610; PMCID: PMC10610784; DOI: 10.3390/s23208519. [Received: 09/05/2023; Revised: 10/05/2023; Accepted: 10/13/2023]
Abstract
Sooty mold is a common disease of citrus plants and is characterized by black fungal growth on fruits, leaves, and branches. This mold reduces the plant's ability to carry out photosynthesis. On small leaves, sooty mold is very difficult to detect at the early stages. Deep learning-based image recognition techniques have the potential to identify and diagnose pest damage and diseases such as sooty mold. Recent studies used advanced and expensive hyperspectral or multispectral cameras attached to UAVs to examine plant canopies and mid-range cameras to capture close-up images of infected leaves. To bridge the gap in capturing canopy-level images with affordable camera sensors, this study used a low-cost home surveillance camera, combined with deep learning algorithms, to monitor and detect sooty mold infection on the citrus canopy. To overcome the challenges posed by varying light conditions, the main reason specialized cameras are used, images were collected at night utilizing the camera's built-in night vision feature. A total of 4200 sliced night-captured images were used for training, 200 for validation, and 100 for testing, applied to the YOLOv5m, YOLOv7, and CenterNet models for comparison. The results showed that YOLOv7 was the most accurate in detecting sooty mold at night, with a 74.4% mAP, compared to YOLOv5m (72%) and CenterNet (70.3%). The models were also tested using preprocessed (unsliced) night images and day-captured sliced and unsliced images. Testing on the preprocessed (unsliced) night images showed the same trend as the training results, with YOLOv7 performing best compared to YOLOv5m and CenterNet. In contrast, testing on the day-captured images produced underwhelming outcomes for both sliced and unsliced images. In general, YOLOv7 performed best in detecting sooty mold infections on the citrus canopy at night and showed promising potential for real-time orchard disease monitoring and detection. Moreover, this study demonstrated that a cost-effective surveillance camera and deep learning algorithms can accurately detect sooty mold at night, enabling growers to effectively monitor and identify occurrences of the disease at the canopy level.
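The "sliced" images above come from cutting full frames into fixed-size tiles before training. A minimal sketch of such tiling over pixel coordinates (the tile size and stride are illustrative, not the study's preprocessing settings):

```python
def tile_boxes(height, width, tile, stride=None):
    """Return (top, left, bottom, right) crop boxes tiling an image of the given size.

    With the default stride, tiles do not overlap; any right/bottom remainder
    smaller than one tile is left uncovered in this simple sketch.
    """
    stride = stride or tile
    boxes = []
    for top in range(0, max(height - tile, 0) + 1, stride):
        for left in range(0, max(width - tile, 0) + 1, stride):
            boxes.append((top, left, top + tile, left + tile))
    return boxes
```

Each crop box is then saved as one training image, which is how a few hundred frames can yield thousands of sliced samples.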
Affiliation(s)
- Bryan Vivas Apacionado: Graduate School of Science and Technology, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
- Tofael Ahamed: Institute of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
4.
Hamidon MH, Ahamed T. Detection of Defective Lettuce Seedlings Grown in an Indoor Environment under Different Lighting Conditions Using Deep Learning Algorithms. Sensors (Basel) 2023; 23:5790. PMID: 37447645; PMCID: PMC10346403; DOI: 10.3390/s23135790. [Received: 05/18/2023; Revised: 06/19/2023; Accepted: 06/19/2023]
Abstract
Sorting seedlings is laborious and requires attention to identify damage. Separating healthy seedlings from damaged or defective ones is a critical task in indoor farming systems. However, sorting seedlings manually can be challenging and time-consuming, particularly under complex lighting conditions. Different indoor lighting conditions can affect the visual appearance of the seedlings, making it difficult for human operators to identify and sort them accurately and consistently. Therefore, the objective of this study was to develop a defective-lettuce-seedling detection system for different indoor cultivation lighting systems using deep learning algorithms to automate the seedling sorting process. The seedling images were captured under different indoor lighting conditions, including white, blue, and red. The detection approach utilized and compared several deep learning algorithms, specifically CenterNet, YOLOv5, YOLOv7, and Faster R-CNN, to detect defective seedlings in indoor farming environments. The results demonstrated that the mean average precision (mAP) of YOLOv7 (97.2%) was the highest, accurately detecting defective lettuce seedlings compared to CenterNet (82.8%), YOLOv5 (96.5%), and Faster R-CNN (88.6%). In terms of detection under different light variables, YOLOv7 also showed the highest detection rate under white and red/blue/white lighting. Overall, the detection of defective lettuce seedlings by YOLOv7 shows great potential for introducing automated seedling-sorting and classification systems under actual indoor farming conditions. Defective-seedling detection can improve the efficiency of seedling-management operations in indoor farming.
Affiliation(s)
- Munirah Hayati Hamidon: Graduate School of Science and Technology, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
- Tofael Ahamed: Institute of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
5.
Jiang A, Ahamed T. Navigation of an Autonomous Spraying Robot for Orchard Operations Using LiDAR for Tree Trunk Detection. Sensors (Basel) 2023; 23:4808. PMID: 37430726; DOI: 10.3390/s23104808. [Received: 04/01/2023; Revised: 05/06/2023; Accepted: 05/10/2023]
Abstract
Traditional Japanese orchards control the growth height of fruit trees for the convenience of farmers, which is unfavorable to the operation of medium- and large-sized machinery. A compact, safe, and stable spraying system could offer a solution for orchard automation. In the complex orchard environment, the dense tree canopy not only obstructs the GNSS signal but also creates low-light conditions that may impair object recognition by ordinary RGB cameras. To overcome these disadvantages, this study selected LiDAR as the single sensor for a prototype robot navigation system. Density-based spatial clustering of applications with noise (DBSCAN), K-means, and random sample consensus (RANSAC) machine learning algorithms were used to plan the robot navigation path in a facilitated artificial-tree-based orchard system. Pure pursuit tracking and an incremental proportional-integral-derivative (PID) strategy were used to calculate the vehicle steering angle. In field tests involving several formations of separate left and right turns, the position root mean square error (RMSE) of the vehicle was 12.0 cm (right turn) and 11.6 cm (left turn) on a concrete road, 12.6 cm (right turn) and 15.5 cm (left turn) on a grass field, and 13.8 cm (right turn) and 11.4 cm (left turn) in the facilitated artificial-tree-based orchard. The vehicle was able to calculate the path in real time based on the positions of the objects, operate safely, and complete the task of pesticide spraying.
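Pure pursuit, mentioned above, steers toward a lookahead point on the planned path: for wheelbase L, bearing alpha to the point, and lookahead distance Ld, the steering angle is delta = atan(2·L·sin(alpha)/Ld). A minimal sketch of that geometry, together with the RMSE metric used to score the tracked paths (all parameter values in any usage are illustrative, not the robot's settings):

```python
import math

def pure_pursuit_steering(wheelbase, alpha, lookahead):
    """Steering angle (rad) toward a lookahead point at bearing alpha (rad), distance lookahead (m)."""
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

def rmse(errors):
    """Root mean square of lateral position errors (m)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

A lookahead point dead ahead (alpha = 0) yields zero steering, and the sign of alpha sets the turn direction, which is why the metric is reported separately for left and right turns.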
Affiliation(s)
- Ailian Jiang: Graduate School of Science and Technology, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
- Tofael Ahamed: Institute of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
6.
Abeyrathna RMRD, Nakaguchi VM, Minn A, Ahamed T. Recognition and Counting of Apples in a Dynamic State Using a 3D Camera and Deep Learning Algorithms for Robotic Harvesting Systems. Sensors (Basel) 2023; 23:3810. PMID: 37112151; PMCID: PMC10145955; DOI: 10.3390/s23083810. [Received: 03/05/2023; Revised: 03/23/2023; Accepted: 04/03/2023]
Abstract
Recognition and 3D positional estimation of apples during harvesting from a robotic platform on a moving vehicle are still challenging. Fruit clusters, branches, foliage, low resolution, and varying illumination are unavoidable and cause errors under different environmental conditions. Therefore, this research aimed to develop a recognition system based on training datasets from an augmented, complex apple orchard. The recognition system was evaluated using deep learning algorithms established from a convolutional neural network (CNN). The dynamic accuracy of modern artificial neural networks in providing 3D coordinates for deploying robotic arms at different forward-moving speeds of an experimental vehicle was investigated to compare recognition and tracking localization accuracy. In this study, a RealSense D455 RGB-D camera was selected to acquire the 3D coordinates of each detected and counted apple attached to artificial trees placed in the field, in order to propose a specially designed structure for ease of robotic harvesting. The 3D camera and the state-of-the-art YOLO (You Only Look Once) models YOLOv4, YOLOv5, and YOLOv7, as well as EfficientDet, were utilized for object detection. The Deep SORT algorithm was employed for tracking and counting the detected apples at perpendicular, 15°, and 30° orientations. The 3D coordinates were obtained for each tracked apple when the on-board camera in the vehicle passed the reference line set in the middle of the image frame. To optimize harvesting at three different speeds (0.052 m/s, 0.069 m/s, and 0.098 m/s), the accuracy of the 3D coordinates was compared for the three forward-moving speeds and three camera angles (15°, 30°, and 90°). The mean average precision (mAP@0.5) values of YOLOv4, YOLOv5, YOLOv7, and EfficientDet were 0.84, 0.86, 0.905, and 0.775, respectively. The lowest root mean square error (RMSE) was 1.54 cm, for apples detected by EfficientDet at a 15° orientation and a speed of 0.098 m/s. In terms of counting apples, YOLOv5 and YOLOv7 showed a higher number of detections in outdoor dynamic conditions, achieving a counting accuracy of 86.6%. We concluded that the EfficientDet deep learning algorithm at a 15° orientation in 3D coordinates can be employed for further robotic arm development for harvesting apples in a specially designed orchard.
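Turning a detected apple's pixel position and depth reading into a 3D coordinate is a standard pinhole back-projection. A minimal sketch (the intrinsics below are illustrative placeholders, not the D455's calibration):

```python
def deproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (m) to camera-frame (X, Y, Z) coordinates (m)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

RGB-D SDKs expose an equivalent call once the camera intrinsics are known; a pixel at the principal point (cx, cy) maps straight ahead to (0, 0, Z).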
Affiliation(s)
- R. M. Rasika D. Abeyrathna: Graduate School of Science and Technology, University of Tsukuba, Tennodai 1-1-1, Tsukuba 305-8577, Japan; Department of Agricultural Engineering, University of Peradeniya, Kandy 20400, Sri Lanka
- Victor Massaki Nakaguchi: Graduate School of Science and Technology, University of Tsukuba, Tennodai 1-1-1, Tsukuba 305-8577, Japan
- Arkar Minn: Graduate School of Science and Technology, University of Tsukuba, Tennodai 1-1-1, Tsukuba 305-8577, Japan; Department of Agricultural Engineering, Yezin Agricultural University, Nay Pyi Taw 150501, Myanmar
- Tofael Ahamed: Faculty of Life and Environmental Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba 305-8577, Japan
7.
Nakaguchi VM, Ahamed T. Fast and Non-Destructive Quail Egg Freshness Assessment Using a Thermal Camera and Deep Learning-Based Air Cell Detection Algorithms for the Revalidation of the Expiration Date of Eggs. Sensors (Basel) 2022; 22:7703. PMID: 36298055; PMCID: PMC9610913; DOI: 10.3390/s22207703. [Received: 09/16/2022; Revised: 10/06/2022; Accepted: 10/07/2022]
Abstract
Freshness is one of the most important parameters for assessing the quality of avian eggs. Available techniques to estimate the degradation of the albumen and the enlargement of the air cell are either destructive or not suitable for high-throughput applications. The aim of this research was to introduce a fast, noninvasive, and nondestructive approach to evaluating the air cell of quail eggs for freshness assessment. A new methodology was proposed using a thermal microcamera and deep learning object detection algorithms. To evaluate the new method, we stored 174 quail eggs and collected thermal images 30, 50, and 60 days after the labeled expiration date. These data, 522 images in total, were expanded to 3610 by image augmentation techniques and then split into training and validation samples to produce models of the deep learning algorithms "You Only Look Once" versions 4 and 5 (YOLOv4 and YOLOv5) and EfficientDet. We tested the models on a new dataset composed of 60 eggs that were kept for 15 days after the labeled expiration date. The methodology was validated by measuring the air cell area highlighted in the thermal images at the pixel level and comparing it with the difference in egg weight between the first day of storage and after 10 days under accelerated aging conditions. The statistical analysis showed that the two variables (air cell area and weight) were negatively correlated (R2 = 0.676). The deep learning models predicted freshness with F1 scores of 0.69, 0.89, and 0.86 for the YOLOv4, YOLOv5, and EfficientDet models, respectively. The new freshness assessment methodology showed that the best model reclassified 48.33% of our testing dataset, meaning those expired eggs could have their expiration date extended for another 2 weeks from the original label date.
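The reported R2 = 0.676 is the squared Pearson correlation between air cell area and weight change. A minimal sketch of that statistic (any sample data used with it here are made up for illustration, not the paper's measurements):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)
```

A negative r whose square is near 0.676 would reproduce the paper's finding that larger air cells accompany greater weight loss.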
Affiliation(s)
- Victor Massaki Nakaguchi: Graduate School of Science and Technology, University of Tsukuba, Tennodai 1-1-1, Tsukuba 305-8577, Ibaraki, Japan
- Tofael Ahamed: Faculty of Life and Environmental Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba 305-8577, Ibaraki, Japan
8.
Hamidon MH, Ahamed T. Detection of Tip-Burn Stress on Lettuce Grown in an Indoor Environment Using Deep Learning Algorithms. Sensors (Basel) 2022; 22:7251. PMID: 36236351; PMCID: PMC9571858; DOI: 10.3390/s22197251. [Received: 08/20/2022; Revised: 09/20/2022; Accepted: 09/20/2022]
Abstract
Lettuce grown in indoor farms under fully artificial light is susceptible to a physiological disorder known as tip-burn. A vital factor that controls plant growth in indoor farms is the ability to adjust the growing environment to promote faster crop growth. However, this rapid growth process exacerbates the tip-burn problem, especially for lettuce. This paper presents automated detection of tip-burn on lettuce grown indoors using deep learning algorithms based on one-stage object detectors. The tip-burn lettuce images were captured under various light and indoor background conditions (under white, red, and blue LEDs). After augmentation, a total of 2333 images were generated and used to train three different one-stage detectors, namely, CenterNet, YOLOv4, and YOLOv5. On the training dataset, all the models except YOLOv4 exhibited a mean average precision (mAP) greater than 80%. The most accurate model for detecting tip-burn was YOLOv5, which had the highest mAP of 82.8%. The performance of the trained models was also evaluated on images taken under different indoor farm light settings, including white, red, and blue LEDs. Again, YOLOv5 was significantly better than CenterNet and YOLOv4. Therefore, tip-burn on lettuce grown in indoor farms under different lighting conditions can be recognized using deep learning algorithms with reliable overall accuracy. Early detection of tip-burn can help growers readjust the lighting and controlled-environment parameters to increase the freshness of lettuce grown in plant factories.
Affiliation(s)
- Munirah Hayati Hamidon: Graduate School of Science and Technology, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
- Tofael Ahamed: Faculty of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
9.
Nakaguchi VM, Ahamed T. Development of an Early Embryo Detection Methodology for Quail Eggs Using a Thermal Micro Camera and the YOLO Deep Learning Algorithm. Sensors (Basel) 2022; 22:5820. PMID: 35957378; PMCID: PMC9371013; DOI: 10.3390/s22155820. [Received: 06/14/2022; Revised: 07/31/2022; Accepted: 08/02/2022]
Abstract
Poultry production utilizes many available technologies for farm-industry automation and sanitary control. However, there is a lack of robust techniques and affordable equipment for avian embryo detection and sexual segregation at the early stages. In this work, we aimed to evaluate the potential of thermal micro cameras for detecting embryos in quail eggs via thermal images during the first 168 h (7 days) of incubation, and we propose a methodology for collecting data during the incubation period. Additionally, to support the visual analysis, YOLO deep learning object detection algorithms were applied to detect unfertilized eggs; the results showed their potential to distinguish fertilized from unfertilized eggs during the incubation period after filtering the radiometric images. We compared trained YOLOv4, YOLOv5, and SSD-MobileNet V2 models. The mAP@0.50 of YOLOv4, YOLOv5, and SSD-MobileNet V2 was 98.62%, 99.5%, and 91.8%, respectively. We also compared three testing datasets for different egg-turning intervals, as our hypothesis was that fewer turning periods could improve the visualization of fertilized-egg features, applying three treatments: 1.5 h, 6 h, and 12 h. The results showed that turning the eggs at different intervals did not exhibit a linear relation: the F1 scores for YOLOv4 were 0.569 for the 12 h period, 0.404 for the 6 h period, and 0.384 for the 1.5 h period. The YOLOv5 F1 scores for 12 h, 6 h, and 1.5 h were 1, 0.545, and 0.386, respectively, and SSD-MobileNet V2 produced F1 scores of 0.60 for 12 h, 0.22 for 6 h, and 0 for the 1.5 h turning period.
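The F1 scores above are the harmonic mean of precision and recall over true positives, false positives, and false negatives. A minimal sketch of the metric (the counts in the usage note are illustrative, not the paper's confusion matrices):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```

For example, 8 correct detections with 2 false alarms and 2 misses gives precision = recall = 0.8, so F1 = 0.8; a model that detects nothing scores 0, which is how a treatment can produce an F1 of exactly 0.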
Affiliation(s)
- Victor Massaki Nakaguchi: Graduate School of Science and Technology, University of Tsukuba, Tennodai 1-1-1, Tsukuba 305-8577, Japan
- Tofael Ahamed: Faculty of Life and Environmental Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba 305-8577, Japan
10.
Pan S, Ahamed T. Pear Recognition in an Orchard from 3D Stereo Camera Datasets to Develop a Fruit Picking Mechanism Using Mask R-CNN. Sensors (Basel) 2022; 22:4187. PMID: 35684807; PMCID: PMC9185418; DOI: 10.3390/s22114187. [Received: 04/24/2022; Revised: 05/27/2022; Accepted: 05/27/2022]
Abstract
In orchard fruit picking systems for pears, the challenge is to identify the full shape of the soft fruit to avoid injuries while using robotic or automatic picking systems. Advancements in computer vision have brought the potential to train for different shapes and sizes of fruit using deep learning algorithms. In this research, a fruit recognition method for robotic systems was developed to identify pears in a complex orchard environment, using a 3D stereo camera combined with Mask Region-based Convolutional Neural Network (Mask R-CNN) deep learning technology to obtain targets. This experiment used 9054 RGBA original images (3018 original images and 6036 augmented images) to create a dataset divided into training, validation, and testing sets at a ratio of 6:3:1. The dataset was collected under different lighting conditions at different times, namely, high-light (9-10 a.m.) and low-light (6-7 p.m.) conditions (JST), in August 2021 (summertime). All the images were taken with a 3D stereo camera offering PERFORMANCE, QUALITY, and ULTRA depth modes; we used the PERFORMANCE mode to capture the images for the datasets, with the left camera generating depth images and the right camera generating the original images. In this research, we also compared the performance of two R-CNN variants (Mask R-CNN and Faster R-CNN); their mean Average Precisions (mAPs) were compared on the same datasets with the same ratio. Mask R-CNN was trained for 80 epochs of 500 steps each, and Faster R-CNN was trained for 40,000 steps. For the recognition of pears, Mask R-CNN achieved mAPs of 95.22% on the validation set and 99.45% on the testing set, whereas Faster R-CNN achieved mAPs of 87.9% on the validation set and 87.52% on the testing set. The two models, trained on the same dataset, performed differently on gathered, clustered pears versus individual pears: Mask R-CNN outperformed Faster R-CNN when the pears were densely clustered in the complex orchard. Therefore, the 3D stereo camera-based dataset combined with the Mask R-CNN vision algorithm had high accuracy in detecting individual pears among gathered pears in a complex orchard environment.
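A 6:3:1 split like the one described above can be reproduced with a seeded shuffle so that runs are repeatable. A minimal sketch (the seed and the helper itself are illustrative conventions, not the study's exact procedure):

```python
import random

def split_dataset(items, ratios=(6, 3, 1), seed=42):
    """Shuffle items deterministically and split them by the given ratios."""
    items = list(items)
    random.Random(seed).shuffle(items)
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_val = len(items) * ratios[1] // total
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test
```

With 100 items this yields 60/30/10 partitions, and a fixed seed keeps the same images in the same partition across experiments.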
Affiliation(s)
- Siyu Pan: Graduate School of Science and Technology, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
- Tofael Ahamed: Faculty of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
11.
Jiang A, Noguchi R, Ahamed T. Tree Trunk Recognition in Orchard Autonomous Operations under Different Light Conditions Using a Thermal Camera and Faster R-CNN. Sensors (Basel) 2022; 22:2065. PMID: 35271214; PMCID: PMC8914652; DOI: 10.3390/s22052065. [Received: 12/07/2021; Revised: 03/04/2022; Accepted: 03/04/2022]
Abstract
In the orchard automation process, a current challenge is to recognize natural landmarks and tree trunks to localize intelligent robots. To overcome low-light conditions and global navigation satellite system (GNSS) signal interruptions under a dense canopy, a thermal camera may be used to recognize tree trunks with a deep learning system. Therefore, the objective of this study was to use a thermal camera and deep learning to detect tree trunks at different times of the day under low-light conditions, to allow robots to navigate. Thermal images were collected from the dense canopies of two types of orchards (conventional and joint training systems) under high-light (12-2 p.m.), low-light (5-6 p.m.), and no-light (7-8 p.m.) conditions in August and September 2021 (summertime) in Japan. The tree trunk detection accuracy of the thermal camera was confirmed with average errors of 0.16 m at a 5 m distance, 0.24 m at 15 m, and 0.30 m at 20 m under the high-, low-, and no-light conditions at different orientations of the thermal camera. The thermal imagery datasets were augmented for training, validation, and testing of the Faster R-CNN deep learning model for tree trunk detection: a total of 12,876 images were used to train the model, 2318 images to validate the training process, and 1288 images to test the model. The mAP of the model was 0.8529 for the validation and 0.8378 for the testing process. The average object detection time was 83 ms for images and 90 ms for videos, with the thermal camera set at 11 FPS. The model was compared with YOLO v3 using the same datasets and training conditions; in this comparison, Faster R-CNN achieved higher accuracy than YOLO v3 in tree trunk detection with the thermal camera. Therefore, the results showed that Faster R-CNN can be used to recognize objects in thermal images to enable robot navigation in orchards under different lighting conditions.
Affiliation(s)
- Ailian Jiang: Graduate School of Science and Technology, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
- Ryozo Noguchi: Faculty of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan
- Tofael Ahamed (correspondence): Faculty of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan

12
Parico AIB, Ahamed T. Real Time Pear Fruit Detection and Counting Using YOLOv4 Models and Deep SORT. Sensors (Basel) 2021; 21:4803. [PMID: 34300543] [PMCID: PMC8309787] [DOI: 10.3390/s21144803]
Abstract
This study aimed to produce a robust real-time pear fruit counter for mobile applications using only RGB data, variants of the state-of-the-art object detection model YOLOv4, and the multiple-object-tracking algorithm Deep SORT. It also provided a systematic and pragmatic methodology for choosing the model best suited to a desired application in the agricultural sciences. In terms of accuracy, YOLOv4-CSP was the optimal model, with an AP@0.50 of 98%. In terms of speed and computational cost, YOLOv4-tiny was the ideal model, with a speed of more than 50 FPS and FLOPS of 6.8-14.5. Considering the balance of accuracy, speed, and computational cost, YOLOv4 was the most suitable: it had the highest accuracy metrics while satisfying a real-time speed of at least 24 FPS. Between the two Deep SORT counting methods, the unique-ID method was the more reliable, with an F1 count of 87.85%, because YOLOv4 had a very low false-negative rate in detecting pear fruits. The ROI-line method is more restrictive by design, but flickering in the detections caused it to miss some pears in the count even though they were detected.
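Counting by unique track ID, as described above, amounts to counting the distinct identities the tracker assigns across frames; an F1-style count score then balances over- and under-counting. A hedged sketch under that reading (the frame layout and the exact F1-count definition are illustrative assumptions, not the paper's formulas):

```python
def count_unique_ids(frames):
    """frames: list of per-frame lists of track IDs emitted by the tracker."""
    seen = set()
    for ids in frames:
        seen.update(ids)
    return len(seen)

def f1_count(predicted, actual):
    """Assumed F1-count: over-count as false positives, under-count as false negatives."""
    tp = min(predicted, actual)
    fp = max(0, predicted - actual)
    fn = max(0, actual - predicted)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

frames = [[1, 2], [1, 2, 3], [2, 3, 4]]   # pear track IDs per video frame
print(count_unique_ids(frames))            # 4
print(round(f1_count(4, 5), 3))            # 0.889
```

The ROI-line alternative would instead increment a counter only when a track crosses a fixed image line, which is why flickering tracks can slip past it uncounted.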
Affiliation(s)
- Addie Ira Borja Parico: Graduate School of Life and Environmental Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba, Ibaraki 305-8577, Japan
- Tofael Ahamed: Faculty of Life and Environmental Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba, Ibaraki 305-8577, Japan

13
Hara R, Ishigaki M, Ozaki Y, Ahamed T, Noguchi R, Miyamoto A, Genkawa T. Effect of Raman exposure time on the quantitative and discriminant analyses of carotenoid concentrations in intact tomatoes. Food Chem 2021; 360:129896. [PMID: 33989876] [DOI: 10.1016/j.foodchem.2021.129896]
Abstract
The significant worldwide expansion of the health food market, which includes functional fruits and vegetables, requires a simple and rapid analytical method for the on-site analysis of functional components, such as carotenoids, in fruits and vegetables, and Raman spectroscopy is a powerful candidate. Herein, we clarified the effect of Raman exposure time on the accuracy of quantitative and discriminant analyses. Raman spectra of intact tomatoes with various carotenoid concentrations were acquired and used to develop partial least squares regression (PLSR) and partial least squares discriminant analysis (PLS-DA) models. The accuracy of the PLSR model was best (R2 = 0.87) when Raman spectra were acquired for 10 s but decreased with decreasing exposure time (R2 = 0.69 at 0.7 s). The accuracy of the PLS-DA model was unaffected by exposure time (hit rate: 90%). We conclude that Raman spectroscopy combined with PLS-DA is useful for the on-site analysis of carotenoids in fruits and vegetables.
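The PLSR accuracies above are reported as R2, the coefficient of determination; a minimal sketch of how that score compares reference carotenoid concentrations with model predictions (the sample values are illustrative, not the paper's data):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [1.0, 2.0, 3.0, 4.0]   # reference concentrations
y_pred = [1.1, 1.9, 3.2, 3.8]   # model predictions
print(round(r_squared(y_true, y_pred), 3))  # 0.98
```

A drop from R2 = 0.87 to 0.69, as reported for shorter exposures, corresponds to the residual sum of squares more than doubling relative to the total variance.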
Affiliation(s)
- Risa Hara: Faculty of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8572, Japan; Research and Development Department, Yokogawa Electronic Corporation, 2-9-32 Nakacho, Musashino, Tokyo 180-8750, Japan
- Mika Ishigaki: Institute of Agricultural and Life Sciences, Academic Assembly, Shimane University, 1060 Nishikawatsu, Matsue, Shimane 690-8504, Japan
- Yukihiro Ozaki: School of Biological and Environmental Sciences, Kwansei Gakuin University, 2-1 Gakuen, Sanda, Hyogo 669-1337, Japan
- Tofael Ahamed: Faculty of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8572, Japan
- Ryozo Noguchi: Faculty of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8572, Japan
- Aiko Miyamoto: Faculty of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8572, Japan; Institute of Food Research, National Agriculture and Food Research Organization, 2-1-12 Kannondai, Tsukuba, Ibaraki 305-8602, Japan
- Takuma Genkawa: Faculty of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8572, Japan; Institute of Food Research, National Agriculture and Food Research Organization, 2-1-12 Kannondai, Tsukuba, Ibaraki 305-8602, Japan

14
Rani DS, Supriyanto, Watanabe MM, Demura M, Yoshida M, Ahamed T, Noguchi R. A Novel Polyculture Growth Model of Native Microalgal Communities to Estimate Biomass Productivity for Biofuel Production. Biotechnol Prog 2021:e3156. [PMID: 33870660] [DOI: 10.1002/btpr.3156]
Abstract
Native polyculture microalgae cultivation is a promising scheme for producing microalgal biomass as biofuel feedstock in an open raceway pond. However, predicting the biomass productivity of native polyculture microalgae is highly complicated, so developing a polyculture growth model to forecast biomass yield is indispensable for commercial-scale production. This research aimed to develop a polyculture growth model for native microalgal communities in the Minamisoma algae plant and to estimate biomass and biocrude oil productivity in a semi-continuous open raceway pond. The model was built on the monoculture growth of the polyculture species and formulated using species growth, a polyculture factor (k-value), initial concentration, light intensity, and temperature. A simplified Monod model was applied to calculate species growth. In the simulation, 115 samples from the 2014-2015 field dataset were used for model training, and 70 samples from the 2017 field dataset were used for model validation. The simulation of biomass concentration showed that the polyculture growth model with the k-value had a root-mean-square error of 0.12, whereas model validation gave a better result, with a root-mean-square error of 0.08. The biomass productivity forecast showed a maximum of 18.87 g/m2/d in June, with an annual average of 13.59 g/m2/d. The biocrude oil yield forecast indicated that hydrothermal liquefaction, with a maximum productivity of 0.59 g/m2/d, was more suitable than solvent extraction, which yielded only 0.19 g/m2/d. With satisfactory root-mean-square errors of less than 0.3, this polyculture growth model can be applied to forecast the productivity of native microalgae.
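The simplified Monod model mentioned above relates specific growth rate to a limiting resource via mu = mu_max * S / (Ks + S). A hedged forward-Euler sketch of such kinetics, with illustrative parameter values rather than the paper's fitted ones:

```python
def monod_growth(x0, s0, mu_max, ks, yield_coef, dt=0.1, days=10):
    """Forward-Euler integration of Monod kinetics.
    x: biomass (g/L), s: limiting substrate (g/L); parameters are illustrative."""
    x, s = x0, s0
    for _ in range(int(days / dt)):
        mu = mu_max * s / (ks + s)         # specific growth rate (1/day)
        dx = mu * x * dt                    # biomass gained this step
        x += dx
        s = max(0.0, s - dx / yield_coef)   # substrate consumed per unit biomass
    return x

final = monod_growth(x0=0.05, s0=1.0, mu_max=1.2, ks=0.5, yield_coef=0.5)
print(round(final, 3))   # growth plateaus once the substrate is exhausted
```

In the paper's polyculture extension, each species' Monod growth term is combined with the k-value, light, and temperature factors; the sketch above shows only the single-species core.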
Affiliation(s)
- Devitra Saka Rani: Graduate School of Life and Environmental Sciences, University of Tsukuba, Japan; R&D Centre for Oil and Gas Technology "LEMIGAS", Ministry of Energy and Mineral Resources, Indonesia
- Supriyanto: Mechanical and Biosystem Engineering Department, Bogor Agricultural University, Indonesia
- Makoto M Watanabe: Algae Biomass and Energy System R&D Center, University of Tsukuba, Japan
- Masaki Yoshida: Algae Biomass and Energy System R&D Center, University of Tsukuba, Japan
- Tofael Ahamed: Faculty of Life and Environmental Sciences, University of Tsukuba, Japan
- Ryozo Noguchi: Faculty of Life and Environmental Sciences, University of Tsukuba, Japan

15
Hamidon MH, Abd Aziz S, Ahamed T, Mahadi MR. Design and Development of Smart Vertical Garden System for Urban Agriculture Initiative in Malaysia. Jurnal Teknologi 2019; 82. [DOI: 10.11113/jt.v82.13931]
Abstract
A vertical garden system has the potential to increase vegetable production in urban areas of Malaysia. This research designed and developed a compact, smart vertical garden system for urban agriculture and analysed the growth performance of lettuce in it. The work involved two phases: the development of the vertical garden system and of the monitoring system for the nutrient solution. The growth performance of lettuce (Lactuca sativa) at different stacks of the vertical garden system was observed and compared against a commercial conventional hydroponic system. The bottommost stack (stack 5) achieved the greatest lettuce height and had the highest number of leaves and the greatest leaf width. Nevertheless, from the overall ANOVA results across stack levels, only lettuce height was reported as significantly different (P < 0.0001), while no significant difference was reported for the number of leaves (P = 0.0002) or leaf width (P = 0.0046). The growth development varied due to different amounts of water and light exposure. On the other hand, no significant difference was found between the vertical garden system and the commercial conventional hydroponic system (lettuce height, P = 0.4997; number of leaves, P = 0.5325; leaf width, P = 0.5231). In short, the smart vertical garden system can give the same performance as the commercial conventional hydroponic system.
16
Gao P, Zhang Y, Zhang L, Noguchi R, Ahamed T. Development of a Recognition System for Spraying Areas from Unmanned Aerial Vehicles Using a Machine Learning Approach. Sensors (Basel) 2019; 19:313. [PMID: 30646586] [PMCID: PMC6359728] [DOI: 10.3390/s19020313]
Abstract
Unmanned aerial vehicle (UAV)-based spraying systems have recently become important for the precision application of pesticides using machine learning approaches. Therefore, the objective of this research was to develop a machine learning system with high computational speed and good accuracy for recognizing spray and non-spray areas for UAV-based sprayers. The system was developed using the mutual subspace method (MSM) on images collected from a UAV. Two target land types, agricultural croplands and orchards, were considered in building two classifiers for distinguishing spray and non-spray areas. Field experiments were conducted in the target areas to train and test the system using a commercial UAV (DJI Phantom 3 Pro) with an onboard 4K camera. Images were collected from low (5 m) and high (15 m) altitudes for croplands and orchards, respectively. The recognition system was divided into offline and online systems. The offline recognition system obtained 74.4% accuracy in recognizing spray and non-spray areas for croplands; for orchards, the average recognition accuracy was 77%. The online recognition system had an average accuracy of 65.1% for croplands and 75.1% for orchards. The computational time of the online system was minimal, averaging 0.0031 s per classifier recognition. With an average recognition accuracy of 70%, the developed machine learning system can be implemented in an autonomous UAV spray system for recognizing spray and non-spray areas in real-time applications.
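The mutual subspace method compares an input image set with a class by the canonical angles between their linear subspaces; a common formulation takes the similarity as the largest squared canonical cosine, obtained from an SVD of the product of orthonormal bases. A minimal numpy sketch under that reading (the dimensions and data are illustrative, not the paper's):

```python
import numpy as np

def subspace_basis(samples, dim):
    """Orthonormal basis of the top-`dim` principal directions (rows = samples)."""
    u, _, _ = np.linalg.svd(samples.T, full_matrices=False)
    return u[:, :dim]

def msm_similarity(basis_a, basis_b):
    """Largest squared canonical cosine between two subspaces."""
    s = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
    return float(s[0] ** 2)

rng = np.random.default_rng(0)
spray = rng.normal(size=(20, 8))        # feature vectors of "spray-area" patches
basis = subspace_basis(spray, dim=3)
# A set drawn from its own subspace scores exactly 1
print(round(msm_similarity(basis, basis), 3))   # 1.0
```

Classification then assigns a patch set to whichever class subspace (spray or non-spray) yields the higher similarity, which keeps the per-decision cost at the level of one small SVD.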
Affiliation(s)
- Pengbo Gao: Graduate School of Life and Environmental Sciences, University of Tsukuba, Tsukuba 305-8572, Japan
- Yan Zhang: Graduate School of Life and Environmental Sciences, University of Tsukuba, Tsukuba 305-8572, Japan
- Linhuan Zhang: Graduate School of Life and Environmental Sciences, University of Tsukuba, Tsukuba 305-8572, Japan
- Ryozo Noguchi: Faculty of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8572, Japan
- Tofael Ahamed: Faculty of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8572, Japan

17
Valant R, Ahamed T, Musa F, Chan K, Jimenez E, Chalas E. Routine HbA1c testing in women undergoing major gynecologic surgery to detect prevalence of glucose intolerance. Gynecol Oncol 2018. [DOI: 10.1016/j.ygyno.2018.04.248]
18
Zhang L, Ahamed T, Zhang Y, Gao P, Takigawa T. Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles. Sensors (Basel) 2016; 16:578. [PMID: 27110793] [PMCID: PMC4851092] [DOI: 10.3390/s16040578]
Abstract
The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle that automatically tracks the leader. With such a system, a human driver can control two vehicles efficiently in agricultural operations. A tracking system was developed for the leader and follower vehicles, and control of the follower was performed with a camera vision system: a stable and accurate monocular sensing system consisting of a camera and rectangular markers. Noise in the data acquisition was reduced using the least-squares method. A feedback control algorithm allowed the follower vehicle to track the trajectory of the leader vehicle, and a proportional-integral-derivative (PID) controller was introduced to maintain the required distance between the leader and the follower. Field experiments were conducted to evaluate the sensing and tracking performance of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. For linear trajectory tracking, the root-mean-square (RMS) errors were 6.5 cm, 8.9 cm, and 16.4 cm for straight, turning, and zigzag paths, respectively; for parallel trajectory tracking, the RMS errors were 7.1 cm, 14.6 cm, and 14.0 cm for the same path types. The navigation performance indicated that the autonomous follower vehicle was able to follow the leader with satisfactory tracking accuracy. Therefore, the developed leader-follower system can be implemented for grain harvesting, with a combine as the leader and an unloader as the autonomous follower vehicle.
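The distance-keeping loop described above follows the standard PID law u = Kp*e + Ki*integral(e) + Kd*de/dt. A hedged discrete-time sketch with illustrative gains and a simplified follower kinematic (none of these values are the paper's tuned parameters):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Follower starts 3 m behind the leader and should hold a 2 m gap.
pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.1)
gap, target = 3.0, 2.0
for _ in range(300):                       # 30 s of simulated time
    u = pid.step(gap - target)             # positive error -> close the gap
    gap -= u * pid.dt                      # simple follower kinematics
print(round(gap, 2))                       # settles close to 2.0
```

In the real system the error would come from the marker-based monocular range estimate rather than a simulated gap, and the output would command the follower's wheel speed.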
Affiliation(s)
- Linhuan Zhang: Graduate School of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8572, Japan
- Tofael Ahamed: Graduate School of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8572, Japan
- Yan Zhang: Graduate School of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8572, Japan
- Pengbo Gao: Graduate School of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8572, Japan
- Tomohiro Takigawa: Graduate School of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8572, Japan

19
Genkawa T, Ahamed T, Noguchi R, Takigawa T, Ozaki Y. Simple and rapid determination of free fatty acids in brown rice by FTIR spectroscopy in conjunction with a second-derivative treatment. Food Chem 2016; 191:7-11. [DOI: 10.1016/j.foodchem.2015.02.014]
20
Akhanda AH, Ahamed T, Ahmad AU. Optic nerve avulsion following accidental penetrating orbital injury. Mymensingh Med J 2008; 17:197-200. [PMID: 18626458]
Abstract
Traumatic enucleation with optic nerve avulsion following accidental penetrating orbital injury is a rare phenomenon. A 35-year-old active man suffered sudden trauma to his left orbit after falling on a boat during a collision of two running boats, neither carrying a searchlight, on a dark night. The patient was examined a few hours after the event, presenting with severe pain in the left periorbital region; he never lost consciousness. On examination, the right eye revealed no abnormality. There was left periorbital swelling with blood clots, and the left eyeball was hanging from the orbit with a 20 mm portion of optic nerve, suspended by the attachments of the superior oblique, superior rectus, inferior rectus, and lateral rectus muscles; the medial rectus was lacerated and could not be traced. A V-shaped lacerated injury over the root of the left side of the nose was noted: one arm of the injury caused a full-thickness laceration of the upper lid, and the other arm entered the orbit, involving the medial bony orbital wall on the nasal side and the periorbita with other structures on the temporal side, up to the apex of the orbit. Another full-thickness lower lid laceration was also noted. The case was managed surgically by removal of the left eyeball with an orbital implant; the conjunctival, upper lid, and lower lid injuries were repaired after proper surgical toileting, and an ocular prosthesis was fitted two weeks later for a good cosmetic result. Postoperatively, the patient was managed with systemic antibiotics, NSAIDs, and topical antibiotics. Traumatic enucleation following an accidental boat collision had not been reported previously. Passenger awareness and strict maintenance of navigation rules may prevent this type of hazard.
Affiliation(s)
- A H Akhanda: Department of Ophthalmology, Mymensingh Medical College, Mymensingh, Bangladesh