1
Zhang B, Cao B, Ma H. A Real-time Object Volume Measurement Method Based on Line Laser Scanning. 2022 41st Chinese Control Conference (CCC) 2022. DOI: 10.23919/ccc55666.2022.9902574.
Affiliation(s)
- Bin Zhang
- Beijing Institute of Technology, School of Automation, Beijing 100081
- Bin Cao
- Beijing Institute of Technology, School of Automation, Beijing 100081
- Hongbin Ma
- Beijing Institute of Technology, School of Automation, Beijing 100081
2
A Novel Preprocessing Method for Dynamic Point-Cloud Compression. Applied Sciences (Basel) 2021. DOI: 10.3390/app11135941.
Abstract
Computer-based data processing capabilities have evolved to handle vast amounts of information. As such, the complexity of three-dimensional (3D) models (e.g., animations or real-time voxels) containing large volumes of information has increased exponentially. This rapid increase in complexity has led to problems with recording and transmission. In this study, we propose a method for efficiently managing and compressing animation information stored as a 3D point-cloud sequence. A compressed point cloud is created by reconfiguring the points based on their voxels. Compared with the original point cloud, noise caused by errors is removed, and a preprocessing procedure that achieves high performance in a redundancy-processing algorithm is proposed. Experiments and rendering results demonstrate an average file-size reduction of 40% using the proposed algorithm. Moreover, 13% of the overlapping data are extracted and removed, further reducing the file size.
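The voxel-based point reconfiguration this abstract describes can be illustrated with a minimal voxel-grid downsampling sketch. This is our own NumPy illustration of the general technique, not the paper's actual pipeline: each point is binned into a voxel, and every occupied voxel is replaced by the centroid of its points.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Replace all points sharing a voxel with their centroid.

    points: (N, 3) XYZ coordinates; returns (M, 3), one point per
    occupied voxel of edge length voxel_size.
    """
    # Integer voxel index for every point.
    idx = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)      # robust across NumPy versions
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)   # accumulate per-voxel coordinate sums
    return sums / counts[:, None]

# Two nearly coincident points collapse to one; the distant point survives.
cloud = np.array([[0.01, 0.01, 0.01],
                  [0.02, 0.02, 0.02],
                  [1.50, 1.50, 1.50]])
reduced = voxel_downsample(cloud, voxel_size=0.1)
print(reduced.shape[0])  # 2
```

Averaging within a voxel also suppresses small-scale noise, which is one reason such a step helps a downstream redundancy-removal stage.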
3
Weight and volume estimation of poultry and products based on computer vision systems: a review. Poult Sci 2021; 100:101072. DOI: 10.1016/j.psj.2021.101072. PMID: 33752071.
Abstract
The appearance, size, and weight of poultry meat and eggs are essential to production economics and vital to the poultry sector. These external characteristics influence market price and consumers' preference and choice. With ongoing technological development, vision systems are increasingly applied in the agricultural sector, and computer vision has become a promising tool for the real-time automation of poultry weighing and processing systems. Owing to its noninvasive and nonintrusive nature and its capacity to provide a wide range of information, computer vision can be applied to size, mass, and volume determination and to the sorting and grading of poultry products. This review gives a detailed summary of current advances in measuring the external characteristics of poultry products with computer vision systems. An overview of computer vision systems is discussed and summarized, and a comprehensive presentation of their application to assessing poultry meat and eggs is provided, namely weight and volume estimation, sorting, and classification. Finally, the challenges and potential future trends in size, weight, and volume estimation of poultry products are reported.
5
Chan TO, Xia L, Lichti DD, Sun Y, Wang J, Jiang T, Li Q. Geometric Modelling for 3D Point Clouds of Elbow Joints in Piping Systems. Sensors 2020; 20:4594. DOI: 10.3390/s20164594. PMID: 32824328.
Abstract
Pipe elbow joints exist in almost every piping system, supporting many important applications such as clean-water supply. However, spatial information about elbow joints is rarely extracted and analyzed from observations such as laser-scanned point cloud data, owing to the lack of a complete geometric model that can be applied to different types of joints. In this paper, we propose a novel geometric model and several model adaptations for typical elbow joints, including the 90° and 45° types, which facilitates the use of 3D point clouds of elbow joints collected by laser scanning. The model comprises translational, rotational, and dimensional parameters, which can be used not only for monitoring joint geometry but also for other applications such as point cloud registration. Both simulated and real datasets were used to verify the model, and two applications derived from it (point cloud registration and mounting-bracket detection) are shown. The geometric fitting results on the simulated datasets suggest that the model can accurately recover joint geometry, with very low translational (0.3 mm) and rotational (0.064°) errors when ±0.02 m random errors were introduced into the coordinates of a simulated 90° joint (0.2 m in diameter). Fitting on the real datasets suggests that the accuracy of the diameter estimate reaches 97.2%. The joint-based registration accuracy reaches sub-decimeter and sub-degree levels for the translational and rotational parameters, respectively.
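The diameter estimation evaluated above can be sketched, for a single pipe cross-section, with an algebraic least-squares circle fit (the Kåsa fit). This is a simplified stand-in of our own, not the authors' full translational/rotational/dimensional elbow model:

```python
import math
import numpy as np

def kasa_circle_fit(pts: np.ndarray):
    """Algebraic (Kasa) least-squares circle fit to 2D points.

    Writes the circle as x^2 + y^2 + a*x + b*y + c = 0 and solves the
    linear system for (a, b, c); the center is (-a/2, -b/2) and the
    radius is sqrt(cx^2 + cy^2 - c).
    """
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = math.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, r

# Noiseless cross-section of a 0.2 m diameter pipe centered at (1.0, 2.0).
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
ring = np.column_stack([1.0 + 0.1 * np.cos(t), 2.0 + 0.1 * np.sin(t)])
cx, cy, r = kasa_circle_fit(ring)
print(round(2 * r, 3))  # recovered diameter
```

A full cylinder or elbow fit additionally estimates axis direction and bend parameters, but the per-section circle fit conveys where a diameter estimate such as the 97.2% figure comes from.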
Affiliation(s)
- Ting On Chan
- Guangdong Provincial Key Laboratory of Urbanization and Geo-Simulation, School of Geography and Planning, Sun Yat-sen University, Guangzhou 510000, China
- Linyuan Xia
- Guangdong Provincial Key Laboratory of Urbanization and Geo-Simulation, School of Geography and Planning, Sun Yat-sen University, Guangzhou 510000, China
- Correspondence; Tel.: +86-20-84112486
- Derek D. Lichti
- Department of Geomatics Engineering, University of Calgary, 2500 University Dr NW, Calgary, AB T2N 1N4, Canada
- Yeran Sun
- Department of Geography, College of Science, Swansea University, Swansea SA2 8PP, UK
- Jun Wang
- School of Electrical and Computer Engineering, Nanfang College of Sun Yat-sen University, Guangzhou 510000, China
- Tao Jiang
- Guangdong Provincial Key Laboratory of Urbanization and Geo-Simulation, School of Geography and Planning, Sun Yat-sen University, Guangzhou 510000, China
- Qianxia Li
- Guangdong Provincial Key Laboratory of Urbanization and Geo-Simulation, School of Geography and Planning, Sun Yat-sen University, Guangzhou 510000, China
6
Zhang X, Liu G, Jing L, Chen S. Automated Measurement of Heart Girth for Pigs Using Two Kinect Depth Sensors. Sensors 2020; 20:3848. DOI: 10.3390/s20143848. PMID: 32664221.
Abstract
Heart girth is an important indicator of pig growth and development and provides critical guidance for optimizing healthy pig breeding. To overcome the heavy workload and poor adaptability of the traditional measurement methods currently used in pig breeding, this paper proposes an automated pig heart girth measurement method using two Kinect depth sensors. First, a two-view pig depth-image acquisition platform is established for data collection; after preprocessing, the two-view point clouds are registered and fused by a feature-based improved 4-Point Congruent Set (4PCS) method. Second, the fused point cloud is pose-normalized, and the axillary contour is used to automatically extract the heart girth measurement point. Finally, starting from this point, the circumference perpendicular to the ground is intercepted from the pig point cloud, and the complete heart girth point cloud is obtained by mirror symmetry. The heart girth is measured along this point cloud using the shortest-path method. The proposed method was tested on two-view data from 26 live pigs. The absolute heart girth measurement errors were all less than 4.19 cm, and the average relative error was 2.14%, indicating the high accuracy and efficiency of the method.
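A toy version of the girth measurement step can be sketched by slicing the fused cloud with a thin slab at the measurement point and summing the perimeter of the projected cross-section. This is our own simplified illustration: the paper instead walks a shortest path over the actual point cloud, which also handles non-convex contours.

```python
import numpy as np

def girth_from_slice(points: np.ndarray, x0: float, thickness: float = 0.02) -> float:
    """Estimate girth by slicing the cloud with a thin slab at x = x0.

    points: (N, 3) cloud with x along the body axis. Slab points are
    projected onto the YZ plane, ordered by angle around their centroid,
    and the closed polygon perimeter is returned (adequate for roughly
    convex cross-sections).
    """
    slab = points[np.abs(points[:, 0] - x0) < thickness / 2.0][:, 1:3]
    d = slab - slab.mean(axis=0)
    ring = slab[np.argsort(np.arctan2(d[:, 1], d[:, 0]))]
    closed = np.vstack([ring, ring[:1]])           # close the polygon
    return float(np.linalg.norm(np.diff(closed, axis=0), axis=1).sum())

# Synthetic "barrel" cross-section: a circular cylinder of radius 0.3 m.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
cyl = np.column_stack([np.zeros_like(theta),
                       0.3 * np.cos(theta),
                       0.3 * np.sin(theta)])
girth = girth_from_slice(cyl, x0=0.0)
print(round(girth, 3))  # close to 2*pi*0.3
```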
Affiliation(s)
- Xinyue Zhang
- Key Laboratory of Modern Precision Agriculture System Integration Research, Ministry of Education, China Agricultural University, Beijing 100083, China
- Key Laboratory of Agricultural Information Acquisition Technology, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100083, China
- Gang Liu
- Key Laboratory of Modern Precision Agriculture System Integration Research, Ministry of Education, China Agricultural University, Beijing 100083, China
- Key Laboratory of Agricultural Information Acquisition Technology, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100083, China
- Ling Jing
- College of Science, China Agricultural University, Beijing 100083, China
- Siyao Chen
- Graduate School of Agriculture, Kyoto University, Kyoto 606-8502, Japan
7
DietSensor: Automatic Dietary Intake Measurement Using Mobile 3D Scanning Sensor for Diabetic Patients. Sensors 2020; 20:3380. DOI: 10.3390/s20123380. PMID: 32549356.
Abstract
Diabetes is a global epidemic that impacts millions of people every year. Enhanced dietary assessment techniques are critical for maintaining a healthy life for diabetic patients. Moreover, hospitals must monitor their diabetic patients' food intake to prescribe the appropriate amount of insulin. Malnutrition significantly increases patient mortality, the duration of hospital stays, and, ultimately, medical costs. Currently, hospitals are not fully equipped to measure and track a patient's nutritional intake, and existing solutions require extensive user input, which introduces human error and causes endocrinologists to overlook the measurements. This paper presents DietSensor, a wearable three-dimensional (3D) measurement system that uses an over-the-counter 3D camera to assist hospital personnel in measuring a patient's nutritional intake. The structured environment of the hospital provides access to the total nutritional data of any meal prepared in the kitchen as a cloud database. DietSensor correlates the 3D scans with the hospital kitchen database to calculate the exact nutrition consumed by the patient. The system was tested on twelve volunteers with no prior background or familiarity with the system. The overall nutrition calculated by the DietSensor phone application was compared with the outputs of the 24-h dietary recall (24HR) web application and the MyFitnessPal phone application. The average absolute error on the collected data was 73%, 51%, and 33% for the 24HR, MyFitnessPal, and DietSensor systems, respectively.
8
A 2-D imaging-assisted geometrical transformation method for non-destructive evaluation of the volume and surface area of avian eggs. Food Control 2020. DOI: 10.1016/j.foodcont.2020.107112.
9
Portable System for Box Volume Measurement Based on Line-Structured Light Vision and Deep Learning. Sensors 2019; 19:3921. DOI: 10.3390/s19183921. PMID: 31514439.
Abstract
Portable box volume measurement has long been a popular problem in the intelligent logistics industry. This work presents a portable system for box volume measurement based on line-structured light vision and deep learning. The system consists of a novel 2 × 2 laser-line grid projector, a sensor, and software modules, and only two laser-modulated images of a box are required for volume measurement. For the laser-modulated images, a novel end-to-end deep learning model is proposed that uses an improved holistically nested edge detection network to extract edges. Furthermore, an automatic one-step calibration method for the line-structured light projector is designed for fast calibration. The experimental results show that the measuring range of the proposed system is 100-1800 mm, with errors of less than ±5.0 mm. Theoretical analysis indicates that within this range, the measurement uncertainty of the device is ±0.52 mm to ±4.0 mm, which is consistent with the experimental results. The device measures 140 mm × 35 mm × 35 mm and weighs 110 g, so the system is suitable for portable automatic box volume measurement.
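The relationship between a per-edge error bound and the resulting volume uncertainty can be worked through with first-order error propagation for V = l·w·h. This is a generic textbook calculation under an assumed independent ±5 mm error on each edge, not the authors' device-specific uncertainty model:

```python
import math

def box_volume_with_uncertainty(l, w, h, sigma=5.0):
    """Box volume from edge lengths (mm) with first-order uncertainty
    propagation, assuming each edge carries the same independent
    measurement error sigma.

    For V = l*w*h, relative variances add in quadrature:
    (sigma_V / V)^2 = (sigma/l)^2 + (sigma/w)^2 + (sigma/h)^2.
    """
    v = l * w * h
    rel = math.sqrt((sigma / l) ** 2 + (sigma / w) ** 2 + (sigma / h) ** 2)
    return v, v * rel

# Hypothetical parcel: 600 x 400 x 300 mm.
v, dv = box_volume_with_uncertainty(600, 400, 300)
print(v)                         # volume in mm^3
print(round(dv / v * 100, 2))    # relative volume uncertainty in percent
```

Note how the smallest edge dominates the relative uncertainty, which is why edge-detection accuracy matters most on the shortest box dimension.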
10
Suitability of the Kinect Sensor and Leap Motion Controller - A Literature Review. Sensors 2019; 19:1072. DOI: 10.3390/s19051072. PMID: 30832385.
Abstract
As the need for sensors increases with the rise of virtual, augmented, and mixed reality, the purpose of this paper is to evaluate the suitability of the two Kinect devices and the Leap Motion Controller. In evaluating suitability, the authors focus on the state of the art, device comparison, accuracy, precision, existing gesture recognition algorithms, and the price of the devices. The aim of this study is to give insight into whether these devices could substitute for more expensive sensors in industry or on the market. While in general the answer is yes, it is not as simple as it seems: there are significant differences between the devices, even between the two Kinects, such as different measurement ranges, error distributions on each axis, and depth precision that changes with distance.
11
A Novel Mobile Structured Light System in Food 3D Reconstruction and Volume Estimation. Sensors 2019; 19:564. DOI: 10.3390/s19030564. PMID: 30700041.
Abstract
Over the past ten years, diabetes has rapidly become more prevalent in all age demographics, especially in children. Improved dietary assessment techniques are necessary for epidemiological studies that investigate the relationship between diet and disease. Current nutritional research is hindered by the low accuracy of the traditional dietary intake estimation methods used for portion-size assessment. This paper presents the development and validation of a novel instrumentation system for measuring accurate dietary intake for diabetic patients. The instrument uses a mobile Structured Light System (SLS) to measure the food volume and portion size of a patient's diet under daily living conditions. The SLS allows accurate determination of the volume and portion size of a scanned food item; once the volume is calculated, the item's nutritional content can be estimated from existing nutritional databases. The system design includes a volume estimation algorithm and a hardware add-on consisting of a laser module and a diffraction lens. The experimental results demonstrate an improvement of around 40% in the accuracy of the volume or portion-size measurement compared with manual calculation. The limitations and shortcomings of the system are also discussed.
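Once a structured-light scan yields a height map over the plate plane, volume estimation reduces to a discrete integral: sum the pixel heights times the calibrated per-pixel ground area. A minimal sketch with made-up numbers of our own, not the paper's algorithm:

```python
import numpy as np

def volume_from_height_map(height: np.ndarray, pixel_area: float) -> float:
    """Integrate an item's volume from a reconstructed height map.

    height: (H, W) array of heights above the plate plane (cm); pixels
    off the item are zero. pixel_area is the real-world area (cm^2)
    covered by one pixel, known from the structured-light calibration.
    """
    return float(height.sum() * pixel_area)

# Hypothetical 4 cm x 4 cm block of uniform 2 cm height, sampled on a
# 0.5 cm grid (pixel_area = 0.25 cm^2): 8 x 8 pixels are nonzero.
hm = np.zeros((20, 20))
hm[4:12, 4:12] = 2.0
vol = volume_from_height_map(hm, 0.25)
print(vol)  # 64 pixels * 2 cm * 0.25 cm^2 = 32.0 cm^3
```

The real system must first segment the food from the plate and estimate the base plane; errors in either step propagate directly into the integral.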