1
Kong Y, Zhu F, Sun H, Lin Z, Wang Q. A Generic View Planning System Based on Formal Expression of Perception Tasks. Entropy 2022;24:578. [PMID: 35626463; PMCID: PMC9141229; DOI: 10.3390/e24050578]
Abstract
View planning (VP) is a technique that guides the adjustment of the sensor's poses in multi-view perception tasks. It converts the perception process into active perception, which improves the intelligence of the robot and reduces its resource consumption. We propose a generic VP system for multiple kinds of visual perception. The VP system is built on the formal description of the visual task, and the next best view is calculated by the system. When dealing with a given visual task, we simply update its description as the input of the VP system and obtain the defined best view in real time. The formal description of the perception task includes the task's status, the objects' prior information library, the visual representation status, and the optimization goal. The task's status and the visual representation status are updated when data are received at a new view. If the task's status has not reached its goal, candidate views are sorted based on the updated visual representation status, and the next best view, the one that minimizes the entropy of the model space, is chosen as the output of the VP system. Experiments on view planning for 3D recognition and reconstruction tasks are conducted, and the results show that our algorithm performs well on different tasks.
Affiliation(s)
- Yanzi Kong
- Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110169, China
- Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Feng Zhu
- Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110169, China
- Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- Haibo Sun
- Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110169, China
- Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110819, China
- Zhiyuan Lin
- Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110169, China
- Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Qun Wang
- Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110169, China
- Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110169, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- University of Chinese Academy of Sciences, Beijing 100049, China
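The selection rule in this entry's abstract (choose the view that minimizes the entropy of the model space) reduces to a compact greedy loop. A minimal sketch, assuming a voxel occupancy-probability model and hypothetical `predict_update` / `task_done` callbacks; this is an illustration, not the authors' implementation:

```python
import numpy as np

def model_entropy(p_occ):
    """Shannon entropy (bits) of a voxel occupancy-probability grid."""
    p = np.clip(p_occ, 1e-6, 1 - 1e-6)
    return float(np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p)))

def next_best_views(p_occ, candidates, predict_update, task_done):
    """While the task status has not reached its goal, score every
    candidate view by the entropy of the predicted model and emit the
    minimizer (all callbacks are assumed interfaces)."""
    while not task_done(p_occ):
        scores = [model_entropy(predict_update(p_occ, v)) for v in candidates]
        best = candidates[int(np.argmin(scores))]
        yield best
        # a real system would fuse actual sensor data acquired at `best`;
        # here we reuse the prediction to keep the sketch self-contained
        p_occ = predict_update(p_occ, best)
```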
2
Pan S, Hu H, Wei H. SCVP: Learning One-Shot View Planning via Set Covering for Unknown Object Reconstruction. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3140449]
3
Song S, Kim D, Choi S. View Path Planning via Online Multiview Stereo for 3-D Modeling of Large-Scale Structures. IEEE Trans Robot 2022. [DOI: 10.1109/tro.2021.3083197]
4
5
Kong Y, Zhu F, Hao Y, Sun H, Xie Y, Lin Z. An active reconstruction algorithm based on partial prior information. Int J Adv Robot Syst 2020. [DOI: 10.1177/1729881420904203]
Abstract
Active reconstruction is an intelligent perception method that achieves object modeling with few views and short motion paths by systematically adjusting the parameters of the camera while ensuring model integrity. Part of the object information is always known for vision tasks in real scenes, and it provides guidance for view planning. A two-step active reconstruction algorithm based on partial prior information is presented, which consists of a rough shape estimation phase and a complete object reconstruction phase, both of which introduce the concept of active vision. An information expression method is proposed that can be used to manually initialize the repository according to the specific visual task; the prior information and the detected information are then used to plan the next best view online until the object reconstruction is completed. The method is evaluated in simulated experiments and the results are compared with those of other methods.
Affiliation(s)
- Yanzi Kong
- Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- University of Chinese Academy of Sciences, Beijing, China
- Key Laboratory of Opto-Electronic Information Process, Shenyang, China
- The Key Laboratory of Image Understanding and Computer Vision, Shenyang, China
- Feng Zhu
- Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- Key Laboratory of Opto-Electronic Information Process, Shenyang, China
- The Key Laboratory of Image Understanding and Computer Vision, Shenyang, China
- Yingming Hao
- Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- Key Laboratory of Opto-Electronic Information Process, Shenyang, China
- The Key Laboratory of Image Understanding and Computer Vision, Shenyang, China
- Haibo Sun
- Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Northeastern University, Shenyang, China
- Yilin Xie
- Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- University of Chinese Academy of Sciences, Beijing, China
- Key Laboratory of Opto-Electronic Information Process, Shenyang, China
- The Key Laboratory of Image Understanding and Computer Vision, Shenyang, China
- Zhiyuan Lin
- Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- University of Chinese Academy of Sciences, Beijing, China
- Key Laboratory of Opto-Electronic Information Process, Shenyang, China
- The Key Laboratory of Image Understanding and Computer Vision, Shenyang, China
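The two-step structure described in the abstract above maps onto a short driver. A minimal sketch, where `match_prior`, `plan_nbv`, `is_complete`, and the model's `fuse` method are hypothetical stand-ins for the paper's components, not its actual interface:

```python
def two_step_reconstruction(prior, sense, match_prior, plan_nbv, is_complete):
    """Phase 1: take an initial scan and match it against the manually
    initialized prior repository to get a rough shape estimate.
    Phase 2: plan next best views online, fusing prior and detected
    information, until reconstruction is complete."""
    model = sense(prior.initial_view())
    rough = match_prior(model)                 # rough shape estimation phase
    while not is_complete(model, rough):       # complete reconstruction phase
        view = plan_nbv(model, rough)          # prior + detected information
        model = model.fuse(sense(view))
    return model
```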
6
Nuger E, Benhabib B. Multi-Camera Active-Vision for Markerless Shape Recovery of Unknown Deforming Objects. J Intell Robot Syst 2018. [DOI: 10.1007/s10846-018-0773-0]
7
8
Vasquez-Gomez JI, Sucar LE, Murrieta-Cid R, Herrera-Lozada JC. Tree-based search of the next best view/state for three-dimensional object reconstruction. Int J Adv Robot Syst 2018. [DOI: 10.1177/1729881418754575]
Abstract
Three-dimensional models of real objects have many applications in robotics. To automatically build a three-dimensional model of an object, it is essential to determine where to place the range sensor so that it completely observes the object. However, the view (position and orientation) of the sensor is not sufficient, given that the corresponding robot state also needs to be calculated, along with a collision-free trajectory to reach that state. In this article, we directly find the state of the robot whose corresponding sensor view observes the object, so the method does not require calculating the inverse kinematics of the robot. Unlike previous approaches, the proposed method guides the search with a tree structure based on a rapidly exploring random tree, outperforming previous sampling techniques. In addition, we propose an information metric that improves reconstruction performance over previous information metrics.
Affiliation(s)
- J Irving Vasquez-Gomez
- Consejo Nacional de Ciencia y Tecnología - Instituto Politécnico Nacional, Centro de Innovación y Desarrollo Tecnológico en Cómputo, Ciudad de México, México
- L Enrique Sucar
- Instituto Nacional de Astrofísica, Óptica y Electrónica, Puebla, Mexico
- Juan-Carlos Herrera-Lozada
- Instituto Politécnico Nacional, Centro de Innovación y Desarrollo Tecnológico en Cómputo, Ciudad de México, Mexico
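The tree-guided search can be read as an RRT-style expansion scored by an information metric. A generic sketch under assumed callbacks (`sample_q`, `steer`, `collision_free`, `info_gain`, `dist`); configurations are assumed hashable (e.g., tuples), and this is not the authors' exact algorithm:

```python
def nbv_state_search(q_init, sample_q, steer, collision_free,
                     info_gain, dist, iters=500):
    """Grow a rapidly exploring random tree directly in robot state
    space; every collision-free node is scored by the information its
    sensor view would gain, and the tree path to the best-scoring
    state is returned (so no inverse kinematics is needed)."""
    tree = {q_init: None}                       # node -> parent
    best, best_gain = q_init, info_gain(q_init)
    for _ in range(iters):
        q_rand = sample_q()
        q_near = min(tree, key=lambda q: dist(q, q_rand))
        q_new = steer(q_near, q_rand)
        if q_new not in tree and collision_free(q_near, q_new):
            tree[q_new] = q_near
            gain = info_gain(q_new)
            if gain > best_gain:
                best, best_gain = q_new, gain
    path, q = [], best                          # walk parents back to the root
    while q is not None:
        path.append(q)
        q = tree[q]
    return path[::-1]
```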
9
Delmerico J, Isler S, Sabzevari R, Scaramuzza D. A comparison of volumetric information gain metrics for active 3D object reconstruction. Auton Robots 2017. [DOI: 10.1007/s10514-017-9634-0]
10
Monica R, Aleotti J. Contour-based next-best view planning from point cloud segmentation of unknown objects. Auton Robots 2017. [DOI: 10.1007/s10514-017-9618-0]
11
Karaszewski M, Stępień M, Sitnik R. Two-stage automated measurement process for high-resolution 3D digitization of unknown objects. Applied Optics 2016;55:8162-8170. [PMID: 27828058; DOI: 10.1364/ao.55.008162]
Abstract
In this paper, a process for high-resolution, automated 3D digitization of unknown objects (i.e., objects without any digital model) is presented. The process has two stages: the first produces a coarse 3D digital model of the object, and the second obtains the final model. The rough model, acquired by a 3D measurement head with a large working volume and relatively low resolution, is used to calculate the precise head positions required for full digitization of the object, as well as for collision detection and avoidance. We show that this approach is much more efficient than digitization with only a precise head, whose positions for subsequent measurements (so-called next-best-views) must be calculated from an only partially recovered 3D model of the object. We also show how using a rough object representation for collision detection shortens the high-resolution digitization process.
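The coarse-to-fine pipeline reduces to a short driver. A minimal sketch, assuming hypothetical `coarse_head` / `fine_head` scanner objects and `plan_poses` / `collides` helpers:

```python
def two_stage_digitization(coarse_head, fine_head, plan_poses, collides):
    """Stage 1: one low-resolution scan of the whole working volume
    yields a rough model. Stage 2: all precise head poses are computed
    from that rough model up front, and the rough model doubles as the
    collision-avoidance map for the high-resolution pass."""
    rough = coarse_head.scan_all()
    poses = [p for p in plan_poses(rough) if not collides(p, rough)]
    return rough, [fine_head.scan(p) for p in poses]
```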
12
Wang C, Qi F, Shi G, Wang X. A sparse representation-based deployment method for optimizing the observation quality of camera networks. Sensors 2013;13:11453-11475. [PMID: 23989826; PMCID: PMC3821344; DOI: 10.3390/s130911453]
Abstract
Deployment is a critical issue affecting the quality of service of camera networks. Deployment aims to cover the whole scene, which may contain obstacles that occlude the line of sight, with the fewest cameras while meeting an expected observation quality. This is generally formulated as a non-convex optimization problem, which is hard to solve in polynomial time. In this paper, we propose an efficient convex solution for deployment that optimizes the observation quality based on a novel anisotropic sensing model of cameras, which provides a reliable measurement of the observation quality. Deployment is formulated as the selection of a subset of nodes from a redundant initial deployment with numerous cameras, which is an ℓ0 minimization problem. We then relax this non-convex optimization to a convex ℓ1 minimization employing sparse representation, so a high-quality deployment is obtained efficiently via convex optimization. Simulation results confirm the effectiveness of the proposed camera deployment algorithms.
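Because the camera-selection vector is nonnegative, the ℓ1 relaxation described above becomes a plain linear program. A minimal SciPy sketch; the quality matrix, requirement vector, and 0.5 threshold are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.optimize import linprog

def select_cameras(A, q):
    """A[i, j]: observation quality that candidate camera j provides at
    scene point i (from some anisotropic sensing model); q[i]: required
    quality. For x >= 0, ||x||_1 = sum(x), so the relaxed problem is
    the LP: min sum(x) s.t. A x >= q, 0 <= x <= 1. The sparse solution
    is thresholded to pick the deployed subset."""
    n = A.shape[1]
    res = linprog(c=np.ones(n), A_ub=-A, b_ub=-q, bounds=[(0, 1)] * n)
    return np.flatnonzero(res.x > 0.5)

# toy usage: 3 scene points, 4 candidate cameras
A = np.array([[1.0, 0.2, 0.0, 0.3],
              [0.0, 0.9, 0.1, 0.4],
              [0.2, 0.0, 0.8, 0.5]])
print(select_cameras(A, q=np.array([0.5, 0.5, 0.5])))
```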
13
Hollinger GA, Englot B, Hover FS, Mitra U, Sukhatme GS. Active planning for underwater inspection and the benefit of adaptivity. Int J Rob Res 2012. [DOI: 10.1177/0278364912467485]
Abstract
We discuss the problem of inspecting an underwater structure, such as a submerged ship hull, with an autonomous underwater vehicle (AUV). Unlike a large body of prior work, we focus on planning the views of the AUV to improve the quality of the inspection, rather than maximizing the accuracy of a given data stream. We formulate the inspection planning problem as an extension to Bayesian active learning, and we show connections to recent theoretical guarantees in this area. We rigorously analyze the benefit of adaptive re-planning for such problems, and we prove that the potential benefit of adaptivity can be reduced from an exponential to a constant factor by changing the problem from cost minimization with a constraint on information gain to variance reduction with a constraint on cost. Such analysis allows the use of robust, non-adaptive planning algorithms that perform competitively with adaptive algorithms. Based on our analysis, we propose a method for constructing 3D meshes from sonar-derived point clouds, and we introduce uncertainty modeling through non-parametric Bayesian regression. Finally, we demonstrate the benefit of active inspection planning using sonar data from ship hull inspections with the Bluefin-MIT Hovering AUV.
Affiliation(s)
- Geoffrey A Hollinger
- Department of Computer Science, Viterbi School of Engineering, University of Southern California, USA
- Brendan Englot
- Center for Ocean Engineering, Department of Mechanical Engineering, Massachusetts Institute of Technology, USA
- Franz S Hover
- Center for Ocean Engineering, Department of Mechanical Engineering, Massachusetts Institute of Technology, USA
- Urbashi Mitra
- Department of Electrical Engineering, Viterbi School of Engineering, University of Southern California, USA
- Gaurav S Sukhatme
- Department of Computer Science, Viterbi School of Engineering, University of Southern California, USA
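The reformulated objective (variance reduction under a cost constraint) admits a simple non-adaptive planner of the kind the abstract's analysis justifies. A minimal greedy sketch, with `var_reduction` and `cost` as assumed callbacks; the paper's planners are more sophisticated:

```python
def nonadaptive_plan(candidates, var_reduction, cost, budget):
    """Greedily add the view with the largest positive marginal
    variance reduction that still fits the cost budget; greedy
    selection is a standard heuristic when the objective behaves
    (approximately) submodularly, as in Bayesian active learning."""
    chosen, spent = [], 0.0
    pool = list(candidates)
    while pool:
        feasible = [v for v in pool if spent + cost(v) <= budget]
        if not feasible:
            break
        base = var_reduction(chosen)
        v = max(feasible, key=lambda u: var_reduction(chosen + [u]) - base)
        if var_reduction(chosen + [v]) - base <= 0:
            break                      # no view still improves the objective
        chosen.append(v)
        spent += cost(v)
        pool.remove(v)
    return chosen
```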
14
Motion analysis of live objects by super-resolution fluorescence microscopy. Comput Math Methods Med 2012;2012:859398. [PMID: 22162725; PMCID: PMC3227432; DOI: 10.1155/2012/859398]
Abstract
Motion analysis plays an important role in studying the activities and behaviors of live objects in medicine, biotechnology, chemistry, physics, spectroscopy, nanotechnology, enzymology, and biological engineering. This paper briefly reviews developments in this area, mostly from the last three years, with particular attention to cellular analysis in fluorescence microscopy. The topic has received much attention with the increasing demands of biomedical applications. The tasks of motion analysis include detection and tracking of objects, as well as analysis of motion behavior, living activity, events, and motion statistics. Over the last decades, hundreds of papers have been published on this topic, covering a wide area that includes investigations of cells, cancers, viruses, sperm, microbes, and karyograms. These contributions are summarized in this review, and developed methods and practical examples are introduced. The review offers researchers in related fields a convenient reference to the state of the art.
15
Abstract
We present an integrated and fully autonomous eye-in-hand system for 3D object modeling. The system hardware consists of a laser range scanner mounted on a six-DOF manipulator arm, and the task is to autonomously build a 3D model of an object in situ, where the object may not be moved and must be scanned in its original location. Our system assumes no knowledge of object shape or geometry other than that it lies within a bounding box whose location and size are known a priori; furthermore, the environment is unknown. The overall planner integrates the three main algorithms in the system: one that finds the next best view (NBV) for modeling the object; one that finds the NBV for exploration, i.e. exploring the environment so the arm can move to the modeling view pose; and a sensor-based path planner that finds a collision-free path to the view configuration determined by either of the two view planners. Our modeling NBV algorithm efficiently searches the five-dimensional view space to determine the best modeling viewpoint while considering key constraints such as field of view (FOV), overlap, and occlusion. If the determined viewpoint is reachable, the sensor-based path planner determines a collision-free path to move the manipulator to the desired view configuration, and a scan of the object is taken. Since the workspace is initially unknown, in some phases the exploration view planner is used to gather information about the reachability and status of the modeling view configurations, which may lie in unknown workspace. This is repeated until the object modeling is complete or the planner deems that no further progress can be made, and the system stops. We have implemented the system with a six-DOF PowerCube arm and a wrist-mounted Hokuyo URG-04LX laser scanner. Our results show that the system is able to autonomously build a 3D model of an object in situ in an unknown environment.
Affiliation(s)
- Liila Torabi
- School of Engineering Science, Simon Fraser University, Burnaby, Canada
- Kamal Gupta
- School of Engineering Science, Simon Fraser University, Burnaby, Canada
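The overall planner that integrates the three algorithms can be sketched as a loop. `modeling_nbv`, `exploration_nbv`, `plan_and_move`, and `scan` below are assumed interfaces standing in for the system's components, not its actual API:

```python
def autonomous_modeling(model, workspace, modeling_nbv, exploration_nbv,
                        plan_and_move, scan):
    """Alternate between the modeling NBV and the exploration NBV:
    if no collision-free path reaches the modeling view (it may lie in
    unknown workspace), explore to gain reachability information; stop
    when the model is complete or neither planner can make progress."""
    while not model.complete():
        view = modeling_nbv(model)              # NBV for modeling
        if not plan_and_move(workspace, view):  # sensor-based path planner
            view = exploration_nbv(workspace)   # NBV for exploration
            if not plan_and_move(workspace, view):
                break                           # no further progress possible
        model, workspace = scan(view)           # scan, update model and map
    return model
```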
16
Abstract
In this paper we provide a broad survey of developments in active vision in robotic applications over the last 15 years. With increasing demand for robotic automation, research in this area has received much attention. Among the many factors that contribute to a high-performance robotic system, the planned sensing or acquisition of perceptions of the operating environment is a crucial component. The aim of sensor planning is to determine the pose and settings of vision sensors for undertaking a vision-based task, which usually requires obtaining multiple views of the object to be manipulated. Planning for robot vision is a complex problem for an active system due to its sensing uncertainty and environmental uncertainty. This paper describes such problems arising from many applications, e.g. object recognition and modeling, site reconstruction and inspection, surveillance, tracking and search, as well as robotic manipulation and assembly, localization and mapping, and navigation and exploration. A range of solutions and methods has been proposed to solve these problems, and they are summarized in this review so that readers can easily locate solution methods for practical applications. Representative contributions, their evaluations and analyses, and future research trends are also addressed at an abstract level.
17
Panetta K, Agaian S, Zhou Y, Wharton EJ. Parameterized Logarithmic Framework for Image Enhancement. IEEE Trans Syst Man Cybern B Cybern 2011;41:460-473. [DOI: 10.1109/tsmcb.2010.2058847]
18
Cai C, Ferrari S. Information-Driven Sensor Path Planning by Approximate Cell Decomposition. IEEE Trans Syst Man Cybern B Cybern 2009;39:672-689. [DOI: 10.1109/tsmcb.2008.2008561]
19
20
Chen H, Li Y. Dynamic View Planning by Effective Particles for Three-Dimensional Tracking. IEEE Trans Syst Man Cybern B Cybern 2009;39:242-253. [DOI: 10.1109/tsmcb.2008.2005113]
21
Panetta KA, Wharton EJ, Agaian SS. Human visual system-based image enhancement and logarithmic contrast measure. IEEE Trans Syst Man Cybern B Cybern 2008;38:174-188. [PMID: 18270089; DOI: 10.1109/tsmcb.2007.909440]
Abstract
Varying scene illumination poses many challenging problems for machine vision systems. One such issue is developing global enhancement methods that work effectively across the varying illumination. In this paper, we introduce two novel image enhancement algorithms: edge-preserving contrast enhancement, which is able to better preserve edge details while enhancing contrast in images with varying illumination, and a novel multihistogram equalization method which utilizes the human visual system (HVS) to segment the image, allowing a fast and efficient correction of nonuniform illumination. We then extend this HVS-based multihistogram equalization approach to create a general enhancement method that can utilize any combination of enhancement algorithms for an improved performance. Additionally, we propose new quantitative measures of image enhancement, called the logarithmic Michelson contrast measure (AME) and the logarithmic AME by entropy. Many image enhancement methods require selection of operating parameters, which are typically chosen using subjective methods, but these new measures allow for automated selection. We present experimental results for these methods and make a comparison against other leading algorithms.
Affiliation(s)
- Karen A Panetta
- Department of Electrical and Computer Engineering, Tufts University, Medford, MA 02155, USA.
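A block-based logarithmic Michelson contrast in the spirit of the AME measure can be sketched as follows; the paper's exact normalization, sign convention, and entropy variant may differ, so treat this only as an illustration:

```python
import numpy as np

def log_michelson_contrast(img, k1=8, k2=8, eps=1e-4):
    """Split a grayscale image into k1 x k2 blocks and average the
    logarithm of each block's Michelson contrast
    (Imax - Imin) / (Imax + Imin); eps guards flat blocks. Higher
    (less negative) values indicate stronger local contrast, which is
    what enables automated parameter selection."""
    h, w = img.shape
    bh, bw = h // k1, w // k2
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            b = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].astype(float)
            imax, imin = b.max(), b.min()
            total += 20 * np.log10((imax - imin + eps) / (imax + imin + eps))
    return total / (k1 * k2)
```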
22
Chen SY, Li YF, Zhang J. Vision processing for realtime 3-D data acquisition based on coded structured light. IEEE Trans Image Process 2008;17:167-176. [PMID: 18270109; DOI: 10.1109/tip.2007.914755]
Abstract
Structured light vision systems have been used successfully for accurate measurement of 3-D surfaces in computer vision. However, their applications have so far been limited mainly to scanning stationary objects, since tens of images have to be captured to recover one 3-D scene. This paper presents an idea for real-time acquisition of 3-D surface data by a specially coded vision system. To achieve 3-D measurement of a dynamic scene, data acquisition must be performed with only a single image. A principle of uniquely color-encoded pattern projection is proposed to design a color matrix that improves reconstruction efficiency. The matrix is produced by a special code sequence and a number of state transitions. A color projector is controlled by a computer to generate the desired color patterns in the scene. The unique indexing of the light codes is crucial here, since it is essential that each light grid be uniquely identified from its local neighborhood, so that 3-D reconstruction can be performed with only local analysis of a single image. A scheme is presented to describe such a vision processing method for fast 3-D data acquisition, and practical experimental results are provided to analyze the efficiency of the proposed methods.
Affiliation(s)
- S Y Chen
- College of Information Engineering, Zhejiang University of Technology, 310014 Hangzhou, China.
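The key property, that each light grid is identifiable from its local neighborhood alone, amounts to a color sequence whose length-n windows are all distinct. The paper builds its matrix from a special code sequence and state transitions; the greedy stand-in below illustrates only the window-uniqueness idea:

```python
def unique_window_code(colors, n):
    """Greedily extend a sequence over the color alphabet so that every
    window of n consecutive colors occurs at most once; any stripe can
    then be indexed by decoding just its n-neighborhood in one image."""
    seq = [colors[0]] * n
    seen = {tuple(seq)}
    while True:
        for c in colors:
            window = tuple(seq[-(n - 1):]) + (c,)
            if window not in seen:
                seen.add(window)
                seq.append(c)
                break
        else:
            return seq          # no color yields an unseen window

stripes = unique_window_code(["R", "G", "B", "C", "M", "Y"], n=3)
print(len(stripes), stripes[:12])
```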
23
Bakhtari A, Benhabib B. An Active Vision System for Multitarget Surveillance in Dynamic Environments. IEEE Trans Syst Man Cybern B Cybern 2007;37:190-198. [PMID: 17278571; DOI: 10.1109/tsmcb.2006.883423]
Abstract
This paper presents a novel agent-based method for the dynamic coordinated selection and positioning of active-vision cameras for the simultaneous surveillance of multiple objects of interest as they travel through a cluttered environment with a priori unknown trajectories. The proposed system dynamically adjusts not only the orientation but also the position of the cameras in order to maximize the system's performance by avoiding occlusions and acquiring images with preferred viewing angles. Sensor selection and positioning are accomplished through an agent-based approach. The proposed sensing-system reconfiguration strategy has been verified via simulations and implemented on an experimental prototype setup for automated facial recognition. Both simulations and experimental analyses have shown that the use of dynamic sensors along with an effective online dispatching strategy can tangibly improve the surveillance performance of a sensing system.