1. Jiang L. Information Visualization Based on Visual Transmission and Multimedia Data Fusion. International Journal of Information Technologies and Systems Approach 2023. DOI: 10.4018/ijitsa.320229
Abstract
With the rapid development of information technology, the application media of visual identity design have broadened greatly, and the demands on visual dynamism and interactivity have risen accordingly. The introduction of data as a design element has brought new semantic possibilities to visual design. To address noisy image reconstruction and the large number of outliers in traditional art design, this paper uses an improved phase correlation algorithm to reconstruct multimedia video images. In addition, based on the visual characteristics of the human eye, a multi-feature-fusion viewpoint image quality evaluation algorithm is proposed. Simulation results show that the method improves the uneven image texture found in art design and substantially enhances its realism.
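The abstract does not say what the "improved" variant changes, so as background only, here is a minimal NumPy sketch of standard phase correlation, the base technique: the normalized cross-power spectrum of two translated images inverse-transforms to a peak at the shift.

```python
import numpy as np

def phase_correlation(a: np.ndarray, b: np.ndarray):
    """Estimate the integer (dy, dx) translation of image a relative to b.

    The normalized cross-power spectrum of two shifted images is a pure
    phase term whose inverse FFT is a delta located at the shift.
    """
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12            # whiten: keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                  # map peak indices to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

# toy check: roll an image by (5, -3) and recover the offset
img = np.random.default_rng(0).random((128, 128))
print(phase_correlation(np.roll(img, (5, -3), axis=(0, 1)), img))  # -> (5, -3)
```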
2. Andolfo S, Petricca F, Genova A. Precise pose estimation of the NASA Mars 2020 Perseverance rover through a stereo-vision-based approach. J Field Robot 2022. DOI: 10.1002/rob.22138
Affiliation(s)
- Simone Andolfo, Flavio Petricca, Antonio Genova: Department of Mechanical and Aerospace Engineering, Sapienza University of Rome, Rome, Italy
3.
4. Dagli MM, Rajesh A, Asaad M, Butler CE. The Use of Artificial Intelligence and Machine Learning in Surgery: A Comprehensive Literature Review. Am Surg 2021. PMID: 34958252. DOI: 10.1177/00031348211065101
Abstract
Interest in the use of artificial intelligence (AI) and machine learning (ML) in medicine has grown exponentially over the last few years. With its ability to enhance speed, precision, and efficiency, AI has immense potential, especially in the field of surgery. This article aims to provide a comprehensive literature review of artificial intelligence as it applies to surgery and discuss practical examples, current applications, and challenges to the adoption of this technology. Furthermore, we elaborate on the utility of natural language processing and computer vision in improving surgical outcomes, research, and patient care.
Affiliation(s)
- Aashish Rajesh: Department of Surgery, University of Texas Health Science Center, San Antonio, TX, USA
- Malke Asaad, Charles E Butler: Department of Plastic & Reconstructive Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
5. Gerdes L, Azkarate M, Sánchez-Ibáñez JR, Joudrier L, Perez-del-Pulgar CJ. Efficient autonomous navigation for planetary rovers with limited resources. J Field Robot 2020. DOI: 10.1002/rob.21981
Affiliation(s)
- Levin Gerdes: Automation and Robotics Section, European Space Agency, Noordwijk, The Netherlands
- Martin Azkarate: Automation and Robotics Section, European Space Agency, Noordwijk, The Netherlands; Systems Engineering and Automation Department, Universidad de Málaga, Málaga, Spain
- Luc Joudrier: Automation and Robotics Section, European Space Agency, Noordwijk, The Netherlands
6.
Abstract
Field Programmable Gate Arrays (FPGAs) are general-purpose programmable logic devices that can be configured by a customer after manufacturing to implement anything from simple logic-gate operations to complex systems-on-chip or even artificial intelligence systems. Scientific publications related to FPGAs began appearing in 1992 and, to date, we found more than 70,000 documents in the two leading scientific databases (Scopus and Clarivate Web of Science). These publications show the vast range of applications based on FPGAs, from the new mechanism that enables the magnetic suspension system for the kilogram redefinition to the Mars rovers' navigation systems. This paper reviews the top FPGA applications through a scientometric analysis in ScientoPy, covering publications from 1992 to 2018. We identify the top 150 applications, divided into the following categories: digital control, communication interfaces, networking, computer security, cryptography techniques, machine learning, digital signal processing, image and video processing, big data, computer algorithms and other applications. We also present an evolution and trend analysis of these applications.
7. Liu J, Ren X, Yan W, Li C, Zhang H, Jia Y, Zeng X, Chen W, Gao X, Liu D, Tan X, Zhang X, Ni T, Zhang H, Zuo W, Su Y, Wen W. Descent trajectory reconstruction and landing site positioning of Chang'E-4 on the lunar farside. Nat Commun 2019; 10:4229. PMID: 31551413. PMCID: PMC6760200. DOI: 10.1038/s41467-019-12278-3
Abstract
Chang'E-4 (CE-4) was the first mission to accomplish a successful soft landing on the lunar farside. The landing trajectory and the location of the landing site can be effectively reconstructed and determined from the series of images obtained during descent, since no Earth-based radio tracking or telemetry data were available. Here we reconstruct the powered descent trajectory of CE-4 using photogrammetrically processed images from the CE-4 landing camera and navigation camera, together with terrain data from Chang'E-2. We confirm that the precise location of the landing site is 177.5991°E, 45.4446°S, with an elevation of -5935 m. The landing location was accurately identified using lunar imagery and terrain data with spatial resolutions of 7 m/pixel, 5 m/pixel, 1 m/pixel, 10 cm/pixel and 5 cm/pixel. These results provide geodetic data for the study of lunar control points, high-precision lunar mapping, and subsequent lunar exploration, such as by the Yutu-2 rover.
Affiliation(s)
- Jianjun Liu, Chunlai Li, Tao Ni: Key Laboratory of Lunar and Deep Space Exploration, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China; School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China
- Xin Ren, Wei Yan, Xingguo Zeng, Wangli Chen, Xingye Gao, Dawei Liu, Xu Tan, Xiaoxia Zhang, Hongbo Zhang, Wei Zuo, Yan Su, Weibin Wen: Key Laboratory of Lunar and Deep Space Exploration, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China
- He Zhang, Yang Jia: China Academy of Space Technology, Beijing 100094, China
8. Gonzalez R, Iagnemma K. Slippage estimation and compensation for planetary exploration rovers. State of the art and future challenges. J Field Robot 2017. DOI: 10.1002/rob.21761
Affiliation(s)
- Ramon Gonzalez, Karl Iagnemma: Robotic Mobility Group, Massachusetts Institute of Technology, Cambridge, Massachusetts
9. Ejiri R, Kubota T, Nakatani I. Vision-Based Behavior Planning for Lunar or Planetary Exploration Rover on Flat Surface. Journal of Robotics and Mechatronics 2017. DOI: 10.20965/jrm.2017.p0847
Abstract
Lunar or planetary exploration rovers are expected to have the ability to move across an area as wide as possible in an unknown environment during a limited mission period. Hence, they need an efficient navigation method. Most of the surface of the moon or planets consists of flat ground, sand, and scattered rocks. In a simple flat sandy terrain with some rocks, rough route planning is sufficient for a lunar or planetary rover to avoid obstacles and reach an assigned point. This paper proposes an efficient vision-based planning scheme for exploration rovers on a flat surface with scattered obstacles. In the proposed scheme, dangerous areas are robustly extracted by processing image data, and the degree of danger is defined. A rough routing plan and sensing plan are simultaneously constructed based on the dangerous-area extraction results. The effectiveness of the proposed scheme is discussed based on the results of some simulations and simple experiments.
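The abstract describes the danger-degree map and route planning only qualitatively. As an illustrative sketch (not the authors' scheme), the code below derives a degree-of-danger map from a binary obstacle mask with a distance transform and plans a route with Dijkstra over the resulting cost grid; the safe-distance and penalty constants are invented.

```python
import heapq
import numpy as np
from scipy.ndimage import distance_transform_edt

def danger_map(obstacles: np.ndarray, safe_dist: float = 5.0) -> np.ndarray:
    """Degree of danger in [0, 1]: 1 on obstacles, decaying with distance."""
    d = distance_transform_edt(~obstacles)      # distance to the nearest obstacle
    return np.clip(1.0 - d / safe_dist, 0.0, 1.0)

def plan(cost: np.ndarray, start, goal):
    """Dijkstra over a 4-connected grid; cell cost = 1 + weighted danger.

    Assumes the goal is reachable through cells with danger < 1.
    """
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                              # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and cost[nr, nc] < 1.0:
                nd = d + 1.0 + 10.0 * cost[nr, nc]  # penalize risky cells
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal                         # walk back from goal to start
    while node != start:
        path.append(node)
        node = prev[node]
    return path[::-1]

grid = np.zeros((50, 50), dtype=bool)
grid[20:30, 10:40] = True                         # a rock field
route = plan(danger_map(grid), start=(0, 0), goal=(49, 49))
```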
10. Gu Y, Ohi N, Lassak K, Strader J, Kogan L, Hypes A, Harper S, Hu B, Gramlich M, Kavi R, Watson R, Cheng M, Gross J. Cataglyphis: An autonomous sample return rover. J Field Robot 2017. DOI: 10.1002/rob.21737
Affiliation(s)
- Yu Gu, Kyle Lassak, Lisa Kogan, Boyi Hu, Rahul Kavi, Ryan Watson, Jason Gross: West Virginia University, Morgantown, WV 26506
11. Zhang L, Zhu F, Hao Y, Pan W. Optimization-based non-cooperative spacecraft pose estimation using stereo cameras during proximity operations. Applied Optics 2017; 56:4522-4531. PMID: 29047884. DOI: 10.1364/ao.56.004522
Abstract
Pose estimation for spacecraft is widely recognized as an important technology for space applications. Many space missions require an accurate relative pose between the chaser and the target spacecraft, and stereo vision is a common means of estimating the pose of non-cooperative targets during proximity operations. However, the uncertainty of stereo-vision measurements remains an outstanding issue. Using the binocular structure together with the geometric structure of the object, we present a robust pose estimation method for non-cooperative spacecraft. Because the solar panel provides strict geometric constraints, our approach uses its corner points as features. After stereo matching, an optimization-based method estimates the relative pose between the two spacecraft. Simulation results show that our method improves the precision and robustness of pose estimation, with a maximum 3D localization error below 5% and a relative rotation angle error below 1°. Laboratory experiments further validate the method.
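The authors' optimization is not detailed in the abstract; the sketch below shows the underlying geometry under simplifying assumptions: triangulate matched solar-panel corners from a rectified stereo pair, then recover the rigid transform to a known corner model with SVD-based point-set alignment (the Kabsch algorithm). The camera parameters are placeholders.

```python
import numpy as np

# assumed rectified stereo rig: focal length f (px), principal point (cx, cy), baseline B (m)
f, cx, cy, B = 800.0, 320.0, 240.0, 0.5

def triangulate(uv_left: np.ndarray, uv_right: np.ndarray) -> np.ndarray:
    """Depth from disparity for a rectified pair; returns Nx3 points in the left frame."""
    disparity = uv_left[:, 0] - uv_right[:, 0]
    Z = f * B / disparity
    X = (uv_left[:, 0] - cx) * Z / f
    Y = (uv_left[:, 1] - cy) * Z / f
    return np.column_stack([X, Y, Z])

def rigid_transform(model: np.ndarray, observed: np.ndarray):
    """Least-squares R, t with observed ~ R @ model + t (Kabsch algorithm)."""
    mc, oc = model.mean(axis=0), observed.mean(axis=0)
    H = (model - mc).T @ (observed - oc)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = oc - R @ mc
    return R, t
```

Matched corner pixels in both cameras would come from the stereo matcher; `rigid_transform` then aligns the known panel corner model to the triangulated points.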
12. Shaukat A, Blacker PC, Spiteri C, Gao Y. Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis. Sensors 2016; 16:1952. PMID: 27879625. PMCID: PMC5134611. DOI: 10.3390/s16111952
Abstract
In recent decades, terrain modelling and reconstruction techniques have attracted increasing research interest for precise short- and long-distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors such as Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification and depth estimation in terrestrial robotics, but is still under development as a viable technology for space robotics. This paper first reviews current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDAR in terrestrial robotics; we then propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either sensor alone for planetary exploration. A comprehensive analysis demonstrates the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation.
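As a concrete flavor of camera-LIDAR fusion at the data level, this sketch projects a LIDAR point cloud into a camera image to sample per-point colors; the intrinsics K and extrinsics (R, t) are assumed values, not from the paper.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])          # assumed camera intrinsics
R = np.eye(3)                                # assumed LIDAR-to-camera rotation
t = np.array([0.1, 0.0, 0.0])                # assumed LIDAR-to-camera translation (m)

def project_lidar(points_lidar: np.ndarray, image: np.ndarray):
    """Return pixel coordinates and sampled colors for LIDAR points in view."""
    pc = points_lidar @ R.T + t              # transform into the camera frame
    pc = pc[pc[:, 2] > 0.1]                  # keep points in front of the camera
    uv = pc @ K.T                            # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    px = uv[ok].astype(int)
    return px, image[px[:, 1], px[:, 0]]     # (u, v) pairs and their colors
```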
Affiliation(s)
- Affan Shaukat, Peter C Blacker, Conrad Spiteri, Yang Gao: Surrey Space Centre, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH, UK
13. Karumanchi S, Edelberg K, Baldwin I, Nash J, Reid J, Bergh C, Leichty J, Carpenter K, Shekels M, Gildner M, Newill-Smith D, Carlton J, Koehler J, Dobreva T, Frost M, Hebert P, Borders J, Ma J, Douillard B, Backes P, Kennedy B, Satzinger B, Lau C, Byl K, Shankar K, Burdick J. Team RoboSimian: Semi-autonomous Mobile Manipulation at the 2015 DARPA Robotics Challenge Finals. J Field Robot 2016. DOI: 10.1002/rob.21676
Affiliation(s)
- Sisir Karumanchi, Kyle Edelberg, Ian Baldwin, Jeremy Nash, Jason Reid, Charles Bergh, John Leichty, Kalind Carpenter, Matthew Shekels, Matthew Gildner, David Newill-Smith, Jason Carlton, John Koehler, Tatyana Dobreva, Matthew Frost, Paul Hebert, James Borders, Jeremy Ma, Bertrand Douillard, Paul Backes, Brett Kennedy: Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, California 91109
- Chelsea Lau, Katie Byl: University of California, Santa Barbara, California 93106
- Krishna Shankar, Joel Burdick: California Institute of Technology, 1200 East California Boulevard, Pasadena, California 91125
14.
15. Ilyas M, Hong B, Cho K, Baeg SH, Park S. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering. Sensors 2016; 16:749. PMID: 27223293. PMCID: PMC4883439. DOI: 10.3390/s16050749
Abstract
This paper provides algorithms for fusing relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to obtain more accurate estimates of navigation information, specifically attitude and position. Planetary rovers move extremely slowly (~1 cm/s) and lack conventional navigation sensors/systems, so general terrestrial navigation methods may not apply. While relative attitude and position can be tracked much as for ground robots, absolute navigation information is hard to obtain on a remote celestial body such as the Moon or Mars. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a nonlinear filtering framework. The Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only onboard low-cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and sensor suite for low-cost, low-weight micro planetary rovers, and demonstrated that integrating the relative and absolute navigation MEMS sensors reduces navigation errors to the desired level.
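The paper's full EKF/UKF formulations are not reproduced in the abstract. For the heading channel alone, the relative/absolute fusion reduces to a nearly linear problem; the sketch below is a one-state Kalman filter that integrates a gyro rate and corrects with intermittent absolute heading fixes. All noise levels are invented for illustration.

```python
import numpy as np

def wrap(a: float) -> float:
    """Keep angles in (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

class HeadingKF:
    """One-state KF: predict with gyro rate, update with absolute heading."""
    def __init__(self, q_gyro: float = 1e-4, r_abs: float = 4e-4):
        self.psi, self.P = 0.0, 1.0          # heading estimate and its variance
        self.q, self.r = q_gyro, r_abs

    def predict(self, gyro_rate: float, dt: float):
        self.psi = wrap(self.psi + gyro_rate * dt)
        self.P += self.q * dt                # gyro noise grows the uncertainty

    def update(self, psi_abs: float):
        K = self.P / (self.P + self.r)       # Kalman gain
        self.psi = wrap(self.psi + K * wrap(psi_abs - self.psi))
        self.P *= (1.0 - K)

kf = HeadingKF()
rng = np.random.default_rng(0)
true_psi = 0.3
for k in range(1000):
    kf.predict(gyro_rate=rng.normal(0.0, 0.01), dt=0.1)   # stationary rover, noisy gyro
    if k % 100 == 0:                                      # sparse absolute fixes
        kf.update(true_psi + rng.normal(0.0, 0.02))
```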
Affiliation(s)
- Muhammad Ilyas, Beomjin Hong: Department of Robotics and Virtual Engineering, Korea University of Science and Technology (UST), Daejeon 305-333, Korea
- Kuk Cho, Seung-Ho Baeg, Sangdeok Park: Robotics R&BD Group, Korea Institute of Industrial Technology (KITECH), Ansan 426-791, Korea
16. Autonomy for ground-level robotic space exploration: framework, simulation, architecture, algorithms and experiments. Robotica 2016. DOI: 10.1017/s0263574714001428
Abstract
Robotic surface planetary exploration is a challenging endeavor, with critical safety requirements and severe communication constraints. Autonomous navigation is one of the most crucial and yet risky aspects of these operations, so a certain level of local autonomy for onboard robots is essential: it lets them make decisions independently of ground control, reducing operational costs and maximizing the scientific return of the mission. In addition, existing tools to support research in this domain are usually proprietary to space agencies and out of reach of most researchers. This paper presents a framework developed to support research in this field, a modular onboard software architecture design, and a series of algorithms that implement a vision-based autonomous navigation approach for robotic space exploration. It allows analysis of algorithm performance, functional validation of approaches and autonomy strategies, data monitoring, and the creation of simulation models that replicate the vehicle, sensors, terrain and operational conditions. The framework and algorithms are partly supported by open-source packages and tools. A set of experiments and field tests with a physical robot and hardware is also described, detailing results and algorithm processing times, which increase by one order of magnitude when executed on space-certified-like hardware with constrained resources, compared to general-purpose hardware.
17.
Abstract
Our ever-growing knowledge about the human brain and human behavior is opening doors to increasingly impressive technological achievements. This neurobiological inspiration has a significant history and involves almost equal parts neuroscience, computation and art. With a focus on the sense of vision, this essay presents a selective and highly condensed snapshot of the history of how neurobiology has inspired technological developments, pointing the way to where new inspirations may lead.
18. Lucid Workspace for Stereo Vision. J Intell Robot Syst 2015. DOI: 10.1007/s10846-014-0083-0
19.
20. García GJ, Jara CA, Pomares J, Alabdo A, Poggi LM, Torres F. A survey on FPGA-based sensor systems: towards intelligent and reconfigurable low-power sensors for computer vision, control and signal processing. Sensors 2014; 14:6247-6278. PMID: 24691100. PMCID: PMC4029637. DOI: 10.3390/s140406247
Abstract
The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution while decreasing size and power consumption. Field Programmable Gate Arrays (FPGAs) provide specific reprogrammable hardware technology that can be exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfigurability at very low power consumption. For highly demanding tasks, FPGAs have been favored due to the high efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), reconfigurability and superb performance in the implementation of algorithms. FPGAs have improved the performance of sensor systems and triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable and lower-power sensors is being developed in Spain based on FPGAs. This paper reviews these developments, describes the FPGA technologies employed by the different research groups, and provides an overview of future research within this field.
Affiliation(s)
- Gabriel J García, Carlos A Jara, Jorge Pomares, Aiman Alabdo, Lucas M Poggi, Fernando Torres: Department of Physics, System Engineering and Signal Theory, University of Alicante, San Vicente del Raspeig, Alicante 03690, Spain
21. Path Planning and Navigation Framework for a Planetary Exploration Rover Using a Laser Range Finder. Springer Tracts in Advanced Robotics 2014. DOI: 10.1007/978-3-642-40686-7_29
22. Dong H, Barfoot TD. Lighting-Invariant Visual Odometry using Lidar Intensity Imagery and Pose Interpolation. Springer Tracts in Advanced Robotics 2014. DOI: 10.1007/978-3-642-40686-7_22
23. Li R, He S, Skopljak B, Meng X, Tang P, Yilmaz A, Jiang J, Oman CM, Banks M, Kim S. A Multisensor Integration Approach toward Astronaut Navigation for Landed Lunar Missions. J Field Robot 2013. DOI: 10.1002/rob.21488
Affiliation(s)
- Rongxing Li, Shaojun He, Boris Skopljak, Xuelian Meng, Pingbo Tang: Mapping and GIS Laboratory, CEGE, The Ohio State University, 470 Hitchcock Hall, 2070 Neil Avenue, Columbus, Ohio 43210
- Alper Yilmaz, Jinwei Jiang: Photogrammetric Computer Vision Laboratory, CEGE, The Ohio State University, 470 Hitchcock Hall, 2070 Neil Avenue, Columbus, Ohio 43210
- Charles M. Oman: Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, 77 Massachusetts Avenue 37-219, Cambridge, Massachusetts 02139
- Martin Banks, Sunah Kim: Visual Space Perception Laboratory (BANKSLAB), University of California-Berkeley, 360 Minor Hall, Berkeley, California 94720-2020
24. Gonzalez R, Rodriguez F, Guzman JL, Pradalier C, Siegwart R. Control of off-road mobile robots using visual odometry and slip compensation. Adv Robot 2013. DOI: 10.1080/01691864.2013.791742
25. Ishigami G, Otsuki M, Kubota T. Range-dependent Terrain Mapping and Multipath Planning using Cylindrical Coordinates for a Planetary Exploration Rover. J Field Robot 2013. DOI: 10.1002/rob.21462
Affiliation(s)
- Genya Ishigami: Department of Mechanical Engineering, Keio University, 3-14-1 Hiyoshi, Yokohama 223-8522, Japan
- Masatsugu Otsuki, Takashi Kubota: Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Sagamihara 252-5210, Japan
26. Shen Y, Wang Q. Sky Region Detection in a Single Image for Autonomous Ground Robot Navigation. Int J Adv Robot Syst 2013. DOI: 10.5772/56884
Abstract
The sky region in an image provides horizon and background information for autonomous ground robots and is important for vision-based navigation. This paper proposes a sky region detection algorithm for a single image based on gradient information and energy function optimization. Unlike most existing methods, the proposed algorithm is applicable to both colour and greyscale images. First, the gradient information of the image is obtained. Then, the optimal segmentation threshold in the gradient domain is calculated by energy function optimization and a preliminary sky region is estimated. Finally, a post-processing step refines the preliminary detection when no sky region appears in the image or when objects protrude from the ground. Experimental results show a detection accuracy greater than 95% on our test set of 1,000 images, with a processing time of about 150 ms for a 640×480 image on a modern laptop using a single core.
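A simplified sketch of the described pipeline, assuming a greyscale image: per column, the sky is taken to end at the first strong vertical gradient, and the gradient threshold is chosen by maximizing a segment-homogeneity energy. The paper's actual energy function is covariance-based and its post-processing is omitted here.

```python
import numpy as np

def detect_sky_border(gray: np.ndarray, thresholds=np.linspace(5, 60, 12)):
    """Per-column sky/ground border row, or None if no threshold qualifies."""
    g = gray.astype(float)
    gy = np.abs(np.diff(g, axis=0))                   # vertical gradient magnitude
    h, w = gy.shape
    row = np.arange(h)[:, None]
    best = (-np.inf, None)
    for t in thresholds:
        strong = gy > t
        # first strong-gradient row per column; h means "no border found"
        border = np.where(strong.any(axis=0), strong.argmax(axis=0), h)
        sky_mask = row < border[None, :]              # pixels above the border
        sky, ground = g[:h][sky_mask], g[:h][~sky_mask]
        if sky.size < 50 or ground.size < 50:
            continue                                  # degenerate segmentation
        energy = 1.0 / (sky.var() + ground.var() + 1e-9)  # favor homogeneous segments
        if energy > best[0]:
            best = (energy, border)
    return best[1]
```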
Affiliation(s)
- Yehu Shen: Department of System Integration and IC Design, Suzhou Institute of Nano-tech and Nano-bionics, Chinese Academy of Sciences
- Qicong Wang: Department of Computer Science, Xiamen University, P. R. China
27. Maruyama K, Takase R, Kawai Y, Yoshimi T, Takahashi H, Tomita F. Semi-Automated Excavation System for Partially Buried Objects Using Stereo Vision-Based Three-Dimensional Localization. Adv Robot 2012. DOI: 10.1163/016918610x493525
Affiliation(s)
- Kenichi Maruyama, Ryuichi Takase, Yoshihiro Kawai, Takashi Yoshimi, Fumiaki Tomita: National Institute of Advanced Industrial Science and Technology, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568, Japan
- Hironobu Takahashi: Applied Vision Systems Corp., Tsukuba City Industrial Promotion Center 205, 2-5-1 Azuma, Tsukuba, Ibaraki 305-0031, Japan
28. Lambert A, Furgale P, Barfoot TD, Enright J. Field testing of visual odometry aided by a sun sensor and inclinometer. J Field Robot 2012. DOI: 10.1002/rob.21412
29.
Abstract
In this paper, we present work on applying a visual odometry approach to estimate the location of mobile robots operating in off-road conditions. The visual odometry approach is based on template matching: the robot displacement is estimated through a matching process between two consecutive images. Standard visual odometry is augmented with a visual compass method for orientation estimation. For this purpose, two consumer-grade monocular cameras are employed: one pointing at the ground under the robot, the other looking at the surrounding environment. Comparisons with popular localization approaches in physical off-road experiments show the satisfactory behavior of the proposed strategy.
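A minimal sketch of the template-matching step with OpenCV: a central patch of the previous downward-facing image is located in the current one, and the match offset gives the frame-to-frame displacement in pixels. Converting pixels to metres via the known camera height, and the visual-compass orientation step, are omitted.

```python
import cv2
import numpy as np

def displacement(prev: np.ndarray, curr: np.ndarray, patch: int = 64):
    """Pixel displacement between consecutive downward-facing images.

    Both images must be the same size and 8-bit or float32 greyscale.
    """
    h, w = prev.shape
    y0, x0 = (h - patch) // 2, (w - patch) // 2
    template = prev[y0:y0 + patch, x0:x0 + patch]   # centre patch of previous frame
    score = cv2.matchTemplate(curr, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (bx, by) = cv2.minMaxLoc(score)        # top-left corner of best match
    return bx - x0, by - y0                         # (dx, dy) in pixels
```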
30. Nourani-Vatani N, Borges PVK. Correlation-based visual odometry for ground vehicles. J Field Robot 2011. DOI: 10.1002/rob.20407
31. Furgale P, Barfoot TD, Ghafoor N, Williams K, Osinski G. Field Testing of an Integrated Surface/Subsurface Modeling Technique for Planetary Exploration. Int J Rob Res 2010. DOI: 10.1177/0278364910378179
Abstract
While there has been much interest in developing ground-penetrating radar (GPR) technology for rover-based planetary exploration, relatively little work has been done on the data collection process. Starting from the manual method, we fully automate GPR data collection using only sensors typically found on a rover. Further, we produce two novel data products: (1) a three-dimensional, photorealistic surface model coupled with a ribbon of GPR data, and (2) a two-dimensional, topography-corrected GPR radargram with the surface topography plotted above. Each result is derived from only the onboard sensors of the rover, as would be required in a planetary exploration setting. These techniques were tested using data collected in a Mars analogue environment on Devon Island in the Canadian High Arctic. GPR transects were gathered over polygonal patterned ground similar to that seen on Mars by the Phoenix Lander. Using the techniques developed here, scientists may remotely explore the interaction of the surface topography and subsurface structure as if they were on site.
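A sketch of the core of the second data product, under assumed acquisition parameters: a topography-corrected radargram is produced by shifting each GPR trace down according to the surface elevation, with metres converted to time samples through an assumed subsurface wave velocity.

```python
import numpy as np

def topo_correct(radargram: np.ndarray, elevation_m: np.ndarray,
                 dt_s: float = 1e-9, v_mps: float = 1e8):
    """Align GPR traces to true elevation (static/topographic correction).

    radargram: samples x traces array; elevation_m: surface height per trace.
    Two-way travel means metres convert to samples via 2 / (v * dt).
    """
    n_samples, n_traces = radargram.shape
    shift = np.round((elevation_m.max() - elevation_m)
                     * 2.0 / (v_mps * dt_s)).astype(int)
    out = np.zeros((n_samples + shift.max(), n_traces))
    for j in range(n_traces):
        # traces under lower surface points start deeper in the section
        out[shift[j]:shift[j] + n_samples, j] = radargram[:, j]
    return out
```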
Affiliation(s)
- Paul Furgale, Timothy D Barfoot: University of Toronto, Institute of Aerospace Studies, Toronto, Canada
- Kevin Williams: Buffalo State College, Department of Earth Sciences, Buffalo, NY, USA
- Gordon Osinski: University of Western Ontario, Departments of Earth Science, Physics and Astronomy, London, Canada
32. Barrois B, Konrad M, Wöhler C, Groß HM. Resolving stereo matching errors due to repetitive structures using model information. Pattern Recognit Lett 2010. DOI: 10.1016/j.patrec.2010.05.020
33. Loog M, Lauze F. The improbability of Harris interest points. IEEE Transactions on Pattern Analysis and Machine Intelligence 2010; 32:1141-1147. PMID: 20431138. DOI: 10.1109/tpami.2010.53
Abstract
An elementary characterization of the map underlying Harris corners, also known as Harris interest points or key points, is provided. Two principal and basic assumptions made are: 1) Local image structure is captured in an uncommitted way, simply using weighted raw image values around every image location to describe the local image information, and 2) the lower the probability of observing the image structure present in a particular point, the more salient, or interesting, this position is, i.e., saliency is related to how uncommon it is to see a certain image structure, how surprising it is. Through the latter assumption, the axiomatization proposed makes a sound link between image saliency in computer vision on the one hand and, on the other, computational models of preattentive human visual perception, where exactly the same definition of saliency has been proposed. Because of this link, the characterization provides a compelling case in favor of Harris interest points over other approaches.
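For reference, the map being characterized: a compact NumPy/SciPy sketch of the Harris response R = det(M) - k * trace(M)^2, computed from the Gaussian-windowed structure tensor.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img: np.ndarray, sigma: float = 1.5, k: float = 0.04):
    """Harris corner response from the smoothed second-moment (structure) tensor."""
    I = img.astype(float)
    Iy, Ix = np.gradient(I)                  # image gradients (rows, cols)
    Sxx = gaussian_filter(Ix * Ix, sigma)    # Gaussian-weighted tensor entries
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2              # large positive values mark corners
```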
Affiliation(s)
- Marco Loog: Delft University of Technology, Delft, The Netherlands
34.
35. Konolige K, Agrawal M, Blas MR, Bolles RC, Gerkey B, Solà J, Sundaresan A. Mapping, navigation, and learning for off-road traversal. J Field Robot 2008. DOI: 10.1002/rob.20271