Active Vision - Explore the Science & Experts | ideXlab


Active Vision


François Chaumette – One of the best experts on this subject based on the ideXlab platform.

  • Active Vision for pose estimation applied to singularity avoidance in visual servoing
    2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017
    Co-Authors: Don Agravante, François Chaumette
    Abstract:

    In active vision, the camera motion is controlled in order to improve a certain visual sensing strategy. In this paper, we formulate an active vision task function to improve pose estimation. This is done by defining an optimality metric on the Fisher Information Matrix. This task is then incorporated into a weighted multi-objective optimization framework. To test this approach, we apply it to the three-image-point visual servoing problem, which has a degenerate configuration: a singularity cylinder. The simulation results show that the singular configurations of pose estimation are avoided during visual servoing. We then discuss the potential of active vision to be integrated into more complex multi-task frameworks.

  • Active Vision for complete scene reconstruction and exploration
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999
    Co-Authors: Eric Marchand, François Chaumette
    Abstract:

    This paper deals with the 3D structure estimation and exploration of static scenes using active vision. Our method is based on the structure-from-controlled-motion approach, which constrains camera motions to obtain an optimal estimation of the 3D structure of a geometrical primitive. Since this approach involves gazing at the considered primitive, we have developed perceptual strategies able to perform a succession of robust estimations. This leads to a gaze-planning strategy that mainly uses a representation of known and unknown areas as a basis for selecting viewpoints. This approach ensures a reconstruction of the scene that is as complete as possible.
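The optimality metric on the Fisher Information Matrix described in the pose-estimation abstract above can be illustrated with a short numerical sketch. This is not the authors' implementation: the function names, the unit measurement noise, and the D-optimality (log-determinant) choice of metric are illustrative assumptions; the interaction matrix used is the standard one for a normalized image point in visual servoing.

```python
import numpy as np

def point_jacobian(X, Y, Z):
    """Interaction matrix of one normalized image point (x, y) = (X/Z, Y/Z)
    with respect to the camera velocity screw (v, w). Standard
    visual-servoing form: each point contributes a 2x6 block."""
    x, y = X / Z, Y / Z
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def fisher_information(points_3d, sigma=1.0):
    """Fisher Information Matrix for pose estimation from independent
    image-point measurements with isotropic noise sigma."""
    J = np.vstack([point_jacobian(*p) for p in points_3d])
    return (J.T @ J) / sigma**2

def d_optimality(F, eps=1e-12):
    """Scalar optimality metric: log-determinant of the FIM (D-optimality).
    It drops sharply near degenerate (singular) configurations, which is
    exactly what an active camera motion should steer away from."""
    _, logdet = np.linalg.slogdet(F + eps * np.eye(F.shape[0]))
    return logdet
```

A task function for active vision would then move the camera along the gradient of this metric, trading it off against the servoing objective in the weighted multi-objective scheme the abstract describes.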

Rajeev Sharma

  • A framework for Active Vision-based robot control using neural networks
    Robotica, 1998
    Co-Authors: Rajeev Sharma, Narayan Srinivasa
    Abstract:

    Assembly robots that use an active camera system for visual feedback can achieve greater flexibility, including the ability to operate in an uncertain and changing environment. Incorporating active vision into a robot control loop involves some inherent difficulties, including calibration and the need to redefine the servoing goal as the camera configuration changes. In this paper, we propose a novel self-organizing neural network that learns a calibration-free spatial representation of 3D point targets in a manner that is invariant to changing camera configurations. This representation is used to develop a new framework for robot control with active vision. The salient feature of this framework is that it decouples active camera control from robot control. The feasibility of this approach is established with the help of computer simulations and experiments with the University of Illinois Active Vision System (UIAVS).

  • Efficient learning of VAM-based representation of 3D targets and its Active Vision applications
    Neural Networks: The Official Journal of the International Neural Network Society, 1998
    Co-Authors: Narayan Srinivasa, Rajeev Sharma
    Abstract:

    There has been considerable interest in using active vision for various applications. This interest arises primarily because active vision can enhance machine vision capabilities by dynamically changing the camera parameters based on the content of the scene. An important issue in active vision is that of representing 3D targets in a manner that is invariant to changing camera configurations. This paper addresses this representation issue for a robotic active vision system. An efficient Vector Associative Map (VAM)-based learning scheme is proposed to learn a joint-based representation. Computer simulations and experiments are first performed to evaluate the effectiveness of this scheme using the University of Illinois Active Vision System (UIAVS). The invariance property of the learned representation is then exploited to develop several robotic applications. These include detecting moving targets, saccade control, planning saccade sequences, and controlling a robot manipulator.

  • Role of Active Vision in optimizing visual feedback for robot control
    Lecture Notes in Control and Information Sciences, 1998
    Co-Authors: Rajeev Sharma
    Abstract:

    A purposeful change of camera parameters, or active vision, can be used to improve the process of extracting visual information. Thus, if a robot visual servo loop incorporates active vision, it can lead to better performance while increasing the scope of the control tasks. Although significant advances have been made in this direction, much of the potential improvement is still unrealized. This chapter discusses the advantages of using active vision for visual servoing. It reviews some of the past research in active vision relevant to visual servoing, with the aim of improving: (1) the measurement of image parameters, (2) the process of interpreting the image parameters in terms of the corresponding world parameters, and (3) the control of a robot in terms of the visual information extracted.
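The visual servo loop these abstracts build on can be made concrete with the classical image-based visual servoing (IBVS) control law. This is the textbook law, not Sharma's neural-network framework; the function name and gain value are illustrative.

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classical image-based visual servoing law: command the camera
    velocity screw v = -lambda * L^+ (s - s*), where s stacks the
    measured image-feature coordinates, s* the desired ones, and L is
    the interaction matrix relating feature motion to camera velocity
    (one 2x6 block per image point). lambda is a positive gain."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```

With a well-conditioned interaction matrix, one integration step `s_next = s + L @ v` contracts the feature error geometrically at rate `lam`; calibration errors and camera reconfiguration perturb `L`, which is precisely the difficulty the calibration-free representation above is designed to sidestep.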

D W Murray

  • Active Vision for Wearables
    IEE Eurowearable '03, 2003
    Co-Authors: W W Mayol, B. Tordoff, T.e. De Campos, Andrew J Davison, D W Murray
    Abstract:

    In this paper we report on our ongoing research on wearable active vision, in which we have iteratively prototyped a wearable visual robot: a body-mounted robot whose main sensor is a camera. Two main areas have been studied: robot design and visual algorithms. In the design stage, we have analysed sensor placement through the computation of the field of view and body motion using a 3D model of the human form. A design methodology for the robot morphology was developed with the help of an optimisation algorithm based on the Pareto front. The wearability of the device has progressed over several iterations, as have the sensor and control architectures. In terms of visual algorithms, we have studied methods of visual tracking fused with inertial sensors, real-time template tracking, human head pose recovery, and, more recently, real-time simultaneous ego-localisation and autonomous 3D map building. Our main long-term application areas are enhanced remote collaboration and autonomous wearable assistants that use vision.

  • Simultaneous localization and map building using Active Vision
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002
    Co-Authors: Andrew J Davison, D W Murray
    Abstract:

    An active approach to sensing can provide the focused measurement capability over a wide field of view that allows correctly formulated simultaneous localization and map building (SLAM) to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.

  • Towards wearable Active Vision platforms
    SMC 2000 Conference Proceedings. 2000 IEEE International Conference on Systems, Man and Cybernetics, 2000
    Co-Authors: W W Mayol, B. Tordoff, D W Murray
    Abstract:

    The paper describes the design and construction of a wearable active vision platform which is able to achieve substantial decoupling of the camera motion from the wearer's motion. Design issues in sensor placement, robot kinematics, and their relation to wearability are discussed, and the prototype platform's performance is evaluated in a number of important visual tasks. The paper also considers potential application scenarios for this kind of wearable visual robot.
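The "uncertainty-based measurement selection" mentioned in the SLAM abstract above can be sketched in a few lines. This is a hedged illustration, not the authors' algorithm: the function name, the scalar trace criterion, and the isotropic measurement noise are assumptions; the underlying idea is that an active head should fixate the mapped feature whose predicted innovation covariance is largest, since measuring it removes the most uncertainty from the filter.

```python
import numpy as np

def select_feature(P, H_list, meas_var=1.0):
    """Choose which mapped feature the active stereo head should fixate
    next. P is the current state covariance; each H in H_list is the
    measurement Jacobian of one candidate feature. The predicted
    innovation covariance is S = H P H^T + R; the candidate with the
    largest trace(S) is the most informative one to measure."""
    best, best_score = 0, -np.inf
    for i, H in enumerate(H_list):
        S = H @ P @ H.T + meas_var * np.eye(H.shape[0])
        score = np.trace(S)
        if score > best_score:
            best, best_score = i, score
    return best
```

In a full active SLAM loop this selection alternates with goal-directed steering: the head saccades to the chosen feature, the filter updates, and the covariance P shrinks along the measured directions before the next selection.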