Active Vision


The Experts below are selected from a list of 51834 Experts worldwide ranked by ideXlab platform

François Chaumette - One of the best experts on this subject based on the ideXlab platform.

  • Active Vision for pose estimation applied to singularity avoidance in visual servoing
    2017
    Co-Authors: Don Agravante, François Chaumette
    Abstract:

    In Active Vision, the camera motion is controlled in order to improve a certain visual sensing strategy. In this paper, we formulate an Active Vision task function to improve pose estimation. This is done by defining an optimality metric on the Fisher Information Matrix. This task is then incorporated into a weighted multi-objective optimization framework. To test this approach, we apply it to the three-image-point visual servoing problem, which has a degenerate configuration: a singularity cylinder. The simulation results show that the singular configurations of pose estimation are avoided during visual servoing. We then discuss the potential of Active Vision to be integrated into more complex multi-task frameworks.
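
A minimal sketch of the kind of optimality metric the abstract describes, not the authors' exact formulation: accumulate a Fisher Information Matrix from per-point image Jacobians and score it by its log-determinant (D-optimality). The Jacobian shapes, noise model, and function names are illustrative assumptions.

```python
import numpy as np

def fisher_information(jacobians, sigma=1.0):
    """Accumulate the Fisher Information Matrix (FIM) of a 6-DoF pose
    estimate from per-point 2x6 image Jacobians, assuming isotropic
    pixel noise with standard deviation `sigma`."""
    fim = np.zeros((6, 6))
    for J in jacobians:
        fim += J.T @ J / sigma**2
    return fim

def d_optimality(fim, eps=1e-12):
    """Log-determinant (D-optimality) of the FIM: larger means a
    better-conditioned estimate; the value collapses near a degenerate
    (singular) camera-target configuration."""
    sign, logdet = np.linalg.slogdet(fim + eps * np.eye(fim.shape[0]))
    return logdet if sign > 0 else -np.inf
```

An Active Vision task function could then drive the camera along the gradient of this score, which is what lets the controller steer away from singular configurations while servoing.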

  • Active Vision for complete scene reconstruction and exploration
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999
    Co-Authors: Eric Marchand, François Chaumette
    Abstract:

    This paper deals with the 3D structure estimation and exploration of static scenes using Active Vision. Our method is based on the structure from controlled motion approach that constrains camera motions to obtain an optimal estimation of the 3D structure of a geometrical primitive. Since this approach involves gazing at the considered primitive, we have developed perceptual strategies able to perform a succession of robust estimations. This leads to a gaze planning strategy that mainly uses a representation of known and unknown areas as a basis for selecting viewpoints. This approach ensures a reconstruction as complete as possible of the scene.
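
The gaze planning idea above, selecting viewpoints from a representation of known and unknown areas, can be caricatured as a next-best-view rule. The grid encoding and names below are illustrative assumptions, not the paper's data structures.

```python
import numpy as np

def best_viewpoint(grid, candidates, visible_cells):
    """Return the candidate viewpoint that would reveal the most
    unknown area. `grid` uses 0 = known free, 1 = known occupied,
    -1 = unknown; `visible_cells[v]` lists the (row, col) cells
    observable from viewpoint v."""
    def gain(v):
        return sum(1 for (r, c) in visible_cells[v] if grid[r, c] == -1)
    return max(candidates, key=gain)
```

Repeating this selection after each reconstruction step gradually shrinks the unknown region, which is the mechanism behind "a reconstruction as complete as possible of the scene."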

  • Specifying and verifying Active Vision-based robotic systems with the Signal environment
    The International Journal of Robotics Research, 1998
    Co-Authors: Eric Marchand, Eric Rutten, Hervé Marchand, François Chaumette
    Abstract:

    Active Vision-based robot design involves a variety of techniques and formalisms, from kinematics to control theory, signal processing and computer science. The programming of such systems therefore requires environments with many different functionalities, in a very integrated fashion in order to ensure consistency of the different parts. In significant applications, the correct specification of the global controller is not simple to achieve, as it mixes different levels of behavior, and must respect properties. In this paper we want to advocate the use of a strongly integrated environment able to deal with the design of such systems from the specification of both continuous and discrete parts down to the verification of dynamic behavior. The synchronous language Signal is used here as a candidate integrated environment for the design of Active Vision systems. Our experiments show that Signal, while not being an environment devoted to robotics (but more generally dedicated to control theory and signal processing), presents functionalities and a degree of integration that are relevant to the safe design of Active Vision-based robotic systems.

Rajeev Sharma - One of the best experts on this subject based on the ideXlab platform.

  • A framework for Active Vision-based robot control using neural networks
    Robotica, 1998
    Co-Authors: Rajeev Sharma, Narayan Srinivasa
    Abstract:

    Assembly robots that use an Active camera system for visual feedback can achieve greater flexibility, including the ability to operate in an uncertain and changing environment. Incorporating Active Vision into a robot control loop involves some inherent difficulties, including calibration, and the need for redefining the servoing goal as the camera configuration changes. In this paper, we propose a novel self-organizing neural network that learns a calibration-free spatial representation of 3D point targets in a manner that is invariant to changing camera configurations. This representation is used to develop a new framework for robot control with Active Vision. The salient feature of this framework is that it decouples Active camera control from robot control. The feasibility of this approach is established with the help of computer simulations and experiments with the University of Illinois Active Vision System (UIAVS).

  • Role of Active Vision in optimizing visual feedback for robot control
    Lecture Notes in Control and Information Sciences, 1998
    Co-Authors: Rajeev Sharma
    Abstract:

    A purposeful change of camera parameters or Active Vision can be used to improve the process of extracting visual information. Thus if a robot visual servo loop incorporates Active Vision, it can lead to a better performance while increasing the scope of the control tasks. Although significant advances have been made in this direction, much of the potential improvement is still unrealized. This chapter discusses the advantages of using Active Vision for visual servoing. It reviews some of the past research in Active Vision relevant to visual servoing, with the aim of improving: (1) the measurement of image parameters, (2) the process of interpreting the image parameters in terms of the corresponding world parameters, and (3) the control of a robot in terms of the visual information extracted.

  • Efficient learning of VAM-based representation of 3D targets and its Active Vision applications
    Neural networks : the official journal of the International Neural Network Society, 1998
    Co-Authors: Narayan Srinivasa, Rajeev Sharma
    Abstract:

    There has been considerable interest in using Active Vision for various applications. This interest is primarily because Active Vision can enhance machine Vision capabilities by dynamically changing the camera parameters based on the content of the scene. An important issue in Active Vision is that of representing 3D targets in a manner that is invariant to changing camera configurations. This paper addresses this representation issue for a robotic Active Vision system. An efficient Vector Associative Map (VAM)-based learning scheme is proposed to learn a joint-based representation. Computer simulations and experiments are first performed to evaluate the effectiveness of this scheme using the University of Illinois Active Vision System (UIAVS). The invariance property of the learned representation is then exploited to develop several robotic applications. These include detecting moving targets, saccade control, planning saccade sequences, and controlling a robot manipulator.

  • SOIM: a self-organizing invertible map with applications in Active Vision
    IEEE transactions on neural networks, 1997
    Co-Authors: Narayan Srinivasa, Rajeev Sharma
    Abstract:

    We propose a novel neural network, called the self-organized invertible map (SOIM), that is capable of learning many-to-one functional mappings in a self-organized and online fashion. The design and performance of the SOIM are highlighted by learning a many-to-one functional mapping that exists in Active Vision for spatial representation of three-dimensional point targets. The learned spatial representation is invariant to changing camera configurations. The SOIM also possesses an invertible property that can be exploited for Active Vision. An efficient and experimentally feasible method was devised for learning this representation on a real Active Vision system. The proof of convergence during learning as well as conditions for invariance of the learned spatial representation are derived and then experimentally verified using the Active Vision system. We also demonstrate various Active Vision applications that benefit from the properties of the mapping learned by SOIM.

  • Active Vision for target pursuit by a mobile robot
    Applications of Artificial Intelligence X: Machine Vision and Robotics, 1992
    Co-Authors: Rajeev Sharma
    Abstract:

    We discuss and demonstrate the advantages of developing Active Vision techniques as an integral part of a mobile robot behavior. In particular, different visual motion analysis modules needed for pursuing a moving target are summarized. A detailed solution is then presented for the Active detection of independent motion to illustrate the methodology.
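
Independent-motion detection from a moving camera can be sketched as ego-motion compensation followed by differencing. The pure pixel-shift ego-motion model below is a toy assumption, not the paper's algorithm; real systems estimate the full camera-induced flow field.

```python
import numpy as np

def independent_motion_mask(prev, curr, cam_shift, thresh=10):
    """Flag pixels that moved independently of the camera: warp the
    previous frame by the camera-induced image shift (dy, dx), then
    threshold the residual difference between frames."""
    compensated = np.roll(prev, cam_shift, axis=(0, 1))
    residual = np.abs(curr.astype(int) - compensated.astype(int))
    return residual > thresh
```

Pixels surviving the threshold belong to objects with their own motion, which is exactly what a pursuit behavior needs to lock onto.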

D W Murray - One of the best experts on this subject based on the ideXlab platform.

  • Active Vision for Wearables
    IEE Eurowearable '03, 2003
    Co-Authors: W W Mayol, Andrew J Davison, B. Tordoff, T.e. De Campos, D W Murray
    Abstract:

    In this paper we report on our ongoing research on wearable Active Vision, where we have iteratively prototyped a wearable visual robot - a body mounted robot for which the main sensor is a camera. Two main areas have been studied: robot design and visual algorithms. In the design stage, we have analysed sensor placement through the computation of the field of view and body motion using a 3D model of the human form. A design methodology for the robot morphology was developed with the help of an optimisation algorithm based on the Pareto front. The wearability of the device has progressed over several iterations as have the sensor and control architectures. In terms of visual algorithms, we have studied methods of visual tracking fused with inertial sensors, real-time template tracking, human head pose recovery and more recently real-time simultaneous ego-localisation and autonomous 3D map building. Our main long-term application areas are enhanced remote collaboration and autonomous wearable assistants that use Vision.

  • simultaneous localization and map building using Active Vision
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002
    Co-Authors: Andrew J Davison, D W Murray
    Abstract:

    An Active approach to sensing can provide the focused measurement capability over a wide field of view which allows correctly formulated simultaneous localization and map-building (SLAM) to be implemented with Vision, permitting repeatable long-term localization using only naturally occurring, automatically-detected features. In this paper, we present the first example of a general system for autonomous localization using Active Vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map-maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
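
The uncertainty-based measurement selection mentioned above can be illustrated with a common EKF heuristic; this stand-in criterion (largest predicted innovation covariance) and all names are assumptions, not necessarily the authors' exact choice.

```python
import numpy as np

def select_feature(H_list, P, R):
    """Given the current state covariance P, per-feature measurement
    Jacobians H, and measurement noise R, fixate the feature whose
    predicted innovation covariance S = H P H^T + R has the largest
    trace, i.e. whose measurement is expected to reduce the most
    uncertainty."""
    scores = [np.trace(H @ P @ H.T + R) for H in H_list]
    return int(np.argmax(scores))
```

An active head then steers its gaze to the selected feature before each update, which is how a narrow high-resolution sensor can serve a wide-field SLAM filter.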

  • SMC - Towards wearable Active Vision platforms
    SMC 2000 Conference Proceedings. 2000 IEEE International Conference on Systems Man and Cybernetics. 'Cybernetics Evolving to Systems Humans Organizati, 1
    Co-Authors: W W Mayol, B. Tordoff, D W Murray
    Abstract:

    The paper describes the design and construction of a wearable Active Vision platform which is able to achieve substantial decoupling of the camera motion from the wearer's motion. Design issues in sensor placement, robot kinematics and their relation to wearability are discussed and the prototype platform's performance is evaluated in a number of important visual tasks. The paper also considers potential application scenarios for this kind of wearable visual robot.

Eric Marchand - One of the best experts on this subject based on the ideXlab platform.

  • Active Vision for complete scene reconstruction and exploration
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999
    Co-Authors: Eric Marchand, François Chaumette
    Abstract:

    This paper deals with the 3D structure estimation and exploration of static scenes using Active Vision. Our method is based on the structure from controlled motion approach that constrains camera motions to obtain an optimal estimation of the 3D structure of a geometrical primitive. Since this approach involves gazing at the considered primitive, we have developed perceptual strategies able to perform a succession of robust estimations. This leads to a gaze planning strategy that mainly uses a representation of known and unknown areas as a basis for selecting viewpoints. This approach ensures a reconstruction as complete as possible of the scene.

  • Specifying and verifying Active Vision-based robotic systems with the Signal environment
    The International Journal of Robotics Research, 1998
    Co-Authors: Eric Marchand, Eric Rutten, Hervé Marchand, François Chaumette
    Abstract:

    Active Vision-based robot design involves a variety of techniques and formalisms, from kinematics to control theory, signal processing and computer science. The programming of such systems therefore requires environments with many different functionalities, in a very integrated fashion in order to ensure consistency of the different parts. In significant applications, the correct specification of the global controller is not simple to achieve, as it mixes different levels of behavior, and must respect properties. In this paper we want to advocate the use of a strongly integrated environment able to deal with the design of such systems from the specification of both continuous and discrete parts down to the verification of dynamic behavior. The synchronous language Signal is used here as a candidate integrated environment for the design of Active Vision systems. Our experiments show that Signal, while not being an environment devoted to robotics (but more generally dedicated to control theory and signal processing), presents functionalities and a degree of integration that are relevant to the safe design of Active Vision-based robotic systems.

Shuntaro Yamazaki - One of the best experts on this subject based on the ideXlab platform.

  • Temporal dithering of illumination for fast Active Vision
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2008
    Co-Authors: Srinivasa G Narasimhan, Sanjeev J. Koppal, Shuntaro Yamazaki
    Abstract:

    Active Vision techniques use programmable light sources, such as projectors, whose intensities can be controlled over space and time. We present a broad framework for fast Active Vision using Digital Light Processing (DLP) projectors. The digital micromirror array (DMD) in a DLP projector is capable of switching mirrors “on” and “off” at high speeds (10^6 per second). An off-the-shelf DLP projector, however, effectively operates at much lower rates (30-60 Hz) by emitting smaller intensities that are integrated over time by a sensor (eye or camera) to produce the desired brightness value. Our key idea is to exploit this “temporal dithering” of illumination, as observed by a high-speed camera. The dithering encodes each brightness value uniquely and may be used in conjunction with virtually any Active Vision technique. We apply our approach to five well-known problems: (a) structured light-based range finding, (b) photometric stereo, (c) illumination de-multiplexing, (d) high frequency preserving motion-blur and (e) separation of direct and global scene components, achieving significant speedups in performance. In all our methods, the projector receives a single image as input whereas the camera acquires a sequence of frames.
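
A toy model of the temporal-dithering idea: each brightness level maps to a unique high-speed on/off code whose duty cycle reproduces the level, and a high-speed observation is decoded by nearest-code matching. Real mirror-flip patterns are fixed by the projector hardware; the pseudo-random placement here is only an illustrative stand-in.

```python
import numpy as np

def dither_pattern(level, n_frames=1024, levels=256, seed=0):
    """Map a brightness level (0..levels-1) to a binary on/off sequence
    whose duty cycle reproduces the level. With n_frames >> levels,
    every level gets a distinct on-count, so each code is unique."""
    rng = np.random.default_rng(seed + level)
    n_on = round(level * n_frames / (levels - 1))
    pattern = np.zeros(n_frames, dtype=int)
    pattern[rng.choice(n_frames, size=n_on, replace=False)] = 1
    return pattern

def decode_level(observed, n_frames=1024, levels=256, seed=0):
    """Recover the projected brightness from a high-speed observation
    by nearest-code matching against the per-level patterns."""
    codes = np.array([dither_pattern(k, n_frames, levels, seed)
                      for k in range(levels)])
    dists = np.abs(codes - np.asarray(observed)).sum(axis=1)
    return int(np.argmin(dists))
```

A slow sensor integrating the code sees only the mean brightness, while a high-speed camera sees the full code, which is what lets one projected image carry structured-light information across many camera frames.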

  • ECCV (4) - Temporal Dithering of Illumination for Fast Active Vision
    Lecture Notes in Computer Science, 2008
    Co-Authors: Srinivasa G Narasimhan, Sanjeev J. Koppal, Shuntaro Yamazaki
    Abstract:

    Active Vision techniques use programmable light sources, such as projectors, whose intensities can be controlled over space and time. We present a broad framework for fast Active Vision using Digital Light Processing (DLP) projectors. The digital micromirror array (DMD) in a DLP projector is capable of switching mirrors "on" and "off" at high speeds (10^6 per second). An off-the-shelf DLP projector, however, effectively operates at much lower rates (30-60 Hz) by emitting smaller intensities that are integrated over time by a sensor (eye or camera) to produce the desired brightness value. Our key idea is to exploit this "temporal dithering" of illumination, as observed by a high-speed camera. The dithering encodes each brightness value uniquely and may be used in conjunction with virtually any Active Vision technique. We apply our approach to five well-known problems: (a) structured light-based range finding, (b) photometric stereo, (c) illumination de-multiplexing, (d) high frequency preserving motion-blur and (e) separation of direct and global scene components, achieving significant speedups in performance. In all our methods, the projector receives a single image as input whereas the camera acquires a sequence of frames.