Absolute Scale - Explore the Science & Experts | ideXlab


Absolute Scale

The Experts below are selected from a list of 71,607 Experts worldwide, ranked by the ideXlab platform

Agostino Martinelli – One of the best experts on this subject based on the ideXlab platform.

  • Vision and IMU Data Fusion: Closed-Form Solutions for Attitude, Speed, Absolute Scale, and Bias Determination
    IEEE Transactions on Robotics, 2012
    Co-Authors: Agostino Martinelli

    Abstract:

    This paper investigates the problem of vision and inertial data fusion. A sensor suite consisting of one monocular camera, three orthogonal accelerometers, and three orthogonal gyroscopes is considered. The paper's first contribution is the analytical derivation of all the observable modes, i.e., all the physical quantities that can be determined by using only the information in the sensor data acquired during a short time interval. Specifically, the observable modes are the speed and attitude (roll and pitch angles), the Absolute Scale, and the biases that affect the inertial measurements. This holds even when the camera observes only a single point feature. The analytical derivation of these observable modes is based on a nonstandard observability analysis, which fully accounts for the system nonlinearities. The second contribution is the analytical derivation of closed-form solutions, which express all the aforementioned observable modes in terms of the visual and inertial measurements collected during a very short time interval. This allows the introduction of a simple and powerful new method that simultaneously estimates all the observable modes without any initialization or a priori knowledge. Both the observability analysis and the derivation of the closed-form solutions are carried out in several different contexts, including biased and unbiased inertial measurements, a single feature and multiple features, and the presence and absence of gravity. In addition, in all these contexts, the minimum number of camera images necessary for observability is derived. The performance of the proposed approach is evaluated via extensive Monte Carlo simulations and real experiments.
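
    The core idea behind Absolute Scale recovery from visual-inertial fusion can be illustrated with a deliberately simplified sketch (this is not the paper's closed-form solution, which also recovers attitude and biases): double-integrating bias-free, gravity-compensated accelerometer readings yields metric displacements, vision yields the same displacements up to one unknown factor, and that factor follows from a one-parameter least-squares fit. All names and numbers below are hypothetical.

```python
def estimate_scale(visual_disp, imu_disp):
    """Least-squares Scale s minimizing sum_i (imu_i - s * visual_i)^2.

    visual_disp: per-interval camera displacement magnitudes from monocular
                 vision, correct only up to a single unknown global Scale.
    imu_disp:    the same displacement magnitudes in metres, obtained by
                 double-integrating bias-free, gravity-compensated
                 accelerometer readings.
    """
    num = sum(v * m for v, m in zip(visual_disp, imu_disp))
    den = sum(v * v for v in visual_disp)
    return num / den

# Hypothetical data: the true Scale is 2.5.
visual = [0.10, 0.12, 0.08, 0.11]                # unit-less visual displacements
metric = [2.5 * v for v in visual]               # what the IMU would report
print(round(estimate_scale(visual, metric), 6))  # -> 2.5
```

    In the paper's setting the same factor is obtained in closed form together with speed, attitude, and biases, rather than by a separate fit.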

  • Vision and IMU Data Fusion: Closed-Form Determination of the Absolute Scale, Speed and Attitude
    , 2012
    Co-Authors: Agostino Martinelli, Roland Siegwart

    Abstract:

    This chapter describes an algorithm for determining the speed and the attitude of a sensor assembly consisting of a monocular camera and inertial sensors (three orthogonal accelerometers and three orthogonal gyroscopes). The system moves in a 3D unknown environment. The algorithm's inputs are the visual and inertial measurements collected during a very short time interval; its outputs are the speed and attitude, the Absolute Scale, and the bias affecting the inertial measurements. These outputs are obtained by a simple closed-form solution which analytically expresses the previous physical quantities in terms of the sensor measurements. This closed-form determination allows the overall estimation to be performed in a very short time interval and without the need for any initialization or prior knowledge. This is a key advantage, since it allows the drift on the Absolute Scale and on the orientation to be eliminated. The performance of the proposed algorithm is evaluated with real experiments.

  • Vision-Aided Inertial Navigation: Closed-Form Determination of Absolute Scale, Speed and Attitude
    , 2011
    Co-Authors: Agostino Martinelli, Chiara Troiani, Alessandro Renzaglia

    Abstract:

    This paper investigates the problem of determining the speed and the attitude of a vehicle equipped with a monocular camera and inertial sensors. The vehicle moves in a 3D unknown environment. It is shown that, by collecting the visual and inertial measurements during a very short time interval, it is possible to determine the following physical quantities: the vehicle speed and attitude, the absolute distance of the point features observed by the camera during the considered time interval, and the bias affecting the inertial measurements. In particular, this determination is based on a closed-form solution which analytically expresses the previous physical quantities in terms of the sensor measurements. This closed-form determination allows the overall estimation to be performed in a very short time interval and without the need for any initialization or prior knowledge. This is a key advantage, since it allows the drift on the Absolute Scale and on the vehicle orientation to be eliminated. In addition, the paper provides the minimum number of distinct camera images needed to perform this determination. Specifically, if the magnitude of the gravity is unknown, at least four camera images are necessary, whereas if it is known a priori, three camera images suffice. The performance of the proposed approach is evaluated using real data.

Roland Siegwart – One of the best experts on this subject based on the ideXlab platform.

  • Closed-Form Solution for Absolute Scale Velocity Determination Combining Inertial Measurements and a Single Feature Correspondence
    , 2011
    Co-Authors: Kneip Laurent, Agostino Martinelli, Davide Scaramuzza, Stephane Weiss, Roland Siegwart

    Abstract:

    This paper presents a closed-form solution for metric velocity estimation of a single camera using inertial measurements. It combines accelerometer and attitude measurements with feature observations in order to compute both the distance to the feature and the velocity of the camera expressed in the camera frame. Notably, we show that this is possible using just three consecutive camera positions and a single feature correspondence. Our approach represents a compact, linear, and multirate solution for estimating information complementary to regular essential-matrix computation, namely the Scale of the problem. The algorithm is thoroughly validated on simulated and real data, and the conditions required for good-quality results are identified.
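
    A simplified sketch of the linear structure behind such a closed-form solution (not the authors' exact formulation): place camera 0 at the origin, assume known attitude and bias-free, gravity-compensated accelerometer readings, and let s_i denote their double integral up to frame i. Each later frame i then contributes the linear constraint b_i x (d0*b0 - v*t_i - s_i) = 0 in the unknowns d0 (feature distance) and v (initial velocity), so three frames and one feature correspondence suffice. All numbers below are hypothetical.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(a):
    n = math.sqrt(sum(x * x for x in a))
    return tuple(x / n for x in a)

def solve_least_squares(A, b):
    """Solve min ||A x - b|| via normal equations and Gaussian elimination."""
    n = len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
         for i in range(n)]
    r = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    for c in range(n):                      # forward elimination, partial pivot
        p = max(range(c, n), key=lambda i: abs(M[i][c]))
        M[c], M[p], r[c], r[p] = M[p], M[c], r[p], r[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            M[i] = [mi - f * mc for mi, mc in zip(M[i], M[c])]
            r[i] -= f * r[c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution
        x[i] = (r[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def depth_and_velocity(b0, frames):
    """Recover [d0, vx, vy, vz] from per-frame tuples (t_i, bearing b_i, s_i)."""
    A, rhs = [], []
    for t, bi, si in frames:
        cb = cross(bi, b0)
        skew = [(0.0, -bi[2], bi[1]),        # rows of the matrix [b_i]_x
                (bi[2], 0.0, -bi[0]),
                (-bi[1], bi[0], 0.0)]
        cs = cross(bi, si)
        for row in range(3):  # d0*(b_i x b0) - t*[b_i]_x v = b_i x s_i
            A.append([cb[row]] + [-t * skew[row][j] for j in range(3)])
            rhs.append(cs[row])
    return solve_least_squares(A, rhs)

# Hypothetical ground truth: feature P, initial velocity v, acceleration a.
P, v, a = (1.0, 2.0, 5.0), (1.0, -0.5, 0.3), (0.3, 0.1, -0.2)
frames = []
for t in (0.5, 1.0):
    s = tuple(0.5 * ai * t * t for ai in a)            # double integral of accel
    x = tuple(vi * t + si for vi, si in zip(v, s))     # camera position at t
    frames.append((t, normalize(tuple(pi - xi for pi, xi in zip(P, x))), s))
d0, vx, vy, vz = depth_and_velocity(normalize(P), frames)
# d0 ~ |P| = sqrt(30), and (vx, vy, vz) ~ v
```

    The six cross-product equations have rank four in generic geometry, which is why the small normal-equations solve recovers the four unknowns exactly on consistent synthetic data.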

  • Fusion of IMU and vision for Absolute Scale estimation in monocular SLAM
    Journal of Intelligent and Robotic Systems: Theory and Applications, 2011
    Co-Authors: Gabriel Nützi, Stephan Weiss, Davide Scaramuzza, Roland Siegwart

    Abstract:

    The fusion of inertial and visual data is widely used to improve an object's pose estimation. However, this type of fusion is rarely used to estimate further unknowns in the visual framework. In this paper we present and compare two different approaches to estimating the unknown Scale parameter in a monocular SLAM framework. Directly linked to the Scale is the estimation of the object's Absolute velocity and position in 3D. The first approach is a spline-fitting task adapted from Jung and Taylor, and the second is an extended Kalman filter. Both methods have been simulated offline on arbitrary camera paths to analyze their behavior and the quality of the resulting Scale estimation. We then embedded an online multi-rate extended Kalman filter, together with an inertial sensor, in the Parallel Tracking and Mapping (PTAM) algorithm of Klein and Murray. In this inertial/monocular SLAM framework, we demonstrate real-time, robust, and fast-converging Scale estimation. Our approach depends neither on known patterns in the vision part nor on complex temporal synchronization between the visual and inertial sensors.
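
    As an illustration of the extended-Kalman-filter approach, here is a deliberately reduced 1-D sketch (not the paper's filter, which runs multi-rate inside PTAM in 3D): the state holds metric position, metric velocity, and the Scale factor; the known metric acceleration drives the prediction, and the SLAM position measurement z = p / s supplies the nonlinear update that makes the Scale observable. All tuning values are hypothetical.

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def ekf_scale_demo(steps=400, dt=0.05, true_scale=2.0):
    """1-D EKF with state x = [p, v, s] and measurement h(x) = p / s."""
    x = [0.0, 0.0, 1.0]                               # initial Scale guess: 1
    P = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 4.0]]
    Q = [[1e-6, 0.0, 0.0], [0.0, 1e-6, 0.0], [0.0, 0.0, 1e-8]]
    R = 1e-4                                          # measurement variance
    tp = tv = 0.0                                     # simulated ground truth
    for k in range(steps):
        a = 2.0 * math.sin(0.5 * k * dt)              # known metric acceleration
        tp += tv * dt + 0.5 * a * dt * dt             # simulate the true motion
        tv += a * dt
        z = tp / true_scale                           # SLAM position (map units)
        # predict: kinematics for p and v, constant Scale
        p, v, s = x
        x = [p + v * dt + 0.5 * a * dt * dt, v + a * dt, s]
        F = [[1.0, dt, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
        P = mat_mul(mat_mul(F, P), transpose(F))
        P = [[P[i][j] + Q[i][j] for j in range(3)] for i in range(3)]
        # update with the scalar measurement z = p / s
        p, v, s = x
        H = [1.0 / s, 0.0, -p / (s * s)]              # Jacobian of h at x
        PHt = [sum(P[i][j] * H[j] for j in range(3)) for i in range(3)]
        S = sum(H[i] * PHt[i] for i in range(3)) + R  # innovation variance
        K = [PHt[i] / S for i in range(3)]            # Kalman gain
        x = [x[i] + K[i] * (z - p / s) for i in range(3)]
        IKH = [[(1.0 if i == j else 0.0) - K[i] * H[j] for j in range(3)]
               for i in range(3)]
        P = mat_mul(IKH, P)
    return x[2]
```

    With a noise-free simulated measurement and persistently exciting acceleration, the Scale estimate moves from its initial guess of 1.0 toward the true value of 2.0; the product form p / s is what couples the metric and map-unit quantities.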

Alexander Velizhev – One of the best experts on this subject based on the ideXlab platform.

  • Estimation of Absolute Scale in Monocular SLAM Using Synthetic Data
    2019 IEEE CVF International Conference on Computer Vision Workshop (ICCVW), 2019
    Co-Authors: Danila Rukhovich, Daniel Mouritzen, Ralf Kaestner, Martin Rufli, Alexander Velizhev

    Abstract:

    This paper addresses the problem of Scale estimation in monocular SLAM by estimating Absolute distances between the camera centers of consecutive image frames. These estimates would improve the overall performance of classical (non-deep) SLAM systems and allow metric feature locations to be recovered from a single monocular camera. We propose several network architectures that improve Scale estimation accuracy over the state of the art. In addition, we exploit the possibility of training the neural network with only synthetic data derived from a computer graphics simulator. Our key insight is that, using only synthetic training inputs, we can achieve Scale estimation accuracy similar to that obtained from real data. This indicates that fully annotated simulated data is a viable alternative to existing deep-learning-based SLAM systems trained on real (unlabeled) data. Our experiments with unsupervised domain adaptation also show that the difference in visual appearance between simulated and real data does not affect Scale estimation results. Our method operates on low-resolution images (0.03 MP), which makes it practical for real-time SLAM applications with a monocular camera.
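
    The recovery step this abstract relies on can be sketched independently of the network: assuming a (hypothetical) predictor supplies metric distances between consecutive camera centers, the single unknown Scale of the SLAM trajectory follows from a least-squares fit of predicted to up-to-scale distances. The data below are made up for illustration.

```python
import math

def recover_scale(slam_centers, predicted_metric_dists):
    """Scale s minimizing sum_i (metric_i - s * slam_i)^2 over frame pairs.

    slam_centers:           consecutive camera centres from monocular SLAM
                            (arbitrary map units).
    predicted_metric_dists: metric distance for each consecutive pair, as a
                            network like the paper's would predict.
    """
    ds = [math.dist(p, q) for p, q in zip(slam_centers, slam_centers[1:])]
    num = sum(m * d for m, d in zip(predicted_metric_dists, ds))
    den = sum(d * d for d in ds)
    return num / den

# Hypothetical trajectory: SLAM units are twice as large as metres here,
# so the recovered Scale is 0.5 metres per map unit.
centers = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (3.0, 1.0)]
print(recover_scale(centers, [0.5, 0.5, 1.0]))  # -> 0.5
```

    Fitting over many frame pairs, rather than taking a single ratio, averages out per-pair prediction error in the same spirit as the paper's use of consecutive-frame estimates.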
