Absolute Scale

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 71607 Experts worldwide ranked by ideXlab platform

Agostino Martinelli - One of the best experts on this subject based on the ideXlab platform.

  • vision and imu data fusion closed form solutions for attitude speed Absolute Scale and bias determination
    IEEE Transactions on Robotics, 2012
    Co-Authors: Agostino Martinelli
    Abstract:

    This paper investigates the problem of vision and inertial data fusion. A sensor assembly consisting of one monocular camera, three orthogonal accelerometers, and three orthogonal gyroscopes is considered. The first paper contribution is the analytical derivation of all the observable modes, i.e., all the physical quantities that can be determined by only using the information in the sensor data that are acquired during a short time interval. Specifically, the observable modes are the speed and attitude (roll and pitch angles), the Absolute Scale, and the biases that affect the inertial measurements. This holds even in the case when the camera only observes a single point feature. The analytical derivation of the aforementioned observable modes is based on a nonstandard observability analysis, which fully accounts for the system nonlinearities. The second contribution is the analytical derivation of closed-form solutions, which analytically express all the aforementioned observable modes in terms of the visual and inertial measurements that are collected during a very short time interval. This allows the introduction of a very simple and powerful new method that is able to simultaneously estimate all the observable modes with no need for any initialization or a priori knowledge. Both the observability analysis and the derivation of the closed-form solutions are carried out in several different contexts, including the case of biased and unbiased inertial measurements, the case of a single and multiple features, and in the presence and absence of gravity. In addition, in all these contexts, the minimum number of camera images that are necessary for the observability is derived. The performance of the proposed approach is evaluated via extensive Monte Carlo simulations and real experiments.
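The core idea behind closed-form scale recovery can be illustrated in a few lines: gravity-compensated, bias-free accelerometer data double-integrated over a short interval gives metric displacements, while the camera gives the same displacements only up to scale; the Absolute Scale is then the least-squares ratio between the two. This is a simplified sketch of the idea, not Martinelli's full closed-form solution (which also recovers attitude and biases).

```python
# Hedged sketch: least-squares recovery of the absolute scale lambda that
# maps up-to-scale visual displacements onto metric displacements obtained
# by double-integrating (gravity-compensated, bias-free) accelerometer data.

def estimate_scale(visual_disp, metric_disp):
    """visual_disp, metric_disp: lists of 3-vectors (tuples) over the
    same short time interval. Returns the lambda minimizing
    sum ||lambda * v - m||^2 over all displacement pairs."""
    num = sum(v[i] * m[i] for v, m in zip(visual_disp, metric_disp) for i in range(3))
    den = sum(v[i] * v[i] for v in visual_disp for i in range(3))
    return num / den

# Toy example with a true scale of 2.5:
v = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
m = [(2.5, 0.0, 0.0), (0.0, 5.0, 0.0)]
print(estimate_scale(v, m))  # → 2.5
```

In the papers above, the metric displacements are themselves unknowns coupled with attitude and bias; the full solution solves one joint linear system rather than this two-step ratio.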

  • Vision and IMU Data Fusion: Closed-Form Determination of the Absolute Scale, Speed and Attitude
    2012
    Co-Authors: Agostino Martinelli, Roland Siegwart
    Abstract:

    This chapter describes an algorithm for determining the speed and the attitude of a sensor assembly consisting of a monocular camera and inertial sensors (three orthogonal accelerometers and three orthogonal gyroscopes). The system moves in an unknown 3D environment. The algorithm inputs are the visual and inertial measurements collected during a very short time interval. The outputs are the speed and attitude, the Absolute Scale, and the biases affecting the inertial measurements. These outputs are obtained by a simple closed-form solution which analytically expresses the previous physical quantities in terms of the sensor measurements. This closed-form determination allows the overall estimation to be performed in a very short time interval, without the need for any initialization or prior knowledge. This is a key advantage, since it eliminates the drift on the Absolute Scale and on the orientation. The performance of the proposed algorithm is evaluated with real experiments.

  • Vision-Aided Inertial Navigation: Closed-Form Determination of Absolute Scale, Speed and Attitude
    2011
    Co-Authors: Agostino Martinelli, Chiara Troiani, Alessandro Renzaglia
    Abstract:

    This paper investigates the problem of determining the speed and the attitude of a vehicle equipped with a monocular camera and inertial sensors. The vehicle moves in an unknown 3D environment. It is shown that, by collecting the visual and inertial measurements during a very short time interval, it is possible to determine the following physical quantities: the vehicle speed and attitude, the Absolute distance of the point features observed by the camera during the considered time interval, and the biases affecting the inertial measurements. In particular, this determination is based on a closed-form solution which analytically expresses the previous physical quantities in terms of the sensor measurements. This closed-form determination allows the overall estimation to be performed in a very short time interval, without the need for any initialization or prior knowledge. This is a key advantage, since it eliminates the drift on the Absolute Scale and on the vehicle orientation. In addition, the paper provides the minimum number of distinct camera images needed to perform this determination: if the magnitude of the gravity is unknown, at least four camera images are necessary, while if it is a priori known, three are sufficient. The performance of the proposed approach is evaluated using real data.

  • Vision and IMU Data Fusion: Closed-Form Solutions for Attitude, Speed, Absolute Scale and Bias Determination
    IEEE Transactions on Robotics, 2011
    Co-Authors: Agostino Martinelli
    Abstract:

    This paper investigates the problem of vision and inertial data fusion. A sensor assembly consisting of one monocular camera, three orthogonal accelerometers, and three orthogonal gyroscopes is considered. The first paper contribution is the analytical derivation of all the observable modes, i.e., all the physical quantities that can be determined by only using the information in the sensor data acquired during a short time interval. Specifically, the observable modes are the speed and attitude (roll and pitch angles), the Absolute Scale, and the biases affecting the inertial measurements. This holds even in the case when the camera only observes a single point feature. The analytical derivation of the aforementioned observable modes is based on a nonstandard observability analysis, which fully accounts for the system nonlinearities. The second contribution is the analytical derivation of closed-form solutions which analytically express all the aforementioned observable modes in terms of the visual and inertial measurements collected during a very short time interval. This allows the introduction of a very simple and powerful new method able to simultaneously estimate all the observable modes without the need for any initialization or a priori knowledge. Both the observability analysis and the derivation of the closed-form solutions are carried out in several different contexts, including the case of biased and unbiased inertial measurements, the case of a single and multiple features, and in the presence and absence of gravity. In addition, in all these contexts, the minimum number of camera images necessary for the observability is derived. The performance of the proposed approach is evaluated via extensive Monte Carlo simulations and real experiments.

  • Closed-Form Solution for Absolute Scale Velocity Determination Combining Inertial Measurements and a Single Feature Correspondence
    2011
    Co-Authors: Kneip Laurent, Agostino Martinelli, Davide Scaramuzza, Stephane Weiss, Roland Siegwart
    Abstract:

    This paper presents a closed-form solution for metric velocity estimation of a single camera using inertial measurements. It combines accelerometer and attitude measurements with feature observations in order to compute both the distance to the feature and the speed of the camera in the camera frame. Notably, we show that this is possible using just three consecutive camera positions and a single feature correspondence. Our approach represents a compact, linear, and multirate solution for estimating information complementary to regular essential-matrix computation, namely the Scale of the problem. The algorithm is thoroughly validated on simulated and real data, and conditions for good quality of the results are identified.
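The step that turns a single feature correspondence into metric information can be sketched as follows: once inertial integration supplies a metric baseline between two camera centers, the feature seen in both frames can be triangulated in metric units. This is a hedged two-view simplification; the paper itself uses three views and a joint linear system for velocity.

```python
# Hedged sketch: metric triangulation of one feature from two views,
# given a metric baseline t recovered from inertial integration.
# Simplified relative to the paper's three-view linear formulation.

def triangulate_depths(f1, f2, t):
    """f1, f2: unit bearing vectors to the feature from camera 1 and
    camera 2, expressed in a common frame; t: metric baseline vector
    (camera 1 -> camera 2). Solves d1*f1 = t + d2*f2 in least squares
    and returns the metric depths (d1, d2)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # Normal equations of min ||d1*f1 - d2*f2 - t||^2
    a11, a12 = dot(f1, f1), -dot(f1, f2)
    a22 = dot(f2, f2)
    b1, b2 = dot(f1, t), -dot(f2, t)
    det = a11 * a22 - a12 * a12
    d1 = (b1 * a22 - a12 * b2) / det
    d2 = (a11 * b2 - a12 * b1) / det
    return d1, d2
```

Dividing the metric baseline by the elapsed time between the frames then gives the metric camera velocity, which is the quantity the paper recovers in closed form.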

Roland Siegwart - One of the best experts on this subject based on the ideXlab platform.

  • Vision and IMU Data Fusion: Closed-Form Determination of the Absolute Scale, Speed and Attitude
    2012
    Co-Authors: Agostino Martinelli, Roland Siegwart
    Abstract:

    This chapter describes an algorithm for determining the speed and the attitude of a sensor assembly consisting of a monocular camera and inertial sensors (three orthogonal accelerometers and three orthogonal gyroscopes). The system moves in an unknown 3D environment. The algorithm inputs are the visual and inertial measurements collected during a very short time interval. The outputs are the speed and attitude, the Absolute Scale, and the biases affecting the inertial measurements. These outputs are obtained by a simple closed-form solution which analytically expresses the previous physical quantities in terms of the sensor measurements. This closed-form determination allows the overall estimation to be performed in a very short time interval, without the need for any initialization or prior knowledge. This is a key advantage, since it eliminates the drift on the Absolute Scale and on the orientation. The performance of the proposed algorithm is evaluated with real experiments.

  • Closed-Form Solution for Absolute Scale Velocity Determination Combining Inertial Measurements and a Single Feature Correspondence
    2011
    Co-Authors: Kneip Laurent, Agostino Martinelli, Davide Scaramuzza, Stephane Weiss, Roland Siegwart
    Abstract:

    This paper presents a closed-form solution for metric velocity estimation of a single camera using inertial measurements. It combines accelerometer and attitude measurements with feature observations in order to compute both the distance to the feature and the speed of the camera in the camera frame. Notably, we show that this is possible using just three consecutive camera positions and a single feature correspondence. Our approach represents a compact, linear, and multirate solution for estimating information complementary to regular essential-matrix computation, namely the Scale of the problem. The algorithm is thoroughly validated on simulated and real data, and conditions for good quality of the results are identified.

  • Fusion of IMU and vision for Absolute Scale estimation in monocular SLAM
    Journal of Intelligent and Robotic Systems: Theory and Applications, 2011
    Co-Authors: Gabriel Nützi, Stephan Weiss, Davide Scaramuzza, Roland Siegwart
    Abstract:

    The fusion of inertial and visual data is widely used to improve an object’s pose estimation. However, this type of fusion is rarely used to estimate further unknowns in the visual framework. In this paper we present and compare two different approaches to estimating the unknown Scale parameter in a monocular SLAM framework. Directly linked to the Scale is the estimation of the object’s Absolute velocity and position in 3D. The first approach is a spline-fitting task adapted from Jung and Taylor, and the second is an extended Kalman filter. Both methods have been simulated offline on arbitrary camera paths to analyze their behavior and the quality of the resulting Scale estimation. We then embedded an online multirate extended Kalman filter in the Parallel Tracking and Mapping (PTAM) algorithm of Klein and Murray together with an inertial sensor. In this inertial/monocular SLAM framework, we show real-time, robust, and fast-converging Scale estimation. Our approach neither depends on known patterns in the vision part nor requires complex temporal synchronization between the visual and inertial sensors.
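The filtering idea can be reduced to a toy illustration: treat the unknown Scale as a (roughly constant) scalar state and fuse noisy per-interval ratios of inertially derived metric displacement to visual up-to-scale displacement. This scalar Kalman filter is a stand-in sketch, not the paper's multirate EKF, which also carries velocity and position in its state.

```python
# Toy illustration (not the paper's multirate EKF): a scalar Kalman
# filter that refines the scale factor lambda from noisy ratio
# measurements. The state is assumed constant (no process noise).

def kalman_scale(ratio_measurements, r=0.04, x0=1.0, p0=1.0):
    """ratio_measurements: noisy observations of the true scale,
    e.g. metric displacement / visual displacement per interval.
    r: measurement noise variance; x0, p0: prior mean and variance.
    Returns the filtered scale estimate."""
    x, p = x0, p0
    for z in ratio_measurements:
        k = p / (p + r)        # Kalman gain (static state)
        x = x + k * (z - x)    # measurement update
        p = (1.0 - k) * p      # covariance update
    return x
```

As measurements accumulate, the gain shrinks and the estimate converges toward the true Scale; the embedded PTAM filter does the analogous update at camera rate while propagating with IMU data at a higher rate.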

  • Absolute Scale in structure from motion from a single vehicle mounted camera by exploiting nonholonomic constraints
    International Conference on Computer Vision, 2009
    Co-Authors: Davide Scaramuzza, Friedrich Fraundorfer, Marc Pollefeys, Roland Siegwart
    Abstract:

    In structure from motion with a single camera, it is well known that the scene can only be recovered up to a Scale. In order to compute the Absolute Scale, one needs to know the baseline of the camera motion or the dimension of at least one element in the scene. In this paper, we show that there exists a class of structure-from-motion problems where it is possible to compute the Absolute Scale completely automatically without using this knowledge, namely when the camera is mounted on a wheeled vehicle (e.g. a car, bike, or mobile robot). The construction of these vehicles puts interesting constraints on the camera motion, known as “nonholonomic constraints”. The interesting case is when the camera has an offset from the vehicle's center of motion. We show that by just knowing this offset, the Absolute Scale can be computed with good accuracy when the vehicle turns. We give a mathematical derivation and provide experimental results on both simulated and real data over a large image dataset collected along a 3 km path. To our knowledge, this is the first time the nonholonomic constraints of wheeled vehicles have been used to estimate the Absolute Scale. We believe that the proposed method can be useful in research areas involving visual odometry and mapping with vehicle-mounted cameras.
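The geometric intuition behind the offset trick can be shown in a simplified special case (this is a hedged sketch, not the paper's general derivation): if the camera sits at a known metric offset L from the vehicle's rotation center and the vehicle performs a pure rotation by theta about that center, the camera center travels along a chord of metric length 2·L·sin(theta/2). The visual rotation gives theta, so dividing the metric chord by the norm of the up-to-scale visual translation yields the Absolute Scale.

```python
import math

# Hedged geometric sketch of the offset idea, restricted to a pure
# rotation of the vehicle about its motion center (the paper handles
# general turning motion). A camera at metric offset L from the
# rotation center moves along a chord of length 2 * L * sin(theta/2).

def scale_from_offset(L, theta, visual_translation_norm):
    """L: camera offset from the rotation center [m];
    theta: vehicle rotation during the turn [rad] (from visual odometry);
    visual_translation_norm: ||t|| of the up-to-scale visual translation.
    Returns the absolute scale factor."""
    metric_chord = 2.0 * L * math.sin(theta / 2.0)
    return metric_chord / visual_translation_norm
```

This also makes the failure mode visible: for straight motion (theta near zero) the chord vanishes and the scale becomes unobservable, matching the paper's statement that the scale is recovered "when the vehicle turns".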

  • ICCV - Absolute Scale in structure from motion from a single vehicle mounted camera by exploiting nonholonomic constraints
    2009 IEEE 12th International Conference on Computer Vision, 2009
    Co-Authors: Davide Scaramuzza, Friedrich Fraundorfer, Marc Pollefeys, Roland Siegwart
    Abstract:

    In structure from motion with a single camera, it is well known that the scene can only be recovered up to a Scale. In order to compute the Absolute Scale, one needs to know the baseline of the camera motion or the dimension of at least one element in the scene. In this paper, we show that there exists a class of structure-from-motion problems where it is possible to compute the Absolute Scale completely automatically without using this knowledge, namely when the camera is mounted on a wheeled vehicle (e.g. a car, bike, or mobile robot). The construction of these vehicles puts interesting constraints on the camera motion, known as “nonholonomic constraints”. The interesting case is when the camera has an offset from the vehicle's center of motion. We show that by just knowing this offset, the Absolute Scale can be computed with good accuracy when the vehicle turns. We give a mathematical derivation and provide experimental results on both simulated and real data over a large image dataset collected along a 3 km path. To our knowledge, this is the first time the nonholonomic constraints of wheeled vehicles have been used to estimate the Absolute Scale. We believe that the proposed method can be useful in research areas involving visual odometry and mapping with vehicle-mounted cameras.

Alexander Velizhev - One of the best experts on this subject based on the ideXlab platform.

  • Estimation of Absolute Scale in Monocular SLAM Using Synthetic Data
    2019 IEEE CVF International Conference on Computer Vision Workshop (ICCVW), 2019
    Co-Authors: Danila Rukhovich, Daniel Mouritzen, Ralf Kaestner, Martin Rufli, Alexander Velizhev
    Abstract:

    This paper addresses the problem of Scale estimation in monocular SLAM by estimating the Absolute distances between the camera centers of consecutive image frames. These estimates would improve the overall performance of classical (non-deep) SLAM systems and allow metric feature locations to be recovered from a single monocular camera. We propose several network architectures that improve Scale estimation accuracy over the state of the art. In addition, we exploit the possibility of training the neural network only with synthetic data derived from a computer graphics simulator. Our key insight is that, using only synthetic training inputs, we can achieve Scale estimation accuracy similar to that obtained from real data. This indicates that fully annotated simulated data is a viable alternative for existing deep-learning-based SLAM systems trained on real (unlabeled) data. Our experiments with unsupervised domain adaptation also show that the difference in visual appearance between simulated and real data does not affect Scale estimation results. Our method operates on low-resolution images (0.03 MP), which makes it practical for real-time SLAM applications with a monocular camera.
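How per-frame distance predictions turn into a metric trajectory can be sketched independently of the network itself: given the up-to-scale camera centers from SLAM and predicted metric distances between consecutive frames, a single global scale follows by least squares. The prediction source here is a stand-in for the learned model; only the rescaling step is shown.

```python
# Hedged sketch: fitting one global scale that maps an up-to-scale
# monocular trajectory onto predicted metric inter-frame distances
# (e.g. from a network like those described above).

def rescale_trajectory(positions, predicted_distances):
    """positions: up-to-scale camera centers (list of 3-tuples);
    predicted_distances: predicted metric distances between consecutive
    frames (len(positions) - 1 values). Returns the least-squares
    global scale minimizing sum (scale * seg_i - dist_i)^2."""
    num = den = 0.0
    for (a, b), d in zip(zip(positions, positions[1:]), predicted_distances):
        seg = sum((bi - ai) ** 2 for ai, bi in zip(a, b)) ** 0.5
        num += seg * d
        den += seg * seg
    return num / den
```

Multiplying every camera center and map point by the returned scale yields the metric reconstruction; per-segment rescaling would instead handle scale drift, at the cost of noisier estimates.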

  • ICCV Workshops - Estimation of Absolute Scale in Monocular SLAM Using Synthetic Data
    2019 IEEE CVF International Conference on Computer Vision Workshop (ICCVW), 2019
    Co-Authors: Danila Rukhovich, Daniel Mouritzen, Ralf Kaestner, Martin Rufli, Alexander Velizhev
    Abstract:

    This paper addresses the problem of Scale estimation in monocular SLAM by estimating the Absolute distances between the camera centers of consecutive image frames. These estimates would improve the overall performance of classical (non-deep) SLAM systems and allow metric feature locations to be recovered from a single monocular camera. We propose several network architectures that improve Scale estimation accuracy over the state of the art. In addition, we exploit the possibility of training the neural network only with synthetic data derived from a computer graphics simulator. Our key insight is that, using only synthetic training inputs, we can achieve Scale estimation accuracy similar to that obtained from real data. This indicates that fully annotated simulated data is a viable alternative for existing deep-learning-based SLAM systems trained on real (unlabeled) data. Our experiments with unsupervised domain adaptation also show that the difference in visual appearance between simulated and real data does not affect Scale estimation results. Our method operates on low-resolution images (0.03 MP), which makes it practical for real-time SLAM applications with a monocular camera.

Alessandro Renzaglia - One of the best experts on this subject based on the ideXlab platform.

  • Vision-Aided Inertial Navigation: Closed-Form Determination of Absolute Scale, Speed and Attitude
    2011
    Co-Authors: Agostino Martinelli, Chiara Troiani, Alessandro Renzaglia
    Abstract:

    This paper investigates the problem of determining the speed and the attitude of a vehicle equipped with a monocular camera and inertial sensors. The vehicle moves in an unknown 3D environment. It is shown that, by collecting the visual and inertial measurements during a very short time interval, it is possible to determine the following physical quantities: the vehicle speed and attitude, the Absolute distance of the point features observed by the camera during the considered time interval, and the biases affecting the inertial measurements. In particular, this determination is based on a closed-form solution which analytically expresses the previous physical quantities in terms of the sensor measurements. This closed-form determination allows the overall estimation to be performed in a very short time interval, without the need for any initialization or prior knowledge. This is a key advantage, since it eliminates the drift on the Absolute Scale and on the vehicle orientation. In addition, the paper provides the minimum number of distinct camera images needed to perform this determination: if the magnitude of the gravity is unknown, at least four camera images are necessary, while if it is a priori known, three are sufficient. The performance of the proposed approach is evaluated using real data.

  • IROS - Vision-aided inertial navigation: Closed-form determination of Absolute Scale, speed and attitude
    2011 IEEE RSJ International Conference on Intelligent Robots and Systems, 2011
    Co-Authors: Agostino Martinelli, Chiara Troiani, Alessandro Renzaglia
    Abstract:

    This paper investigates the problem of determining the speed and the attitude of a vehicle equipped with a monocular camera and inertial sensors. The vehicle moves in an unknown 3D environment. It is shown that, by collecting the visual and inertial measurements during a very short time interval, it is possible to determine the following physical quantities: the vehicle speed and attitude, the Absolute distance of the point features observed by the camera during the considered time interval, and the biases affecting the inertial measurements. In particular, this determination is based on a closed-form solution which analytically expresses the previous physical quantities in terms of the sensor measurements. This closed-form determination allows the overall estimation to be performed in a very short time interval, without the need for any initialization or prior knowledge. This is a key advantage, since it eliminates the drift on the Absolute Scale and on the vehicle orientation. In addition, the paper provides the minimum number of distinct camera images needed to perform this determination: if the magnitude of the gravity is unknown, at least four camera images are necessary, while if it is a priori known, three are sufficient. The performance of the proposed approach is evaluated using real data.

Bruno Siciliano - One of the best experts on this subject based on the ideXlab platform.

  • MAV indoor navigation based on a closed-form solution for Absolute Scale velocity estimation using Optical Flow and inertial data
    Proceedings of the IEEE Conference on Decision and Control, 2011
    Co-Authors: Vincenzo Lippiello, Giuseppe Loianno, Bruno Siciliano
    Abstract:

    A new vision-based obstacle avoidance technique for the indoor navigation of Micro Aerial Vehicles (MAVs) is presented in this paper. The vehicle trajectory is modified according to the obstacles detected through the Depth Map of the surrounding environment, which is computed online using the Optical Flow provided by a single onboard omnidirectional camera. An existing closed-form solution for Absolute-Scale velocity estimation based on visual correspondences and inertial measurements is generalized and employed here for the Depth Map estimation. Moreover, a dynamic region of interest for image feature extraction and a self-limitation control for the navigation velocity are proposed to improve safety in view of the estimated vehicle velocity. The proposed solutions are validated by means of simulations.
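The link between metric velocity and depth can be illustrated in the simplest special case (a hedged sketch, not the paper's omnidirectional formulation): for a camera translating parallel to the image plane with known metric speed v, a feature's optical-flow magnitude is inversely proportional to its depth, so depth follows directly once v is known from the closed-form velocity estimator.

```python
# Hedged sketch: depth from optical flow in the fronto-parallel special
# case. For a pinhole camera with focal length f [px] translating
# parallel to the image plane at metric speed v [m/s], a static point
# at depth Z produces image motion |u_dot| = f * v / Z [px/s].

def depth_from_flow(focal_px, speed_mps, flow_px_per_s):
    """Invert |u_dot| = f * v / Z to get Z = f * v / |u_dot|."""
    return focal_px * speed_mps / flow_px_per_s
```

Repeating this per feature populates the Depth Map the navigation layer uses; the general formulation must additionally account for the rotational flow component and the omnidirectional projection model.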

  • CDC-ECE - MAV indoor navigation based on a closed-form solution for Absolute Scale velocity estimation using Optical Flow and inertial data
    IEEE Conference on Decision and Control and European Control Conference, 2011
    Co-Authors: Vincenzo Lippiello, Giuseppe Loianno, Bruno Siciliano
    Abstract:

    A new vision-based obstacle avoidance technique for the indoor navigation of Micro Aerial Vehicles (MAVs) is presented in this paper. The vehicle trajectory is modified according to the obstacles detected through the Depth Map of the surrounding environment, which is computed online using the Optical Flow provided by a single onboard omnidirectional camera. An existing closed-form solution for Absolute-Scale velocity estimation based on visual correspondences and inertial measurements is generalized and employed here for the Depth Map estimation. Moreover, a dynamic region of interest for image feature extraction and a self-limitation control for the navigation velocity are proposed to improve safety in view of the estimated vehicle velocity. The proposed solutions are validated by means of simulations.