Rigid Transformation

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 14097 Experts worldwide ranked by ideXlab platform

Laurent Trassoudaine - One of the best experts on this subject based on the ideXlab platform.

  • Global Registration of 3D LiDAR Point Clouds Based on Scene Features: Application to Structured Environments
    Remote Sensing, 2017
    Co-Authors: Julia Sanchez, Paul Checchin, Florence Denis, Florent Dupont, Laurent Trassoudaine
    Abstract:

    Acquiring 3D data with LiDAR systems involves scanning multiple scenes from different points of view. In practice, the ICP (Iterative Closest Point) algorithm is commonly used to register the acquired point clouds into a single one. However, this method suffers from local minima and often needs a coarse initial alignment to converge to the optimum. This paper develops a new registration method adapted to indoor environments and based on the structural priors of such scenes. Our method works without odometric data or physical targets. The rotation and translation of the Rigid Transformation are computed separately, using, respectively, the Gaussian image of the point clouds and a correlation of histograms. To evaluate our algorithm on challenging registration cases, two datasets were acquired and are available online for comparison with other methods. The evaluation of our algorithm on four datasets against six existing methods shows that the proposed method is more robust to sampling and scene complexity. Moreover, its time performance enables a real-time implementation. Data set: https://liris.cnrs.fr/3d-registration/ (license: ODC Attribution License).
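The decoupled estimation described in the abstract can be illustrated with a small sketch. This is not the authors' implementation: the Gaussian image is reduced to three known dominant plane normals (in practice they would be estimated from local plane fits), and the translation is recovered by cross-correlating per-axis coordinate histograms on synthetic wall data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two perpendicular "walls" standing in for a structured indoor scene;
# dst is src moved by an unknown rigid transformation.
wall_a = np.column_stack([np.zeros(300), rng.uniform(0, 5, 300), rng.uniform(0, 3, 300)])
wall_b = np.column_stack([rng.uniform(0, 5, 300), np.zeros(300), rng.uniform(0, 3, 300)])
src = np.vstack([wall_a, wall_b])

theta = np.deg2rad(30.0)                       # ground-truth yaw
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.2, -0.7, 0.4])
dst = src @ R_true.T + t_true

# Step 1 -- rotation alone, from surface normals (the Gaussian-image idea):
# normals are invariant to translation, so aligning the two normal sets
# with an SVD recovers R without knowing t.
n_src = np.eye(3)                # the scene's three dominant plane normals
n_dst = n_src @ R_true.T         # here generated; in practice estimated from the scan
U, _, Vt = np.linalg.svd(n_dst.T @ n_src)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # reflection guard
R_est = U @ D @ Vt

# Step 2 -- translation alone, by cross-correlating per-axis coordinate
# histograms of the de-rotated source against the target.
def shift_1d(a, b, lo=-10.0, hi=10.0, bw=0.05):
    """Estimate the offset d such that b is approximately a + d."""
    bins = np.arange(lo, hi + bw, bw)
    ha, _ = np.histogram(a, bins=bins)
    hb, _ = np.histogram(b, bins=bins)
    lag = np.argmax(np.correlate(hb, ha, mode="full")) - (len(ha) - 1)
    return lag * bw

src_rot = src @ R_est.T
t_est = np.array([shift_1d(src_rot[:, k], dst[:, k]) for k in range(3)])
```

The translation estimate is quantized to the histogram bin width, which is why a fine bin size matters in practice.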

  • Radar Scan Matching SLAM Using the Fourier-Mellin Transform
    Field and Service Robotics, 2010
    Co-Authors: Franck Gérossier, Laurent Trassoudaine, Paul Checchin, Roland Chapuis, Christophe Blanc
    Abstract:

    This paper is concerned with the Simultaneous Localization And Mapping (SLAM) problem using data obtained from a microwave radar sensor. The radar scanner is based on Frequency Modulated Continuous Wave (FMCW) technology. To cope with the complexity of radar image analysis, a trajectory-oriented EKF-SLAM technique using data from a 360° field-of-view radar sensor has been developed. This process makes no landmark assumptions and avoids the data association problem. The ego-motion estimation method uses the Fourier-Mellin Transform to register radar images in a sequence, from which the rotation and translation of the sensor motion can be estimated. In the context of scan-matching SLAM, the use of the Fourier-Mellin Transform is original and provides an accurate and efficient way of computing the Rigid Transformation between consecutive scans. Experimental results on real-world data are presented.
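The registration step at the heart of this approach can be illustrated by its translational core, phase correlation. This is a minimal sketch, not the paper's code; in the full Fourier-Mellin scheme the rotation is first converted into such a shift by resampling the magnitude spectrum on a log-polar grid, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# A synthetic "scan image" and a copy shifted by (dy, dx) pixels.
img = rng.random((128, 128))
dy, dx = 11, -7
shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)

# Phase correlation: normalize the cross-power spectrum so that its
# inverse FFT is (ideally) a delta located at the shift.
F1 = np.fft.fft2(img)
F2 = np.fft.fft2(shifted)
cps = F2 * np.conj(F1)
cps /= np.abs(cps) + 1e-12            # keep only the phase
corr = np.fft.ifft2(cps).real
peak = np.unravel_index(np.argmax(corr), corr.shape)

# Indices past the midpoint correspond to negative shifts (FFT wrap-around).
est = [p - n if p > n // 2 else p for p, n in zip(peak, corr.shape)]
```

The sharpness of the correlation peak is what makes the method robust to the speckle-like noise of radar imagery.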

  • Trajectory-oriented EKF-SLAM using the Fourier-Mellin transform applied to microwave radar images
    2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), 2009
    Co-Authors: Franck Gérossier, Paul Checchin, Roland Chapuis, Christophe Blanc, Laurent Trassoudaine
    Abstract:

    This paper is concerned with the simultaneous localization and mapping (SLAM) application using data obtained from a microwave radar sensor. The radar scanner is based on frequency modulated continuous wave (FMCW) technology. In order to overcome the complexity of radar image analysis, a trajectory-oriented EKF-SLAM technique using data from a 360° field-of-view radar sensor has been developed. This process makes no landmark assumptions and avoids the data association problem. The ego-motion estimation method uses the Fourier-Mellin transform to register radar images in a sequence, from which the rotation and translation of the sensor motion can be estimated. In the context of scan-matching SLAM, the use of the Fourier-Mellin transform is original and provides an accurate and efficient way of computing the Rigid Transformation between consecutive scans. Experimental results on real-world data are presented. Moreover, a performance evaluation of the results is carried out, and a comparative study between the output of the proposed method and data processed with smoothing approaches to SLAM is also presented.

Stergios I. Roumeliotis - One of the best experts on this subject based on the ideXlab platform.

  • A Kalman filter-based algorithm for IMU-camera calibration: Observability analysis and performance evaluation
    IEEE Transactions on Robotics, 2008
    Co-Authors: Faraz M. Mirzaei, Stergios I. Roumeliotis
    Abstract:

    Vision-aided inertial navigation systems (V-INSs) can provide precise state estimates for the 3-D motion of a vehicle when no external references (e.g., GPS) are available. This is achieved by combining inertial measurements from an inertial measurement unit (IMU) with visual observations from a camera under the assumption that the Rigid Transformation between the two sensors is known. Errors in the IMU-camera extrinsic calibration process cause biases that reduce the estimation accuracy and can even lead to divergence of any estimator processing the measurements from both sensors. In this paper, we present an extended Kalman filter for precisely determining the unknown Transformation between a camera and an IMU. Contrary to previous approaches, we explicitly account for the time correlation of the IMU measurements and provide a figure of merit (covariance) for the estimated Transformation. The proposed method does not require any special hardware (such as a spin table or a 3-D laser scanner) except a calibration target. Furthermore, we employ the observability rank criterion based on Lie derivatives and prove that the nonlinear system describing the IMU-camera calibration process is observable. Simulation and experimental results are presented that validate the proposed method and quantify its accuracy.
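A toy illustration of why the extrinsic matters (not the paper's filter, just rigid-body composition with made-up poses): the camera pose follows from composing the IMU pose with the IMU-camera Transformation, so any calibration error biases every derived camera pose.

```python
import numpy as np

def rot_z(a):
    """Rotation matrix about the z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def se3(R, t):
    """Pack a rotation and translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical world pose of the IMU and IMU-to-camera extrinsic.
T_w_imu = se3(rot_z(np.deg2rad(40.0)), np.array([2.0, 1.0, 0.0]))
T_imu_cam = se3(rot_z(np.deg2rad(-90.0)), np.array([0.10, 0.0, 0.05]))

# Rigid Transformations compose by matrix multiplication.
T_w_cam = T_w_imu @ T_imu_cam

# A small extrinsic error (1 degree, 2 cm) biases the derived camera pose;
# this is exactly the bias the calibration filter is meant to remove.
T_err = se3(rot_z(np.deg2rad(1.0)), np.array([0.02, 0.0, 0.0]))
T_w_cam_biased = T_w_imu @ (T_imu_cam @ T_err)
pos_bias = T_w_cam_biased[:3, 3] - T_w_cam[:3, 3]
```

Because the error is applied in the camera frame, the resulting 2 cm position bias is carried into every world-frame camera pose, regardless of where the vehicle is.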

  • A Kalman filter-based algorithm for IMU-camera calibration
    IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2007
    Co-Authors: Faraz M. Mirzaei, Stergios I. Roumeliotis
    Abstract:

    Vision-aided Inertial Navigation Systems (V-INS) can provide precise state estimates for the 3D motion of a vehicle when no external references (e.g., GPS) are available. This is achieved by combining inertial measurements from an IMU with visual observations from a camera under the assumption that the Rigid Transformation between the two sensors is known. Errors in the IMU-camera calibration process cause biases that reduce the accuracy of the estimation process and can even lead to divergence. In this paper, we present a Kalman filter-based algorithm for precisely determining the unknown Transformation between a camera and an IMU. Contrary to previous approaches, we explicitly account for the time correlations of the IMU measurements and provide a figure of merit (covariance) for the estimated Transformation. The proposed method does not require any special hardware (such as a spin table or a 3D laser scanner) except a calibration target. Simulation and experimental results are presented that validate the proposed method and quantify its accuracy.

Justin Solomon - One of the best experts on this subject based on the ideXlab platform.

  • Deep Closest Point: Learning Representations for Point Cloud Registration
    International Conference on Computer Vision, 2019
    Co-Authors: Yue Wang, Justin Solomon
    Abstract:

    Point cloud registration is a key problem for computer vision applied to robotics, medical imaging, and other applications. The task is to find the Rigid Transformation that aligns one point cloud with another. Iterative Closest Point (ICP) and its variants provide simple, easily implemented iterative methods for this task, but these algorithms can converge to spurious local optima. To address local optima and other difficulties in the ICP pipeline, we propose a learning-based method, titled Deep Closest Point (DCP), inspired by recent techniques in computer vision and natural language processing. Our model consists of three parts: a point cloud embedding network; an attention-based module combined with a pointer generation layer to approximate combinatorial matching; and a differentiable singular value decomposition (SVD) layer to extract the final Rigid Transformation. We train our model end-to-end on the ModelNet40 dataset and show in several settings that it performs better than ICP, its variants (e.g., Go-ICP, FGR), and the recently proposed learning-based method PointNetLK. Beyond providing a state-of-the-art registration technique, we evaluate the suitability of our learned features when transferred to unseen objects. We also provide a preliminary analysis of our learned model to help understand whether domain-specific and/or global features facilitate Rigid registration.
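The differentiable SVD layer mentioned above implements the classic Kabsch/orthogonal-Procrustes solution. Here is a minimal numpy sketch under the simplifying assumption that correspondences are already known (learning them is DCP's contribution):

```python
import numpy as np

rng = np.random.default_rng(2)
src = rng.normal(size=(100, 3))       # source point cloud

# Ground-truth rigid motion and a perfectly matched target cloud.
theta = np.deg2rad(25.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.3, -0.2, 0.5])
dst = src @ R_true.T + t_true

# Kabsch: centre both clouds, SVD the cross-covariance, and guard
# against a reflection before reading off R and t in closed form.
mu_s, mu_d = src.mean(0), dst.mean(0)
H = (src - mu_s).T @ (dst - mu_d)
U, _, Vt = np.linalg.svd(H)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
R_est = Vt.T @ D @ U.T
t_est = mu_d - R_est @ mu_s
```

Every operation here (means, matrix products, SVD) is differentiable almost everywhere, which is what lets DCP backpropagate through this final alignment step.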

Paul Checchin - One of the best experts on this subject based on the ideXlab platform.

  • Global Registration of 3D LiDAR Point Clouds Based on Scene Features: Application to Structured Environments
    Remote Sensing, 2017
    Co-Authors: Julia Sanchez, Paul Checchin, Florence Denis, Florent Dupont, Laurent Trassoudaine
    Abstract:

    Acquiring 3D data with LiDAR systems involves scanning multiple scenes from different points of view. In practice, the ICP (Iterative Closest Point) algorithm is commonly used to register the acquired point clouds into a single one. However, this method suffers from local minima and often needs a coarse initial alignment to converge to the optimum. This paper develops a new registration method adapted to indoor environments and based on the structural priors of such scenes. Our method works without odometric data or physical targets. The rotation and translation of the Rigid Transformation are computed separately, using, respectively, the Gaussian image of the point clouds and a correlation of histograms. To evaluate our algorithm on challenging registration cases, two datasets were acquired and are available online for comparison with other methods. The evaluation of our algorithm on four datasets against six existing methods shows that the proposed method is more robust to sampling and scene complexity. Moreover, its time performance enables a real-time implementation. Data set: https://liris.cnrs.fr/3d-registration/ (license: ODC Attribution License).

  • Radar Scan Matching SLAM Using the Fourier-Mellin Transform
    Field and Service Robotics, 2010
    Co-Authors: Franck Gérossier, Laurent Trassoudaine, Paul Checchin, Roland Chapuis, Christophe Blanc
    Abstract:

    This paper is concerned with the Simultaneous Localization And Mapping (SLAM) problem using data obtained from a microwave radar sensor. The radar scanner is based on Frequency Modulated Continuous Wave (FMCW) technology. To cope with the complexity of radar image analysis, a trajectory-oriented EKF-SLAM technique using data from a 360° field-of-view radar sensor has been developed. This process makes no landmark assumptions and avoids the data association problem. The ego-motion estimation method uses the Fourier-Mellin Transform to register radar images in a sequence, from which the rotation and translation of the sensor motion can be estimated. In the context of scan-matching SLAM, the use of the Fourier-Mellin Transform is original and provides an accurate and efficient way of computing the Rigid Transformation between consecutive scans. Experimental results on real-world data are presented.

  • Trajectory-oriented EKF-SLAM using the Fourier-Mellin transform applied to microwave radar images
    2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), 2009
    Co-Authors: Franck Gérossier, Paul Checchin, Roland Chapuis, Christophe Blanc, Laurent Trassoudaine
    Abstract:

    This paper is concerned with the simultaneous localization and mapping (SLAM) application using data obtained from a microwave radar sensor. The radar scanner is based on frequency modulated continuous wave (FMCW) technology. In order to overcome the complexity of radar image analysis, a trajectory-oriented EKF-SLAM technique using data from a 360° field-of-view radar sensor has been developed. This process makes no landmark assumptions and avoids the data association problem. The ego-motion estimation method uses the Fourier-Mellin transform to register radar images in a sequence, from which the rotation and translation of the sensor motion can be estimated. In the context of scan-matching SLAM, the use of the Fourier-Mellin transform is original and provides an accurate and efficient way of computing the Rigid Transformation between consecutive scans. Experimental results on real-world data are presented. Moreover, a performance evaluation of the results is carried out, and a comparative study between the output of the proposed method and data processed with smoothing approaches to SLAM is also presented.

Bisheng Yang - One of the best experts on this subject based on the ideXlab platform.

  • NRLI-UAV: Non-Rigid Registration of Sequential Raw Laser Scans and Images for Low-Cost UAV LiDAR Point Cloud Quality Improvement
    ISPRS Journal of Photogrammetry and Remote Sensing, 2019
    Co-Authors: Bisheng Yang, Chi Chen, Ayman Habib
    Abstract:

    Accurate registration of light detection and ranging (LiDAR) point clouds and images is a prerequisite for integrating the spectral and geometrical information collected by low-cost unmanned aerial vehicle (UAV) systems. Most registration approaches take the directly georeferenced LiDAR point cloud as a Rigid body, on the assumption that the high-precision positioning and orientation system (POS) in the LiDAR system provides sufficient precision and that the POS errors are negligible. However, due to the large errors of the low-precision POSs commonly used in low-cost UAV LiDAR systems (ULSs), dramatic deformation may exist in the directly georeferenced ULS point cloud, resulting in a non-Rigid Transformation between the images and the deformed ULS point cloud. As a result, registration may fail when a Rigid Transformation between the images and the directly georeferenced LiDAR point clouds is used. To address this problem, we propose NRLI-UAV, a non-Rigid registration method for sequential raw laser scans and images collected by low-cost UAV systems. NRLI-UAV is a two-step registration method that exploits trajectory correction and discrepancy minimization between the depths derived from structure from motion (SfM) and the raw laser scans to improve LiDAR point cloud quality. First, the coarse registration procedure utilizes global navigation satellite system (GNSS) and inertial measurement unit (IMU)-aided SfM to obtain accurate image orientations and to correct the errors of the low-precision POS. Second, the fine registration procedure transforms the original 2D-3D registration into 3D-3D registration. This is performed by setting the oriented images as the reference and iteratively minimizing the discrepancy between the depth maps derived from SfM and the raw laser scans, resulting in accurate registration between the images and the LiDAR point clouds. In addition, an improved LiDAR point cloud is generated in the mapping frame. Experiments were conducted with data collected by a low-cost UAV system in three challenging scenes to evaluate NRLI-UAV. The final registration errors between the images and the LiDAR point cloud are less than one pixel in image space and less than 0.13 m in object space. The LiDAR point cloud quality was also evaluated by plane fitting; the results show that the quality is improved by 8.8 times, from 0.45 m (root-mean-square error, RMSE, of plane fitting) to 0.05 m, using NRLI-UAV, demonstrating a high level of automation, robustness, and accuracy.
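The plane-fitting RMSE used as the quality metric above can be reproduced with a total-least-squares plane fit. This is a small sketch on synthetic data; the 0.05 m noise level is an arbitrary stand-in, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Points near the plane z = 0.2x - 0.1y + 1 with ~0.05 m of noise,
# standing in for a scanned planar surface such as a facade.
xy = rng.uniform(-5.0, 5.0, size=(500, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 1.0 + rng.normal(0.0, 0.05, 500)
pts = np.column_stack([xy, z])

# Total-least-squares plane fit: the normal of the best-fit plane is the
# right singular vector of the centred points with the smallest singular
# value (singular values are returned in descending order).
centroid = pts.mean(0)
_, _, Vt = np.linalg.svd(pts - centroid)
normal = Vt[-1]

# Signed orthogonal point-to-plane distances and their RMSE.
dist = (pts - centroid) @ normal
rmse = np.sqrt(np.mean(dist ** 2))
```

The RMSE of these orthogonal residuals is the per-plane quality figure; in the paper it drops from 0.45 m to 0.05 m after non-Rigid correction.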