Imaging Sonar

The experts below are selected from a list of 1,542 experts worldwide, ranked by the ideXlab platform.

Michael Kaess - One of the best experts on this subject based on the ideXlab platform.

  • IROS - Wide Aperture Imaging Sonar Reconstruction using Generative Models
    2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019
    Co-Authors: Eric Westman, Michael Kaess
    Abstract:

    In this paper we propose a new framework for reconstructing underwater surfaces from wide aperture Imaging Sonar sequences. We demonstrate that when the leading object edge in each Sonar image can be accurately triangulated in 3D, the remaining surface may be “filled in” using a generative sensor model. This process generates a full three-dimensional point cloud for each image in the sequence. We propose integrating these surface measurements into a cohesive global map using a truncated signed distance field (TSDF) to fuse the point clouds generated by each image. This allows for reconstructing surfaces with significantly fewer Sonar images and viewpoints than previous methods. The proposed method is evaluated by reconstructing a mock-up piling structure and a real world underwater piling, in a test tank environment and in the field, respectively. Our surface reconstructions are quantitatively compared to ground-truth models and are shown to be more accurate than previous state-of-the-art algorithms.
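
The TSDF fusion step mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the voxel size, truncation distance, and grid extent below are assumed values, and the per-image point clouds are assumed to already be expressed in a common world frame.

```python
import numpy as np

# Minimal TSDF fusion sketch (illustrative only, not the authors' code).
VOXEL = 0.02                                  # voxel size [m] (assumed)
TRUNC = 0.06                                  # truncation distance [m] (assumed)
GRID = (200, 200, 200)                        # grid resolution (assumed)
ORIGIN = np.array([-2.0, -2.0, -2.0])         # world position of voxel (0, 0, 0)

tsdf = np.ones(GRID, dtype=np.float32)        # truncated signed distance values
weights = np.zeros(GRID, dtype=np.float32)    # per-voxel integration weights

def integrate(points, sensor_origin, n_samples=8):
    """Fuse one per-image point cloud by sampling voxels along each sensor ray."""
    for p in points:
        ray = p - sensor_origin
        depth = np.linalg.norm(ray)
        direction = ray / depth
        # Sample the ray inside the truncation band around the observed surface.
        for d in np.linspace(depth - TRUNC, depth + TRUNC, n_samples):
            idx = np.floor((sensor_origin + d * direction - ORIGIN) / VOXEL).astype(int)
            if np.any(idx < 0) or np.any(idx >= GRID):
                continue
            sdf = np.clip((depth - d) / TRUNC, -1.0, 1.0)   # signed, truncated distance
            i, j, k = idx
            w = weights[i, j, k]
            tsdf[i, j, k] = (w * tsdf[i, j, k] + sdf) / (w + 1.0)
            weights[i, j, k] = w + 1.0
```

The fused surface would then be extracted from the zero level set of the grid, for example with marching cubes.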

  • Degeneracy-Aware Imaging Sonar Simultaneous Localization and Mapping
    IEEE Journal of Oceanic Engineering, 2019
    Co-Authors: Eric Westman, Michael Kaess
    Abstract:

    High-frequency Imaging Sonar sensors have recently been applied to aid underwater vehicle localization, by providing frame-to-frame odometry measurements or loop closures over large timescales. Previous methods have often assumed a planar environment, thereby restricting the use of such algorithms mostly to seafloor mapping. We propose an algorithm to generate pose-to-pose constraints for pairs of Sonar images, which may also be applied to larger sets of images, that makes no assumptions about the environmental geometry. The algorithm is sensitive to the inherent degeneracies of the Imaging Sonar sensor model, and may be tuned to trade off between providing more constraints on the sensor motion and not overfitting to noise in the measurements. For real-time localization, we fuse the resulting pair-wise Sonar pose constraints with vehicle odometry in a pose graph optimization framework. We rigorously evaluate the proposed method and demonstrate improvement in accuracy over previously proposed formulations both in simulation and real-world experiments.
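
The pose graph fusion described in the last sentences can be sketched with a generic factor graph library. The snippet below uses GTSAM only as an illustration; the poses, noise sigmas, and the single sonar-derived loop-closure constraint are invented for the example and are not taken from the paper.

```python
import numpy as np
import gtsam

# Illustrative pose graph: a chain of dead-reckoned odometry factors plus one
# pair-wise sonar registration factor acting as a loop closure.
# All numbers are made up; GTSAM orders the 6-D noise as rotation, then translation.
graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02, 0.02, 0.02, 0.10, 0.10, 0.10]))
sonar_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.05, 0.20, 0.20, 0.20]))

# Anchor the first pose at the origin.
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))
initial.insert(0, gtsam.Pose3())

# Dead-reckoned odometry: assumed 1 m forward steps, with a slight lateral drift
# injected into the initial guess.
step = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
for i in range(4):
    graph.add(gtsam.BetweenFactorPose3(i, i + 1, step, odom_noise))
    initial.insert(i + 1, gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(i + 1.0, 0.05 * (i + 1), 0.0)))

# Hypothetical pair-wise sonar constraint between non-consecutive poses.
sonar_rel = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(4.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(0, 4, sonar_rel, sonar_noise))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(4))
```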

  • ICRA - Feature-Based SLAM for Imaging Sonar with Under-Constrained Landmarks
    2018 IEEE International Conference on Robotics and Automation (ICRA), 2018
    Co-Authors: Eric Westman, Akshay Hinduja, Michael Kaess
    Abstract:

    Recent algorithms have demonstrated the feasibility of underwater feature-based SLAM using Imaging Sonar. But previous methods have either relied on manual feature extraction and correspondence or used prior knowledge of the scene, such as the planar scene assumption. Our proposed system provides a general-purpose method for feature-point extraction and correspondence in arbitrary scenes. Additionally, we develop a method of identifying point landmarks that are likely to be well-constrained and reliably reconstructed. Finally, we demonstrate that while under-constrained landmarks cannot be accurately reconstructed themselves, they can still be used to constrain and correct the sensor motion. These advances represent a large step towards general-purpose, feature-based SLAM with Imaging Sonar.
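
One generic way to decide whether a point landmark is well-constrained (the paper's exact criterion is not reproduced here) is to stack the measurement Jacobians from all observing poses and check the smallest eigenvalue of the resulting information matrix; a near-zero eigenvalue signals an unconstrained direction, typically along the unmeasured elevation arc. The threshold and the toy Jacobians below are assumptions.

```python
import numpy as np

def landmark_is_well_constrained(jacobians, min_eig=1e-3):
    """Illustrative observability test for a 3-D point landmark.

    jacobians : list of 2x3 measurement Jacobians (range/bearing with respect to
                the landmark position), one per observing sonar pose.
    min_eig   : assumed tuning threshold, not taken from the paper.
    """
    H = np.vstack(jacobians)          # (2N x 3) stacked Jacobian
    info = H.T @ H                    # 3x3 information matrix (unit measurement noise)
    return np.linalg.eigvalsh(info)[0] > min_eig

# Two nearly identical viewpoints leave one direction (the elevation arc)
# unconstrained, so the check fails. The Jacobians here are toy values.
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(landmark_is_well_constrained([J, J + 1e-6]))   # -> False
```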

Soncheol Yu - One of the best experts on this subject based on the ideXlab platform.

  • 3-D Reconstruction of Underwater Objects Using Image Sequences from Optical Camera and Imaging Sonar
    OCEANS 2019 MTS/IEEE SEATTLE, 2019
    Co-Authors: Seokyong Song, Soncheol Yu
    Abstract:

    This paper proposes a method for 3-D reconstruction of underwater objects by using an optical camera and an Imaging Sonar. The optical camera built a volumetric model by performing simultaneous localization and mapping, whereas the Imaging Sonar built another volumetric model by using a space-carving approach. The two volumetric models corrected each other, and the corrected models were then integrated into a hybrid model to complete the reconstruction. The proposed method could reconstruct underwater objects with fewer volumetric errors than conventional reconstruction methods, but required more computation time. The proposed method could be applicable to underwater missions where reconstruction accuracy is more valuable than operation time.
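
The space-carving component can be sketched as follows: voxels that project into sonar image bins showing no acoustic return are carved away. The bin layout, thresholds, and sensor geometry below are assumptions made for illustration, not the paper's parameters.

```python
import numpy as np

def carve(voxels, sonar_image, r_max, n_range, n_bearing, fov, T_world_sonar,
          empty_thresh=0.1):
    """Illustrative space-carving step for one sonar view.

    voxels        : (N, 3) voxel centers in the world frame
    sonar_image   : (n_range, n_bearing) intensity image, already binned by range/bearing
    T_world_sonar : 4x4 pose of the sonar in the world frame
    Returns a boolean mask of the voxels that survive this view.
    """
    # Transform voxel centers into the sonar frame.
    T = np.linalg.inv(T_world_sonar)
    pts = (T[:3, :3] @ voxels.T + T[:3, 3:4]).T

    rng = np.linalg.norm(pts, axis=1)
    bearing = np.arctan2(pts[:, 1], pts[:, 0])

    keep = np.ones(len(voxels), dtype=bool)
    in_view = (rng < r_max) & (np.abs(bearing) < fov / 2)

    r_idx = np.clip((rng / r_max * n_range).astype(int), 0, n_range - 1)
    b_idx = np.clip(((bearing + fov / 2) / fov * n_bearing).astype(int), 0, n_bearing - 1)

    # Carve voxels that fall into bins reporting no acoustic return.
    empty = sonar_image[r_idx, b_idx] < empty_thresh
    keep[in_view & empty] = False
    return keep
```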

  • Optimal Strategy for Seabed 3D Mapping of AUV Based on Imaging Sonar
    2018 OCEANS - MTS/IEEE Kobe Techno-Oceans (OTO), 2018
    Co-Authors: Soncheol Yu
    Abstract:

    An Imaging Sonar loses elevation-angle information during the mapping process. To overcome this limitation, the motion of the autonomous underwater vehicle (AUV) can be used to obtain 3D information with the Imaging Sonar. In this paper, we propose a two-stage mapping strategy for accurately generating underwater 3D maps based on an Imaging Sonar. It consists of a searching stage and a scanning stage. In the scanning stage, multi-directional scanning is performed on an object. To process the 3D point cloud data obtained by multi-directional scanning, we propose a polygonal approximation method. This method reduces the uncertainty of the 3D point cloud data by extracting the intersection area of multiple data groups. To verify the feasibility of the proposed strategies, we conducted indoor tank experiments using the hovering-type AUV ‘Cyclops’ and the acoustic lens-based multibeam Sonar (ALMS) ‘DIDSON’.
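
The "intersection of multiple data groups" idea can be approximated with simple voxel voting: each directional scan votes for the voxels its points occupy, and only voxels supported by enough scan directions survive. This is an illustrative stand-in for the paper's polygonal approximation method; the voxel size and vote threshold are assumed.

```python
import numpy as np

def intersect_scans(scans, voxel=0.05, min_votes=3):
    """Keep points supported by at least `min_votes` scan directions.

    scans : list of (N_i, 3) point clouds, one per scan direction.
    """
    votes = {}
    for cloud in scans:
        # Each scan direction votes at most once per voxel.
        for key in {tuple(k) for k in np.floor(cloud / voxel).astype(int)}:
            votes[key] = votes.get(key, 0) + 1

    kept_voxels = {k for k, v in votes.items() if v >= min_votes}
    merged = np.vstack(scans)
    keys = [tuple(k) for k in np.floor(merged / voxel).astype(int)]
    mask = np.array([k in kept_voxels for k in keys])
    return merged[mask]

# Placeholder scans; in practice these come from the multi-directional sonar passes.
scans = [np.random.rand(500, 3) * 2.0 for _ in range(5)]
print(intersect_scans(scans, voxel=0.25, min_votes=3).shape)
```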

  • Imaging Sonar-Based Navigation Method for Backtracking of AUV
    2017
    Co-Authors: Seokyong Song, Soncheol Yu
    Abstract:

    We propose an Imaging Sonar-based backtracking method as a navigation strategy for underwater investigation by AUVs. The purpose of the backtracking method is to reduce the drift error caused by the inaccuracy of navigation sensors when an AUV returns to a previous position. The AUV divides the trajectory into several intervals and returns to the previous position while correcting the small drift error in each interval. For the backtracking, we suggest a method for obtaining terrain information using an Imaging Sonar. We create a 3D point cloud by scanning the seafloor. The 3D point cloud can be used to detect objects and to select natural landmarks. The selected natural landmarks can then be used as references in the backtracking process. To verify the feasibility of the proposed methods, we conducted field experiments using the hovering-type AUV ‘Cyclops’.
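
The interval-wise drift correction can be reduced to a very small sketch: when the AUV re-observes a landmark stored on the outbound leg, the discrepancy between the stored landmark position and the dead-reckoned one is removed from the vehicle estimate. This idealized 2-D version (heading error ignored, made-up numbers) only conveys the idea and is not the paper's estimator.

```python
import numpy as np

def correct_drift(dead_reckoned_pos, landmark_map_pos, landmark_obs_rel):
    """One backtracking correction step (illustrative, 2-D, heading ignored).

    dead_reckoned_pos : current (x, y) vehicle estimate from dead reckoning
    landmark_map_pos  : (x, y) of the landmark stored on the outbound leg
    landmark_obs_rel  : (x, y) of the landmark as currently observed,
                        expressed relative to the vehicle
    """
    landmark_dr = dead_reckoned_pos + landmark_obs_rel   # where dead reckoning puts it
    drift = landmark_dr - landmark_map_pos
    return dead_reckoned_pos - drift                      # corrected vehicle position

pos = np.array([10.2, 4.9])                               # drifted estimate (made up)
print(correct_drift(pos, np.array([12.0, 5.0]), np.array([2.0, 0.0])))
# -> [10. 5.]: the 0.2 m / -0.1 m drift has been removed
```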

  • Imaging Sonar-Based Real-Time Underwater Object Detection Utilizing the AdaBoost Method
    IEEE International Underwater Technology Symposium, 2017
    Co-Authors: Soncheol Yu
    Abstract:

    We propose a real-time underwater object detection algorithm using a forward-looking Imaging Sonar. Considering the characteristics of Sonar images, the Haar-like feature is used to construct each weak classifier. We construct a strong classifier by combining several weak classifiers. An adaptive boosting (AdaBoost) algorithm is utilized to determine the coefficients of each weak classifier and the weights of the training dataset. Moreover, we improve the efficiency of the calculation using a cascade structure. To verify our method, we use field data obtained by the hovering-type AUV “Cyclops”. From this data, we create a training dataset and conduct the learning process of the detector. The experimental results show the accuracy and tolerance of the object detector built by the proposed approach.
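
A minimal version of such a training pipeline, assuming labelled sonar image patches are available and using a single family of two-rectangle Haar-like features with scikit-learn's AdaBoost (decision stumps as weak classifiers), might look like the sketch below; the cascade stage is omitted for brevity and the data shown are random placeholders.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def integral_image(img):
    """Summed-area table so that rectangle sums cost O(1)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in the inclusive rectangle (r0, c0)-(r1, c1)."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_features(patch):
    """A few two-rectangle (left minus right) Haar-like responses."""
    ii = integral_image(patch.astype(np.float64))
    h, w = patch.shape
    feats = []
    for split in (w // 4, w // 2, 3 * w // 4):
        left = rect_sum(ii, 0, 0, h - 1, split - 1)
        right = rect_sum(ii, 0, split, h - 1, w - 1)
        feats.append(left - right)
    return feats

# Placeholder training data: replace with labelled sonar patches (1 = object).
patches = [np.random.rand(24, 24) for _ in range(100)]
labels = np.random.randint(0, 2, 100)

X = np.array([haar_features(p) for p in patches])
clf = AdaBoostClassifier(n_estimators=50).fit(X, labels)   # decision stumps by default
print(clf.score(X, labels))
```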

  • Development of Imaging Sonar-Based Autonomous Trajectory Backtracking Using AUVs
    IEEE OES Autonomous Underwater Vehicles, 2016
    Co-Authors: Soncheol Yu
    Abstract:

    We propose an autonomous trajectory backtracking method using a forward-looking Imaging Sonar. We also deal with Fourier-based Sonar image processing for the autonomous trajectory backtracking. We suggest an algorithm to estimate the translational shift and rotation angle between two Sonar images. By feeding back the estimated data, the AUV can compensate for the drift error of dead reckoning. To verify these algorithms, we used field data obtained by the forward-looking Imaging Sonar of the hovering-type AUV ‘Cyclops’. We verified the accuracy and tolerance of the proposed algorithm by performing experiments.
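
The Fourier-based shift estimation between two sonar frames can be sketched with classic phase correlation. Rotation estimation, which the abstract also mentions, would additionally require correlating polar-resampled spectra and is omitted here; this is a generic sketch, not the authors' code.

```python
import numpy as np

def phase_correlation(img_a, img_b):
    """Estimate the (row, col) shift d such that img_a ≈ np.roll(img_b, d, axis=(0, 1))."""
    FA = np.fft.fft2(img_a)
    FB = np.fft.fft2(img_b)
    cross_power = FA * np.conj(FB)
    cross_power /= np.abs(cross_power) + 1e-12        # keep only the phase
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic check: shift an image by (5, -3) pixels and recover the shift.
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, shift=(5, -3), axis=(0, 1))
print(phase_correlation(b, a))   # expected: (5, -3)
```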

Eric Westman - One of the best experts on this subject based on the ideXlab platform.

  • Underwater Localization and Mapping with Imaging Sonar
    2019
    Co-Authors: Eric Westman
    Abstract:

    Acoustic Imaging Sonars have been used for a variety of tasks intended to increase the autonomous capabilities of underwater vehicles. Among the most critical tasks of any autonomous vehicle are localization and mapping, which are the focus of this work. The difficulties presented by the Imaging Sonar sensor have led many previous attempts at localization and mapping with the Imaging Sonar to make restrictive assumptions, such as a planar seafloor environment or planar sensor motion. Lifting such restrictions is an important step toward achieving general-purpose autonomous localization and mapping in real-world environments. In this dissertation, I take inspiration from related problems in the field of computer vision and demonstrate that Imaging Sonar localization and mapping may be modeled and solved using similar methods. To achieve accurate large-scale localization, I present degeneracy-aware acoustic bundle adjustment, a feature-based algorithm inspired by optical bundle adjustment. To achieve 3-D mapping of underwater surfaces, I propose several distinct algorithms. First, I present a method akin to shape-from-shading that uses a generative sensor model to infer a dense point cloud from each image and fuses multiple such observations into a global model. Second, I describe a volumetric albedo framework for general-purpose Sonar reconstruction, which derives from the related problem of non-line-of-sight reconstruction. This method performs inference over the elevation aperture and generates best results with a rich variety of viewpoints. Lastly, I present the theory of Fermat paths for Sonar reconstruction, which utilizes the 2-D Fermat flow equation to reconstruct a particular set of object surface points with short-baseline motion.

  • IROS - Wide Aperture Imaging Sonar Reconstruction using Generative Models
    2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019
    Co-Authors: Eric Westman, Michael Kaess
    Abstract:

    In this paper we propose a new framework for reconstructing underwater surfaces from wide aperture Imaging Sonar sequences. We demonstrate that when the leading object edge in each Sonar image can be accurately triangulated in 3D, the remaining surface may be “filled in” using a generative sensor model. This process generates a full three-dimensional point cloud for each image in the sequence. We propose integrating these surface measurements into a cohesive global map using a truncated signed distance field (TSDF) to fuse the point clouds generated by each image. This allows for reconstructing surfaces with significantly fewer Sonar images and viewpoints than previous methods. The proposed method is evaluated by reconstructing a mock-up piling structure and a real world underwater piling, in a test tank environment and in the field, respectively. Our surface reconstructions are quantitatively compared to ground-truth models and are shown to be more accurate than previous state-of-the-art algorithms.

  • Degeneracy-Aware Imaging Sonar Simultaneous Localization and Mapping
    IEEE Journal of Oceanic Engineering, 2019
    Co-Authors: Eric Westman, Michael Kaess
    Abstract:

    High-frequency Imaging Sonar sensors have recently been applied to aid underwater vehicle localization, by providing frame-to-frame odometry measurements or loop closures over large timescales. Previous methods have often assumed a planar environment, thereby restricting the use of such algorithms mostly to seafloor mapping. We propose an algorithm to generate pose-to-pose constraints for pairs of Sonar images, which may also be applied to larger sets of images, that makes no assumptions about the environmental geometry. The algorithm is sensitive to the inherent degeneracies of the Imaging Sonar sensor model, and may be tuned to trade off between providing more constraints on the sensor motion and not overfitting to noise in the measurements. For real-time localization, we fuse the resulting pair-wise Sonar pose constraints with vehicle odometry in a pose graph optimization framework. We rigorously evaluate the proposed method and demonstrate improvement in accuracy over previously proposed formulations both in simulation and real-world experiments.

  • Feature-Based SLAM for Imaging Sonar with Under-Constrained Landmarks
    International Conference on Robotics and Automation, 2018
    Co-Authors: Eric Westman, Akshay Hinduja, Michael Kaess
    Abstract:

    Recent algorithms have demonstrated the feasibility of underwater feature-based SLAM using Imaging Sonar. But previous methods have either relied on manual feature extraction and correspondence or used prior knowledge of the scene, such as the planar scene assumption. Our proposed system provides a general-purpose method for feature-point extraction and correspondence in arbitrary scenes. Additionally, we develop a method of identifying point landmarks that are likely to be well-constrained and reliably reconstructed. Finally, we demonstrate that while under-constrained landmarks cannot be accurately reconstructed themselves, they can still be used to constrain and correct the sensor motion. These advances represent a large step towards general-purpose, feature-based SLAM with Imaging Sonar.

John Leonard - One of the best experts on this subject based on the ideXlab platform.

  • Imaging Sonar-aided navigation for autonomous underwater harbor surveillance
    2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010
    Co-Authors: Hordur Johannsson, Franz Hover, Michael Kaess, Brendan Englot, John Leonard
    Abstract:

    In this paper we address the problem of drift-free navigation for underwater vehicles performing harbor surveillance and ship hull inspection. Maintaining accurate localization for the duration of a mission is important for a variety of tasks, such as planning the vehicle trajectory and ensuring coverage of the area to be inspected. Our approach uses only onboard sensors in a simultaneous localization and mapping setting and removes the need for any external infrastructure such as acoustic beacons. We extract dense features from a forward-looking Imaging Sonar and apply pair-wise registration between Sonar frames. The registrations are combined with onboard velocity, attitude, and acceleration sensors to obtain an improved estimate of the vehicle trajectory. We show results from several experiments that demonstrate drift-free navigation in various underwater environments.
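
At its simplest, the pipeline chains the pair-wise sonar registrations, each a small relative motion, into a trajectory that is then fused with the velocity and attitude sensors. The sketch below shows only the chaining step, in 2-D and with made-up relative poses.

```python
import numpy as np

def compose(pose, rel):
    """Compose a 2-D pose (x, y, theta) with a relative motion expressed in its own frame."""
    x, y, th = pose
    dx, dy, dth = rel
    return np.array([x + np.cos(th) * dx - np.sin(th) * dy,
                     y + np.sin(th) * dx + np.cos(th) * dy,
                     th + dth])

# Relative motions from pair-wise sonar frame registration (hypothetical values).
registrations = [(1.0, 0.0, 0.02), (1.0, 0.05, 0.01), (0.9, -0.02, -0.03)]

trajectory = [np.zeros(3)]
for rel in registrations:
    trajectory.append(compose(trajectory[-1], rel))

for pose in trajectory:
    print(np.round(pose, 3))
```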

Emili Hernandez - One of the best experts on this subject based on the ideXlab platform.

  • ICRA - Dam wall detection and tracking using a Mechanically Scanned Imaging Sonar
    2009 IEEE International Conference on Robotics and Automation, 2009
    Co-Authors: Wajahat Kazmi, David Ribas, Pere Ridao, Emili Hernandez
    Abstract:

    In dam inspection tasks, an underwater robot has to grab images while surveying the wall, meanwhile maintaining a certain distance and relative orientation. This paper proposes the use of an MSIS (Mechanically Scanned Imaging Sonar) for relative positioning of a robot with respect to the wall. An Imaging Sonar gathers polar image scans from which depth images (range & bearing) are generated. Depth scans are first processed to extract a line corresponding to the wall (with the Hough Transform), which is then tracked by means of an EKF (Extended Kalman Filter) using a static motion model and an implicit measurement equation associating the sensed points to the candidate line. The line estimate is referenced to the robot-fixed frame and represented in polar coordinates (ρ, θ), which directly correspond to the actual distance and relative orientation of the robot with respect to the wall. The proposed system has been tested in simulation as well as in water tank conditions.
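
The wall-tracking loop can be illustrated as follows: a candidate line is extracted from a binarized sonar scan with OpenCV's Hough transform, and a small Kalman filter with a static motion model tracks the (ρ, θ) parameters between scans. Note that this sketch uses a direct (ρ, θ) measurement rather than the paper's implicit point-association measurement, and all thresholds and noise values are assumed.

```python
import numpy as np
import cv2

def extract_wall_line(scan_img, intensity_thresh=120):
    """Return the strongest (rho, theta) line in a sonar scan image, or None."""
    binary = (scan_img > intensity_thresh).astype(np.uint8) * 255
    lines = cv2.HoughLines(binary, 1, np.pi / 180, 80)
    if lines is None:
        return None
    return lines[0][0]                           # (rho, theta) of the strongest line

# Filter state: wall distance rho and orientation theta (assumed noise values).
x = np.array([0.0, 0.0])
P = np.diag([10.0, 1.0])                         # initial covariance
Q = np.diag([0.05, 0.01])                        # process noise (static model)
R = np.diag([0.10, 0.02])                        # measurement noise
H = np.eye(2)                                    # the state is measured directly

def filter_step(x, P, z):
    """Predict with an identity motion model, then update with measurement z = (rho, theta)."""
    x_pred, P_pred = x, P + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# z = extract_wall_line(current_scan_image)      # would come from the sonar driver
z = np.array([3.2, 0.15])                        # hypothetical measurement
x, P = filter_step(x, P, z)
print(x)
```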

  • Pose-Based SLAM with Probabilistic Scan Matching Algorithm Using a Mechanically Scanned Imaging Sonar
    OCEANS Conference, 2009
    Co-Authors: Angelos Mallios, David Ribas, Pere Ridao, Emili Hernandez, Francesco Maurelli, Yvan Petillot
    Abstract:

    This paper proposes a pose-based algorithm to solve the full SLAM problem for an Autonomous Underwater Vehicle (AUV) navigating in an unknown and possibly unstructured environment. The technique incorporates probabilistic scan matching with range scans gathered from a Mechanical Scanning Imaging Sonar (MSIS) and the robot dead-reckoning displacements estimated from a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). The proposed method utilizes two Extended Kalman Filters (EKF). The first estimates the local path travelled by the robot while grabbing the scan, as well as its uncertainty, and provides position estimates for correcting the distortions that the vehicle motion produces in the acoustic images. The second is an augmented-state EKF that estimates and keeps the registered scan poses. The raw data from the sensors are processed and fused in-line. No prior structural information or initial pose is assumed. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.
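
The scan matching at the core of the method can be sketched as an ICP-style loop: correspondences are chosen by nearest neighbour and a 2-D rigid transform is refit each iteration. The genuine pIC algorithm additionally weights correspondences by their covariances (the "probabilistic" part), which this simplified illustration omits.

```python
import numpy as np

def fit_rigid_2d(P, Q):
    """Least-squares rotation and translation aligning point set P onto Q (both Nx2)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def match_scans(scan, ref, iters=30):
    """Simplified ICP-style scan matching: nearest neighbours, no covariance weighting."""
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = scan.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - ref[None, :, :], axis=2)
        matches = ref[np.argmin(d, axis=1)]      # nearest reference point per scan point
        R, t = fit_rigid_2d(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic check with a small, made-up displacement.
rng = np.random.default_rng(1)
ref = rng.random((200, 2)) * 10.0
ang, t_true = 0.03, np.array([0.2, -0.1])
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
scan = (ref - t_true) @ R_true                   # the same scene seen from the displaced pose
R_est, t_est = match_scans(scan, ref)
print(np.round(t_est, 2))                        # should be close to t_true = [0.2, -0.1]
```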

  • MSISpIC: A Probabilistic Scan Matching Algorithm Using a Mechanically Scanned Imaging Sonar
    Journal of Physical Agents, 2009
    Co-Authors: Emili Hernandez, David Ribas, Pere Ridao, J Batlle
    Abstract:

    This paper compares two well-known scan matching algorithms, MbICP and pIC. As a result of the study, the MSISpIC, a probabilistic scan matching algorithm for the localization of an Autonomous Underwater Vehicle (AUV), is proposed. The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS), and the robot displacement estimated through dead reckoning with the help of a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). The proposed method is an extension of the pIC algorithm. Its major contribution consists of: 1) using an EKF to estimate the local path traveled by the robot while grabbing the scan, as well as its uncertainty, and 2) proposing a method to group all the data grabbed along the path described by the robot into a unique scan with a convenient uncertainty model. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment with satisfactory results.
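
The "grouping into a unique scan" contribution, undoing the distortion that vehicle motion induces while the mechanically scanned head sweeps, can be illustrated by transforming each beam's points with the vehicle pose estimated at that beam's timestamp before merging them. The 2-D poses and data below are made up.

```python
import numpy as np

def to_world(points_vehicle, pose):
    """Transform 2-D points from the vehicle frame at pose = (x, y, theta) to the world frame."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return points_vehicle @ R.T + np.array([x, y])

def build_full_scan(beams, poses):
    """Merge per-beam point sets, each undistorted with its own vehicle pose.

    beams : list of (N_i, 2) point sets, one per sonar beam (vehicle frame)
    poses : list of (x, y, theta) vehicle poses, one per beam timestamp
            (e.g., taken from the EKF-estimated local path)
    """
    return np.vstack([to_world(b, p) for b, p in zip(beams, poses)])

# Hypothetical example: three beams acquired while the vehicle moves forward.
beams = [np.array([[2.0, 0.0]]), np.array([[2.0, 0.5]]), np.array([[2.0, 1.0]])]
poses = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.0, 0.0)]
print(build_full_scan(beams, poses))
```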

  • Pose-Based SLAM with Probabilistic Scan Matching Algorithm Using a Mechanically Scanned Imaging Sonar
    Instrumentation viewpoint, 2009
    Co-Authors: Angelos Mallios, Emili Hernandez, Pere Ridao, David Ribas
    Abstract:

    This paper proposes a pose-based algorithm to solve the full SLAM problem for an Autonomous Underwater Vehicle (AUV) navigating in an unknown and possibly unstructured environment. The technique incorporates probabilistic scan matching with range scans gathered from a Mechanically Scanned Imaging Sonar (MSIS) and the robot dead-reckoning displacements estimated from a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). The raw data from the sensors are processed and fused in-line. No prior structural information or initial pose is assumed. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.