Navigation Aid

The Experts below are selected from a list of 10,200 Experts worldwide, ranked by the ideXlab platform.

Siegfried Wahl - One of the best experts on this subject based on the ideXlab platform.

  • Navigation Aid for blind persons by visual-to-auditory sensory substitution: a pilot study
    PLOS ONE, 2020
    Co-Authors: Alexander Neugebauer, Katharina Rifai, M Getzlaff, Siegfried Wahl
    Abstract:

    Purpose: In this study, we investigate to what degree augmented reality technology can be used to create and evaluate a visual-to-auditory sensory substitution device to improve the performance of blind persons in Navigation and recognition tasks. Methods: A sensory substitution algorithm that translates 3D visual information into audio feedback was designed and integrated into an augmented-reality-based mobile phone application. Using the mobile device as a sensory substitution device, a study with blind participants (n = 7) was performed. The participants navigated through pseudo-randomized obstacle courses using either the sensory substitution device, a white cane, or a combination of both. In a second task, the participants had to identify virtual 3D objects and structures using the same sensory substitution device. Results: The mobile application enabled participants to complete the Navigation and object recognition tasks in an experimental environment within the first trials and without previous training, demonstrating the general feasibility and low entry barrier of the designed sensory substitution algorithm. In direct comparison with the white cane, however, the sensory substitution device did not offer a statistically significant improvement in Navigation within the study duration of ten hours.
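
    The abstract does not spell out how the 3D visual information is mapped to audio, so the following is only a minimal sketch of one common visual-to-auditory substitution scheme, assuming a depth map as input, with the horizontal position of an obstacle mapped to stereo panning and its distance mapped to tone pitch and loudness. The function name, column count, frequency range, and maximum range below are illustrative assumptions, not values taken from the paper.

        import numpy as np

        def depth_to_audio_cues(depth_map, n_columns=8,
                                f_min=220.0, f_max=1760.0, max_range=5.0):
            """Map a depth image (in meters) to per-column audio cues.

            Hypothetical scheme: each vertical slice of the scene becomes one tone;
            nearer obstacles give a higher pitch and a louder tone, and the slice's
            horizontal position sets the stereo pan (-1 left ... +1 right).
            """
            w = depth_map.shape[1]
            cues = []
            for i in range(n_columns):
                col = depth_map[:, i * w // n_columns:(i + 1) * w // n_columns]
                valid = col[np.isfinite(col) & (col > 0)]
                dist = min(valid.min() if valid.size else max_range, max_range)
                closeness = 1.0 - dist / max_range                    # 0 = far, 1 = near
                cues.append({
                    "freq_hz": f_min + closeness * (f_max - f_min),   # nearer -> higher pitch
                    "amplitude": 0.1 + 0.9 * closeness,               # nearer -> louder
                    "pan": 2.0 * (i + 0.5) / n_columns - 1.0,         # -1 ... +1 across the view
                })
            return cues

        # Example: a synthetic 120x160 depth map with an obstacle on the left.
        depth = np.full((120, 160), 4.0)
        depth[:, :40] = 1.2
        for cue in depth_to_audio_cues(depth):
            print(f"pan={cue['pan']:+.2f}  freq={cue['freq_hz']:6.1f} Hz  amp={cue['amplitude']:.2f}")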

Karin Wardell - One of the best experts on this subject based on the ideXlab platform.

Amirhossein Tamjidi - One of the best experts on this subject based on the ideXlab platform.

  • 6-DOF pose estimation of a robotic Navigation Aid by tracking visual and geometric features
    International Conference on Robotics and Automation, 2015
    Co-Authors: Soonhac Hong, Amirhossein Tamjidi
    Abstract:

    This paper presents a 6-DOF Pose Estimation (PE) method for a Robotic Navigation Aid (RNA) for the visually impaired. The RNA uses a single 3D camera for PE and object detection. The proposed method processes the camera's intensity and range data to estimate the camera's egomotion, which is then used by an Extended Kalman Filter (EKF) as the motion model to track a set of visual features for PE. A RANSAC process is employed in the EKF to identify inliers among the visual feature correspondences between two image frames. Only the inliers are used to update the EKF's state. The EKF integrates the egomotion into the camera's pose in the world coordinate system. To retain the EKF's consistency, the distance between the camera and the floor plane (extracted from the range data) is used by the EKF as the observation of the camera's z coordinate. Experimental results demonstrate that the proposed method produces accurate pose estimates for positioning the RNA in indoor environments. Based on the PE method, a wayfinding system is developed for localization of the RNA in a home environment. The system uses the estimated pose and the floor plan to locate the RNA user in the home environment and announces points of interest and Navigational commands to the user through a speech interface.

    NOTE TO PRACTITIONERS: This work was motivated by the limitations of existing Navigation technology for the visually impaired. Most existing methods use a point/line measurement sensor for indoor object detection and therefore lack the capability to detect 3D objects and position a blind traveler. Stereovision has been used in recent research; however, it cannot provide reliable depth data for object detection, and it tends to produce lower localization accuracy because its depth measurement error increases quadratically with the true distance. This paper suggests a new approach for navigating a blind traveler. The method uses a single 3D time-of-flight camera for both 6-DOF PE and 3D object detection and thus results in a small but powerful RNA. Due to the camera's constant depth accuracy, the proposed egomotion estimation method yields a smaller error than existing methods. A new EKF method is proposed to integrate the egomotion into the RNA's 6-DOF pose in the world coordinate system by tracking both visual and geometric features of the operating environment. The proposed method substantially reduces the pose error of a standard EKF method and thus supports longer-range Navigation tasks. One limitation of the method is that it requires a feature-rich environment to work well.
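
    The abstract names the filter structure but not its equations, so the following is only an illustrative sketch of the pattern it describes: an EKF whose prediction composes the pose with the visually estimated egomotion and whose update uses the measured camera-to-floor distance as a direct observation of the z coordinate. The additive pose composition, noise values, and example numbers are simplifying assumptions, not the authors' formulation.

        import numpy as np

        class PoseEKF:
            """Toy 6-DOF pose filter; state = [x, y, z, roll, pitch, yaw]."""

            def __init__(self):
                self.x = np.zeros(6)            # pose estimate
                self.P = np.eye(6) * 1e-3       # state covariance

            def predict(self, ego_delta, Q=1e-4):
                # Simplified: treat the egomotion increment as additive.
                # (A faithful implementation would compose poses on SE(3).)
                self.x = self.x + ego_delta
                self.P = self.P + np.eye(6) * Q  # motion-model Jacobian is identity here

            def update_floor_distance(self, camera_height, R=1e-3):
                # Observation model: the camera-to-floor distance measures z directly.
                H = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0]])
                y = camera_height - H @ self.x              # innovation
                S = H @ self.P @ H.T + R                    # innovation covariance
                K = self.P @ H.T @ np.linalg.inv(S)         # Kalman gain
                self.x = self.x + K @ y
                self.P = (np.eye(6) - K @ H) @ self.P

        ekf = PoseEKF()
        ekf.x[2] = 0.95                                            # assumed initial camera height above the floor
        ekf.predict(np.array([0.10, 0.0, 0.02, 0.0, 0.0, 0.01]))   # visual egomotion estimate
        ekf.update_floor_distance(camera_height=0.95)              # floor plane extracted from range data
        print(np.round(ekf.x, 3))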

  • 6-DOF pose estimation of a portable Navigation Aid for the visually impaired
    IEEE International Symposium on Robotic and Sensors Environments, 2013
    Co-Authors: Amirhossein Tamjidi, Soonhac Hong
    Abstract:

    In this paper, we present a 6-DOF pose estimation method for a portable Navigation Aid for the visually impaired. The Navigation Aid uses a single 3D camera, the SwissRanger SR4000, for both pose estimation and object/obstacle detection. The SR4000 provides intensity and range data of the scene. These data are processed simultaneously to estimate the camera's egomotion, which is then used as the motion model by an Extended Kalman Filter (EKF) to track the visual features maintained in a local map. In order to create correct feature correspondences between images, a 3-point RANSAC (RANdom SAmple Consensus) process is devised to identify the inliers among the feature correspondences based on their SIFT (Scale Invariant Feature Transform) descriptors. Only the inliers are used to update the EKF's state. Additional inliers revealed by the updated state are then located and used to perform another state update. The EKF integrates the egomotion into the camera's pose in the world coordinate system with a relatively small error. Since the camera's y coordinate may be measured as the distance between the camera and the floor plane, it is used as an additional observation in this work. Experimental results indicate that the proposed pose estimation method produces accurate pose estimates for positioning the visually impaired in an indoor environment.
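
    Since the abstract only names the 3-point RANSAC step, the sketch below is a generic illustration of that idea under common assumptions: matched 3D feature positions from two frames, a rigid transform fitted to three randomly sampled correspondences by the SVD-based (Kabsch) method, and inliers counted by residual distance. The threshold, iteration count, and synthetic data are illustrative, not the paper's settings.

        import numpy as np

        def fit_rigid_transform(src, dst):
            """Least-squares rigid transform (R, t) such that dst ≈ R @ src + t (Kabsch/SVD)."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:        # guard against a reflection solution
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = c_dst - R @ c_src
            return R, t

        def ransac_3point(src, dst, n_iters=200, thresh=0.05, seed=0):
            """Keep the rigid transform supported by the most inlier correspondences."""
            rng = np.random.default_rng(seed)
            best_inliers = np.zeros(len(src), dtype=bool)
            for _ in range(n_iters):
                idx = rng.choice(len(src), size=3, replace=False)
                R, t = fit_rigid_transform(src[idx], dst[idx])
                err = np.linalg.norm(src @ R.T + t - dst, axis=1)
                inliers = err < thresh
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            # Refit on all inliers of the best hypothesis for the final estimate.
            return fit_rigid_transform(src[best_inliers], dst[best_inliers]), best_inliers

        # Example: recover a known 90-degree yaw and a translation from noisy matches.
        rng = np.random.default_rng(1)
        src = rng.normal(size=(50, 3))
        R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
        dst = src @ R_true.T + np.array([0.5, 0.0, 0.1]) + rng.normal(scale=0.01, size=(50, 3))
        (R_est, t_est), inliers = ransac_3point(src, dst)
        print(inliers.sum(), np.round(t_est, 2))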

M Grund - One of the best experts on this subject based on the ideXlab platform.

  • Experimental results in synchronous-clock one-way-travel-time acoustic Navigation for autonomous underwater vehicles
    International Conference on Robotics and Automation, 2007
    Co-Authors: Ryan M Eustice, Hanumant Singh, Louis L Whitcomb, M Grund
    Abstract:

    This paper reports recent experimental results in the development and deployment of a synchronous-clock acoustic Navigation system suitable for the simultaneous Navigation of multiple underwater vehicles. The goal of this work is to enable the task of navigating multiple autonomous underwater vehicles (AUVs) over length scales of O(100 km), while maintaining error tolerances commensurate with conventional long-baseline transponder-based Navigation systems (i.e., O(1 m)), but without the need to deploy, calibrate, and recover seafloor-anchored acoustic transponders. Our Navigation system comprises an acoustic modem-based communication/Navigation system that allows onboard Navigational data to be broadcast as a data packet by a source node and allows all passively receiving nodes to decode the packet to obtain a one-way travel-time pseudo-range measurement and ephemeris data. We present results for two different field experiments using a two-node configuration consisting of a global positioning system (GPS)-equipped surface ship acting as a global Navigation Aid to a Doppler-Aided AUV. In each experiment, vehicle position was independently corroborated by other standard Navigation means. Initial results for a maximum-likelihood sensor fusion framework are reported.
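
    As a rough illustration of the one-way travel-time measurement described above (not the authors' implementation), the pseudo-range follows directly from the synchronized clocks: the receiver subtracts the transmit time encoded in the data packet from its own reception time and scales by the sound speed. The nominal 1500 m/s sound speed and the timestamps are placeholders; a real system would use a measured sound-speed profile, and any residual clock offset shows up directly as a range bias.

        SOUND_SPEED_MPS = 1500.0  # nominal value; placeholder for a measured sound-speed profile

        def one_way_pseudorange(t_transmit_s, t_receive_s, sound_speed_mps=SOUND_SPEED_MPS):
            """Pseudo-range from a synchronized-clock one-way travel-time measurement.

            t_transmit_s is encoded in the broadcast packet by the source node;
            t_receive_s is stamped by the (synchronized) clock of the receiving node.
            """
            return sound_speed_mps * (t_receive_s - t_transmit_s)

        # Placeholder timestamps: a packet sent at t = 120.000 s and heard 2.670 s later.
        print(one_way_pseudorange(120.000, 122.670))   # -> 4005.0 m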

  • Multi-band acoustic modem for the communications and Navigation Aid AUV
    Proceedings of OCEANS 2005 MTS/IEEE, 2005
    Co-Authors: Lee Freitag, M Grund, Jim Partan, Keenan Ball, Sandipa Singh, Peter Koski
    Abstract:

    An acoustic communications system with the capability to operate at multiple data rates in two frequency bands has been designed and developed for use in 21-inch AUVs. The system is designed specifically around the 21-inch-diameter Bluefin Robotics AUV, though it could be adapted to smaller (12-inch) or similar free-flooded vehicles. The system includes both a high-frequency (25 kHz) modem and a mid-frequency (3 kHz) modem and supports data rates from 80 bps to more than 5000 bps. Both modems utilize four-channel arrays to increase reliability. The high-frequency modem is also used to support multi-vehicle Navigation via one-way travel-time measurements using synchronized clocks on all of the vehicles in a work group.

Soonhac Hong - One of the best experts on this subject based on the ideXlab platform.

  • Co-robotic cane: a new robotic Navigation Aid for the visually impaired
    IEEE Systems Man and Cybernetics Magazine, 2016
    Co-Authors: Soonhac Hong, Xiangfei Qian
    Abstract:

    This article presents a new robotic Navigation Aid (RNA) called a co-robotic cane (CRC). The CRC uses a three-dimensional (3-D) camera for both pose estimation and object recognition in an unknown indoor environment. The six-degrees-of-freedom (6-DOF) pose estimation method determines the CRC's pose change by an egomotion estimation method and the iterative closest point algorithm, and reduces the pose integration error by a pose graph optimization algorithm. The pose estimation method does not require any prior knowledge of the environment. The object recognition method detects indoor structures, such as stairways and doorways, and objects, such as tables and computer monitors, by a Gaussian mixture model (GMM)-based pattern-recognition method. Some structures/objects (e.g., stairways) can be used as Navigational waypoints and others for obstacle avoidance. The CRC can be used in either robot cane (active) mode or white cane (passive) mode. In the active mode, it guides the user by steering itself in the desired direction of travel, while in the passive mode it functions as a computer-vision-enhanced white cane. The CRC is a co-robot: it can detect human intent and use that intent to select a suitable mode automatically.
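
    The abstract names a Gaussian mixture model (GMM)-based pattern-recognition method without detailing it, so the sketch below only illustrates that general family of classifiers under stated assumptions: one GMM is fitted per class of pre-computed feature vectors, and a new detection is assigned to the class whose model gives it the highest log-likelihood. The feature dimension, class names, and training data are made up for the example.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def train_gmm_classifier(features_by_class, n_components=2, seed=0):
            """Fit one GaussianMixture per class label."""
            return {
                label: GaussianMixture(n_components=n_components,
                                       covariance_type="full",
                                       random_state=seed).fit(X)
                for label, X in features_by_class.items()
            }

        def classify(models, feature_vector):
            """Return the label whose GMM assigns the sample the highest log-likelihood."""
            x = np.atleast_2d(feature_vector)
            return max(models, key=lambda label: models[label].score_samples(x)[0])

        # Illustrative 3-D geometric features for two kinds of detected structures.
        rng = np.random.default_rng(0)
        training_data = {
            "stairway": rng.normal(loc=[0.2, 1.5, 0.3], scale=0.1, size=(50, 3)),
            "doorway":  rng.normal(loc=[0.9, 2.0, 0.1], scale=0.1, size=(50, 3)),
        }
        models = train_gmm_classifier(training_data)
        print(classify(models, [0.22, 1.48, 0.31]))   # expected: stairway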

  • 6-DOF pose estimation of a robotic Navigation Aid by tracking visual and geometric features
    International Conference on Robotics and Automation, 2015
    Co-Authors: Soonhac Hong, Amirhossein Tamjidi
    Abstract:

    This paper presents a 6-DOF Pose Estimation (PE) method for a Robotic Navigation Aid (RNA) for the visually impaired. The RNA uses a single 3D camera for PE and object detection. The proposed method processes the camera's intensity and range data to estimate the camera's egomotion, which is then used by an Extended Kalman Filter (EKF) as the motion model to track a set of visual features for PE. A RANSAC process is employed in the EKF to identify inliers among the visual feature correspondences between two image frames. Only the inliers are used to update the EKF's state. The EKF integrates the egomotion into the camera's pose in the world coordinate system. To retain the EKF's consistency, the distance between the camera and the floor plane (extracted from the range data) is used by the EKF as the observation of the camera's z coordinate. Experimental results demonstrate that the proposed method produces accurate pose estimates for positioning the RNA in indoor environments. Based on the PE method, a wayfinding system is developed for localization of the RNA in a home environment. The system uses the estimated pose and the floor plan to locate the RNA user in the home environment and announces points of interest and Navigational commands to the user through a speech interface.

    NOTE TO PRACTITIONERS: This work was motivated by the limitations of existing Navigation technology for the visually impaired. Most existing methods use a point/line measurement sensor for indoor object detection and therefore lack the capability to detect 3D objects and position a blind traveler. Stereovision has been used in recent research; however, it cannot provide reliable depth data for object detection, and it tends to produce lower localization accuracy because its depth measurement error increases quadratically with the true distance. This paper suggests a new approach for navigating a blind traveler. The method uses a single 3D time-of-flight camera for both 6-DOF PE and 3D object detection and thus results in a small but powerful RNA. Due to the camera's constant depth accuracy, the proposed egomotion estimation method yields a smaller error than existing methods. A new EKF method is proposed to integrate the egomotion into the RNA's 6-DOF pose in the world coordinate system by tracking both visual and geometric features of the operating environment. The proposed method substantially reduces the pose error of a standard EKF method and thus supports longer-range Navigation tasks. One limitation of the method is that it requires a feature-rich environment to work well.

  • 6-DOF pose estimation of a portable Navigation Aid for the visually impaired
    IEEE International Symposium on Robotic and Sensors Environments, 2013
    Co-Authors: Amirhossein Tamjidi, Soonhac Hong
    Abstract:

    In this paper, we present a 6-DOF pose estimation method for a portable Navigation Aid for the visually impaired. The Navigation Aid uses a single 3D camera, the SwissRanger SR4000, for both pose estimation and object/obstacle detection. The SR4000 provides intensity and range data of the scene. These data are processed simultaneously to estimate the camera's egomotion, which is then used as the motion model by an Extended Kalman Filter (EKF) to track the visual features maintained in a local map. In order to create correct feature correspondences between images, a 3-point RANSAC (RANdom SAmple Consensus) process is devised to identify the inliers among the feature correspondences based on their SIFT (Scale Invariant Feature Transform) descriptors. Only the inliers are used to update the EKF's state. Additional inliers revealed by the updated state are then located and used to perform another state update. The EKF integrates the egomotion into the camera's pose in the world coordinate system with a relatively small error. Since the camera's y coordinate may be measured as the distance between the camera and the floor plane, it is used as an additional observation in this work. Experimental results indicate that the proposed pose estimation method produces accurate pose estimates for positioning the visually impaired in an indoor environment.