Omnidirectional Camera

The Experts below are selected from a list of 4305 Experts worldwide, ranked by the ideXlab platform

Roland Siegwart - One of the best experts on this subject based on the ideXlab platform.

  • Topological Mapping and Scene Recognition With Lightweight Color Descriptors for an Omnidirectional Camera
    IEEE Transactions on Robotics, 2014
    Co-Authors: Ming Liu, Roland Siegwart
    Abstract:

    Scene recognition problems for mobile robots have been extensively studied. This is important for tasks such as visual topological mapping. Usually, sophisticated key-point-based descriptors are used, which can be computationally expensive. In this paper, we describe a novel, lightweight scene recognition method using an adaptive descriptor based on color features and geometric information extracted from an uncalibrated Omnidirectional Camera. The proposed method enables a mobile robot to automatically register new scenes online onto a topological representation and, simultaneously, to solve the localization problem over topological regions, all in real time. We adopt a Dirichlet process mixture model (DPMM) to describe the online inference process. It is based on an approximation of the conditional probabilities of the new measurements given incrementally estimated reference models. It enables online inference speeds of up to 50 Hz on a standard CPU. We compare it with state-of-the-art key-point descriptors and show the advantage of the proposed algorithm in terms of performance and computational efficiency. A real-world experiment is carried out with a mobile robot equipped with an Omnidirectional Camera. Finally, we show the results on extended datasets.
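
    As a rough illustration of the online inference described above, the sketch below (Python, with numpy) assigns each new frame descriptor to an existing topological node or opens a new one, in the spirit of a Dirichlet process mixture. The isotropic Gaussian node model and the parameters alpha and sigma are illustrative assumptions, not the authors' exact formulation of the DPMM or of the color/geometry descriptor.

    import numpy as np

    class OnlineSceneModel:
        """Illustrative Dirichlet-process-style online scene assignment."""

        def __init__(self, dim, alpha=1.0, sigma=0.1):
            self.alpha = alpha      # DP concentration: willingness to open new nodes
            self.sigma = sigma      # assumed per-dimension observation noise
            self.dim = dim
            self.means = []         # per-node running means of the descriptors
            self.counts = []        # per-node observation counts

        def _log_gauss(self, x, mean):
            d = x - mean
            return -0.5 * float(np.dot(d, d)) / self.sigma ** 2

        def update(self, descriptor):
            """Assign a new descriptor to a node (returns its index) or create one."""
            x = np.asarray(descriptor, dtype=float)
            n_total = sum(self.counts)
            scores = []
            for mean, n_k in zip(self.means, self.counts):
                # CRP-style prior weight times approximate predictive likelihood
                scores.append(np.log(n_k / (n_total + self.alpha)) + self._log_gauss(x, mean))
            # score for opening a brand-new node (crude broad base measure assumed)
            scores.append(np.log(self.alpha / (n_total + self.alpha)) - self.dim)
            k = int(np.argmax(scores))
            if k == len(self.means):                # new topological node
                self.means.append(x.copy())
                self.counts.append(1)
            else:                                   # incremental mean update of node k
                self.counts[k] += 1
                self.means[k] += (x - self.means[k]) / self.counts[k]
            return k

    A robot would call update() once per panoramic frame; the returned index is the topological region the frame is registered to.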

  • Visual Homing From Scale With an Uncalibrated Omnidirectional Camera
    IEEE Transactions on Robotics, 2013
    Co-Authors: Ming Liu, Cedric Pradalier, Roland Siegwart
    Abstract:

    Visual homing enables a mobile robot to move to a reference position using only visual information. The approaches that we present in this paper utilize matched image key points (e.g., scale-invariant feature transform) extracted from an Omnidirectional Camera as inputs. First, we propose three visual homing methods that are based on feature scale, bearing, and the combination of both, under an image-based visual servoing framework. Second, considering computational cost, we propose a simplified homing method that takes advantage of the scale information of key-point features to compute control commands. The observability and controllability of the algorithm are proved. An outlier rejection algorithm is also introduced and evaluated. The results of all these methods are compared both in simulations and in experiments. We report the performance of all related methods on a series of commonly cited indoor datasets, showing the advantages of the proposed method. Furthermore, they are tested on a compact dataset of Omnidirectional panoramic images, which is captured under dynamic conditions with ground truth for future research and comparison.
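
    The scale-based control idea can be sketched as follows: a matched feature whose SIFT scale is larger at the current pose than at the home pose is closer than it should be and pushes the robot away from its bearing, while a smaller scale pulls the robot toward it. The Python sketch below sums these contributions into a homing direction; the per-feature weighting is an illustrative simplification, not the paper's exact image-based visual servoing control law.

    import numpy as np

    def homing_direction(bearings, scales_current, scales_home):
        """Return a unit homing vector in the robot frame from matched features.

        bearings       : (N,) feature bearings in the robot frame [rad]
        scales_current : (N,) SIFT scales observed at the current pose
        scales_home    : (N,) SIFT scales of the matched features at the home pose
        """
        b = np.asarray(bearings, dtype=float)
        s_cur = np.asarray(scales_current, dtype=float)
        s_home = np.asarray(scales_home, dtype=float)
        w = (s_home - s_cur) / s_home            # signed, normalized scale error (assumed weighting)
        rays = np.stack([np.cos(b), np.sin(b)], axis=1)
        v = (w[:, None] * rays).sum(axis=0)
        n = np.linalg.norm(v)
        return v / n if n > 1e-9 else np.zeros(2)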

  • Scale-Only Visual Homing From an Omnidirectional Camera
    International Conference on Robotics and Automation, 2012
    Co-Authors: Ming Liu, Cedric Pradalier, Francois Pomerleau, Roland Siegwart
    Abstract:

    Visual Homing is the process by which a mobile robot moves to a Home position using only information extracted from visual data. The approach we present in this paper uses image keypoints (e.g., SIFT) extracted from Omnidirectional images and matches the current set of keypoints with the set recorded at the Home location. In this paper, we first formulate three different visual homing problems using an uncalibrated Omnidirectional Camera within the Image-Based Visual Servoing (IBVS) framework; then we propose a novel simplified homing approach, inspired by IBVS, based only on the scale information of the SIFT features, with computational cost linear in the number of features. This paper reports on the application of our method to a commonly cited indoor database, where it outperforms other approaches. We also briefly present results on a real robot and allude to the integration into a topological navigation framework.

  • DP-FACT: Towards Topological Mapping and Scene Recognition With Color for Omnidirectional Camera
    International Conference on Robotics and Automation, 2012
    Co-Authors: Ming Liu, Roland Siegwart
    Abstract:

    Topological mapping and scene recognition problems are still challenging, especially for online, real-time, vision-based applications. We develop a hierarchical probabilistic model to tackle them using color information. This work is motivated by our previous work [1], which defined a lightweight descriptor using color and geometry information from segmented panoramic images. Our novel model uses a Dirichlet process mixture model to combine color and geometry features extracted from Omnidirectional images. The inference of the model is based on an approximation of the conditional probabilities of observations given the estimated models. It allows online inference of the mixture model in real time (at 50 Hz), which outperforms other existing approaches. A real experiment is carried out on a mobile robot equipped with an Omnidirectional Camera. The results show its competitiveness against the state of the art.

  • A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure From Motion
    International Conference on Computer Vision Systems, 2006
    Co-Authors: Davide Scaramuzza, Agostino Martinelli, Roland Siegwart
    Abstract:

    In this paper, we present a flexible new technique for single-viewpoint Omnidirectional Camera calibration. The proposed method only requires the Camera to observe a planar pattern shown at a few different orientations. Either the Camera or the planar pattern can be freely moved. No a priori knowledge of the motion is required, nor a specific model of the Omnidirectional sensor. The only assumption is that the image projection function can be described by a Taylor series expansion whose coefficients are estimated by solving a two-step least-squares linear minimization problem. To test the proposed technique, we calibrated a panoramic Camera having a field of view greater than 200° in the vertical direction, and we obtained very good results. To investigate the accuracy of the calibration, we also used the estimated omni-Camera model in a structure-from-motion experiment. We obtained a 3D metric reconstruction of a scene from two highly distorted Omnidirectional images by using image correspondences only. Compared with classical techniques, which rely on a specific parametric model of the Omnidirectional Camera, the proposed procedure is independent of the sensor, easy to use, and flexible.
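
    The projection model mentioned above maps a pixel (u, v) at radius rho from the image center to the ray (u, v, f(rho)), where f is a Taylor polynomial whose linear term is dropped. The Python sketch below fits those coefficients by linear least squares, assuming the elevation of each calibration ray is already known; the paper instead estimates everything, including extrinsics, from views of a planar pattern in a two-step linear minimization, so this is only an illustration of the polynomial part of the model.

    import numpy as np

    def fit_taylor_projection(rho, elevation, degree=4):
        """Least-squares fit of f(rho) = a0 + a2*rho^2 + ... + aN*rho^N.

        rho       : (M,) image radii of calibration points [pixels]
        elevation : (M,) elevation of the corresponding rays above the image plane [rad]
        """
        rho = np.asarray(rho, dtype=float)
        target = rho * np.tan(np.asarray(elevation, dtype=float))   # f(rho_i)
        powers = [0] + list(range(2, degree + 1))                    # linear term omitted
        A = np.stack([rho ** p for p in powers], axis=1)
        coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
        return coeffs

    def back_project(u, v, coeffs, degree=4):
        """Unnormalized 3D ray for pixel (u, v) under the fitted model."""
        rho = np.hypot(u, v)
        powers = [0] + list(range(2, degree + 1))
        f = sum(a * rho ** p for a, p in zip(coeffs, powers))
        return np.array([u, v, f])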

Ming Liu - One of the best experts on this subject based on the ideXlab platform.

  • Topological Mapping and Scene Recognition With Lightweight Color Descriptors for an Omnidirectional Camera
    IEEE Transactions on Robotics, 2014
    Co-Authors: Ming Liu, Roland Siegwart
    Abstract:

    Scene recognition problems for mobile robots have been extensively studied. This is important for tasks such as visual topological mapping. Usually, sophisticated key-point-based descriptors are used, which can be computationally expensive. In this paper, we describe a novel, lightweight scene recognition method using an adaptive descriptor based on color features and geometric information extracted from an uncalibrated Omnidirectional Camera. The proposed method enables a mobile robot to automatically register new scenes online onto a topological representation and, simultaneously, to solve the localization problem over topological regions, all in real time. We adopt a Dirichlet process mixture model (DPMM) to describe the online inference process. It is based on an approximation of the conditional probabilities of the new measurements given incrementally estimated reference models. It enables online inference speeds of up to 50 Hz on a standard CPU. We compare it with state-of-the-art key-point descriptors and show the advantage of the proposed algorithm in terms of performance and computational efficiency. A real-world experiment is carried out with a mobile robot equipped with an Omnidirectional Camera. Finally, we show the results on extended datasets.

  • Visual Homing From Scale With an Uncalibrated Omnidirectional Camera
    IEEE Transactions on Robotics, 2013
    Co-Authors: Ming Liu, Cedric Pradalier, Roland Siegwart
    Abstract:

    Visual homing enables a mobile robot to move to a reference position using only visual information. The approaches that we present in this paper utilize matched image key points (e.g., scale-invariant feature transform) extracted from an Omnidirectional Camera as inputs. First, we propose three visual homing methods that are based on feature scale, bearing, and the combination of both, under an image-based visual servoing framework. Second, considering computational cost, we propose a simplified homing method that takes advantage of the scale information of key-point features to compute control commands. The observability and controllability of the algorithm are proved. An outlier rejection algorithm is also introduced and evaluated. The results of all these methods are compared both in simulations and in experiments. We report the performance of all related methods on a series of commonly cited indoor datasets, showing the advantages of the proposed method. Furthermore, they are tested on a compact dataset of Omnidirectional panoramic images, which is captured under dynamic conditions with ground truth for future research and comparison.

  • Scale-Only Visual Homing From an Omnidirectional Camera
    International Conference on Robotics and Automation, 2012
    Co-Authors: Ming Liu, Cedric Pradalier, Francois Pomerleau, Roland Siegwart
    Abstract:

    Visual Homing is the process by which a mobile robot moves to a Home position using only information extracted from visual data. The approach we present in this paper uses image keypoints (e.g., SIFT) extracted from Omnidirectional images and matches the current set of keypoints with the set recorded at the Home location. In this paper, we first formulate three different visual homing problems using an uncalibrated Omnidirectional Camera within the Image-Based Visual Servoing (IBVS) framework; then we propose a novel simplified homing approach, inspired by IBVS, based only on the scale information of the SIFT features, with computational cost linear in the number of features. This paper reports on the application of our method to a commonly cited indoor database, where it outperforms other approaches. We also briefly present results on a real robot and allude to the integration into a topological navigation framework.

  • DP-FACT: Towards Topological Mapping and Scene Recognition With Color for Omnidirectional Camera
    International Conference on Robotics and Automation, 2012
    Co-Authors: Ming Liu, Roland Siegwart
    Abstract:

    Topological mapping and scene recognition problems are still challenging, especially for online, real-time, vision-based applications. We develop a hierarchical probabilistic model to tackle them using color information. This work is motivated by our previous work [1], which defined a lightweight descriptor using color and geometry information from segmented panoramic images. Our novel model uses a Dirichlet process mixture model to combine color and geometry features extracted from Omnidirectional images. The inference of the model is based on an approximation of the conditional probabilities of observations given the estimated models. It allows online inference of the mixture model in real time (at 50 Hz), which outperforms other existing approaches. A real experiment is carried out on a mobile robot equipped with an Omnidirectional Camera. The results show its competitiveness against the state of the art.

Tsuyoshi Tasaki - One of the best experts on this subject based on the ideXlab platform.

  • Obstacle Location Classification and Self-Localization by Using a Mobile Omnidirectional Camera Based on Tracked Floor Boundary Points and Tracked Scale-Rotation Invariant Feature Points
    Journal of Robotics and Mechatronics, 2011
    Co-Authors: Tsuyoshi Tasaki, Seiji Tokura, Takafumi Sonoura, Fumio Ozaki, Nobuto Matsuhira
    Abstract:

    For a mobile robot, self-localization and knowledge of the locations of all obstacles around it are essential. Moreover, classification of the obstacles as stable or unstable and fast self-localization using a single sensor, such as an Omnidirectional Camera, are also important to achieve smooth movements and to reduce the cost of the robot. However, there are few studies on locating and classifying all obstacles around the robot and localizing its position quickly during motion using only one Omnidirectional Camera. In order to locate obstacles and localize the robot, we have developed a new method that uses two kinds of points that can be detected and tracked quickly even in Omnidirectional images. In the obstacle location and classification process, we use floor boundary points, whose distance from the robot can be measured using an Omnidirectional Camera. By tracking those points, we can classify obstacles by comparing the movement of each tracked point with odometry data. Our method changes the threshold used to detect the points based on the result of this comparison in order to enhance classification. In the self-localization process, we use tracked scale- and rotation-invariant feature points as new landmarks that can be detected over a long time by using both a fast tracking method and a slow Speeded-Up Robust Features (SURF) method. Once landmarks are detected, they can be tracked quickly; therefore, we can achieve fast self-localization. The classification ratio of our method is 85.0%, which is four times higher than that of a previous method. Our robot can localize 2.9 times faster and 4.2 times more accurately by using our method, in comparison with using the SURF method alone.
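
    The obstacle classification step above can be illustrated with a short sketch: a tracked floor boundary point gives a metric position in the robot frame, so a static point can be predicted forward with odometry and compared against where the tracker actually finds it. The threshold value and the hard static/moving decision below are illustrative; the paper feeds the comparison back into an adaptive detection threshold instead.

    import numpy as np

    def classify_tracked_point(p_prev, p_curr, dx, dy, dtheta, thresh=0.10):
        """Classify one tracked floor-boundary point as 'static' or 'moving'.

        p_prev, p_curr : point position in the robot frame at the previous/current step [m]
        dx, dy, dtheta : robot motion between the two steps from odometry
                         (translation in the previous robot frame, rotation in rad)
        thresh         : residual threshold [m] (illustrative value)
        """
        c, s = np.cos(dtheta), np.sin(dtheta)
        shifted = np.asarray(p_prev, dtype=float) - np.array([dx, dy])
        # where a *static* point should appear in the current robot frame
        predicted = np.array([c * shifted[0] + s * shifted[1],
                              -s * shifted[0] + c * shifted[1]])
        residual = float(np.linalg.norm(np.asarray(p_curr, dtype=float) - predicted))
        return ("static" if residual < thresh else "moving"), residual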

  • IROS - Mobile robot self-localization based on tracked scale and rotation invariant feature points by using an Omnidirectional Camera
    IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010
    Co-Authors: Tsuyoshi Tasaki, Seiji Tokura, Takafumi Sonoura, Fumio Ozaki, Nobuto Matsuhira
    Abstract:

    Self-localization is important for mobile robots in order to move accurately, and many works use an Omnidirectional Camera for self-localization. However, it is difficult to achieve fast and accurate self-localization using only one Omnidirectional Camera without any calibration. To realize this, we use “tracked scale- and rotation-invariant feature points” that are regarded as landmarks. These landmarks can be tracked and do not change for a “long” time. In a landmark selection phase, robots detect the feature points by using both a fast tracking method and a slow “Speeded-Up Robust Features (SURF)” method. After detection, robots select landmarks from among the detected feature points by using a Support Vector Machine (SVM) trained on feature vectors based on observation positions. In a self-localization phase, robots detect landmarks while switching detection methods dynamically based on a tracking-error criterion that is calculated easily even in the uncalibrated Omnidirectional image. We performed experiments in an approximately 10 m × 10 m mock supermarket using the navigation robot ApriTau™, which had an Omnidirectional Camera on its top. The results showed that ApriTau™ could localize 2.9 times faster and 4.2 times more accurately using the developed method than using only the SURF method. The results also showed that ApriTau™ could arrive at a goal within a 3 cm error from various initial positions in the mock supermarket.
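
    The dynamic switching between the fast tracker and the slow SURF detector can be sketched as follows: each landmark is tracked frame to frame by a cheap local search, and the search residual doubles as the tracking-error criterion; when it exceeds a threshold, the landmark would be re-detected with the slow SURF method. The sum-of-squared-differences search and the threshold are assumptions for illustration, not the paper's exact criterion.

    import numpy as np

    def track_landmark(img, template, prev_rc, search=20):
        """Track a landmark template near its previous location by minimum SSD.

        img, template : 2-D grayscale arrays
        prev_rc       : (row, col) of the template's previous top-left corner
        Returns the new (row, col) and the mean squared residual, which serves
        as the tracking-error criterion for switching to SURF re-detection.
        """
        img = np.asarray(img, dtype=float)
        template = np.asarray(template, dtype=float)
        h, w = template.shape
        r0, c0 = prev_rc
        best_err, best_rc = np.inf, prev_rc
        for r in range(max(0, r0 - search), min(img.shape[0] - h, r0 + search) + 1):
            for c in range(max(0, c0 - search), min(img.shape[1] - w, c0 + search) + 1):
                err = np.mean((img[r:r + h, c:c + w] - template) ** 2)
                if err < best_err:
                    best_err, best_rc = err, (r, c)
        return best_rc, best_err

    A caller would compare the returned residual against a threshold and fall back to SURF-based re-detection for that landmark when the threshold is exceeded.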

  • IROS - Obstacle classification and location by using a mobile Omnidirectional Camera based on tracked floor boundary points
    IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009
    Co-Authors: Tsuyoshi Tasaki, Fumio Ozaki
    Abstract:

    Locating all obstacles around a moving robot and classifying them as stable or not with a sensor such as an Omnidirectional Camera are essential for the robot's smooth movement and for avoiding the problems of calibrating many Cameras. However, there are few works on locating and classifying all obstacles around a robot while it is moving using only one Omnidirectional Camera. In order to locate obstacles, we regard floor boundary points, whose distance from the robot can be measured with one Omnidirectional Camera, as obstacle locations. By tracking them, we can classify obstacles by comparing the movement of each tracked point with odometry data. Moreover, our method changes the threshold used to detect the points based on the result of this comparison in order to enhance classification. The classification ratio of our method is 85.0%, which is four times higher than that of a method that does not adapt the detection parameter.

Cedric Pradalier - One of the best experts on this subject based on the ideXlab platform.

  • Visual Homing From Scale With an Uncalibrated Omnidirectional Camera
    IEEE Transactions on Robotics, 2013
    Co-Authors: Ming Liu, Cedric Pradalier, Roland Siegwart
    Abstract:

    Visual homing enables a mobile robot to move to a reference position using only visual information. The approaches that we present in this paper utilize matched image key points (e.g., scale-invariant feature transform) extracted from an Omnidirectional Camera as inputs. First, we propose three visual homing methods that are based on feature scale, bearing, and the combination of both, under an image-based visual servoing framework. Second, considering computational cost, we propose a simplified homing method that takes advantage of the scale information of key-point features to compute control commands. The observability and controllability of the algorithm are proved. An outlier rejection algorithm is also introduced and evaluated. The results of all these methods are compared both in simulations and in experiments. We report the performance of all related methods on a series of commonly cited indoor datasets, showing the advantages of the proposed method. Furthermore, they are tested on a compact dataset of Omnidirectional panoramic images, which is captured under dynamic conditions with ground truth for future research and comparison.

  • Scale-Only Visual Homing From an Omnidirectional Camera
    International Conference on Robotics and Automation, 2012
    Co-Authors: Ming Liu, Cedric Pradalier, Francois Pomerleau, Roland Siegwart
    Abstract:

    Visual Homing is the process by which a mobile robot moves to a Home position using only information extracted from visual data. The approach we present in this paper uses image keypoints (e.g., SIFT) extracted from Omnidirectional images and matches the current set of keypoints with the set recorded at the Home location. In this paper, we first formulate three different visual homing problems using an uncalibrated Omnidirectional Camera within the Image-Based Visual Servoing (IBVS) framework; then we propose a novel simplified homing approach, inspired by IBVS, based only on the scale information of the SIFT features, with computational cost linear in the number of features. This paper reports on the application of our method to a commonly cited indoor database, where it outperforms other approaches. We also briefly present results on a real robot and allude to the integration into a topological navigation framework.

Fumio Ozaki - One of the best experts on this subject based on the ideXlab platform.

  • Obstacle Location Classification and Self-Localization by Using a Mobile Omnidirectional Camera Based on Tracked Floor Boundary Points and Tracked Scale-Rotation Invariant Feature Points
    Journal of Robotics and Mechatronics, 2011
    Co-Authors: Tsuyoshi Tasaki, Seiji Tokura, Takafumi Sonoura, Fumio Ozaki, Nobuto Matsuhira
    Abstract:

    For a mobile robot, self-localization and knowledge of the locations of all obstacles around it are essential. Moreover, classification of the obstacles as stable or unstable and fast self-localization using a single sensor, such as an Omnidirectional Camera, are also important to achieve smooth movements and to reduce the cost of the robot. However, there are few studies on locating and classifying all obstacles around the robot and localizing its position quickly during motion using only one Omnidirectional Camera. In order to locate obstacles and localize the robot, we have developed a new method that uses two kinds of points that can be detected and tracked quickly even in Omnidirectional images. In the obstacle location and classification process, we use floor boundary points, whose distance from the robot can be measured using an Omnidirectional Camera. By tracking those points, we can classify obstacles by comparing the movement of each tracked point with odometry data. Our method changes the threshold used to detect the points based on the result of this comparison in order to enhance classification. In the self-localization process, we use tracked scale- and rotation-invariant feature points as new landmarks that can be detected over a long time by using both a fast tracking method and a slow Speeded-Up Robust Features (SURF) method. Once landmarks are detected, they can be tracked quickly; therefore, we can achieve fast self-localization. The classification ratio of our method is 85.0%, which is four times higher than that of a previous method. Our robot can localize 2.9 times faster and 4.2 times more accurately by using our method, in comparison with using the SURF method alone.

  • IROS - Mobile robot self-localization based on tracked scale and rotation invariant feature points by using an Omnidirectional Camera
    IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010
    Co-Authors: Tsuyoshi Tasaki, Seiji Tokura, Takafumi Sonoura, Fumio Ozaki, Nobuto Matsuhira
    Abstract:

    Self-localization is important for mobile robots in order to move accurately, and many works use an Omnidirectional Camera for self-localization. However, it is difficult to achieve fast and accurate self-localization using only one Omnidirectional Camera without any calibration. To realize this, we use “tracked scale- and rotation-invariant feature points” that are regarded as landmarks. These landmarks can be tracked and do not change for a “long” time. In a landmark selection phase, robots detect the feature points by using both a fast tracking method and a slow “Speeded-Up Robust Features (SURF)” method. After detection, robots select landmarks from among the detected feature points by using a Support Vector Machine (SVM) trained on feature vectors based on observation positions. In a self-localization phase, robots detect landmarks while switching detection methods dynamically based on a tracking-error criterion that is calculated easily even in the uncalibrated Omnidirectional image. We performed experiments in an approximately 10 m × 10 m mock supermarket using the navigation robot ApriTau™, which had an Omnidirectional Camera on its top. The results showed that ApriTau™ could localize 2.9 times faster and 4.2 times more accurately using the developed method than using only the SURF method. The results also showed that ApriTau™ could arrive at a goal within a 3 cm error from various initial positions in the mock supermarket.

  • IROS - Obstacle classification and location by using a mobile Omnidirectional Camera based on tracked floor boundary points
    IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009
    Co-Authors: Tsuyoshi Tasaki, Fumio Ozaki
    Abstract:

    Locating all obstacles around a moving robot and classifying them as stable or not with a sensor such as an Omnidirectional Camera are essential for the robot's smooth movement and for avoiding the problems of calibrating many Cameras. However, there are few works on locating and classifying all obstacles around a robot while it is moving using only one Omnidirectional Camera. In order to locate obstacles, we regard floor boundary points, whose distance from the robot can be measured with one Omnidirectional Camera, as obstacle locations. By tracking them, we can classify obstacles by comparing the movement of each tracked point with odometry data. Moreover, our method changes the threshold used to detect the points based on the result of this comparison in order to enhance classification. The classification ratio of our method is 85.0%, which is four times higher than that of a method that does not adapt the detection parameter.