Keyframes

The Experts below are selected from a list of 6717 Experts worldwide ranked by ideXlab platform

Andrea L Thomaz - One of the best experts on this subject based on the ideXlab platform.

  • IROS - An evaluation of GUI and kinesthetic teaching methods for constrained-keyframe skills
    2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015
    Co-Authors: Andrey Kurenkov, Baris Akgun, Andrea L Thomaz
    Abstract:

    Keyframe-based Learning from Demonstration has been shown to be an effective method for allowing end-users to teach robots skills. We propose a method for using multiple keyframe demonstrations to learn skills as sequences of positional constraints (c-Keyframes), between which motions can be planned for skill execution. We also introduce an interactive GUI which can be used for displaying the learned c-Keyframes to the teacher, for altering aspects of the skill after it has been taught, or for specifying a skill directly without providing kinesthetic demonstrations. We compare three methods of teaching c-Keyframe skills: kinesthetic teaching, GUI teaching, and kinesthetic teaching followed by GUI editing of the learned skill (K-GUI teaching). Based on user evaluation, the K-GUI method of teaching is found to be the most preferred, and the GUI the least preferred. Kinesthetic teaching is also shown to result in more robust constraints than GUI teaching, and several use cases of K-GUI teaching are discussed to show how the GUI can be used to improve the results of kinesthetic teaching.
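As a concrete illustration of the c-Keyframe idea above, the sketch below aggregates corresponding keyframes from several demonstrations into a mean position plus a tolerance radius, one positional constraint per step, and checks whether an executed path visits each constraint region in order. This is a minimal sketch under assumed details (3-D positions, demonstrations aligned keyframe-for-keyframe, a max-deviation radius); it is not the authors' implementation, and the function names are hypothetical.

```python
import math

def learn_c_keyframes(demos, min_radius=0.01):
    """demos: list of demonstrations, each a list of (x, y, z) keyframes.
    Assumes all demonstrations contain the same number of keyframes."""
    constraints = []
    for i in range(len(demos[0])):
        points = [demo[i] for demo in demos]
        mean = tuple(sum(p[d] for p in points) / len(points) for d in range(3))
        # Tolerance radius: largest deviation of any demonstration from the
        # mean, with a small floor so constraints are never degenerate.
        radius = max(math.dist(p, mean) for p in points)
        constraints.append((mean, max(radius, min_radius)))
    return constraints

def satisfies(constraints, path):
    """True when the path visits every constraint region in order."""
    return all(math.dist(p, mean) <= r for p, (mean, r) in zip(path, constraints))

demos = [
    [(0.0, 0.0, 0.0), (0.5, 0.1, 0.2), (1.0, 0.0, 0.4)],
    [(0.0, 0.02, 0.0), (0.52, 0.08, 0.22), (0.98, 0.01, 0.41)],
]
cks = learn_c_keyframes(demos)
print(satisfies(cks, [mean for mean, _ in cks]))  # the mean path satisfies every constraint
```

A motion planner would then generate trajectories between successive constraint regions; the membership check here merely stands in for that step.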

  • Keyframe-based Learning from Demonstration Method and Evaluation
    International Journal of Social Robotics, 2012
    Co-Authors: Baris Akgun, Maya Cakmak, Karl Jiang, Andrea L Thomaz
    Abstract:

    We present a framework for learning skills from novel types of demonstrations that have been shown to be desirable from a Human–Robot Interaction perspective. Our approach, Keyframe-based Learning from Demonstration (KLfD), takes demonstrations that consist of Keyframes: a sparse set of points in the state space that produces the intended skill when visited in sequence. Conventional trajectory demonstrations, or a hybrid of the two, are also handled by KLfD through a conversion to Keyframes. Our method produces a skill model that consists of an ordered set of keyframe clusters, which we call Sequential Pose Distributions (SPD). The skill is reproduced by splining between clusters. We present results from two domains: mouse gestures in 2D; and scooping, pouring, and placing skills on a humanoid robot. KLfD has performance similar to existing LfD techniques when applied to conventional trajectory demonstrations. Additionally, we demonstrate that KLfD may be preferable when the demonstration type is suited to the skill.
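A toy version of the reproduction step can make the pipeline concrete: align keyframe demonstrations, reduce each cluster of corresponding keyframes to a mean (standing in for the Sequential Pose Distributions), and generate a dense path through the means. The paper splines between clusters; this sketch uses linear interpolation for brevity, and all names and details are assumptions rather than the authors' code.

```python
def cluster_means(demos):
    """Per-step mean of aligned 2-D keyframe demonstrations."""
    n = len(demos)
    return [tuple(sum(d[i][k] for d in demos) / n for k in range(2))
            for i in range(len(demos[0]))]

def reproduce(means, steps_per_segment=10):
    """Dense path through the cluster means (linear stand-in for splining)."""
    path = []
    for (x0, y0), (x1, y1) in zip(means, means[1:]):
        for s in range(steps_per_segment):
            t = s / steps_per_segment
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    path.append(means[-1])  # end exactly on the final cluster mean
    return path

demos = [[(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)],
         [(0.0, 0.2), (1.0, 1.8), (2.0, 0.2)]]
means = cluster_means(demos)
traj = reproduce(means)
print(len(traj), traj[-1])
```

With real pose data one would spline (e.g. cubic) through the cluster means for smoothness, but the ordered-clusters-then-interpolate structure is the same.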

  • Trajectories and Keyframes for kinesthetic teaching: a human-robot interaction perspective
    Human-Robot Interaction, 2012
    Co-Authors: Baris Akgun, Maya Cakmak, Jae Wook Yoo, Andrea L Thomaz
    Abstract:

    Kinesthetic teaching is an approach to providing demonstrations to a robot in Learning from Demonstration whereby a human physically guides the robot to perform a skill. In the common usage of kinesthetic teaching, the robot's trajectory during a demonstration is recorded from start to end. In this paper we consider an alternative, keyframe demonstrations, in which the human provides a sparse set of consecutive Keyframes that can be connected to perform the skill. We present a user study (n = 34) comparing the two approaches and highlighting their complementary nature. The study also tests and shows the potential benefits of iterative and adaptive versions of keyframe demonstrations. Finally, we introduce a hybrid method that combines trajectories and Keyframes in a single demonstration.

  • HRI - Trajectories and Keyframes for kinesthetic teaching: a human-robot interaction perspective
    Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction - HRI '12, 2012
    Co-Authors: Baris Akgun, Maya Cakmak, Jae Wook Yoo, Andrea L Thomaz
    Abstract:

    Kinesthetic teaching is an approach to providing demonstrations to a robot in Learning from Demonstration whereby a human physically guides the robot to perform a skill. In the common usage of kinesthetic teaching, the robot's trajectory during a demonstration is recorded from start to end. In this paper we consider an alternative, keyframe demonstrations, in which the human provides a sparse set of consecutive Keyframes that can be connected to perform the skill. We present a user study (n = 34) comparing the two approaches and highlighting their complementary nature. The study also tests and shows the potential benefits of iterative and adaptive versions of keyframe demonstrations. Finally, we introduce a hybrid method that combines trajectories and Keyframes in a single demonstration.

Hong Zhang - One of the best experts on this subject based on the ideXlab platform.

  • ROBIO - Image similarity from feature-flow for keyframe detection in appearance-based SLAM
    2011 IEEE International Conference on Robotics and Biomimetics, 2011
    Co-Authors: Robert Stewart, Hong Zhang
    Abstract:

    In appearance-based SLAM (Simultaneous Localisation and Mapping), a robot typically represents its environment through a set of acquired images that are associated with nodes in a topological map. Rather than storing every acquired image, which can be memory intensive, a selection of images (Keyframes) representative of the places visited can be stored. Keyframe detection (i.e. choosing when to add a new keyframe) typically requires a means of determining the similarity of images. In this paper we develop three new metrics for computing image similarity. The metrics are based on the degree of feature-flow between features matched in a reference image (e.g. a previous keyframe) and a test image (e.g. a candidate keyframe), where a low degree of feature-flow indicates a high image similarity value. The new metrics and an existing metric are computed for synthetic and real data, and their performance is evaluated with respect to a number of attributes important for keyframe detection. The results suggest that similarity metrics based on feature-flow are preferable for use in keyframe detection.
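The core intuition, that low feature-flow means high similarity, can be sketched with a toy metric: measure the mean displacement of matched feature positions between the reference and test images and map it inversely to a [0, 1] similarity score. The matching step itself (e.g. SIFT/ORB matching) is abstracted away, and the normalisation by image diagonal is an assumption made for illustration; this is not one of the paper's three metrics.

```python
import math

def feature_flow_similarity(matches, diag):
    """matches: list of ((x1, y1), (x2, y2)) matched feature positions in
    the reference and test images; diag: image diagonal in pixels, used
    here to normalise mean displacement into [0, 1]."""
    if not matches:
        return 0.0  # no matches at all: treat as completely dissimilar
    mean_flow = sum(math.dist(a, b) for a, b in matches) / len(matches)
    return max(0.0, 1.0 - mean_flow / diag)  # low flow -> high similarity

near = [((10, 10), (11, 10)), ((50, 80), (50, 81))]    # features barely move
far = [((10, 10), (200, 150)), ((50, 80), (300, 20))]  # large displacements
print(feature_flow_similarity(near, diag=800) > feature_flow_similarity(far, diag=800))
```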

  • IROS - Keyframe detection for appearance-based visual SLAM
    2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010
    Co-Authors: Hong Zhang, Dan Yang
    Abstract:

    This paper is concerned with the problem of keyframe detection in appearance-based visual SLAM. Appearance SLAM models a robot's environment topologically by a graph whose nodes represent strategically interesting places that have been visited by the robot and whose arcs represent spatial connectivity between these places. Specifically, we discuss and compare various methods for identifying the next location that is sufficiently different visually from the previously visited location or node in the map graph, in order to decide whether a new node should be created. We survey existing techniques of keyframe detection in image retrieval and video analysis. Using experimental results obtained from visual SLAM datasets, we conclude that the feature-matching method offers the best performance among five representative methods in terms of accurately measuring the amount of appearance change between the robot's views, and thus can serve as a simple and effective metric for detecting Keyframes. This study fills an important gap in current appearance SLAM research.
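In its simplest form, the feature-matching criterion the paper favours reduces to a threshold test: count how many features of the previous keyframe still match in the current view, and create a new keyframe (map node) when the match ratio falls low enough. The sketch below uses hypothetical names and a threshold of 0.3 purely for illustration; the actual feature extraction and matching are abstracted away.

```python
def should_add_keyframe(n_matched, n_keyframe_features, threshold=0.3):
    """True when the current view shares too few features with the
    previous keyframe, i.e. appearance has changed enough to warrant
    creating a new node in the topological map."""
    if n_keyframe_features == 0:
        return True  # no reference features at all: start a new keyframe
    return n_matched / n_keyframe_features < threshold

print(should_add_keyframe(250, 300))  # most features still match: keep current node
print(should_add_keyframe(40, 300))   # appearance changed: add a new keyframe
```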

Baris Akgun - One of the best experts on this subject based on the ideXlab platform.

  • IROS - An evaluation of GUI and kinesthetic teaching methods for constrained-keyframe skills
    2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015
    Co-Authors: Andrey Kurenkov, Baris Akgun, Andrea L Thomaz
    Abstract:

    Keyframe-based Learning from Demonstration has been shown to be an effective method for allowing end-users to teach robots skills. We propose a method for using multiple keyframe demonstrations to learn skills as sequences of positional constraints (c-Keyframes), between which motions can be planned for skill execution. We also introduce an interactive GUI which can be used for displaying the learned c-Keyframes to the teacher, for altering aspects of the skill after it has been taught, or for specifying a skill directly without providing kinesthetic demonstrations. We compare three methods of teaching c-Keyframe skills: kinesthetic teaching, GUI teaching, and kinesthetic teaching followed by GUI editing of the learned skill (K-GUI teaching). Based on user evaluation, the K-GUI method of teaching is found to be the most preferred, and the GUI the least preferred. Kinesthetic teaching is also shown to result in more robust constraints than GUI teaching, and several use cases of K-GUI teaching are discussed to show how the GUI can be used to improve the results of kinesthetic teaching.

  • Keyframe-based Learning from Demonstration Method and Evaluation
    International Journal of Social Robotics, 2012
    Co-Authors: Baris Akgun, Maya Cakmak, Karl Jiang, Andrea L Thomaz
    Abstract:

    We present a framework for learning skills from novel types of demonstrations that have been shown to be desirable from a Human–Robot Interaction perspective. Our approach, Keyframe-based Learning from Demonstration (KLfD), takes demonstrations that consist of Keyframes: a sparse set of points in the state space that produces the intended skill when visited in sequence. Conventional trajectory demonstrations, or a hybrid of the two, are also handled by KLfD through a conversion to Keyframes. Our method produces a skill model that consists of an ordered set of keyframe clusters, which we call Sequential Pose Distributions (SPD). The skill is reproduced by splining between clusters. We present results from two domains: mouse gestures in 2D; and scooping, pouring, and placing skills on a humanoid robot. KLfD has performance similar to existing LfD techniques when applied to conventional trajectory demonstrations. Additionally, we demonstrate that KLfD may be preferable when the demonstration type is suited to the skill.

  • Trajectories and Keyframes for kinesthetic teaching: a human-robot interaction perspective
    Human-Robot Interaction, 2012
    Co-Authors: Baris Akgun, Maya Cakmak, Jae Wook Yoo, Andrea L Thomaz
    Abstract:

    Kinesthetic teaching is an approach to providing demonstrations to a robot in Learning from Demonstration whereby a human physically guides the robot to perform a skill. In the common usage of kinesthetic teaching, the robot's trajectory during a demonstration is recorded from start to end. In this paper we consider an alternative, keyframe demonstrations, in which the human provides a sparse set of consecutive Keyframes that can be connected to perform the skill. We present a user study (n = 34) comparing the two approaches and highlighting their complementary nature. The study also tests and shows the potential benefits of iterative and adaptive versions of keyframe demonstrations. Finally, we introduce a hybrid method that combines trajectories and Keyframes in a single demonstration.

  • HRI - Trajectories and Keyframes for kinesthetic teaching: a human-robot interaction perspective
    Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction - HRI '12, 2012
    Co-Authors: Baris Akgun, Maya Cakmak, Jae Wook Yoo, Andrea L Thomaz
    Abstract:

    Kinesthetic teaching is an approach to providing demonstrations to a robot in Learning from Demonstration whereby a human physically guides the robot to perform a skill. In the common usage of kinesthetic teaching, the robot's trajectory during a demonstration is recorded from start to end. In this paper we consider an alternative, keyframe demonstrations, in which the human provides a sparse set of consecutive Keyframes that can be connected to perform the skill. We present a user study (n = 34) comparing the two approaches and highlighting their complementary nature. The study also tests and shows the potential benefits of iterative and adaptive versions of keyframe demonstrations. Finally, we introduce a hybrid method that combines trajectories and Keyframes in a single demonstration.

In So Kweon - One of the best experts on this subject based on the ideXlab platform.

  • Bayesian filtering for keyframe-based visual SLAM
    International Journal of Robotics Research, 2015
    Co-Authors: Jungho Kim, Kuk-jin Yoon, In So Kweon
    Abstract:

    Keyframe-based camera tracking methods can reduce error accumulation because they reduce the number of camera poses to be estimated by selecting a set of Keyframes from an image sequence. In this paper, we propose a novel Bayesian filtering framework for keyframe-based camera tracking and 3D mapping. Our Bayesian filtering enables an effective estimation of keyframe poses using all measurements obtained at non-keyframe locations, which improves the accuracy of the estimated path. In addition, we discuss the independence problem between the process noise and the measurement noise when employing vision-based motion estimation approaches for the process model, and we present a method of ensuring independence by dividing the measurements obtained from a single sensor into two sets that are used exclusively for the process and measurement models, respectively. We demonstrate the performance of the proposed approach in terms of the consistency of the global map and the accuracy of the estimated path.
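A 1-D toy example conveys the central idea that measurements gathered at non-keyframe locations can refine the estimate of a keyframe pose. Each intermediate frame contributes one independent noisy measurement of the keyframe position, fused by a scalar Kalman-style update; this illustrates the principle only, not the paper's filter, and all values are made up.

```python
def fuse(prior_mean, prior_var, measurements, meas_var):
    """Sequentially fuse scalar Gaussian measurements into an estimate."""
    mean, var = prior_mean, prior_var
    for z in measurements:
        k = var / (var + meas_var)  # Kalman gain
        mean = mean + k * (z - mean)
        var = (1.0 - k) * var       # variance shrinks with each measurement
    return mean, var

# Prior keyframe pose from the process model, refined by three noisy
# measurements obtained while passing through non-keyframe locations.
mean, var = fuse(prior_mean=1.0, prior_var=0.5,
                 measurements=[1.2, 1.1, 1.15], meas_var=0.1)
print(mean, var)
```

Each measurement tightens the posterior on the keyframe pose; the paper's additional contribution, splitting a single sensor's data so no measurement feeds both the process and measurement models, is deliberately ignored in this toy.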

Dan Yang - One of the best experts on this subject based on the ideXlab platform.

  • IROS - Keyframe detection for appearance-based visual SLAM
    2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010
    Co-Authors: Hong Zhang, Dan Yang
    Abstract:

    This paper is concerned with the problem of keyframe detection in appearance-based visual SLAM. Appearance SLAM models a robot's environment topologically by a graph whose nodes represent strategically interesting places that have been visited by the robot and whose arcs represent spatial connectivity between these places. Specifically, we discuss and compare various methods for identifying the next location that is sufficiently different visually from the previously visited location or node in the map graph, in order to decide whether a new node should be created. We survey existing techniques of keyframe detection in image retrieval and video analysis. Using experimental results obtained from visual SLAM datasets, we conclude that the feature-matching method offers the best performance among five representative methods in terms of accurately measuring the amount of appearance change between the robot's views, and thus can serve as a simple and effective metric for detecting Keyframes. This study fills an important gap in current appearance SLAM research.