Assembly Tasks

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 19,131 Experts worldwide ranked by the ideXlab platform

Katsushi Ikeuchi - One of the best experts on this subject based on the ideXlab platform.

  • recognizing Assembly Tasks through human demonstration
    The International Journal of Robotics Research, 2007
    Co-Authors: Jun Takamatsu, Koichi Ogawara, Hiroshi Kimura, Katsushi Ikeuchi
    Abstract:

    As one of the methods for reducing the work of programming, the Learning-from-Observation (LFO) paradigm has been heavily promoted. This paradigm only requires the programmer to perform a task in front of a robot and does not require expertise. In this paper, the LFO paradigm is applied to Assembly Tasks involving two rigid polyhedral objects. A method is proposed for recognizing these Tasks as a sequence of movement primitives from noise-contaminated data obtained by a conventional 6 degree-of-freedom (DOF) object-tracking system. The system is implemented on a robot with a real-time stereo vision system and dual arms with dexterous hands, and its effectiveness is demonstrated.

  • Abstraction of Assembly Tasks to Automatically Generate Robot Motion from Observation
    2005
    Co-Authors: Jun Takamatsu, Koichi Ogawara, Hiroshi Kimura, Katsushi Ikeuchi
    Abstract:

    The ability of robots to learn human Tasks from observation is one of the long-awaited demands in the field of robotics. Here, we limit the scope of the target Tasks to Assembly Tasks, because this domain is one of the central research areas in robotics and has a wide application area. We propose a method to recognize Assembly Tasks based on transitions of the contact relations between two polyhedral objects. Concretely, we propose methods for: (1) representing task models based on such transitions, (2) determining the correct contact relations and their transitions from noise-contaminated visual information, and (3) generating a corresponding sequence of movement primitives (referred to as sub-skills) from those task models. We have implemented the system on our robot, with a real-time stereo system and a pair of arms with dexterous hands, and have demonstrated the system's effectiveness.
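    The core idea of the abstract — a task model as a sequence of contact-state transitions, each mapped to a movement primitive ("sub-skill") — can be sketched roughly as follows. All names here (the contact states, the transition table, the function) are illustrative assumptions, not the authors' code:

```python
# Hypothetical sketch: recognize an Assembly task as a sequence of
# sub-skills from observed transitions between contact states.
from enum import Enum

class ContactState(Enum):
    NO_CONTACT = "no-contact"
    FACE_FACE = "face-face"
    FACE_EDGE = "face-edge"

# Each observed transition between contact states maps to one
# movement primitive (sub-skill) for the robot to execute.
SUBSKILL_TABLE = {
    (ContactState.NO_CONTACT, ContactState.FACE_FACE): "make-contact",
    (ContactState.FACE_FACE, ContactState.FACE_EDGE): "slide",
    (ContactState.FACE_EDGE, ContactState.NO_CONTACT): "detach",
}

def primitives_from_observation(states):
    """Turn a recognized sequence of contact states into sub-skills."""
    return [SUBSKILL_TABLE[(a, b)] for a, b in zip(states, states[1:])]

seq = [ContactState.NO_CONTACT, ContactState.FACE_FACE, ContactState.FACE_EDGE]
print(primitives_from_observation(seq))  # -> ['make-contact', 'slide']
```

    In the paper itself, the contact states are determined from noise-contaminated visual data rather than given directly; the table lookup above stands in for that recognition step.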

  • recognizing Assembly Tasks using face contact relations
    Computer Vision and Pattern Recognition, 1992
    Co-Authors: Katsushi Ikeuchi, T Suehiro
    Abstract:

    A novel method for programming a robot, called the Assembly-plan-from-observation (APO) method, is proposed. The APO method aims to build a system that has the capability of observing a human performing an Assembly task, understanding the task based on the observation, and generating the robot program to achieve the same task. Assembly relations that serve as the basic representation of each Assembly task are defined. It is verified that such Assembly relations can be recovered from the observation of human Assembly Tasks, and that from such Assembly relations it is possible to generate robot motion commands to repeat the same Assembly task. An APO system based on the Assembly relations is demonstrated.

  • ICRA - Generating visual sensing strategies in Assembly Tasks
    Proceedings of 1995 IEEE International Conference on Robotics and Automation, 1995
    Co-Authors: Jun Miura, Katsushi Ikeuchi
    Abstract:

    It is generally very difficult, if not impossible, for a robot to perform fine manipulation Tasks without the benefit of some form of sensory feedback during actual task execution. As a result, sensing planning is an important component in Assembly task planning. This paper describes a method of generating visual sensing strategies based on knowledge of the task to be performed. The generation of the appropriate visual sensing strategy entails knowing what information to extract and where to get it. This is facilitated by the knowledge of the task, which describes how objects are assembled. This knowledge, coupled with known sensor modeling, results in an abstract template of sensing strategy called the sensing task model. By instantiating the appropriate sensing task model at planning time, the sensing strategy is efficiently generated. Our method has been implemented using a laser range finder as the sensor. Experimental results involving typical Assembly Tasks show the feasibility of the method.
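    The "sensing task model" described above — an abstract template, keyed by the assembly operation, that is instantiated at planning time to say what to measure and from where — might be sketched like this. The operation names and template fields are invented for illustration and are not taken from the paper:

```python
# Hypothetical sketch: abstract sensing templates indexed by the
# assembly operation, instantiated for a concrete part at planning time.
SENSING_TASK_MODELS = {
    "peg-in-hole": {"measure": "hole pose", "sensor": "range finder",
                    "viewpoint": "along insertion axis"},
    "face-mating": {"measure": "mating-face normal", "sensor": "range finder",
                    "viewpoint": "oblique to face"},
}

def plan_sensing(operation, part):
    """Instantiate the abstract sensing template for a concrete part."""
    model = SENSING_TASK_MODELS[operation]
    return {"target": part, **model}

print(plan_sensing("peg-in-hole", "shaft-A"))
```

    The point of the design is that knowing the task narrows the sensing problem: the planner only decides which template to instantiate, not what to sense from scratch.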

Paolo Rocco - One of the best experts on this subject based on the ideXlab platform.

  • prediction of human activity patterns for human robot collaborative Assembly Tasks
    IEEE Transactions on Industrial Informatics, 2019
    Co-Authors: Andrea Maria Zanchettin, Andrea Casalino, Luigi Piroddi, Paolo Rocco
    Abstract:

    It is widely agreed that future manufacturing environments will be populated by humans and robots sharing the same workspace. However, the real collaboration can be sporadic, especially in the case of Assembly Tasks, which might involve autonomous operations to be executed by either the robot or the human worker. In this scenario, it might be beneficial to predict the actions of the human in order to control the robot both safely and efficiently. In this paper, we propose a method to predict human activity patterns, to infer early when a specific collaborative operation will be requested by the human and to allow the robot to perform alternative autonomous Tasks in the meantime. The prediction algorithm is based on higher-order Markov chains and is experimentally verified in a realistic scenario involving a dual-arm robot employed in a small-part collaborative Assembly task.
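    A minimal sketch of activity prediction with a higher-order Markov chain, the mechanism the abstract names: the next activity is predicted from the last few observed activities via transition counts. The class, the order, and the activity labels are illustrative assumptions, not the paper's model:

```python
# Hypothetical sketch: higher-order Markov chain over human activities.
from collections import Counter, defaultdict

class HigherOrderMarkov:
    def __init__(self, order=2):
        self.order = order
        self.counts = defaultdict(Counter)  # history tuple -> next-activity counts

    def fit(self, sequences):
        """Count transitions from each length-`order` history to the next activity."""
        for seq in sequences:
            for i in range(len(seq) - self.order):
                history = tuple(seq[i:i + self.order])
                self.counts[history][seq[i + self.order]] += 1

    def predict(self, history):
        """Most likely next activity given the last `order` activities."""
        history = tuple(history[-self.order:])
        nxt = self.counts.get(history)
        return nxt.most_common(1)[0][0] if nxt else None

m = HigherOrderMarkov(order=2)
m.fit([["pick", "place", "screw", "pick", "place", "screw"]])
print(m.predict(["pick", "place"]))  # -> 'screw'
```

    A higher order (longer history) is what lets the model distinguish activity patterns that a first-order chain would conflate, at the cost of needing more training sequences.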

Ken Chen - One of the best experts on this subject based on the ideXlab platform.

  • Feedback Deep Deterministic Policy Gradient With Fuzzy Reward for Robotic Multiple Peg-in-Hole Assembly Tasks
    IEEE Transactions on Industrial Informatics, 2019
    Co-Authors: Jing Xu, Bohao Xu, Kuangen Zhang, Wei Wang, Ken Chen
    Abstract:

    The automatic completion of multiple peg-in-hole Assembly Tasks by robots remains a formidable challenge because traditional control strategies require a complex analysis of the contact model. In this paper, the Assembly task is formulated as a Markov decision process, and a model-driven deep deterministic policy gradient algorithm is proposed to accomplish the Assembly task through the learned policy without analyzing the contact states. In our algorithm, the learning process is driven by a simple traditional force controller. In addition, a feedback exploration strategy is proposed to ensure that our algorithm can efficiently explore the optimal Assembly policy and avoid risky actions, which addresses data efficiency and guarantees stability in realistic Assembly scenarios. To improve the learning efficiency, we utilize a fuzzy reward system for the complex Assembly process. Simulations and realistic experiments on a dual peg-in-hole Assembly then demonstrate the effectiveness of the proposed algorithm. The advantages of the fuzzy reward system and feedback exploration strategy are validated by comparing the performance of different cases in simulations and experiments.
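    One way to picture the fuzzy reward idea is a smooth, graded signal built from membership functions over insertion progress and contact force, instead of a sparse success/failure reward. The membership shapes, weights, and thresholds below are assumptions for illustration, not the paper's exact design:

```python
# Hypothetical sketch: fuzzy reward for a peg-in-hole step.
import math

def membership_low(x, scale):
    """Fuzzy degree to which x counts as 'low' (1 at 0, decaying smoothly)."""
    return math.exp(-(x / scale) ** 2)

def fuzzy_reward(depth, target_depth, contact_force, max_force=20.0):
    progress = min(depth / target_depth, 1.0)          # how far the peg is inserted
    gentle = membership_low(contact_force, max_force)  # prefer small contact forces
    return 0.7 * progress + 0.3 * gentle               # weighted fuzzy aggregation

# Deeper insertion with low force scores higher than shallow, high-force states.
print(fuzzy_reward(0.02, 0.02, 2.0) > fuzzy_reward(0.005, 0.02, 15.0))  # -> True
```

    Because every state gets a graded score, the policy receives a learning signal even far from completion, which is the stated motivation for the fuzzy reward in complex assembly.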

  • knowledge driven deep deterministic policy gradient for robotic multiple peg in hole Assembly Tasks
    Robotics and Biomimetics, 2018
    Co-Authors: Zhimin Hou, Kuangen Zhang, Quan Gao, Haiming Dong, Ken Chen
    Abstract:

    It remains a formidable challenge for traditional control strategies to perform automatic multiple peg-in-hole Assembly Tasks due to the complicated and dynamic contact states. Inspired by the way humans generalize learned skills to perform different Assembly Tasks well, a general learning-based algorithm based on the deep deterministic policy gradient (DDPG) is proposed. To make robots learn multiple peg-in-hole Assembly skills from experience efficiently and stably, the learning process is driven by basic knowledge such as a PD force-control strategy. To achieve a fast learning process in real-world Assembly Tasks, a hybrid exploration strategy is applied to drive efficient exploration during the policy-search phase. A dual peg-in-hole Assembly simulation and real-world experiments are implemented to verify the effectiveness of the proposed algorithm. The performance, measured by the Assembly time and the maximum contact forces, demonstrates that multiple peg-in-hole Assembly skills can be learned after only 150 training episodes in the dual peg-in-hole Assembly task.
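    The "knowledge-driven" idea — a basic PD force controller supplies the action early in training, with the learned policy gradually taking over — can be sketched as a simple blend. The gains, the linear schedule, and the 150-episode horizon are illustrative assumptions (the horizon is borrowed from the abstract's reported training length):

```python
# Hypothetical sketch: blending a PD force-control prior with a learned policy.
def pd_action(force_error, d_force_error, kp=0.01, kd=0.001):
    """Basic PD force controller used as prior knowledge."""
    return kp * force_error + kd * d_force_error

def blended_action(policy_action, force_error, d_force_error, episode,
                   total_episodes=150):
    """Shift control authority from the PD prior to the learned policy."""
    w = min(episode / total_episodes, 1.0)  # policy weight grows over training
    return w * policy_action + (1.0 - w) * pd_action(force_error, d_force_error)

print(blended_action(0.5, 10.0, 0.0, episode=0))    # pure PD action
print(blended_action(0.5, 10.0, 0.0, episode=150))  # pure policy action
```

    The effect is that exploration early in training stays near safe, physically sensible actions, which is how prior knowledge speeds up and stabilizes the learning described in the abstract.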

Bram Vanderborght - One of the best experts on this subject based on the ideXlab platform.

  • Design of a collaborative architecture for human-robot Assembly Tasks
    IEEE International Conference on Intelligent Robots and Systems, 2017
    Co-Authors: Ilias El Makrini, Kelly Merckaert, Dirk Lefeber, Bram Vanderborght
    Abstract:

    Collaborative robots, the so-called cobots, which work together with the human, are becoming more and more popular in the industrial world. An example of an application where these robots are useful is the Assembly task. In this case, the human and the robot complement each other: the human can perform the more dexterous Tasks, while the robot can assist the Assembly process to lower the physical and cognitive workload, e.g., to avoid errors, and thereby reduce absenteeism. This paper describes a novel collaborative architecture for human-robot Assembly Tasks. The developed architecture is composed of four modules: face recognition, gesture recognition, and human-like robot behavior modules enhance the human-robot interaction, while the visual inspection module performs quality control during the Assembly process. A collaborative task, consisting of the Assembly of a box in which the robot assists the human, was designed and implemented on the Baxter robot. This served as the application use case to validate the developed collaborative architecture.
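    The four-module composition the abstract describes might be wired together roughly as below. The class, the stubbed module behaviors, and the per-frame loop are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: composing the four modules of the collaborative architecture.
class CollaborativeArchitecture:
    def __init__(self, face_rec, gesture_rec, behavior, inspector):
        self.face_rec = face_rec        # who is the operator?
        self.gesture_rec = gesture_rec  # what is the operator asking for?
        self.behavior = behavior        # human-like robot response
        self.inspector = inspector      # quality control of the assembly step

    def step(self, camera_frame):
        user = self.face_rec(camera_frame)
        command = self.gesture_rec(camera_frame)
        action = self.behavior(user, command)
        ok = self.inspector(camera_frame)
        return action, ok

arch = CollaborativeArchitecture(
    face_rec=lambda f: "operator-1",
    gesture_rec=lambda f: "hand-over",
    behavior=lambda u, c: f"nod-and-{c}",
    inspector=lambda f: True,
)
print(arch.step("frame-0"))  # -> ('nod-and-hand-over', True)
```

    Keeping the interaction modules (face, gesture, behavior) separate from the inspection module mirrors the split in the paper between enhancing interaction and controlling quality.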

Andrea Maria Zanchettin - One of the best experts on this subject based on the ideXlab platform.

  • prediction of human activity patterns for human robot collaborative Assembly Tasks
    IEEE Transactions on Industrial Informatics, 2019
    Co-Authors: Andrea Maria Zanchettin, Andrea Casalino, Luigi Piroddi, Paolo Rocco
    Abstract:

    It is widely agreed that future manufacturing environments will be populated by humans and robots sharing the same workspace. However, the real collaboration can be sporadic, especially in the case of Assembly Tasks, which might involve autonomous operations to be executed by either the robot or the human worker. In this scenario, it might be beneficial to predict the actions of the human in order to control the robot both safely and efficiently. In this paper, we propose a method to predict human activity patterns, to infer early when a specific collaborative operation will be requested by the human and to allow the robot to perform alternative autonomous Tasks in the meantime. The prediction algorithm is based on higher-order Markov chains and is experimentally verified in a realistic scenario involving a dual-arm robot employed in a small-part collaborative Assembly task.
