Hand Configuration

The Experts below are selected from a list of 46,023 Experts worldwide, ranked by the ideXlab platform.

Kouhei Ohnishi - One of the best experts on this subject based on the ideXlab platform.

  • Eye-to-Hand approach on eye-in-Hand Configuration within real-time visual servoing
    IEEE ASME Transactions on Mechatronics, 2005
    Co-Authors: Abdul Muis, Kouhei Ohnishi
    Abstract:

    This paper presents a framework of Hand-eye relation for visual servoing with more precision, mobility, and a global view. There are mainly two types of camera utilization for visual servoing: eye-in-Hand and eye-to-Hand Configurations. Each has its own merits and drawbacks regarding precision and view limit, which oppose each other. Based on both behaviors, this paper employs a mobile manipulator as a second robot to hold the camera. Here, the camera architecture is an eye-to-Hand Configuration for the main robot, but behaves mainly as an eye-in-Hand Configuration for the second robot. With this framework, the drawback of each Configuration is compensated by the benefit of the other: the camera becomes mobile, with more precision and a global view. In addition, since there is no additional camera, the vision algorithm can be kept simple. In order to achieve real-time visual servoing, this paper also addresses real-time constraints on the vision system and on the data communication between the robot and the vision system. A hexagonal artificial-marker pattern with simplified image processing is developed. A grasp-positioning problem is considered with position-based dynamic look-and-move visual control through object pose estimation. The system performance is validated by experimental results.
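
A minimal sketch of one position-based dynamic look-and-move step, assuming generic homogeneous transforms, an axis-angle pose error, and proportional gains — none of this is the authors' implementation:

```python
import numpy as np

def pose_error(T_current, T_desired):
    """Translation and axis-angle rotation error between two homogeneous poses."""
    dp = T_desired[:3, 3] - T_current[:3, 3]
    R_err = T_desired[:3, :3] @ T_current[:3, :3].T
    angle = np.arccos(np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-9:
        return dp, np.zeros(3)
    axis = np.array([R_err[2, 1] - R_err[1, 2],
                     R_err[0, 2] - R_err[2, 0],
                     R_err[1, 0] - R_err[0, 1]]) / (2.0 * np.sin(angle))
    return dp, angle * axis

def look_and_move_step(T_current, T_desired, k_v=0.5, k_w=0.5):
    """One outer-loop iteration: pose estimate in, Cartesian velocity twist out."""
    dp, dtheta = pose_error(T_current, T_desired)
    return np.hstack([k_v * dp, k_w * dtheta])   # [vx, vy, vz, wx, wy, wz]

# Toy usage: drive the end-effector toward a grasp pose derived from the object pose.
T_now = np.eye(4);   T_now[:3, 3] = [0.30, 0.05, 0.20]
T_grasp = np.eye(4); T_grasp[:3, 3] = [0.25, 0.00, 0.15]
print(look_and_move_step(T_now, T_grasp))
```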

  • Eye-to-Hand approach on eye-in-Hand Configuration within real-time visual servoing
    The 8th IEEE International Workshop on Advanced Motion Control (AMC '04), 2004
    Co-Authors: Abdul Muis, Kouhei Ohnishi
    Abstract:

    Hand-eye relation in visual servoing involves eye-in-Hand and eye-to-Hand Configurations. Each has its own merits and drawbacks regarding precision and sight range. This paper addresses this problem and introduces camera utilization as an eye-to-Hand Configuration for the second robot, while it remains eye-in-Hand for the first robot. Hence, the camera becomes mobile and provides more precision and a more global sight of the scene. Moreover, this paper also addresses the real-time constraints within the real-time vision system and the real-time data exchange arising from the different processing units for the robot and the vision system. A pattern design and simplified image processing are considered. This paper considers 3D visual servoing within a dynamic look-and-move scheme based on object pose. The system performance is validated by experimental results.
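
The pose input to such a scheme comes from the artificial marker mentioned in the abstract. A sketch of that step, assuming a hypothetical planar six-corner marker and OpenCV's solvePnP rather than the authors' simplified image processing:

```python
import numpy as np
import cv2

# Hypothetical planar hexagonal marker: six corners (metres) in the marker frame.
HEX_RADIUS = 0.03
MARKER_POINTS = np.array(
    [[HEX_RADIUS * np.cos(a), HEX_RADIUS * np.sin(a), 0.0]
     for a in np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)],
    dtype=np.float64)

def estimate_marker_pose(image_points, camera_matrix, dist_coeffs):
    """Pose of the marker in the camera frame from its six detected corners (pixels)."""
    ok, rvec, tvec = cv2.solvePnP(MARKER_POINTS,
                                  np.asarray(image_points, dtype=np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("marker pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)            # rotation matrix: camera <- marker
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T                              # homogeneous transform of the marker
```

Detecting the corners themselves (the "simplified image processing") is left out; any detector that returns the six points in a consistent order would do.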

Masayuki Fujita - One of the best experts on this subject based on the ideXlab platform.

  • Predictive Visual Feedback Control with Eye-in/to-Hand Configuration via Stabilizing Receding Horizon Approach
    IFAC Proceedings Volumes, 2008
    Co-Authors: Toshiyuki Murao, Hiroyuki Kawai, Masayuki Fujita
    Abstract:

    This paper investigates vision-based robot control via a receding horizon control strategy for an eye-in/to-Hand system, as a predictive visual feedback control. Firstly, the dynamic visual feedback system with the eye-in/to-Hand Configuration is reconstructed in order to improve the estimation performance. Next, a stabilizing receding horizon control for the 3D dynamic visual feedback system, a highly nonlinear and relatively fast system, is proposed. The stability of the receding horizon control scheme is guaranteed by using a terminal cost derived from an energy function of the visual feedback system. Furthermore, simulation results are assessed with respect to stability and performance.
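
For readers new to receding horizon control, the toy below shows the mechanism on a placeholder double-integrator: minimize a finite-horizon cost with a terminal penalty, apply only the first input, then re-solve. The dynamics, gains, and terminal weight are illustrative assumptions; the paper's terminal cost is derived from an energy function of the full visual feedback system.

```python
import numpy as np
from scipy.optimize import minimize

def f(x, u, dt=0.05):
    """Placeholder double-integrator; the paper's visual feedback model is far richer."""
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt ** 2], [dt]])
    return A @ x + B @ np.atleast_1d(u)

def receding_horizon_input(x0, horizon=10, q=1.0, r=0.1, p_term=10.0):
    def cost(u_seq):
        x, total = x0.copy(), 0.0
        for u in u_seq:
            total += q * (x @ x) + r * u * u     # quadratic stage cost
            x = f(x, u)
        return total + p_term * (x @ x)          # terminal cost -> stability margin
    res = minimize(cost, np.zeros(horizon), method="L-BFGS-B")
    return res.x[0]                              # apply only the first input

x = np.array([1.0, 0.0])
for _ in range(50):
    x = f(x, receding_horizon_input(x))          # horizon recedes at every step
print("state after 50 steps:", x)
```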

  • Passivity-based control of dynamic visual feedback systems with movable camera Configuration
    IEEJ Transactions on Electronics Information and Systems, 2008
    Co-Authors: Toshiyuki Murao, Hiroyuki Kawai, Masayuki Fujita
    Abstract:

    This paper deals with control of dynamic visual feedback systems with a movable camera Configuration. This Configuration consists of a robot manipulator and a camera attached to the end-effector of another robot manipulator. This system, which includes the dynamic visual feedback system with an eye-in-Hand Configuration and the fixed-camera one as special cases, can enlarge the field of view. Firstly, the dynamic visual feedback system with an eye-to-Hand Configuration is given with the fundamental representation of relative rigid body motion. Secondly, we construct the dynamic visual feedback system with a movable camera Configuration by combining it with the camera control error system. Next, we derive the passivity of the dynamic visual feedback system. Based on this passivity, stability and L2-gain performance analyses are discussed. Finally, the validity of the proposed control law is confirmed by comparing experimental results.
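
As context for the passivity argument, here is a classical passivity-based regulation sketch for a single-link arm (energy shaping plus damping injection with gravity compensation). It is only a toy analogue of the structural property exploited in the paper, not the movable-camera control law; all numerical values are assumptions.

```python
import numpy as np

M, L, G = 1.0, 0.5, 9.81     # link mass, length, gravity (assumed values)
KP, KD = 20.0, 5.0           # spring (energy shaping) and damping-injection gains

def gravity_torque(q):
    return M * G * L * np.sin(q)

def control(q, qdot, q_des):
    # energy shaping toward q_des + damping injection + gravity compensation
    return -KP * (q - q_des) - KD * qdot + gravity_torque(q)

# forward-Euler simulation of the single-link dynamics I*qddot = tau - gravity
q, qdot, dt, I = 0.0, 0.0, 1e-3, M * L ** 2
for _ in range(5000):
    tau = control(q, qdot, q_des=np.pi / 4)
    qddot = (tau - gravity_torque(q)) / I
    qdot += dt * qddot
    q += dt * qdot
print(f"final angle: {q:.3f} rad (target {np.pi / 4:.3f})")
```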

  • Passivity-based Visual Force Feedback Control for Planar Manipulators with Eye-in-Hand Configuration
    International Conference on Control Applications, 2007
    Co-Authors: Hiroyuki Kawai, Toshiyuki Murao, Masayuki Fujita
    Abstract:

    This paper investigates visual force feedback control for planar manipulators with the eye-in-Hand Configuration based on passivity. The vision/force control is applied in the horizontal/vertical directions to an environment that is modeled as a frictionless, elastically compliant plane. We show the passivity of the visual force feedback system, which allows us to prove stability in the sense of Lyapunov. The L2-gain performance analysis for the disturbance attenuation problem is considered via dissipative systems theory. Finally, simulation results are shown to verify the stability and L2-gain performance of the visual force feedback system.
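
A scalar caricature of the vision/force split described above: the image error drives motion parallel to the compliant plane while a force error drives the normal direction. The stiffness, gains, and one-pixel-per-millimetre camera model are invented for illustration.

```python
import numpy as np

K_ENV = 500.0            # assumed stiffness of the compliant plane [N/m]
K_VIS, K_F = 0.05, 0.02  # visual and force proportional gains (assumed)

def step(p, pixel_error, f_desired, dt=0.01):
    """p = [x, z]: x slides along the plane, z is penetration into the surface."""
    f_measured = K_ENV * max(p[1], 0.0)      # elastic contact force
    v_x = -K_VIS * pixel_error               # vision servo along the plane
    v_z = K_F * (f_desired - f_measured)     # force servo along the normal
    return p + dt * np.array([v_x, v_z]), f_measured

p = np.array([0.05, 0.0])                    # start 5 cm off target, no contact
for _ in range(1000):
    p, f = step(p, pixel_error=1000.0 * p[0], f_desired=5.0)  # ~1 px per mm
print(f"x = {p[0]:.4f} m, contact force = {f:.2f} N")
```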

Daniel Weingaertner - One of the best experts on this subject based on the ideXlab platform.

  • LIBRAS Sign Language Hand Configuration Recognition Based on 3D Meshes
    Systems Man and Cybernetics, 2013
    Co-Authors: Andres Jesse Porfirio, Kelly Lais Wiggers, Luiz S Oliveira, Daniel Weingaertner
    Abstract:

    This paper presents a method for recognizing Hand Configurations of the Brazilian sign language (LIBRAS) using 3D meshes and 2D projections of the Hand. Five actors performing 61 different Hand Configurations of the LIBRAS language were recorded twice, and the videos were manually segmented to extract one frame with a frontal and one with a lateral view of the Hand. For each frame pair, a 3D mesh of the Hand was constructed using the Shape from Silhouette method, and the rotation-, translation- and scale-invariant Spherical Harmonics method was used to extract features for classification. A Support Vector Machine (SVM) achieved correct classification rates of Rank1 = 86.06% and Rank3 = 96.83% on a database composed of 610 meshes. SVM classification was also performed on a database composed of 610 image pairs using 2D horizontal and vertical projections as features, resulting in Rank1 = 88.69% and Rank3 = 98.36%. Results encourage the use of 3D meshes as opposed to videos or images, given that their direct, real-time acquisition is becoming possible due to devices like the Leap Motion® or high-resolution depth cameras.
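
The classification stage alone (an SVM over fixed-length descriptors, scored by rank-1 and rank-3 accuracy) can be sketched as below; the random features stand in for the spherical-harmonics descriptors, and scikit-learn's SVC is used rather than whatever SVM implementation the authors chose.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, n_per_class, n_features = 61, 10, 64        # 61 Configurations, 610 samples
X = np.vstack([rng.normal(loc=c, scale=2.0, size=(n_per_class, n_features))
               for c in range(n_classes)])              # stand-in for SH descriptors
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)                         # class scores per test sample
top3 = np.argsort(proba, axis=1)[:, -3:]                # indices of the 3 best classes
rank1 = np.mean(clf.classes_[top3[:, -1]] == y_te)
rank3 = np.mean([yt in clf.classes_[row] for yt, row in zip(y_te, top3)])
print(f"Rank-1: {rank1:.2%}   Rank-3: {rank3:.2%}")
```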

Bin Xian - One of the best experts on this subject based on the ideXlab platform.

D M Dawson - One of the best experts on this subject based on the ideXlab platform.

  • Adaptive visual servo regulation control for camera-in-Hand Configuration with a fixed-camera extension
    Conference on Decision and Control, 2007
    Co-Authors: Enver Tatlicioglu, D M Dawson, Bin Xian
    Abstract:

    In this paper, image-based regulation control of a robot manipulator with an uncalibrated vision system is discussed. To compensate for the unknown camera calibration parameters, a novel prediction error formulation is presented. To achieve the control objectives, a Lyapunov-based adaptive control strategy is employed. The control development for the camera-in-Hand problem is presented in detail and a fixed-camera problem is included as an extension.
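
A scalar caricature of the prediction-error idea: an unknown pixels-per-metre scale stands in for the unknown camera calibration, a gradient law updates its estimate from the prediction error, and a simple proportional law regulates the error. This is only loosely inspired by the Lyapunov-based design in the paper; every constant below is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true = 850.0               # unknown pixels-per-metre scale (camera calibration)
a_hat = 300.0                # deliberately wrong initial estimate
x = 0.04                     # task-space error [m]
gamma, k, dt = 50.0, 4.0, 0.01

for _ in range(500):
    y = a_true * x + rng.normal(0.0, 0.5)   # measured image error [pixels]
    y_pred = a_hat * x                      # predicted image error
    a_hat += gamma * x * (y - y_pred)       # gradient update on the prediction error
    x += dt * (-k * y / max(a_hat, 50.0))   # proportional regulation of the error

# The error is regulated even though a_hat need not converge to a_true
# (parameter convergence would require persistent excitation).
print(f"scale estimate: {a_hat:.0f} px/m (true {a_true:.0f}), residual error: {x:.5f} m")
```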

  • Robust Visual‐Servo Control of Robot Manipulators in the Presence of Uncertainty
    Journal of Robotic Systems, 2003
    Co-Authors: Erkan Zergeroglu, D M Dawson, M.s. De Queiroz, P. Setlur
    Abstract:

    This paper considers the problem of position control of planar robot manipulators via visual servoing in the presence of uncertainty associated with the robot mechanical dynamics and the camera system for both fixed-camera and camera-in-Hand Configurations. Specifically, we first design a robust controller that compensates for uncertainty throughout the whole robot-camera system and ensures global uniformly ultimately bounded position tracking for the fixed-camera Configuration. Under the same class of uncertainty, we then develop a setpoint controller for the camera-in-Hand Configuration that achieves global uniformly ultimately bounded regulation. Experimental results illustrating the performance of both controllers are also included. © 2003 Wiley Periodicals, Inc.
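
To make "globally uniformly ultimately bounded" concrete, here is a scalar sketch: a high-gain term with a boundary layer dominates a bounded but unknown disturbance, so the error converges to a small ball around zero rather than to zero itself. It illustrates the concept only; the paper's controllers and uncertainty model are far richer.

```python
import numpy as np

K, D_BOUND, EPS, DT = 5.0, 2.0, 0.01, 1e-3   # gain, disturbance bound, layer, step

def disturbance(t):
    return 1.5 * np.sin(7.0 * t)             # unknown to the controller, |d| <= D_BOUND

def robust_control(e):
    # proportional term + smoothed switching term sized to dominate the disturbance
    return -K * e - D_BOUND * np.clip(e / EPS, -1.0, 1.0)

e, t = 0.5, 0.0
for _ in range(20000):                       # simulate e_dot = u + d for 20 s
    e += DT * (robust_control(e) + disturbance(t))
    t += DT
print(f"final error: {e:.4f} (stays within a ball of roughly EPS = {EPS})")
```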

  • Robust visual-servo control of robot manipulators in the presence of uncertainty
    Proceedings of the 38th IEEE Conference on Decision and Control (Cat. No.99CH36304), 1999
    Co-Authors: Erkan Zergeroglu, D M Dawson, M.s. De Queiroz, Siddharth P. Nagarkatti
    Abstract:

    This paper considers the problem of position control of planar robot manipulators via visual servoing in the presence of uncertainty associated with the robot mechanical dynamics and the camera system for both fixed-camera and camera-in-Hand Configurations. Specifically, we first design a robust controller that compensates for uncertainty throughout the whole robot-camera system and ensures global uniformly ultimately bounded position tracking for the fixed-camera Configuration. Under the same class of uncertainty, we then develop a setpoint controller for the camera-in-Hand Configuration that achieves global uniformly ultimately bounded regulation.