robotology/skeleton3D

Bridge between 2D Tensorflow-based human pose estimation and 3D estimation from stereovision
Introduction

Bridge between 2D human pose estimation and 3D estimation from stereovision
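The bridging principle can be illustrated with a minimal stereo triangulation sketch. This is only an illustration of the geometry, not the module's actual pipeline (which relies on the iCub stereo vision system); the camera parameters below are made-up values:

```python
def triangulate(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Lift a matched 2D keypoint (pixel coords in rectified left/right
    images) to a 3D point in the left-camera frame."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("keypoint must have positive disparity")
    z = focal_px * baseline_m / disparity  # depth from disparity
    x = (u_left - cx) * z / focal_px       # lateral offset
    y = (v - cy) * z / focal_px            # vertical offset
    return x, y, z

# Hypothetical camera: 320 px focal length, 68 mm baseline, principal
# point at (160, 120); a 2D keypoint seen at u=200 (left) and u=180 (right)
x, y, z = triangulate(200.0, 180.0, 140.0, 320.0, 0.068, 160.0, 120.0)
# depth z is roughly 1.09 m for this 20 px disparity
```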

Table of Contents

  1. Dependencies
  2. Results
  3. Build
  4. How-to
  5. References

Dependencies

Results

Safe HRI demo

[video: Safe pHRI]

Hand-over demo

[video: Hand-over]

Build

Build and install normally, i.e.

```
mkdir build && cd build
ccmake ..
make install
```

How-to

Safe HRI demo

  1. Open the application with openpose, PPS_modulation_iCub_skeleton3D_openpose, or the application with deepcut, PPS_modulation_iCub_skeleton3D, in yarpmanager. Note that the application with deepercut provides more responsive robot actions.
  2. Launch all modules and connect them.
  3. (Optional) If you want to use the application with deepercut, you have to run skeleton2D.py in a terminal rather than from yarpmanager. Running a Python script from yarpmanager is currently broken.

     ```
     # Open a terminal and ssh to a machine with a GPU, e.g. icub-cuda
     ssh icub-cuda
     skeleton2D.py --des /skeleton2D --gpu 0.7
     ```
  4. Users can log into the rpc service of the module to set its parameters:

     ```
     yarp rpc /skeleton3D/rpc
     # for the help function, type:
     help
     ```
  5. Move the iCub's neck to look down about 23 degrees, e.g. with yarpmotorgui. If you run icubCollaboration (see below), this step is not necessary.
  6. Connect to the rpc service of react-controller and make the controlled arm (left by default) move:
    • To a fixed position: in this mode, the robot tries to keep its end-effector at a fixed position, e.g. (-0.3, -0.15, 0.1) for the left_arm of the iCub, while avoiding the human's body parts.

      ```
      yarp rpc /reactController/rpc:i
      # for the left_arm
      set_xd (-0.3 -0.15 0.1)
      # or for the right_arm
      set_xd (-0.3 0.15 0.1)
      # to stop, type:
      stop
      ```
    • In a circle: in this mode, the robot moves its end-effector along a circular trajectory in the y and z axes, relative to the current end-effector position, while avoiding the human's body parts. The first command moves the robot's arm to a tested safe initial position for the circular trajectory.

      ```
      set_xd (-0.3 -0.15 0.1)
      set_relative_circular_xd 0.08 0.27
      # to stop, type:
      stop
      ```
  • Note: users can tune the workspace parameters in the configuration file to constrain the robot's partner. The module currently works with only one partner at a time.
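As a rough sketch of the circular trajectory the steps above request: assuming the two arguments of set_relative_circular_xd are the radius (in metres) and the revolution frequency (in Hz) — an assumption, not confirmed by this README — the commanded point in the y-z plane around the starting end-effector position could be computed like this:

```python
import math

def circle_point(center, radius, freq_hz, t):
    """Point at time t (seconds) on a circle of the given radius in the
    y-z plane, centered on the (x, y, z) end-effector position."""
    phase = 2.0 * math.pi * freq_hz * t
    x0, y0, z0 = center
    return (x0,
            y0 + radius * math.cos(phase),
            z0 + radius * math.sin(phase))

# safe initial position from the demo, radius/frequency from the example
start = (-0.3, -0.15, 0.1)
p = circle_point(start, 0.08, 0.27, 0.0)  # at t = 0: offset +radius along y
```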

Hand-over demo

  1. First, complete all the steps above.
  2. Open the application script, ontheflyRecognition_PPS_both, in yarpmanager. This app allows on-hand object training and on-hand object recognition.

     ```
     # Connect to skeleton3D:
     yarp rpc /skeleton3D/rpc
     enable_tool_training right
     # Connect to onTheFlyRecognition_right
     yarp rpc /onTheFlyRecognition_right/human:io
     # Hold the object in the right hand and type:
     train <object_name> 0
     # The whole procedure can also be applied to the left hand
     ```
  3. Open the application script, iolVM_Phuong, in yarpmanager. This app allows on-table object recognition for grasping.
  4. Open the application script, grasp-processor, in yarpmanager. This app allows the robot to grasp a recognized object on the table.
  5. Run the icubCollaboration module. Currently, all connections to other modules are made internally, so it needs to run after all the others.
  6. Connect all ports.

     ```
     # the robot arm used for icubCollaboration needs to be the same as for react-ctrl above
     icubCollaboration --robot icub --part <right_arm/left_arm>
     # rpc access to the module
     yarp rpc /icubCollaboration/rpc
     # type help for all supported commands
     help
     # hold a trained object (within the robot's reachable area) and type:
     receive <object_name>
     # the robot should detect the object, take it over, and put it on the table (see the video)
     # ask the robot to give back the object on the table
     pre_grasp_pos
     hand_over_object <object_name> <handRight/handLeft>
     ```
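The rpc sequence above could also be scripted rather than typed interactively, e.g. by piping a command stream into the yarp rpc client. A minimal sketch under the assumption that `yarp rpc <port>` reads commands from standard input; the port name is taken from the steps above, and `send` needs a running YARP network to actually work:

```python
import subprocess

def rpc_script(commands):
    """Join rpc commands into the newline-separated stream an
    interactive `yarp rpc <port>` session would consume."""
    return "\n".join(commands) + "\n"

def send(port, commands):
    """Pipe the commands into a yarp rpc session (requires yarpserver)."""
    subprocess.run(["yarp", "rpc", port],
                   input=rpc_script(commands), text=True, check=True)

# hand-over sequence from step 6 (placeholders left as in the README)
script = rpc_script(["receive <object_name>",
                     "pre_grasp_pos",
                     "hand_over_object <object_name> handRight"])
```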

References

D. H. P. Nguyen, M. Hoffmann, A. Roncone, U. Pattacini, and G. Metta, "Compact Real-time Avoidance on a Humanoid Robot for Human-robot Interaction," in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 2018, pp. 416–424.

P. D. Nguyen, F. Bottarel, U. Pattacini, M. Hoffmann, L. Natale, and G. Metta, "Merging physical and social interaction for effective human-robot collaboration," in Humanoid Robots (Humanoids), 2018 IEEE-RAS 18th International Conference on, 2018, pp. 710–717.
