# Bridge between 2D human pose estimation and 3D estimation from stereovision
Dependencies:
- YARP
- iCub
- icub-contrib-common
- icub-hri
- objectsPropertiesCollector (OPC): the robot's working memory (see the query sketch after this list)
- stereo-vision: 3D estimation from a stereo-vision camera
- deeperCut-based skeleton2D or yarpOpenPose: 2D human pose tracking
- Optional modules: for human-robot interaction demos
- Peripersonal Space
- react-ctrl
- modified onthefly-recognition: the built module's name is changed so that it does not conflict with the official module
- cardinal-points-grasp
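For reference, a minimal sketch of querying the OPC's working memory over its rpc port, following the objectsPropertiesCollector command set; the port name `/OPC/rpc` is an assumption and depends on how the module is launched:

```
# open the OPC's rpc port (port name is an assumption)
yarp rpc /OPC/rpc
# ask for the ids of all stored entities
ask (all)
# retrieve the properties of the entity with id 12 (example id)
get ((id 12))
```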
Build and install normally, i.e.:
```
mkdir build && cd build
ccmake ..
make install
```
- Open the application with openpose, PS_modulation_iCub_skeleton3D_openpose, or the application with deepcut, PPS_modulation_iCub_skeleton3D, in yarpmanager. Note that the application with deeperCut provides more responsive robot actions.
- Launch all modules and connect the ports.
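Optionally, you can sanity-check that the modules' ports are up with the yarp companion commands; the port name in the second command is an assumption based on the module name:

```
# list all ports registered with the YARP name server
yarp name list
# check that a given port is alive (port name is an assumption)
yarp exists /skeleton3D/rpc
```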
- (Optional) If you want to use the application with deeperCut, you have to run skeleton2D.py in a terminal rather than from yarpmanager. Running Python scripts from yarpmanager is currently broken.
```
# Open a terminal and ssh to a machine with a GPU, e.g. icub-cuda
ssh icub-cuda
skeleton2D.py --des /skeleton2D --gpu 0.7
```
- Users can log into the rpc service of the module to set its parameters:
```
yarp rpc /skeleton3D/rpc
# list the available commands by typing:
help
```
- Move the iCub's neck to look down about 23 degrees, e.g. with yarpmotorgui. If you run icubCollaboration (see below), this step is not necessary.
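As an alternative to yarpmotorgui, the neck pitch can be commanded through the head control board's rpc port; this is a sketch assuming the standard iCub port naming, with joint 0 being the neck pitch:

```
# open the head control board's rpc port (standard iCub naming assumed)
yarp rpc /icub/head/rpc:i
# command joint 0 (neck pitch) to look down about 23 degrees
set pos 0 -23
```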
- Connect to the rpc service of react-controller and make the controlled arm (left by default) move:
- To a fixed position: in this mode, the robot tries to keep its end-effector at a fixed position, e.g. (-0.3, -0.15, 0.1) for the left_arm of the iCub, while avoiding the human's body parts:
```
yarp rpc /reactController/rpc:i
# for the left_arm
set_xd (-0.3 -0.15 0.1)
# or for the right_arm
set_xd (-0.3 0.15 0.1)
# to stop, type:
stop
```
- In a circle: in this mode, the robot moves its end-effector along a circular trajectory in the y and z axes, relative to the current end-effector position, while avoiding the human's body parts. The first command below moves the robot's arm to a tested, safe initial position for the circular trajectory:
```
set_xd (-0.3 -0.15 0.1)
set_relative_circular_xd 0.08 0.27
# to stop, type:
stop
```
- Note: users can tune the workspace parameters in the configuration file to constrain the workspace of the robot's partner. The module currently works with only one partner at a time.
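For illustration only, workspace bounds in a YARP .ini configuration file would look roughly like the sketch below; the parameter names here are hypothetical and must be checked against the configuration file shipped with skeleton3D:

```
// hypothetical workspace bounds in meters, in the robot's root frame
// (illustrative names only; check the module's actual .ini file)
workspace_x (0.0 2.0)
workspace_y (-1.0 1.0)
workspace_z (-0.5 1.5)
```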
- First, complete all of the steps above.
- Open the application script ontheflyRecognition_PPS_both in yarpmanager. This app allows on-hand object training and on-hand object recognition.
```
# Connect to skeleton3D:
yarp rpc /skeleton3D/rpc
enable_tool_training right
# Connect to onTheFlyRecognition_right
yarp rpc /onTheFlyRecognition_right/human:io
# Hold an object in the right hand and type:
train <object_name> 0
# The whole procedure can also be applied to the left hand
```
- Open the application script iolVM_Phuong in yarpmanager. This app allows on-table object recognition for grasping.
- Open the application script grasp-processor in yarpmanager. This app allows the robot to grasp a recognized object on the table.
- Run the icubCollaboration module. Currently, all connections to other modules are made internally, so it needs to run after all the others.
- Connect all ports.
```
# the robot arm used for icubCollaboration must be the same as for react-ctrl above
icubCollaboration --robot icub --part <right_arm/left_arm>
# rpc access to the module
yarp rpc /icubCollaboration/rpc
# type help for all supported commands
help
# hold a trained object (within the robot's reachable area) and type:
receive <object_name>
# the robot should detect the object, take it over, and put it on the table (see the video)
# ask the robot to give back the object on the table:
pre_grasp_pos
hand_over_object <object_name> <handRight/handLeft>
```
D. H. P. Nguyen, M. Hoffmann, A. Roncone, U. Pattacini, and G. Metta, “Compact Real-time Avoidance on a Humanoid Robot for Human-robot Interaction,” in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 2018, pp. 416–424.
P. D. Nguyen, F. Bottarel, U. Pattacini, M. Hoffmann, L. Natale, and G. Metta, “Merging physical and social interaction for effective human-robot collaboration,” in Humanoid Robots (Humanoids), 2018 IEEE-RAS 18th International Conference on, 2018, pp. 710–717.