una-dinosauria/human-motion-prediction
Simple baselines and RNNs for predicting human motion in tensorflow. Presented at CVPR 17.


This is the code for the paper

Julieta Martinez, Michael J. Black, Javier Romero. On human motion prediction using recurrent neural networks. In CVPR 17.

It is also available on arXiv: https://arxiv.org/pdf/1705.02445.pdf

The code in this repository was written by Julieta Martinez and Javier Romero.

Dependencies

The code is written in Python and requires TensorFlow (it accompanies a 2017 paper and predates TensorFlow 2, so a 1.x version is assumed).

Get this code and the data

First things first: clone this repo and get the Human3.6M dataset in exponential map format.

git clone https://github.com/una-dinosauria/human-motion-prediction.git
cd human-motion-prediction
mkdir data
cd data
wget http://www.cs.stanford.edu/people/ashesh/h3.6m.zip
unzip h3.6m.zip
rm h3.6m.zip
cd ..
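
If you want to sanity-check the download, each sequence file can be read directly with numpy. A minimal sketch, assuming the archive unpacks to h3.6m/dataset/S&lt;subject&gt;/&lt;action&gt;_&lt;subaction&gt;.txt with one comma-separated frame of exponential-map values per line (the exact layout may differ):

import numpy as np

# Hypothetical path; adjust to wherever the archive actually unpacked.
seq = np.loadtxt("data/h3.6m/dataset/S1/walking_1.txt", delimiter=",")
print(seq.shape)  # (n_frames, n_dims) -- one pose in exponential map format per row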

Quick demo and visualization

For a quick demo, you can train for a few iterations and visualize the outputs of your model.

To train, run

python src/translate.py --action walking --seq_length_out 25 --iterations 10000

To save some samples of the model, run

python src/translate.py --action walking --seq_length_out 25 --iterations 10000 --sample --load 10000
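
The sampling run writes its predictions to disk so that the visualization script can read them back. As a rough sketch for inspecting that output, assuming it is an HDF5 file (for example samples.h5 under the experiment directory; the actual path is printed by the script and may differ):

import h5py

sample_file = "samples.h5"  # hypothetical path; use the file produced by the command above
with h5py.File(sample_file, "r") as f:
    f.visit(print)  # list every group and dataset stored in the file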

Finally, to visualize the samples run

python src/forward_kinematics.py

This should create an animated visualization of the predicted motion.



Running average baselines

To reproduce the running average baseline results from our paper, run

python src/baselines.py
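
As context, a running-average baseline simply holds the mean of the last few observed frames constant over the whole prediction horizon. A minimal numpy sketch (the window size and array shapes here are illustrative, not the paper's exact protocol):

import numpy as np

def running_average_baseline(observed, horizon, window=4):
    # observed: (n_frames, n_dims) past poses; returns (horizon, n_dims) predictions
    mean_pose = observed[-window:].mean(axis=0)
    return np.tile(mean_pose, (horizon, 1))  # the same average pose, repeated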

RNN models

To train and reproduce the results of our models, use the following commands

| model | arguments | training time (gtx 1080) | notes |
|---|---|---|---|
| Sampling-based loss (SA) | python src/translate.py --action walking --seq_length_out 25 | 45s / 1000 iters | Realistic long-term motion, loss computed over 1 second. |
| Residual (SA) | python src/translate.py --residual_velocities --action walking | 35s / 1000 iters | |
| Residual unsup. (MA) | python src/translate.py --residual_velocities --learning_rate 0.005 --omit_one_hot | 65s / 1000 iters | |
| Residual sup. (MA) | python src/translate.py --residual_velocities --learning_rate 0.005 | 65s / 1000 iters | best quantitative. |
| Untied | python src/translate.py --residual_velocities --learning_rate 0.005 --architecture basic | 70s / 1000 iters | |
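
The --residual_velocities flag corresponds to the residual architecture from the paper: rather than emitting the next pose directly, the decoder predicts a delta that is added to its input frame, so the network models velocities. A schematic of that decoding step (the cell below is a stand-in for an RNN cell, not the repo's actual API):

def residual_decode_step(cell, state, prev_frame):
    # cell: any RNN cell mapping (input, state) -> (output, new_state)
    delta, state = cell(prev_frame, state)
    return prev_frame + delta, state  # residual connection: pose + predicted velocity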

You can substitute the --action walking parameter for any action in

["directions", "discussion", "eating", "greeting", "phoning", "posing", "purchases", "sitting", "sittingdown", "smoking", "takingphoto", "waiting", "walking", "walkingdog", "walkingtogether"]

or --action all (default) to train on all actions.

The code will log the error in Euler angles for each action to tensorboard. You can track the progress during training by typing tensorboard --logdir experiments in the terminal and checking the board under http://127.0.1.1:6006/ in your browser (occasionally, tensorboard might pick another url).
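
For intuition, the logged quantity is a per-frame Euclidean distance between predicted and ground-truth joint angles after converting them to Euler angles; the repository's own evaluation utilities live under src/data_utils.py (see Acknowledgments). A rough numpy/scipy sketch (the Euler convention and which dimensions are included are illustrative, not the exact evaluation protocol):

import numpy as np
from scipy.spatial.transform import Rotation

def euler_error(gt_expmap, pred_expmap):
    # inputs: (n_frames, n_joints, 3) exponential-map rotations
    n = gt_expmap.shape[0]
    gt = Rotation.from_rotvec(gt_expmap.reshape(-1, 3)).as_euler("zyx")
    pred = Rotation.from_rotvec(pred_expmap.reshape(-1, 3)).as_euler("zyx")
    diff = (gt - pred).reshape(n, -1)
    return np.sqrt((diff ** 2).sum(axis=1))  # one error value per frame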

Citing

If you use our code, please cite our work

@inproceedings{julieta2017motion,
  title={On human motion prediction using recurrent neural networks},
  author={Martinez, Julieta and Black, Michael J. and Romero, Javier},
  booktitle={CVPR},
  year={2017}
}

Other implementations

Acknowledgments

The pre-processed Human 3.6M dataset and some of our evaluation code (especially under src/data_utils.py) were ported/adapted from SRNN by @asheshjain399.

Licence

MIT
