
TrajNet++ : The Trajectory Forecasting Framework

PyTorch implementation of *Human Trajectory Forecasting in Crowds: A Deep Learning Perspective*

![TrajNet++ overview](docs/train/cover.png)

TrajNet++ is a large-scale, interaction-centric trajectory forecasting benchmark comprising explicit agent-agent scenarios. Our framework provides proper indexing of trajectories by defining a hierarchy of trajectory categories. In addition, we provide an extensive evaluation system for fair comparison of the gathered methods. In our evaluation, we go beyond the standard distance-based metrics and introduce novel metrics that measure a model's ability to emulate pedestrian behavior in crowds. Finally, we provide code implementations of more than 15 popular human trajectory forecasting baselines.
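
As a rough illustration of the categorization idea (not the official categorization script, which defines the exact rules), the sketch below buckets a scene as linear, interacting, or non-interacting; the thresholds `linear_tol` and `interaction_radius` are assumptions chosen for illustration only.

```python
# Illustrative sketch of trajectory categorization; the official TrajNet++
# dataset tooling defines the exact rules. The thresholds below are
# assumptions, not the benchmark's values.
import numpy as np

def categorize_scene(primary, neighbors, obs_len=9,
                     linear_tol=0.5, interaction_radius=3.0):
    """Coarsely bucket one scene by the primary pedestrian's behavior.

    primary:   (T, 2) positions of the primary pedestrian.
    neighbors: (N, T, 2) positions of the neighbors (NaN where absent).
    """
    obs, future = primary[:obs_len], primary[obs_len:]

    # Linear: a constant-velocity extrapolation of the observation already
    # explains the future endpoint within a small tolerance.
    velocity = obs[-1] - obs[-2]
    steps = np.arange(1, len(future) + 1)[:, None]
    extrapolated = obs[-1] + steps * velocity
    if np.linalg.norm(extrapolated[-1] - future[-1]) < linear_tol:
        return "linear"

    # Interacting: some neighbor comes close to the primary pedestrian
    # during the prediction horizon.
    if neighbors.size:
        dists = np.linalg.norm(neighbors[:, obs_len:] - future[None], axis=-1)
        if np.nanmin(dists) < interaction_radius:
            return "interacting"

    return "non-interacting"
```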

We host the TrajNet++ Challenge on AICrowd, which allows researchers to objectively evaluate and benchmark trajectory forecasting models on interaction-centric data. The challenge relies on the spirit of crowdsourcing and has received more than 1800 submissions. We encourage researchers to submit their sequences to TrajNet++, so that trajectory forecasting models keep improving at tackling more challenging scenarios.

Data Setup

The detailed step-by-step procedure for setting up the TrajNet++ framework can be found here.

Converting External Datasets

To convert external datasets into the TrajNet++ framework, refer to this guide.
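
As a minimal sketch of what such a conversion can look like (the linked guide defines the exact ndjson schema, scene records, and frame-sampling conventions expected by TrajNet++), the snippet below assumes the raw data is a CSV with columns `frame`, `pedestrian_id`, `x`, `y` and writes one JSON object per line; the key names are assumptions for illustration.

```python
# A minimal conversion sketch, not the official converter. The "track"
# keys below are assumptions; check the linked guide for the exact schema
# (scene records, frame sampling, coordinate units) expected by TrajNet++.
import csv
import json

def convert_csv_to_ndjson(csv_path, out_path):
    with open(csv_path) as f_in, open(out_path, "w") as f_out:
        for row in csv.DictReader(f_in):
            track = {
                "f": int(row["frame"]),          # frame index
                "p": int(row["pedestrian_id"]),  # pedestrian id
                "x": float(row["x"]),            # world coordinates in meters
                "y": float(row["y"]),
            }
            f_out.write(json.dumps({"track": track}) + "\n")

if __name__ == "__main__":
    convert_csv_to_ndjson("raw_dataset.csv", "converted_dataset.ndjson")
```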

Training Models

LSTM

The training script and its help menu: `python -m trajnetbaselines.lstm.trainer --help`

Run Example

```
## Our Proposed D-LSTM
python -m trajnetbaselines.lstm.trainer --type directional --augment

## Social LSTM
python -m trajnetbaselines.lstm.trainer --type social --augment --n 16 --embedding_arch two_layer --layer_dims 1024
```

GAN

The training script and its help menu: `python -m trajnetbaselines.sgan.trainer --help`

Run Example

```
## Social GAN (L2 Loss + Adversarial Loss)
python -m trajnetbaselines.sgan.trainer --type directional --augment

## Social GAN (Variety Loss only)
python -m trajnetbaselines.sgan.trainer --type directional --augment --d_steps 0 --k 3
```

Evaluation

The evaluation script and its help menu: `python -m evaluator.lstm.trajnet_evaluator --help`

Run Example

```
## LSTM (saves model predictions. Useful for submission to TrajNet++ benchmark)
python -m evaluator.lstm.trajnet_evaluator --output OUTPUT_BLOCK/trajdata/lstm_directional_None.pkl --path <path_to_test_file>

## SGAN (saves model predictions. Useful for submission to TrajNet++ benchmark)
python -m evaluator.sgan.trajnet_evaluator --output OUTPUT_BLOCK/trajdata/sgan_directional_None.pkl --path <path_to_test_file>
```

More details regarding the TrajNet++ evaluator are provided here.

Evaluation on the data splits is based on the following categorization.

Results

Unimodal comparison of interaction encoder designs on the interacting trajectories of the TrajNet++ real-world dataset. Errors are reported as ADE/FDE in meters and collisions as mean % (std. dev. %) across 5 independent runs. Our goal is to reduce collisions in model predictions without compromising the distance-based metrics.

| Method | ADE/FDE | Collisions |
|--------|---------|------------|
| LSTM | 0.60/1.30 | 13.6 (0.2) |
| S-LSTM | 0.53/1.14 | 6.7 (0.2) |
| S-Attn | 0.56/1.21 | 9.0 (0.3) |
| S-GAN | 0.64/1.40 | 6.9 (0.5) |
| D-LSTM (ours) | 0.56/1.22 | 5.4 (0.3) |
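
For reference, a minimal sketch of how metrics of this kind can be computed from predicted and ground-truth positions is shown below; the collision threshold is an assumption for illustration, and the official evaluator in this repository defines the exact protocol.

```python
# Illustrative metric sketch, not the official evaluator. The collision
# threshold below is an assumption, not the benchmark's value.
import numpy as np

def ade_fde(pred, gt):
    """Average / final displacement error for one primary pedestrian.

    pred, gt: (T_pred, 2) arrays of predicted and ground-truth positions.
    """
    errors = np.linalg.norm(pred - gt, axis=-1)
    return errors.mean(), errors[-1]

def has_collision(pred_primary, neighbor_gt, threshold=0.1):
    """True if the predicted primary trajectory comes within `threshold`
    meters of any neighbor position at the same time step.

    pred_primary: (T_pred, 2) predicted positions of the primary pedestrian.
    neighbor_gt:  (N, T_pred, 2) neighbor positions (NaN where absent).
    """
    if neighbor_gt.size == 0:
        return False
    dists = np.linalg.norm(neighbor_gt - pred_primary[None], axis=-1)
    return bool(np.nanmin(dists) < threshold)
```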

Interpreting Forecasting Models

![Layer-wise relevance propagation visualization](docs/train/LRP.gif)

Visualization of the decision-making of social interaction modules using layer-wise relevance propagation (LRP). The darker the yellow circle, the higher the weight assigned by the primary pedestrian (blue) to the corresponding neighbour (yellow).

Code implementation for explaining trajectory forecasting models using LRP can be found here.
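
The linked implementation covers the full pipeline; purely as an illustration of the LRP idea, the sketch below applies the standard epsilon-rule to a single linear layer, redistributing an output relevance score onto the layer's inputs. It is not the repository's implementation, which operates on the trained interaction encoder.

```python
# Minimal epsilon-rule LRP sketch for one linear layer; illustrative only.
import numpy as np

def lrp_linear(a, w, b, relevance_out, eps=1e-6):
    """Redistribute output relevance onto the inputs of a linear layer.

    a:             (d_in,) input activations
    w:             (d_in, d_out) weight matrix
    b:             (d_out,) biases
    relevance_out: (d_out,) relevance assigned to the layer's outputs
    """
    z = a @ w + b                                   # pre-activations
    denom = np.where(z >= 0, z + eps, z - eps)      # stabilized denominator
    contributions = a[:, None] * w                  # (d_in, d_out) input contributions
    return (contributions / denom) @ relevance_out  # (d_in,) input relevance
```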

Benchmarking Models

We host the TrajNet++ Challenge on AICrowd, which allows researchers to objectively evaluate and benchmark trajectory forecasting models on interaction-centric data. We rely on the spirit of crowdsourcing and encourage researchers to submit their sequences to our benchmark, so that trajectory forecasting models keep improving at tackling more challenging scenarios.

Citation

If you find this code useful in your research, please cite:

```
@article{Kothari2020HumanTF,
  author={Kothari, Parth and Kreiss, Sven and Alahi, Alexandre},
  journal={IEEE Transactions on Intelligent Transportation Systems},
  title={Human Trajectory Forecasting in Crowds: A Deep Learning Perspective},
  year={2021},
  volume={},
  number={},
  pages={1-15},
  doi={10.1109/TITS.2021.3069362}
}
```
