google/aistplusplus_api

API to support the AIST++ Dataset: https://google.github.io/aistplusplus_dataset

This repository was archived by the owner on Jun 12, 2024. It is now read-only.

This repo contains starter code for using the AIST++ dataset. To download the dataset or explore the details of this dataset, please go to our dataset website.

Installation

The code has been tested on python >= 3.7. You can install the dependencies and this repo by:

```
pip install -r requirements.txt
python setup.py install
```

You also need to make sure ffmpeg is installed on your machine if you would like to visualize the annotations using this API.

How to use

We provide demo code for loading and visualizing AIST++ annotations. Note: the AIST++ annotations and videos, as well as the SMPL model (for SMPL visualization only), are required to run the demo code.

The directory structure of the data is expected to be:

```
<ANNOTATIONS_DIR>
├── motions/
├── keypoints2d/
├── keypoints3d/
├── splits/
├── cameras/
└── ignore_list.txt
<VIDEO_DIR>
└── *.mp4
<SMPL_DIR>
├── SMPL_MALE.pkl
└── SMPL_FEMALE.pkl
```
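Once the data is in place, the annotations can be loaded directly in Python. The sketch below follows the loader used by the demo scripts; the attribute and method names (`AISTDataset`, `load_motion`, `load_keypoint3d`, `use_optim`) are taken from this repo's `aist_plusplus/loader.py` and may differ slightly across versions:

```python
from aist_plusplus.loader import AISTDataset

aist_dataset = AISTDataset('<ANNOTATIONS_DIR>')

# Motion annotations are shared across the 9 cameras, so the sequence name
# replaces the camera id (c01..c09) in the video name with "cAll".
seq_name = 'gWA_sFM_cAll_d27_mWA2_ch21'

# SMPL-format motion: per-frame joint rotations, one global scale factor,
# and per-frame root translations.
smpl_poses, smpl_scaling, smpl_trans = AISTDataset.load_motion(
    aist_dataset.motion_dir, seq_name)

# 3D keypoints in COCO joint order, one (17, 3) array per frame.
keypoints3d = AISTDataset.load_keypoint3d(
    aist_dataset.keypoint3d_dir, seq_name, use_optim=True)
```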

Visualize 2D keypoints annotation

The command below will plot 2D keypoints onto the raw video and save it to the directory ./visualization/.

```
python demos/run_vis.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --video_dir <VIDEO_DIR> \
  --save_dir ./visualization/ \
  --video_name gWA_sFM_c01_d27_mWA2_ch21 \
  --mode 2D
```
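If you prefer to roll your own plotting instead of `run_vis.py`, the core of 2D visualization is just drawing the per-frame keypoints onto the decoded frames. A minimal sketch with OpenCV (not the demo script itself; the NaN convention for missing detections is an assumption to verify against your data):

```python
import cv2
import numpy as np

def draw_keypoints2d(video_path, keypoints2d, out_path):
    """Draw per-frame 2D keypoints (num_frames, 17, 2) onto a video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, size)
    for frame_kpts in keypoints2d:
        ok, frame = cap.read()
        if not ok:
            break
        for x, y in frame_kpts:
            if not np.isnan(x):  # assumed convention: missing joints are NaN
                cv2.circle(frame, (int(x), int(y)), 4, (0, 255, 0), -1)
        writer.write(frame)
    cap.release()
    writer.release()
```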

Visualize 3D keypoints annotation

The command below will project 3D keypoints onto the raw video using camera parameters, and save it to the directory ./visualization/.

```
python demos/run_vis.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --video_dir <VIDEO_DIR> \
  --save_dir ./visualization/ \
  --video_name gWA_sFM_c01_d27_mWA2_ch21 \
  --mode 3D
```
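The 3D mode boils down to a standard perspective projection using the per-camera extrinsics and intrinsics stored under `cameras/`. A hedged sketch using OpenCV (the parameter names `rvec`, `tvec`, `camera_matrix`, `dist_coeffs` are generic, not this repo's exact field names):

```python
import cv2
import numpy as np

def project_points(keypoints3d, rvec, tvec, camera_matrix, dist_coeffs):
    """Project (N, 3) world-space keypoints into (N, 2) pixel coordinates."""
    pts2d, _ = cv2.projectPoints(
        keypoints3d.reshape(-1, 1, 3).astype(np.float64),
        rvec, tvec, camera_matrix, dist_coeffs)
    return pts2d.reshape(-1, 2)
```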

Visualize the SMPL joints annotation

The command below will first calculate the SMPL joint locations from our motion annotations (joint rotations and root trajectories), then project them onto the raw video and plot them. The result will be saved into the directory ./visualization/.

```
python demos/run_vis.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --video_dir <VIDEO_DIR> \
  --smpl_dir <SMPL_DIR> \
  --save_dir ./visualization/ \
  --video_name gWA_sFM_c01_d27_mWA2_ch21 \
  --mode SMPL
```
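The joint-location step itself is a forward pass of the SMPL model. Roughly what the demo does, sketched with the `smplx` package (the divide/multiply by `smpl_scaling` mirrors how the motions are stored; verify the details against `demos/run_vis.py`):

```python
import torch
from smplx import SMPL

# smpl_poses: (N, 72) axis-angle, smpl_trans: (N, 3), smpl_scaling: scalar.
smpl = SMPL(model_path='<SMPL_DIR>', gender='MALE', batch_size=smpl_poses.shape[0])
output = smpl.forward(
    global_orient=torch.from_numpy(smpl_poses[:, :3]).float(),
    body_pose=torch.from_numpy(smpl_poses[:, 3:]).float(),
    transl=torch.from_numpy(smpl_trans / smpl_scaling).float(),
)
# Joints come out at SMPL scale; rescale back to world units.
joints3d = output.joints.detach().numpy() * smpl_scaling
```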

Visualize the SMPL Mesh

The command below will calculate the SMPL mesh for the first frame from our motion annotations (joint rotations and root trajectories), and visualize it in 3D.

```
# install some additional libraries for 3D mesh visualization
pip install vedo trimesh
python demos/run_vis.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --smpl_dir <SMPL_DIR> \
  --video_name gWA_sFM_c01_d27_mWA2_ch21 \
  --mode SMPLMesh
```
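Under the hood, the mesh is just the SMPL vertices plus the model's fixed triangle faces. Continuing the forward-pass sketch above, one way to view a single frame with trimesh (an illustrative sketch, not the demo's exact rendering path):

```python
import trimesh

# Take the first frame's vertices and rescale to world units.
vertices = output.vertices.detach().numpy()[0] * smpl_scaling
mesh = trimesh.Trimesh(vertices=vertices, faces=smpl.faces)
mesh.show()  # opens an interactive viewer window
```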

Extract SMPL motion features

The command below will calculate and print two types of features for a motion sequence in SMPL format. We follow fairmotion to calculate the features.

```
python demos/extract_motion_feats.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --smpl_dir <SMPL_DIR> \
  --video_name gWA_sFM_c01_d27_mWA2_ch21
```
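For a sense of what such features look like: a simple kinetic statistic can be derived from finite-difference joint velocities. The sketch below is a generic stand-in, not fairmotion's exact feature set (the 60 fps default matches the AIST++ videos):

```python
import numpy as np

def mean_joint_speed(joints3d, fps=60):
    """Average per-joint speed from (num_frames, num_joints, 3) positions."""
    velocities = np.diff(joints3d, axis=0) * fps      # (N-1, J, 3), units/sec
    speeds = np.linalg.norm(velocities, axis=-1)      # (N-1, J)
    return speeds.mean(axis=0)                        # one scalar per joint
```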

Multi-view 3D keypoints and motion reconstruction

This repo also provides the code we used for constructing this dataset from the multi-view AIST Dance Video Database. The construction pipeline starts with frame-by-frame 2D keypoint detection and manual camera estimation. Then triangulation and bundle adjustment are applied to optimize the camera parameters as well as the 3D keypoints. Finally, we sequentially fit the SMPL model to the 3D keypoints to get a motion sequence represented using joint angles and a root trajectory. The following figure shows our pipeline overview.

Figure: AIST++ construction pipeline overview.

The annotations in AIST++ are in COCO format for 2D & 3D keypoints, and in SMPL format for human motion annotations. The dataset is designed to serve general research purposes. However, in some cases you might need the data in a different format (e.g., Openpose / Alphapose keypoints format, or STAR human motion format). With the code we provide, it should be easy to construct your own version of AIST++, with your own keypoint detector or human model definition.

Step 1. Assuming you have your own 2D keypoint detection results stored in <KEYPOINTS_DIR>, you can start by preprocessing the keypoints into the .pkl format that we support. The code we used at this step is as follows, but you might need to modify the script run_preprocessing.py to make it compatible with your own data.

```
python processing/run_preprocessing.py \
  --keypoints_dir <KEYPOINTS_DIR> \
  --save_dir <ANNOTATIONS_DIR>/keypoints2d/
```
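If you write the .pkl files yourself, the essential shape is one file per sequence holding the stacked multi-view detections. The writer below is a hypothetical sketch; the exact keys and array layout should be checked against `aist_plusplus/loader.py` for your version of this repo:

```python
import os
import pickle
import numpy as np

def save_keypoints2d(save_dir, seq_name, keypoints2d):
    """keypoints2d: assumed (num_views=9, num_frames, 17, 3) of (x, y, score)."""
    os.makedirs(save_dir, exist_ok=True)
    with open(os.path.join(save_dir, f'{seq_name}.pkl'), 'wb') as f:
        pickle.dump({'keypoints2d': np.asarray(keypoints2d)}, f)
```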

Step 2. Then you can estimate the camera parameters using your 2D keypoints. This step is optional, as you can still use our camera parameter estimates, which are quite accurate. At this step, you will need the <ANNOTATIONS_DIR>/cameras/mapping.txt file, which stores the mapping from videos to different environment settings.

```
# install some additional libraries
pip install -r processing/requirements.txt
# If you would like to estimate your own camera parameters:
python processing/run_estimate_camera.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --save_dir <ANNOTATIONS_DIR>/cameras/
# Or you can skip this step by just using our camera parameter estimates.
```

Step 3. The next step is to perform 3D keypoint reconstruction from the multi-view 2D keypoints and camera parameters. You can just run:

```
python processing/run_estimate_keypoints.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --save_dir <ANNOTATIONS_DIR>/keypoints3d/
```
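The core of this step is multi-view triangulation: each camera contributes two linear constraints on the unknown 3D point, and the least-squares solution comes from an SVD. A self-contained sketch of the classic DLT formulation (the repo's actual implementation is more involved):

```python
import numpy as np

def triangulate_point(proj_mats, points2d):
    """DLT triangulation of one point from several views.

    proj_mats: list of 3x4 camera projection matrices.
    points2d:  matching list of (x, y) pixel observations.
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points2d):
        # x = (P[0]·X)/(P[2]·X) and y = (P[1]·X)/(P[2]·X), made linear in X.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    _, _, vh = np.linalg.svd(np.asarray(rows))
    X = vh[-1]                # null-space direction = homogeneous solution
    return X[:3] / X[3]       # homogeneous -> Euclidean
```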

Step 4. Finally, we can estimate SMPL-format human motion data by fitting the 3D keypoints to the SMPL model. If you would like to use another human model, such as STAR, you will need to make some modifications to the script run_estimate_smpl.py. The following command runs the SMPL fitting.

```
python processing/run_estimate_smpl.py \
  --anno_dir <ANNOTATIONS_DIR> \
  --smpl_dir <SMPL_DIR> \
  --save_dir <ANNOTATIONS_DIR>/motions/
```
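Conceptually, the fitting optimizes SMPL pose and translation parameters against the triangulated joints. A heavily simplified PyTorch sketch (the hypothetical `fit_smpl` below ignores the COCO-to-SMPL joint correspondence, the global scale, and any staged optimization the real run_estimate_smpl.py performs):

```python
import torch

def fit_smpl(smpl, target_joints, num_iters=300, lr=0.01):
    """smpl: an smplx.SMPL model; target_joints: (N, J, 3) tensor,
    assumed here to already be in SMPL joint order and at SMPL scale."""
    n, j, _ = target_joints.shape
    global_orient = torch.zeros(n, 3, requires_grad=True)
    body_pose = torch.zeros(n, 69, requires_grad=True)
    transl = torch.zeros(n, 3, requires_grad=True)
    betas = torch.zeros(n, 10)  # neutral body shape, kept fixed
    optim = torch.optim.Adam([global_orient, body_pose, transl], lr=lr)
    for _ in range(num_iters):
        optim.zero_grad()
        out = smpl(global_orient=global_orient, body_pose=body_pose,
                   transl=transl, betas=betas)
        loss = ((out.joints[:, :j] - target_joints) ** 2).mean()
        loss.backward()
        optim.step()
    return global_orient, body_pose, transl
```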

Note that this step will take several days to process the entire dataset if your machine has only one GPU. In practice, we run this step on a cluster, but we are only able to provide the single-threaded version.

MISC.

  • COCO-format keypoint definition:

```
["nose", "left_eye", "right_eye", "left_ear", "right_ear", "left_shoulder",
 "right_shoulder", "left_elbow", "right_elbow", "left_wrist", "right_wrist",
 "left_hip", "right_hip", "left_knee", "right_knee", "left_ankle", "right_ankle"]
```

  • SMPL-format body joint definition:

```
["root", "lhip", "rhip", "belly", "lknee", "rknee", "spine", "lankle", "rankle",
 "chest", "ltoes", "rtoes", "neck", "linshoulder", "rinshoulder", "head",
 "lshoulder", "rshoulder", "lelbow", "relbow", "lwrist", "rwrist", "lhand", "rhand"]
```
