Convolutional Mesh Autoencoders for Generating 3D Faces
This is the official repository of *Generating 3D Faces using Convolutional Mesh Autoencoders*.
UPDATE: Thank you for using and supporting this repository over the last two years. It is no longer maintained. Please use one of the following alternatives instead:
- sw-gong/coma, thanks to Shunwang Gong.
- pixelite1201/pytorch_coma, thanks to Priyanka Patel.
This code is tested on Tensorflow 1.3. Requirements (including tensorflow) can be installed using:

```
pip install -r requirements.txt
```
Install the mesh processing libraries from MPI-IS/mesh.
Download the data from the Project Page.
Preprocess the data:

```
python processData.py --data <PATH_OF_RAW_DATA> --save_path <PATH_TO_SAVE_PROCESSED_DATA>
```
Data pre-processing creates numpy files for the interpolation and extrapolation experiments (Section X of the paper). It creates 13 different train and test files. `sliced_[train|test]` is for the interpolation experiment. `<EXPRESSION>_[train|test]` are for cross-validation across the 12 different expression sequences.
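As a rough illustration, a processed split might be inspected as follows. This is a sketch only: the file name, the `(num_frames, num_vertices, 3)` vertex layout, and the vertex count are assumptions about what `processData.py` writes, not confirmed by this README.

```python
import numpy as np
import tempfile, os

# Sketch: processData.py is assumed to write numpy arrays of per-frame mesh
# vertices, one file per split (e.g. sliced_train). Here we fabricate a tiny
# split to show how such a file could be saved and inspected; the shapes and
# the 5023 vertex count (FLAME-topology face meshes) are illustrative.
num_frames, num_vertices = 4, 5023
split = np.random.rand(num_frames, num_vertices, 3).astype(np.float32)

path = os.path.join(tempfile.mkdtemp(), "sliced_train.npy")
np.save(path, split)

loaded = np.load(path)
assert loaded.shape == (num_frames, num_vertices, 3)
```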
To train, specify a name and choose a particular train-test split. For example,

```
python main.py --data data/sliced --name sliced
```
To test, specify a name and data. For example,

```
python main.py --data data/sliced --name sliced --mode test
```
To reproduce the error tables, run the following script. The resulting models are slightly better (~1% on average) than those reported in the paper.

```
sh generateErrors.sh
```
To sample faces from the latent space, specify a model and data. For example,

```
python main.py --data data/sliced --name sliced --mode latent
```
A face template pops up. Use the keys `qwertyui` to sample faces by moving forward along each of the 8 latent dimensions, and `asdfghjk` to move backward.
For more flexible usage, refer to `lib/visualize_latent_space.py`.
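The key-driven traversal described above can be sketched as follows. The key bindings come from this README; the step size and the in-memory representation of the latent vector are assumptions, and decoding the vector back to a mesh is left to the trained model.

```python
import numpy as np

# Sketch of the key-to-latent mapping described above (assumed behavior):
# keys q w e r t y u i each step forward along one of the 8 latent
# dimensions, and a s d f g h j k step backward. STEP is an assumed
# traversal increment, not a value taken from the code.
FORWARD_KEYS = "qwertyui"
BACKWARD_KEYS = "asdfghjk"
STEP = 0.1

def apply_key(z, key, step=STEP):
    """Return a copy of latent vector z after one key press."""
    z = z.copy()
    if key in FORWARD_KEYS:
        z[FORWARD_KEYS.index(key)] += step
    elif key in BACKWARD_KEYS:
        z[BACKWARD_KEYS.index(key)] -= step
    return z

z = np.zeros(8)
z = apply_key(z, "q")  # forward in latent dim 0
z = apply_key(z, "s")  # backward in latent dim 1
```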
We thank Raffi Enficiaud and Ahmed Osman for pushing the release of psbody.mesh, an essential dependency for this project.
The code contained in this repository is under the MIT License and is free for commercial and non-commercial purposes. The dependencies, in particular MPI-IS/mesh, and our data have their own license terms, which can be found on their respective webpages. The dependencies and data are NOT covered by the MIT License associated with this repository.
CAPE (CVPR 2020): Based on CoMA, we build a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making a generative, animatable model of people in clothing. A large-scale mesh dataset of clothed humans in motion is also included!
Anurag Ranjan, Timo Bolkart, Soubhik Sanyal, and Michael J. Black. "Generating 3D faces using Convolutional Mesh Autoencoders." European Conference on Computer Vision (ECCV) 2018.