# Wav2Lip

This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For the HD commercial model, please try out Sync Labs.
A commercial version of Wav2Lip can be directly accessed at https://sync.so

Are you looking to integrate this into a product? We have a turn-key hosted API with new and improved lip-syncing models here: https://sync.so/. For any other commercial / enterprise requests, please contact us at pavan@sync.so and prady@sync.so. To reach out to the authors directly, you can reach us at prajwal@sync.so, rudrabha@sync.so.

This code is part of the paper: *A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild*, published at ACM Multimedia 2020.
| 📑 Original Paper | 📰 Project Page | 🌀 Demo | ⚡ Live Testing | 📔 Colab Notebook |
|---|---|---|---|---|
| Paper | Project Page | Demo Video | Interactive Demo | Colab Notebook / Updated Colab Notebook |
- Weights of the visual quality disc have been updated in the README!
- Lip-sync videos to any target speech with high accuracy 💯. Try our interactive demo.
- ✨ Works for any identity, voice, and language. Also works for CGI faces and synthetic voices.
- Complete training code, inference code, and pretrained models are available 💥
- Or, quick-start with the Google Colab Notebook: Link. Checkpoints and samples are available in a Google Drive folder as well. There is also a tutorial video on this, courtesy of What Make Art. Also, thanks to Eyal Gruss, there is a more accessible Google Colab notebook with more useful features. A tutorial Colab notebook is present at this link.
- 🔥🔥 Several new, reliable evaluation benchmarks and metrics [`evaluation/` folder of this repo] released. Instructions to calculate the metrics reported in the paper are also present.
All results from this open-source code or our demo website should be used for research/academic/personal purposes only. As the models are trained on the LRS2 dataset, any form of commercial use is strictly prohibited. For commercial requests, please contact us directly!

## Prerequisites
- Python 3.6
- ffmpeg: `sudo apt-get install ffmpeg`
- Install necessary packages using `pip install -r requirements.txt`. Alternatively, instructions for using a docker image are provided here. Have a look at this comment and comment on the gist if you encounter any issues.
- The face detection pre-trained model should be downloaded to `face_detection/detection/sfd/s3fd.pth`. Alternative link if the above does not work. A combined setup sketch is shown after this list.
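The following is a minimal setup sketch, purely for orientation; the local path of the downloaded `s3fd.pth` file is a placeholder, and the actual weight files come from the links above.

```bash
# Minimal environment setup sketch (the s3fd.pth source path is a placeholder;
# download the file from the links in this README first).
sudo apt-get install ffmpeg
pip install -r requirements.txt

# Place the S3FD face-detection weights where the code expects them:
mkdir -p face_detection/detection/sfd
cp /path/to/downloaded/s3fd.pth face_detection/detection/sfd/s3fd.pth
```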
## Getting the weights

| Model | Description | Link to the model |
|---|---|---|
| Wav2Lip | Highly accurate lip-sync | Link |
| Wav2Lip + GAN | Slightly inferior lip-sync, but better visual quality | Link |
| Expert Discriminator | Weights of the expert discriminator | Link |
| Visual Quality Discriminator | Weights of the visual disc trained in a GAN setup | Link |
## Lip-syncing videos using the pre-trained models (Inference)
You can lip-sync any video to any audio:
```bash
python inference.py --checkpoint_path <ckpt> --face <video.mp4> --audio <an-audio-source>
```
The result is saved (by default) in `results/result_voice.mp4`. You can specify it as an argument, similar to several other available options. The audio source can be any file supported by `FFMPEG` containing audio data: `*.wav`, `*.mp3`, or even a video file, from which the code will automatically extract the audio.
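If a particular audio source gives trouble, converting it to a plain WAV file with ffmpeg is a simple workaround. This is only a generic conversion sketch with placeholder file names, not something the repo requires:

```bash
# Convert/extract any audio source to a 16 kHz mono WAV file (placeholder names).
ffmpeg -y -i input_video.mp4 -vn -acodec pcm_s16le -ar 16000 -ac 1 speech.wav
```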
- Experiment with the `--pads` argument to adjust the detected face bounding box. Often leads to improved results. You might need to increase the bottom padding to include the chin region, e.g. `--pads 0 20 0 0`.
- If you see the mouth position dislocated or some weird artifacts such as two mouths, it can be because of over-smoothing the face detections. Use the `--nosmooth` argument and give it another try.
- Experiment with the `--resize_factor` argument to get a lower-resolution video. Why? The models are trained on faces that were at a lower resolution. You might get better, visually pleasing results for 720p videos than for 1080p videos (in many cases, the latter works well too).
- The Wav2Lip model without GAN usually needs more experimenting with the above two options to get the most ideal results, and sometimes it can give you a better result as well. A combined example invocation is shown after this list.
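Putting these options together, a typical invocation might look like the sketch below; the checkpoint and media file names are placeholders, and the flag values are only starting points to experiment with.

```bash
# Sketch: lip-sync input.mp4 to speech.wav with extra chin padding, half-resolution
# frames, and detection smoothing disabled (all file names are placeholders).
python inference.py \
    --checkpoint_path checkpoints/wav2lip_gan.pth \
    --face input.mp4 \
    --audio speech.wav \
    --pads 0 20 0 0 \
    --resize_factor 2 \
    --nosmooth \
    --outfile results/my_result.mp4
```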
## Preparing LRS2 for training

Our models are trained on LRS2. See here for a few suggestions regarding training on other datasets.
```
data_root (mvlrs_v1)
├── main, pretrain (we use only main folder in this work)
|    ├── list of folders
|    │    ├── five-digit numbered video IDs ending with (.mp4)
```
Place the LRS2 filelists (train, val, test) `.txt` files in the `filelists/` folder.
```bash
python preprocess.py --data_root data_root/main --preprocessed_root lrs2_preprocessed/
```
Additional options like `batch_size` and the number of GPUs to use in parallel can also be set.
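For example, the sketch below assumes the flag names `--batch_size` and `--ngpu`; check `python preprocess.py --help` for the exact options available.

```bash
# Sketch: preprocess with a larger face-detection batch on 2 GPUs
# (flag names assumed -- verify with `python preprocess.py --help`).
python preprocess.py --data_root data_root/main --preprocessed_root lrs2_preprocessed/ \
    --batch_size 32 --ngpu 2
```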
```
preprocessed_root (lrs2_preprocessed)
├── list of folders
|    ├── Folders with five-digit numbered video IDs
|    │    ├── *.jpg
|    │    ├── audio.wav
```
## Train!

There are two major steps: (i) train the expert lip-sync discriminator, (ii) train the Wav2Lip model(s).
### Training the expert discriminator

You can download the pre-trained weights if you want to skip this step. To train it:
```bash
python color_syncnet_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints>
```
### Training the Wav2Lip models

You can either train the model without the additional visual quality discriminator (< 1 day of training) or use the discriminator (~2 days). For the former, run:
```bash
python wav2lip_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints> --syncnet_checkpoint_path <path_to_expert_disc_checkpoint>
```
To train with the visual quality discriminator, you should run `hq_wav2lip_train.py` instead. The arguments for both files are similar. In both cases, you can resume training as well. Look at `python wav2lip_train.py --help` for more details. You can also set additional less commonly-used hyper-parameters at the bottom of the `hparams.py` file.
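As an illustration, resuming an interrupted run could look like the sketch below; the checkpoint file names are placeholders, and the resume flag should be verified against `python wav2lip_train.py --help`.

```bash
# Sketch: resume Wav2Lip training from a saved checkpoint (file names are
# placeholders; verify the resume flag with `python wav2lip_train.py --help`).
python wav2lip_train.py \
    --data_root lrs2_preprocessed/ \
    --checkpoint_dir checkpoints/wav2lip/ \
    --syncnet_checkpoint_path checkpoints/lipsync_expert.pth \
    --checkpoint_path checkpoints/wav2lip/checkpoint_step000050000.pth
```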
## Training on datasets other than LRS2

Training on other datasets might require modifications to the code. Please read the following before you raise an issue:
- You might not get good results by training/fine-tuning on a few minutes of a single speaker. This is a separate research problem, to which we do not have a solution yet. Thus, we would most likely not be able to resolve your issue.
- You must train the expert discriminator for your own dataset before training Wav2Lip.
- If it is your own dataset downloaded from the web, in most cases it needs to be sync-corrected.
- Be mindful of the FPS of the videos of your dataset. Changes to FPS would need significant code changes.
- The expert discriminator's eval loss should go down to ~0.25 and the Wav2Lip eval sync loss should go down to ~0.2 to get good results.

When raising an issue on this topic, please let us know that you are aware of all these points.

We have an HD model trained on a dataset allowing commercial usage. The size of the generated face will be 192 x 288 in our new model.

## Evaluation

Please check the `evaluation/` folder for instructions on calculating the metrics reported in the paper.
## License and Citation

This repository can only be used for personal/research/non-commercial purposes. However, for commercial requests, please contact us directly at rudrabha@synclabs.so or prajwal@synclabs.so. We have a turn-key hosted API with new and improved lip-syncing models here: https://synclabs.so/. The size of the generated face will be 192 x 288 in our new models. Please cite the following paper if you use this repository:

```
@inproceedings{10.1145/3394171.3413532,
  author = {Prajwal, K R and Mukhopadhyay, Rudrabha and Namboodiri, Vinay P. and Jawahar, C.V.},
  title = {A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild},
  year = {2020},
  isbn = {9781450379885},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3394171.3413532},
  doi = {10.1145/3394171.3413532},
  booktitle = {Proceedings of the 28th ACM International Conference on Multimedia},
  pages = {484–492},
  numpages = {9},
  keywords = {lip sync, talking face generation, video generation},
  location = {Seattle, WA, USA},
  series = {MM '20}
}
```
## Acknowledgements

Parts of the code structure are inspired by this TTS repository. We thank the author for this wonderful code. The code for face detection has been taken from the face_alignment repository. We thank the authors for releasing their code and models. We thank zabique for the tutorial Colab notebook.