1 min voice data can also be used to train a good TTS model! (few shot voice cloning)
- Zero-shot TTS: Input a 5-second vocal sample and experience instant text-to-speech conversion.
- Few-shot TTS: Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.
- Cross-lingual Support: Inference in languages different from the training dataset, currently supporting English, Japanese, Korean, Cantonese and Chinese.
- WebUI Tools: Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.
Check out our demo video here!
Unseen speakers few-shot fine-tuning demo:
few.shot.fine.tuning.demo.mp4
For users in China, you can click here to use AutoDL Cloud Docker to experience the full functionality online.
- Python 3.9, PyTorch 2.0.1, CUDA 11
- Python 3.10.13, PyTorch 2.1.2, CUDA 12.3
- Python 3.9, PyTorch 2.2.2, macOS 14.4.1 (Apple silicon)
- Python 3.9, PyTorch 2.2.2, CPU devices
Note: numba==0.56.4 requires Python < 3.11
If you are a Windows user (tested with win>=10), you can download the integrated package and double-click on go-webui.bat to start GPT-SoVITS-WebUI.
Users in China can download the package here.
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh
Note: The models trained with GPUs on Macs result in significantly lower quality compared to those trained on other devices, so we are temporarily using CPUs instead.
- Install Xcode command-line tools by running xcode-select --install.
- Install FFmpeg by running brew install ffmpeg.
- Install the program by running the following commands:

conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
pip install -r requirements.txt
conda install ffmpeg
sudo apt install ffmpeg
sudo apt install libsox-dev
conda install -c conda-forge 'ffmpeg<7'
Download and place ffmpeg.exe and ffprobe.exe in the GPT-SoVITS root.
Install Visual Studio 2017 (Korean TTS Only)
brew install ffmpeg
pip install -r requirements.txt
- Regarding image tags: Due to rapid updates in the codebase and the slow process of packaging and testing images, please check Docker Hub for the latest packaged images and select one that suits your situation, or alternatively, build locally from the Dockerfile according to your own needs.
- Environment Variables:
- is_half: Controls whether half precision (fp16) or full precision is used. This is typically the cause if the content under the directories 4-cnhubert/5-wav32k is not generated correctly during the "SSL extracting" step. Set it to True or False according to your actual situation.
- Volumes Configuration: The application's root directory inside the container is set to /workspace. The default docker-compose.yaml lists some practical examples for uploading/downloading content.
- shm_size: The default available memory for Docker Desktop on Windows is too small, which can cause abnormal operations. Adjust according to your own situation.
- Under the deploy section, GPU-related settings should be adjusted cautiously according to your system and actual circumstances.
docker compose -f "docker-compose.yaml" up -d
As above, modify the corresponding parameters based on your actual situation, then run the following command:
docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
Users in China can download all these models here.
1. Download pretrained models from GPT-SoVITS Models and place them in GPT_SoVITS/pretrained_models.

2. Download G2PW models from G2PWModel_1.1.zip, unzip and rename to G2PWModel, and then place them in GPT_SoVITS/text. (Chinese TTS Only)

3. For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from UVR5 Weights and place them in tools/uvr5/uvr5_weights.

If you want to use bs_roformer or mel_band_roformer models for UVR5, you can manually download the model and its corresponding configuration file and put them in tools/uvr5/uvr5_weights. Rename the model file and configuration file so that they share the same name except for the suffix. In addition, the model and configuration file names must include roformer in order to be recognized as models of the roformer class.

It is suggested to specify the model type directly in the model and configuration file names, such as mel_band_roformer or bs_roformer. If not specified, the features in the configuration file are compared to determine the model type. For example, the model bs_roformer_ep_368_sdr_12.9628.ckpt and its corresponding configuration file bs_roformer_ep_368_sdr_12.9628.yaml are a pair, and kim_mel_band_roformer.ckpt and kim_mel_band_roformer.yaml are also a pair.
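For instance, with the pairs named above in place, the contents of tools/uvr5/uvr5_weights would look roughly like this (listing shown for illustration only):

```
tools/uvr5/uvr5_weights/
├── bs_roformer_ep_368_sdr_12.9628.ckpt
├── bs_roformer_ep_368_sdr_12.9628.yaml
├── kim_mel_band_roformer.ckpt
└── kim_mel_band_roformer.yaml
```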
4. For Chinese ASR (additionally), download models from Damo ASR Model, Damo VAD Model, and Damo Punc Model, and place them in tools/asr/models.

5. For English or Japanese ASR (additionally), download models from Faster Whisper Large V3 and place them in tools/asr/models. Other models may achieve a similar effect with a smaller disk footprint.
The TTS annotation .list file format:
vocal_path|speaker_name|language|text
Language dictionary:
- 'zh': Chinese
- 'ja': Japanese
- 'en': English
- 'ko': Korean
- 'yue': Cantonese
Example:
D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
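An annotation file is simply one such line per audio clip. As a small illustrative sketch (the paths, speaker names, and transcripts below are made up), a file mixing the language codes listed above could look like:

```
D:\GPT-SoVITS\dataset\alice_0001.wav|alice|en|The weather is lovely today.
D:\GPT-SoVITS\dataset\alice_0002.wav|alice|zh|今天天气真好。
D:\GPT-SoVITS\dataset\bob_0001.wav|bob|ja|今日はいい天気ですね。
```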
Double-click go-webui.bat or use go-webui.ps1.
If you want to switch to V1, then double-click go-webui-v1.bat or use go-webui-v1.ps1.
python webui.py <language(optional)>

If you want to switch to V1, then

python webui.py v1 <language(optional)>

Or manually switch the version in the WebUI.
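As a concrete sketch, assuming the optional language argument accepts locale codes such as en_US (check webui.py for the exact values it supports), the two launch modes might be invoked as:

```bash
# Launch the default WebUI with an assumed en_US interface locale
python webui.py en_US

# Launch the V1 WebUI instead
python webui.py v1 en_US
```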
1. Fill in the audio path
2. Slice the audio into small chunks
3. Denoise (optional)
4. ASR
5. Proofread the ASR transcriptions
6. Go to the next tab, then fine-tune the model
Double-click go-webui-v2.bat or use go-webui-v2.ps1, then open the inference WebUI at 1-GPT-SoVITS-TTS/1C-inference.
python GPT_SoVITS/inference_webui.py <language(optional)>
OR
python webui.py
then open the inference WebUI at 1-GPT-SoVITS-TTS/1C-inference
New Features:
- Support Korean and Cantonese
- An optimized text frontend
- Pre-trained model extended from 2k hours to 5k hours
- Improved synthesis quality for low-quality reference audio
Use v2 from v1 environment:
1. Run pip install -r requirements.txt to update some packages.
2. Clone the latest code from GitHub.
3. Download v2 pretrained models from huggingface and put them into GPT_SoVITS\pretrained_models\gsv-v2final-pretrained.
4. Chinese v2 additional: download G2PW models from G2PWModel_1.1.zip, unzip and rename to G2PWModel, and then place them in GPT_SoVITS/text.
New Features:
- The timbre similarity is higher, requiring less training data to approximate the target speaker (the timbre similarity is significantly improved using the base model directly without fine-tuning).
- GPT model is more stable, with fewer repetitions and omissions, and it is easier to generate speech with richer emotional expression.
Use v3 from v2 environment:
1. Run pip install -r requirements.txt to update some packages.
2. Clone the latest code from GitHub.
3. Download v3 pretrained models (s1v3.ckpt, s2Gv3.pth, and the models--nvidia--bigvgan_v2_24khz_100band_256x folder) from huggingface and put them into GPT_SoVITS\pretrained_models.

Additional: for the Audio Super Resolution model, you can read how to download.
High Priority:
- Localization in Japanese and English.
- User guide.
- Japanese and English dataset fine tune training.
Features:
- Zero-shot voice conversion (5s) / few-shot voice conversion (1min).
- TTS speaking speed control.
- Enhanced TTS emotion control. Maybe use pretrained finetuned preset GPT models for better emotion.
- Experiment with changing SoVITS token inputs to probability distribution of GPT vocabs (transformer latent).
- Improve English and Japanese text frontend.
- Develop tiny and larger-sized TTS models.
- Colab scripts.
- Try expanding the training dataset (2k hours -> 10k hours).
- Better SoVITS base model (enhanced audio quality).
- Model mix.
Use the command line to open the WebUI for UVR5
python tools/uvr5/webui.py "<infer_device>" <is_half> <webui_port_uvr5>
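For example, the call below uses illustrative values only; pick the device string, half-precision flag, and port that match your setup (9873 is one of the ports already exposed in the Docker example above):

```bash
# Run the UVR5 WebUI on the first CUDA GPU with half precision, serving on port 9873
python tools/uvr5/webui.py "cuda:0" True 9873
```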
This is how the audio segmentation of the dataset is done using the command line
python audio_slicer.py \
    --input_path "<path_to_original_audio_file_or_directory>" \
    --output_root "<directory_where_subdivided_audio_clips_will_be_saved>" \
    --threshold <volume_threshold> \
    --min_length <minimum_duration_of_each_subclip> \
    --min_interval <shortest_time_gap_between_adjacent_subclips> \
    --hop_size <step_size_for_computing_volume_curve>
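A filled-in run might look like the sketch below; the paths are placeholders of my own and the numeric values are only common starting points, not required settings:

```bash
# Illustrative slicing run: -34 dB volume threshold, clips of at least 4000 ms,
# at least 300 ms of silence between adjacent clips, 10 ms hop for the volume curve
python audio_slicer.py \
    --input_path "output/raw_audio" \
    --output_root "output/slicer_opt" \
    --threshold -34 \
    --min_length 4000 \
    --min_interval 300 \
    --hop_size 10
```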
This is how dataset ASR processing is done using the command line (Chinese only)
python tools/asr/funasr_asr.py -i <input> -o <output>
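For example, with hypothetical paths where the sliced clips live in output/slicer_opt and the transcriptions should be written under output/asr_opt:

```bash
# Paths are illustrative: -i is the sliced-audio directory, -o the output directory
python tools/asr/funasr_asr.py -i output/slicer_opt -o output/asr_opt
```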
ASR processing is performed through Faster_Whisper (ASR annotation for languages other than Chinese).
(No progress bar; GPU performance may cause delays.)
python ./tools/asr/fasterwhisper_asr.py -i <input> -o <output> -l <language> -p <precision>
A custom .list save path can be specified.
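For instance, an English run could look like the sketch below (paths are illustrative, and float16 is assumed to be among the accepted precision values; fall back to float32 on GPUs without fp16 support):

```bash
# Illustrative: transcribe English clips with float16 precision
python ./tools/asr/fasterwhisper_asr.py -i output/slicer_opt -o output/asr_opt -l en -p float16
```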
Special thanks to the following projects and contributors:
Thanks to @Naozumi520 for providing the Cantonese training set and for guidance on Cantonese-related knowledge.