yoyolicoris/music-spectrogram-diffusion-pytorch


An unofficial PyTorch implementation of the paper Multi-instrument Music Synthesis with Spectrogram Diffusion, adapted from the official codebase. We aim to improve the reproducibility of their work by providing training code and pre-trained models in PyTorch.

Data Preparation

Please download the following datasets.

Create Clean MIDI for URMP

The MIDI files in the URMP dataset mostly do not contain the correct program number. Use the clean_urmp_midi.py script to create a new set of MIDI files whose program numbers match the instruments named in the file names.
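The idea can be sketched as follows. This is not the actual clean_urmp_midi.py script; the abbreviation-to-program mapping below is an assumption based on URMP's file-naming convention (e.g. `01_Jupiter_vn_vc`) and standard 0-indexed General MIDI program numbers, so verify it against the real script before relying on it:

```python
# Hypothetical sketch of the mapping behind clean_urmp_midi.py:
# URMP stems embed instrument abbreviations, which we translate into
# General MIDI program numbers (0-indexed). Mapping is an assumption.
URMP_PROGRAMS = {
    "vn": 40,   # Violin
    "va": 41,   # Viola
    "vc": 42,   # Cello
    "db": 43,   # Contrabass
    "fl": 73,   # Flute
    "ob": 68,   # Oboe
    "cl": 71,   # Clarinet
    "sax": 65,  # Alto Sax
    "bn": 70,   # Bassoon
    "tpt": 56,  # Trumpet
    "hn": 60,   # French Horn
    "tbn": 57,  # Trombone
    "tba": 58,  # Tuba
}

def programs_from_filename(stem: str) -> list:
    """Extract instrument tokens from a URMP-style stem like
    '01_Jupiter_vn_vc' and return the matching GM programs, in order."""
    return [URMP_PROGRAMS[tok] for tok in stem.split("_") if tok in URMP_PROGRAMS]
```

The programs returned here would then be written into the corresponding tracks of the cleaned MIDI files.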

Training

Small Autoregressive

python main.py fit --config cfg/ar_small.yaml

Diffusion, Small without Context

python main.py fit --config cfg/diff_small.yaml

Diffusion, Small with Context

python main.py fit --config cfg/diff_small.yaml --data.init_args.with_context true --model.init_args.with_context true

Diffusion, Base with Context

python main.py fit --config cfg/diff_base.yaml

Remember to change the path arguments under the data section of the YAML files to where you downloaded the datasets, or set them with the --data.init_args.*_path keyword on the command line. You can also set a path to null if you want to omit that dataset. Note that URMP requires one extra path argument: the location of the clean MIDI files you created.
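As an illustration only, the relevant part of a config might look like the fragment below. The key names and paths here are hypothetical (the source only tells us the keys match the `--data.init_args.*_path` pattern); check the actual files under cfg/ for the exact names:

```yaml
# Illustrative fragment -- real key names live in the cfg/*.yaml files.
data:
  init_args:
    # Replace with your local dataset roots, or set to null to omit one.
    urmp_path: /data/URMP
    urmp_midi_path: /data/URMP_clean_midi   # the clean MIDI created above
```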

To adjust other hyperparameters, please refer to the LightningCLI documentation for more information.

Evaluating

The following command computes the reconstruction and FAD metrics using embeddings from the VGGish and TRILL models, reporting averages across the whole test dataset.

python main.py test --config config.yaml --ckpt_path your_checkpoint.ckpt
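For intuition, the Fréchet Audio Distance reported here is the Fréchet distance between two Gaussians fitted to the embedding sets. The sketch below is not the repo's evaluation code, just a minimal NumPy implementation of that formula:

```python
# FAD = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2})
# Minimal sketch; not the repository's evaluation code.
import numpy as np

def _sqrtm_psd(mat):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)  # clamp tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(x, y):
    """x, y: (n_frames, dim) embedding matrices (e.g. VGGish or TRILL)."""
    mu1, mu2 = x.mean(0), y.mean(0)
    s1 = np.cov(x, rowvar=False)
    s2 = np.cov(y, rowvar=False)
    s1_half = _sqrtm_psd(s1)
    covmean = _sqrtm_psd(s1_half @ s2 @ s1_half)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```

Identical embedding sets give a distance of zero; shifting one set moves the mean term while leaving the covariance term unchanged.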

Inferencing

To synthesize audio from MIDI with trained models:

python infer.py input.mid checkpoint.ckpt config.yaml output.wav

Pre-Trained Models

We provide three pre-trained models corresponding to the diffusion baselines in the paper.

We trained them following the settings in the paper, except for the batch size, which we reduced to 8 due to limited computational resources. We evaluated these models with our codebase and summarize the results in the following table:

| Models               | VGGish Recon | VGGish FAD | TRILL Recon | TRILL FAD |
|----------------------|--------------|------------|-------------|-----------|
| Small w/o Context    | 2.48         | 0.49       | 0.84        | 0.08      |
| Small w/ Context     | 2.44         | 0.59       | 0.68        | 0.04      |
| Base w/ Context      | -            | -          | -           | -         |
| Ground Truth Encoded | 1.80         | 0.80       | 0.35        | 0.02      |

TODO
