Pytorch_WCT
Unofficial PyTorch (1.0+) implementation of the NIPS paper Universal Style Transfer via Feature Transforms.

The original Torch implementation from the authors can be found here.

Other implementations such as Pytorch_implementation1, Pytorch_implementation2, or Pytorch_implementation3 are also available.

This repository provides a pre-trained model so you can generate your own image given a content image and a style image.

If you have any questions, please feel free to contact me. (English, Japanese, or Chinese are all fine!)

I also propose a structure-emphasized multimodal style transfer (SEMST); feel free to try it here.
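For background, the core operation of the paper is a whitening and coloring transform (WCT): the covariance of a content feature map is removed (whitening) and replaced with that of a style feature map (coloring). A minimal single-level sketch in PyTorch is shown below; the function name, the `eps` regularizer, and the default `alpha` are my own illustrative choices, not this repo's actual code:

```python
import torch

def wct(content_feat, style_feat, alpha=0.6, eps=1e-5):
    """Whitening-coloring transform on (C, H, W) VGG-style feature maps (sketch)."""
    c, h, w = content_feat.shape
    cf = content_feat.reshape(c, -1)
    sf = style_feat.reshape(c, -1)

    # Center both feature sets
    c_mean = cf.mean(dim=1, keepdim=True)
    s_mean = sf.mean(dim=1, keepdim=True)
    cf = cf - c_mean
    sf = sf - s_mean

    # Whitening: remove the content features' covariance
    c_cov = cf @ cf.t() / (cf.shape[1] - 1) + eps * torch.eye(c)
    u_c, e_c, _ = torch.svd(c_cov)
    whitened = u_c @ torch.diag(e_c.rsqrt()) @ u_c.t() @ cf

    # Coloring: impose the style features' covariance, then re-add the style mean
    s_cov = sf @ sf.t() / (sf.shape[1] - 1) + eps * torch.eye(c)
    u_s, e_s, _ = torch.svd(s_cov)
    colored = u_s @ torch.diag(e_s.sqrt()) @ u_s.t() @ whitened + s_mean

    # Blend stylized and original content features with alpha
    out = alpha * colored + (1 - alpha) * content_feat.reshape(c, -1)
    return out.reshape(c, h, w)
```

In the paper this transform is applied at multiple VGG encoder levels, decoding after each one; the `alpha` blend is what the `--alpha` flag of `test.py` controls.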
- Python 3.7
- PyTorch 1.0+
- TorchVision
- Pillow
An Anaconda environment is recommended!

Optional:
- GPU environment
Clone this repository:

```bash
git clone https://github.com/irasin/Pytorch_WCT
cd Pytorch_WCT
```
Prepare your content image and style image. Some samples are provided in the `content` and `style` directories, so you can try them easily.

Download the pretrained models here and put them under the directory named `model_state`.
Generate the output image. A transferred output image, a content-output pair image, and an NST-demo-like image will be generated.

```bash
python test.py -c content_image_path -s style_image_path
```
```
usage: test.py [-h] [--content CONTENT] [--style STYLE] [--output_name OUTPUT_NAME]
               [--alpha ALPHA] [--gpu GPU] [--model_state_path MODEL_STATE_PATH]
```
If `--output_name` is not given, the output is named after the combination of the content image name and the style image name.
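To stylize many content/style pairs in one go, the CLI can be driven from a small Python wrapper. The sketch below builds the command line from the flags in the usage string above; the `<content>_<style>` output-name convention mirrors the script's described default, but check `test.py` for the exact rule:

```python
import subprocess
from pathlib import Path

def build_cmd(content, style, alpha=0.6):
    """Build the test.py command line for one content/style pair (sketch)."""
    # Explicit output name: <content_stem>_<style_stem> (assumed convention)
    out = f"{Path(content).stem}_{Path(style).stem}"
    return ["python", "test.py",
            "-c", str(content),
            "-s", str(style),
            "--output_name", out,
            "--alpha", str(alpha)]

if __name__ == "__main__":
    # Run every combination of the bundled sample images
    for content in sorted(Path("content").glob("*.jpg")):
        for style in sorted(Path("style").glob("*.jpg")):
            subprocess.run(build_cmd(content, style, alpha=0.8), check=True)
```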
Some results for content images and my cat (called Sora) are shown here.