Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network


tensorlayer/SRGAN


SRGAN Architecture

Prepare Data and Pre-trained VGG

    1. Download the pretrained VGG19 model weights here.
    2. You need high-resolution images for training.
    • In this experiment, I used images from the DIV2K - bicubic downscaling x4 competition, so the hyper-parameters in config.py (such as the number of epochs) were selected based on that dataset; if you switch to a larger dataset you can reduce the number of epochs.
    • If you don't want to use the DIV2K dataset, you can also use Yahoo MirFlickr25k; simply download it with train_hr_imgs = tl.files.load_flickr25k_dataset(tag=None) in main.py.
    • If you want to use your own images, set the path to your image folder via config.TRAIN.hr_img_path in config.py, as sketched below.
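A minimal sketch of how these data choices might look in code; the from config import config line is an assumption about how config.py exposes its settings, and the tl alias is assumed to be the TensorLayer/TensorLayerX import used in main.py:

# Minimal sketch (assumptions noted inline); run from inside the srgan/ folder.
from config import config          # assumption: config.py exposes an attribute-style `config` object

# Option A: point training at your own high-resolution images.
config.TRAIN.hr_img_path = "your_image_folder/"

# Option B: use Yahoo MirFlickr25k instead of DIV2K, as quoted above
# (assumes `tl` is the TensorLayer/TensorLayerX alias imported in main.py):
# train_hr_imgs = tl.files.load_flickr25k_dataset(tag=None)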

Run

🔥🔥🔥🔥🔥🔥 You need to install TensorLayerX first!

🔥🔥🔥🔥🔥🔥 Please install TensorLayerX from source:

pip install git+https://github.com/tensorlayer/tensorlayerx.git
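A quick way to confirm the install worked (a sketch; the __version__ attribute is an assumption, which is why it is read defensively):

# Sanity check that TensorLayerX is importable after the pip install above.
import tensorlayerx as tlx
print(getattr(tlx, "__version__", "installed (version attribute not found)"))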

Train

Set your image folder in config.py:

config.TRAIN.img_path = "your_image_folder/"

Your directory structure should look like this:

srgan/
├── config.py
├── srgan.py
├── train.py
├── vgg.py
├── model
│   └── vgg19.npy
└── DIV2K
    ├── DIV2K_train_HR
    ├── DIV2K_train_LR_bicubic
    ├── DIV2K_valid_HR
    └── DIV2K_valid_LR_bicubic
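Before starting training, you can sanity-check that this layout is in place (a minimal sketch; the paths simply mirror the tree above):

# Pre-flight check: verify the data layout shown above exists (run from srgan/).
import os

expected = [
    "model/vgg19.npy",
    "DIV2K/DIV2K_train_HR",
    "DIV2K/DIV2K_train_LR_bicubic",
    "DIV2K/DIV2K_valid_HR",
    "DIV2K/DIV2K_valid_LR_bicubic",
]
missing = [p for p in expected if not os.path.exists(p)]
if missing:
    print("Missing paths:", missing)
else:
    print("Data layout looks good.")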
  • Start training.
python train.py

🔥 Modify one line of code in train.py to easily switch to any framework!

import os
os.environ['TL_BACKEND'] = 'tensorflow'
# os.environ['TL_BACKEND'] = 'mindspore'
# os.environ['TL_BACKEND'] = 'paddle'
# os.environ['TL_BACKEND'] = 'pytorch'
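A standalone sketch of the same idea; it assumes the TL_BACKEND environment variable is read when tensorlayerx is first imported (which is why train.py sets it at the very top), and the BACKEND attribute is read defensively because its exact name is an assumption:

# Pick the backend, then import TensorLayerX.
import os
os.environ['TL_BACKEND'] = 'paddle'   # or 'tensorflow' / 'mindspore' / 'pytorch'

import tensorlayerx as tlx            # assumption: the backend is resolved at import time
print(getattr(tlx, 'BACKEND', 'active backend attribute not found'))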

🚧 We will support PyTorch as a backend soon.

Evaluation

🔥 We have trained SRGAN on the DIV2K dataset. 🔥 Download the model weights as follows.

Backend         SRGAN_g               SRGAN_d
TensorFlow      Baidu, Google Drive   Baidu, Google Drive
PaddlePaddle    Baidu, Google Drive   Baidu, Google Drive
MindSpore       🚧 Coming soon!       🚧 Coming soon!
PyTorch         🚧 Coming soon!       🚧 Coming soon!

Download the weights files and put them under the folder srgan/models/.

Your directory structure should look like this:

srgan/
├── config.py
├── srgan.py
├── train.py
├── vgg.py
├── model
│   └── vgg19.npy
├── DIV2K
│   ├── DIV2K_train_HR
│   ├── DIV2K_train_LR_bicubic
│   ├── DIV2K_valid_HR
│   └── DIV2K_valid_LR_bicubic
└── models
    ├── g.npz  # You should rename the weights file.
    └── d.npz  # If you set os.environ['TL_BACKEND'] = 'tensorflow', rename srgan-g-tensorflow.npz to g.npz.
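A small sketch of that renaming step; srgan-g-tensorflow.npz comes from the comment above, while the discriminator filename is an assumed analogue that should be replaced with the actual downloaded name:

# Rename the downloaded weight files to the names the scripts expect (run from srgan/).
import os

renames = {
    "srgan-g-tensorflow.npz": "g.npz",
    "srgan-d-tensorflow.npz": "d.npz",  # assumption: adjust to the real downloaded filename
}
for src, dst in renames.items():
    src_path = os.path.join("models", src)
    if os.path.exists(src_path):
        os.rename(src_path, os.path.join("models", dst))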
  • Start evaluation.
python train.py --mode=eval

Results will be saved under the folder srgan/samples/.

Results

Reference

Citation

If you find this project useful, we would be grateful if you cited the TensorLayer papers:

@article{tensorlayer2017,
  author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
  journal = {ACM Multimedia},
  title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
  url     = {http://tensorlayer.org},
  year    = {2017}
}

@inproceedings{tensorlayer2021,
  title        = {TensorLayer 3.0: A Deep Learning Library Compatible With Multiple Backends},
  author       = {Lai, Cheng and Han, Jiarong and Dong, Hao},
  booktitle    = {2021 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
  pages        = {1--3},
  year         = {2021},
  organization = {IEEE}
}

Other Projects

Discussion

License

