FSRCNN-TF

Tensorflow 2.x based implementation of FSRCNN for single-image super-resolution.


Implementation of the FSRCNN model from the paper Accelerating the Super-Resolution Convolutional Neural Network with TensorFlow 2.x.

Pytorch version: https://github.com/Nhat-Thanh/FSRCNN-Pytorch

I used Adam with tuned hyperparameters instead of SGD with momentum.

I implemented 3 models: FSRCNN-x2, FSRCNN-x3, and FSRCNN-x4.
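For reference, the FSRCNN layout described in the paper (feature extraction, shrinking, m mapping layers, expanding, and a final deconvolution) can be written in TensorFlow 2.x roughly as below. This is a minimal sketch assuming the paper's FSRCNN(56, 12, 4) configuration, a single-channel input, and a 1e-3 learning rate; the layer names and hyperparameters in this repository may differ.

import tensorflow as tf

def build_fsrcnn(scale, d=56, s=12, m=4):
    # Feature extraction -> shrinking -> m mapping layers -> expanding -> deconvolution.
    inputs = tf.keras.Input(shape=(None, None, 1))             # single-channel input (assumption)
    x = tf.keras.layers.Conv2D(d, 5, padding="same")(inputs)   # feature extraction
    x = tf.keras.layers.PReLU(shared_axes=[1, 2])(x)
    x = tf.keras.layers.Conv2D(s, 1, padding="same")(x)        # shrinking
    x = tf.keras.layers.PReLU(shared_axes=[1, 2])(x)
    for _ in range(m):                                          # non-linear mapping
        x = tf.keras.layers.Conv2D(s, 3, padding="same")(x)
        x = tf.keras.layers.PReLU(shared_axes=[1, 2])(x)
    x = tf.keras.layers.Conv2D(d, 1, padding="same")(x)        # expanding
    x = tf.keras.layers.PReLU(shared_axes=[1, 2])(x)
    outputs = tf.keras.layers.Conv2DTranspose(                 # deconvolution upscales by `scale`
        1, 9, strides=scale, padding="same")(x)
    return tf.keras.Model(inputs, outputs)

model = build_fsrcnn(scale=2)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # Adam instead of SGD + momentum
              loss="mse")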

Contents

• Train
• Test
• Demo
• Evaluate
• References

Train

Run this command to begin training:

python train.py --steps=200000             \
                --scale=2                  \
                --batch_size=128           \
                --save-best-only=0         \
                --save-every=1000          \
                --save-log=0               \
                --ckpt-dir="checkpoint/x2"
  • --save-best-only: if it is equal to 0, model weights will be saved every save-every steps (a callback illustration follows the note below).
  • --save-log: if it is equal to 1, the train loss, train metric, validation loss, and validation metric will be saved every save-every steps.

NOTE: if you want to re-train a new model, you should delete all files in the sub-directories of the checkpoint directory. Your checkpoint is saved when the above command finishes and can be reused in later runs, so you can train a model on Google Colab without worrying about the GPU time limit.
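As an illustration of the two saving modes above, here is how they could map onto Keras checkpoint callbacks. This is not the repository's actual training loop, and the monitored metric name is an assumption.

import tensorflow as tf

# --save-best-only=0: save weights every `save-every` training steps (batches).
ckpt_every = tf.keras.callbacks.ModelCheckpoint(
    "checkpoint/x2/FSRCNN-x2.h5", save_weights_only=True, save_freq=1000)

# --save-best-only=1: keep only the weights with the best validation result.
# "val_loss" is an assumed monitor; the repo may track validation PSNR instead.
ckpt_best = tf.keras.callbacks.ModelCheckpoint(
    "checkpoint/x2/FSRCNN-x2.h5", save_weights_only=True,
    save_best_only=True, monitor="val_loss", mode="min")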

I trained the 3 models on Google Colab for 200000 steps.

You can get the models here:

Test

I use Set5 as the test set. After training, you can test the models with scale factors x2, x3, and x4; the result is the average PSNR over all images (a sketch of this computation is shown below).

python test.py --scale=2 --ckpt-path="default"
  • --ckpt-path="default" means you are using the default model path, i.e. checkpoint/x{scale}/FSRCNN-x{scale}.h5. If you want to use your own trained model, pass its path to --ckpt-path.
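A minimal sketch of the reported metric, the mean PSNR over the test images, using tf.image.psnr. The directory layout and the 0-255 pixel range are assumptions, not taken from test.py.

import glob
import tensorflow as tf

def average_psnr(sr_dir, hr_dir):
    # Pairs super-resolved and ground-truth PNGs by sorted file name (assumed layout).
    psnrs = []
    for sr_path, hr_path in zip(sorted(glob.glob(sr_dir + "/*.png")),
                                sorted(glob.glob(hr_dir + "/*.png"))):
        sr = tf.image.decode_png(tf.io.read_file(sr_path))
        hr = tf.image.decode_png(tf.io.read_file(hr_path))
        psnrs.append(tf.image.psnr(sr, hr, max_val=255))  # 8-bit images assumed
    return float(tf.reduce_mean(tf.stack(psnrs)))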

Demo

After training, you can run the demo with this command; the result is saved as sr.png.

python demo.py --image-path="dataset/test2.png" \
               --ckpt-path="default"            \
               --scale=2
  • --ckpt-path is the same as in Test.
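Conceptually, the demo step loads a trained model, super-resolves one low-resolution image, and writes sr.png. The sketch below assumes an RGB input scaled to [0, 1] and a full Keras model stored in the .h5 file; demo.py may preprocess differently (for example, working on the Y channel only).

import tensorflow as tf

model = tf.keras.models.load_model("checkpoint/x2/FSRCNN-x2.h5", compile=False)  # assumes a full saved model
lr = tf.image.decode_png(tf.io.read_file("dataset/test2.png"))
lr = tf.expand_dims(tf.cast(lr, tf.float32) / 255.0, axis=0)       # NHWC batch of one
sr = model(lr, training=False)[0]                                  # super-resolved output
sr = tf.cast(tf.clip_by_value(sr, 0.0, 1.0) * 255.0, tf.uint8)
tf.io.write_file("sr.png", tf.image.encode_png(sr))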

Evaluate

I evaluated the models on the Set5, Set14, BSD100, and Urban100 datasets by PSNR. I use Set5's Butterfly image to show my results:

Model      | Set5    | Set14   | BSD100  | Urban100
FSRCNN-x2  | 37.7191 | 34.0454 | 33.9893 | 31.3276
FSRCNN-x3  | 34.6114 | 31.2628 | 31.3051 | X
FSRCNN-x4  | 31.8877 | 29.2617 | 29.5976 | 26.9266


Bicubic x2-x3-x4 (top), FSRCNN x2-x3-x4 (bottom).
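For a comparison like the figure above, the bicubic baseline (top row) can be produced with plain TensorFlow resizing; the file names and the x2 factor here are illustrative.

import tensorflow as tf

lr = tf.cast(tf.image.decode_png(tf.io.read_file("dataset/test2.png")), tf.float32)
h, w = lr.shape[0], lr.shape[1]
bicubic = tf.image.resize(lr, [h * 2, w * 2], method="bicubic")    # x2 bicubic upscale
bicubic = tf.cast(tf.clip_by_value(bicubic, 0.0, 255.0), tf.uint8)
tf.io.write_file("bicubic_x2.png", tf.image.encode_png(bicubic))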

References

• Chao Dong, Chen Change Loy, Xiaoou Tang. "Accelerating the Super-Resolution Convolutional Neural Network", ECCV 2016.
