Deep learning software for colorizing black and white images with a few clicks.

junyanz/interactive-deep-colorization

Project Page | Paper | Demo Video | SIGGRAPH Talk

04/10/2020 Update: @mabdelhack provided a Windows installation guide for the PyTorch model in Python 3.6. Check out the Windows branch for the guide.

10/3/2019 Update: Our technology is also now available in Adobe Photoshop Elements 2020. See this blog and video for more details.

9/3/2018 Update: The code now supports a backend PyTorch model (with PyTorch 0.5.0+). Please find the Local Hints Network training code in the colorization-pytorch repository.

Real-Time User-Guided Image Colorization with Learned Deep Priors.
Richard Zhang*, Jun-Yan Zhu*, Phillip Isola, Xinyang Geng, Angela S. Lin, Tianhe Yu, and Alexei A. Efros.
In ACM Transactions on Graphics (SIGGRAPH 2017).
(*indicates equal contribution)

We first describe the system's (0) Prerequisites and the steps for (1) Getting started. We then describe the interactive colorization demo, (2) Interactive Colorization (Local Hints Network). There are two demos: (a) a "barebones" version in an iPython notebook and (b) the full GUI we used in our paper. We then provide an example of the (3) Global Hints Network.

(0) Prerequisites

  • Linux or OSX
  • Caffe or PyTorch
  • CPU or NVIDIA GPU + CUDA CuDNN.
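Since the demo runs on either CPU or GPU, a quick sanity check for the PyTorch backend is to ask whether a CUDA device is visible. This is a minimal sketch, assuming PyTorch is installed (it degrades gracefully if not); the Caffe backend would be checked differently:

```python
# Report whether a CUDA GPU is visible to the PyTorch backend.
# If PyTorch is not installed, fall back to None rather than crashing.
try:
    import torch
    has_gpu = torch.cuda.is_available()
except ImportError:
    has_gpu = None  # PyTorch backend not available
print("CUDA available:", has_gpu)
```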

(1) Getting Started

  • Clone this repo:

    ```bash
    git clone https://github.com/junyanz/interactive-deep-colorization ideepcolor
    cd ideepcolor
    ```

  • Download the reference model:

    ```bash
    bash ./models/fetch_models.sh
    ```

(2) Interactive Colorization (Local Hints Network)

We provide a "barebones" demo in iPython notebook, which does not require QT. We also provide our full GUI demo.

(2a) Barebones Interactive Colorization Demo

If you need to convert the notebook to an older version, use `jupyter nbconvert --to notebook --nbformat 3 ./DemoInteractiveColorization.ipynb`.

(2b) Full Demo GUI

  • Install Qt5 and QDarkStyle. (See Installation)

  • Run the UI: `python ideepcolor.py --gpu [GPU_ID] --backend [CAFFE OR PYTORCH]`. Arguments are described below:

```
--win_size    [512] GUI window size
--gpu         [0] GPU number
--image_file  ['./test_imgs/mortar_pestle.jpg'] path to the image file
--backend     ['caffe'] either use 'caffe' or 'pytorch'; 'caffe' is the official model from SIGGRAPH 2017, and 'pytorch' is the same weights converted
```
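The flag handling above can be sketched with `argparse`. This is a minimal sketch mirroring the documented flags and defaults, not the repository's actual parser:

```python
# Minimal sketch of the demo's command-line interface.
# Flag names and defaults mirror the table above; everything else is illustrative.
import argparse

parser = argparse.ArgumentParser(description='ideepcolor demo UI (sketch)')
parser.add_argument('--win_size', type=int, default=512, help='GUI window size')
parser.add_argument('--gpu', type=int, default=0, help='GPU number')
parser.add_argument('--image_file', default='./test_imgs/mortar_pestle.jpg',
                    help='path to the image file')
parser.add_argument('--backend', default='caffe', choices=['caffe', 'pytorch'],
                    help="model backend: official 'caffe' weights or the 'pytorch' conversion")

# Example invocation (parsed from a list here instead of sys.argv):
args = parser.parse_args(['--backend', 'pytorch', '--gpu', '0'])
print(args.backend, args.win_size)  # → pytorch 512
```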
  • User interactions

  • Adding points: Left-click somewhere on the input pad
  • Moving points: Left-click and hold on a point on the input pad, drag to desired location, and let go
  • Changing colors: For currently selected point, choose a recommended color (middle-left) or choose a color on the ab color gamut (top-left)
  • Removing points: Right-click on a point on the input pad
  • Changing patch size: Mouse wheel changes the patch size from 1x1 to 9x9
  • Load image: Click the load image button and choose desired image
  • Restart: Click on the restart button. All points on the pad will be removed.
  • Save result: Click on the save button. This will save the resulting colorization in the directory where the image_file was, along with the user input ab values.
  • Quit: Click on the quit button.
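The interactions above amount to editing a list of hint points, each pairing a patch location with a chosen ab value and a patch size. A hypothetical sketch of that state (class and field names are assumptions for illustration, not the repository's actual data structures):

```python
# Hypothetical model of the input pad's state: each user hint is a patch
# position, an ab chroma value, and a patch side length from 1 (1x1) to 9 (9x9).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Hint:
    xy: Tuple[int, int]       # patch position on the input pad
    ab: Tuple[float, float]   # chosen ab value
    size: int = 3             # patch side length, clamped to 1..9

pad: List[Hint] = []
pad.append(Hint(xy=(120, 80), ab=(35.0, -20.0)))   # left-click: add a point
pad[0].xy = (130, 85)                              # drag: move the point
pad[0].size = min(9, max(1, pad[0].size + 2))      # mouse wheel: grow the patch
del pad[0]                                         # right-click: remove the point
print("points on pad:", len(pad))                  # → points on pad: 0
```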

(3) Global Hints Network

We include an example usage of our Global Hints Network, applied to global histogram transfer. We show its usage in an iPython notebook.
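Global histogram transfer starts from a global distribution of chroma values. The function below is a hedged sketch of that input, assuming a simple uniform binning of ab values (the bin size and normalization are illustrative choices, not the notebook's exact procedure):

```python
# Sketch: build a normalized global ab histogram from an image's chroma values,
# the kind of global color statistic a Global Hints Network could condition on.
from collections import Counter

def global_ab_histogram(ab_pixels, bin_size=10):
    """ab_pixels: iterable of (a, b) floats, roughly in [-110, 110].

    Returns a dict mapping quantized (a_bin, b_bin) cells to their
    normalized frequency (values sum to 1).
    """
    counts = Counter((int(a // bin_size), int(b // bin_size)) for a, b in ab_pixels)
    total = sum(counts.values())
    return {cell: n / total for cell, n in counts.items()}

# Three pixels: two similar bluish values fall in one bin, one warm value in another.
hist = global_ab_histogram([(12.0, -30.0), (14.0, -28.0), (80.0, 60.0)])
print(len(hist))  # → 2
```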

Installation

  • Install Caffe or PyTorch. The Caffe model is official. PyTorch is a reimplementation.

    • Install Caffe: see the Caffe installation and Ubuntu installation documents. Please compile Caffe with python layer support (set WITH_PYTHON_LAYER=1 in the Makefile.config) and build the Caffe python library with make pycaffe.

    You also need to add pycaffe to your PYTHONPATH. Use vi ~/.bashrc to edit the environment variables:

    ```bash
    PYTHONPATH=/path/to/caffe/python:$PYTHONPATH
    LD_LIBRARY_PATH=/path/to/caffe/build/lib:$LD_LIBRARY_PATH
    ```
  • Install the scikit-image, scikit-learn, opencv, Qt5, and QDarkStyle packages:

```bash
# ./install/install_deps.sh
sudo pip install scikit-image
sudo pip install scikit-learn
sudo apt-get install python-opencv
sudo apt-get install qt5-default
sudo pip install qdarkstyle
```

For Conda users, type the following command lines (this may work for full Anaconda but not Miniconda):

```bash
# ./install/install_conda.sh
conda install -c anaconda protobuf              ## protobuf
conda install -c anaconda scikit-learn=0.19.1   ## scikit-learn
conda install -c anaconda scikit-image=0.13.0   ## scikit-image
conda install -c menpo opencv=2.4.11            ## opencv
conda install -c anaconda qt                    ## qt5
conda install -c auto qdarkstyle                ## qdarkstyle
```

For Docker users, please follow the Docker document.

Training

Please find a PyTorch reimplementation of the Local Hints Network training code in the colorization-pytorch repository.

Citation

If you use this code for your research, please cite our paper:

```
@article{zhang2017real,
  title={Real-Time User-Guided Image Colorization with Learned Deep Priors},
  author={Zhang, Richard and Zhu, Jun-Yan and Isola, Phillip and Geng, Xinyang and Lin, Angela S and Yu, Tianhe and Efros, Alexei A},
  journal={ACM Transactions on Graphics (TOG)},
  volume={9},
  number={4},
  year={2017},
  publisher={ACM}
}
```

Cat Paper Collection

One of the authors objects to the inclusion of this list, due to an allergy. Another author objects on the basis that cats are silly creatures and this is a serious, scientific paper. However, if you love cats, and love reading cool graphics, vision, and learning papers, please check out the Cat Paper Collection: [GitHub] [Webpage]
