Waifu2x

From ArchWiki

This article covers installing, using and training waifu2x, image super-resolution for anime-style art using deep convolutional neural networks.

Installation

To use waifu2x directly, install the waifu2x-gitAUR package. There are other alternatives; search for waifu2x in the AUR.

Tip: If you have an NVIDIA GPU, you can install cuda to significantly speed up the conversion process.

Usage

waifu2x is available as the waifu2x command. For detailed options, run waifu2x --help

Upscaling

Use the --scale_ratio parameter to specify the desired scale ratio, -i for the input file name, and -o for the output file name:

waifu2x --scale_ratio 2 -i my_waifu.png -o 2x_my_waifu.png

Noise Reduction

Use the --noise_level parameter (1 or 2) to specify the noise reduction level:

waifu2x --noise_level 1 -i my_waifu.png -o lucid_my_waifu.png

You can also use --jobs to specify the number of threads launched at the same time, which benefits multi-core CPUs:

waifu2x --jobs 4 --noise_level 1 -i my_waifu.png -o lucid_my_waifu.png
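To use every available core without hard-coding a thread count, the value can be taken from nproc (a coreutils command); the file names here are only examples:

```shell
# Launch one waifu2x worker thread per available CPU core.
waifu2x --jobs "$(nproc)" --noise_level 1 -i my_waifu.png -o lucid_my_waifu.png
```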

Upscaling & Noise Reduction

--scale_ratio and --noise_level can be combined:

waifu2x --scale_ratio 2 --noise_level 1 -i my_waifu.png -o 2x_lucid_my_waifu.png
Tip: If you are looking for a batch operation interface, have a look at this waifu2x wrapper script
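For a simple batch run, a plain shell loop is also enough. A minimal sketch, assuming PNG inputs in the current directory and writing results to an upscaled/ subdirectory:

```shell
# Upscale and denoise every PNG in the current directory,
# keeping the original file names under upscaled/.
mkdir -p upscaled
for f in *.png; do
    waifu2x --scale_ratio 2 --noise_level 1 -i "$f" -o "upscaled/$f"
done
```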

Training

This section is being considered for removal.

Reason: This should be a PKGBUILD; most of it is also just copy-pasted from the README on GitHub (Discuss in Talk:Waifu2x)

To train custom models, an NVIDIA graphics card is required because waifu2x uses CUDA for computation. You also need to prepare the development dependencies below and the waifu2x source.

Dependencies

Install the development dependencies.

It is recommended to also install the optional cuDNN library and bindings package. With them you can enable the cuDNN backend for training, which gives a significant speed-up.

You need to manually download a cuDNN binary package from the NVIDIA cuDNN site when installing cudnn.

waifu2x source

Fetch waifu2x source code from GitHub:

git clone --depth 1 https://github.com/nagadomi/waifu2x.git

Enter the source directory. Now you can test the waifu2x command-line tool:

cd waifu2x
th waifu2x.lua
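If the th command is not found, the Torch runtime is missing from your PATH. A quick sanity check (nothing waifu2x-specific assumed):

```shell
# Verify that the Torch interpreter used by waifu2x.lua is available.
command -v th >/dev/null 2>&1 || echo "torch interpreter (th) not found in PATH" >&2
```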

Command line tools

Note: If you have installed the cuDNN library, you can use cuDNN with the -force_cudnn 1 option. cuDNN is much faster than the default kernel.

Noise Reduction

th waifu2x.lua -m noise -noise_level 0 -i input_image.png -o output_image.png
th waifu2x.lua -m noise -noise_level 1 -i input_image.png -o output_image.png
th waifu2x.lua -m noise -noise_level 2 -i input_image.png -o output_image.png
th waifu2x.lua -m noise -noise_level 3 -i input_image.png -o output_image.png

2x Upscaling

th waifu2x.lua -m scale -i input_image.png -o output_image.png

Noise Reduction + 2x Upscaling

th waifu2x.lua -m noise_scale -noise_level 0 -i input_image.png -o output_image.png
th waifu2x.lua -m noise_scale -noise_level 1 -i input_image.png -o output_image.png
th waifu2x.lua -m noise_scale -noise_level 2 -i input_image.png -o output_image.png
th waifu2x.lua -m noise_scale -noise_level 3 -i input_image.png -o output_image.png
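Since the best noise level depends on the source image, it can be useful to run every level once and compare the outputs side by side. A sketch, assuming the waifu2x source directory as the working directory and a hypothetical input_image.png:

```shell
# Run noise reduction + 2x upscaling at every noise level,
# writing one output file per level for comparison.
for level in 0 1 2 3; do
    th waifu2x.lua -m noise_scale -noise_level "$level" \
        -i input_image.png -o "output_noise${level}.png"
done
```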

For more, see waifu2x#command-line-tools.

Train your own models

Note: If you have installed the cuDNN library, you can use the cuDNN kernel with the -backend cudnn option. You can also convert a trained cudnn model to a cunn model with tools/rebuild.lua.
Note: The commands used to train waifu2x's pretrained models are available at appendix/train_upconv_7_art.sh and appendix/train_upconv_7_photo.sh. They may be helpful.

Data Preparation

Generate a file list:

find /path/to/image/dir -name "*.png" > data/image_list.txt
Note: You should use noise free images.

Convert the training data:

th convert_data.lua

Train a Noise Reduction (level 1) model

mkdir models/my_model
th train.lua -model_dir models/my_model -method noise -noise_level 1 -test images/miku_noisy.png
# usage
th waifu2x.lua -model_dir models/my_model -m noise -noise_level 1 -i images/miku_noisy.png -o output.png

You can check the performance of the model with models/my_model/noise1_best.png.

Train a Noise Reduction (level 2) model

th train.lua -model_dir models/my_model -method noise -noise_level 2 -test images/miku_noisy.png
# usage
th waifu2x.lua -model_dir models/my_model -m noise -noise_level 2 -i images/miku_noisy.png -o output.png

You can check the performance of the model with models/my_model/noise2_best.png.

Train a 2x Upscaling model

th train.lua -model upconv_7 -model_dir models/my_model -method scale -scale 2 -test images/miku_small.png
# usage
th waifu2x.lua -model_dir models/my_model -m scale -scale 2 -i images/miku_small.png -o output.png

You can check the performance of the model with models/my_model/scale2.0x_best.png.

Train a 2x upscaling and noise reduction fusion model

th train.lua -model upconv_7 -model_dir models/my_model -method noise_scale -scale 2 -noise_level 1 -test images/miku_small.png
# usage
th waifu2x.lua -model_dir models/my_model -m noise_scale -scale 2 -noise_level 1 -i images/miku_small.png -o output.png

You can check the performance of the model with models/my_model/noise1_scale2.0x_best.png.

For the latest information, see waifu2x#train-your-own-model.

Docker

See waifu2x#docker.

See also