A basic explanation of how to build this graph: On the left side panel you will see all of the nodes you can bring in. First, select the “Video Frame Iterator” and drag it into the workspace. Inside the Video Frame Iterator there will already be two additional nodes: “Load Frame As Image” and “Write Output Frame”. At the top of the Video Frame Iterator is an area where you can select your input video file. You also want to find the orange “Upscale Image” node and drag it inside the Video Frame Iterator.
Next, select the orange “Load Model” node and drag it into the workspace. (It used to be necessary to place the Load Model node outside the iterator, but newer versions of chaiNNer don’t care where you put it.) Finally, hook up the nodes as depicted in the image. On the “Write Output Frame” node you can specify a different directory and filename for your output file, and you can also set the encoding settings here. I recommend outputting an MP4 at the best quality (quality 0) to create a lossless file that you can then process or encode further.
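If you do want to compress that lossless master into a smaller delivery file afterwards, here is a minimal sketch of one way to do it with ffmpeg called from Python. The filenames and the CRF value are just placeholders I made up; it assumes ffmpeg is installed and on your PATH.

```python
import subprocess

# Hypothetical filenames -- substitute the lossless file chaiNNer wrote
# and whatever you want the final encode to be called.
LOSSLESS_IN = "upscaled_lossless.mp4"
FINAL_OUT = "upscaled_final.mp4"

# Re-encode the lossless master to a reasonably sized H.264 file.
# CRF 18 is a common "visually near-lossless" starting point; raise it
# for smaller files, lower it for higher quality.
subprocess.run(
    [
        "ffmpeg",
        "-i", LOSSLESS_IN,
        "-c:v", "libx264",
        "-crf", "18",
        "-preset", "slow",
        "-c:a", "copy",  # pass any audio stream through untouched
        FINAL_OUT,
    ],
    check=True,
)
```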
Selecting a Model
Processing a video may be somewhat slow, depending on your graphics card and the model selected. Before processing an entire video, you should first export a few individual frames from it and test different models on them to find a model (or models) that makes your video look good.
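One quick way to grab a handful of test frames is a short Python script using OpenCV (you can also export single frames from most video players). This is just a sketch; the input filename, frame count, and output folder are placeholders.

```python
import cv2  # pip install opencv-python
from pathlib import Path

VIDEO_PATH = "input.mkv"       # hypothetical source file
OUT_DIR = Path("test_frames")  # where the test PNGs will go
NUM_FRAMES = 5                 # how many evenly spaced frames to export

OUT_DIR.mkdir(exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

for i in range(NUM_FRAMES):
    # Jump to an evenly spaced position in the video and grab that frame.
    cap.set(cv2.CAP_PROP_POS_FRAMES, total * i // NUM_FRAMES)
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(str(OUT_DIR / f"frame_{i:02d}.png"), frame)

cap.release()
```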
It is worth noting that some models are described as “lite” or “compact”. These models run much faster at the expense of being smaller (and therefore not able to “learn” as much). Animation does not usually need a full-sized model, so I highly recommend using lite or compact models when possible. In general, the smaller a model’s file size, the faster it will run. A full-sized ESRGAN model is around 64 MB, while compact models may come in at just a couple of MB.
You will also want to note that each model is trained to scale the image by a specific factor, typically 2x or 4x. To get your video to a specific resolution, you will typically upscale it with a model and then do a standard resize to the exact size you want. 1x models are usually designed to fix a particular problem and are meant to be chained with another model that does the actual upscaling.
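For example, a hypothetical 640x480 source run through a 2x model comes out at 1280x960, so to end up at 1440x1080 you would resize the upscaled frames afterwards. Outside of chaiNNer, that final resize could look something like this sketch with OpenCV (the filenames and target size are placeholders):

```python
import cv2  # pip install opencv-python

# Hypothetical filenames and target size -- a 2x model has already taken
# a 640x480 frame to 1280x960, and we want the final frame at 1440x1080.
upscaled = cv2.imread("frame_upscaled.png")
TARGET_W, TARGET_H = 1440, 1080

# Lanczos is a reasonable default for a modest enlargement like this;
# use cv2.INTER_AREA instead if you are shrinking the image.
final = cv2.resize(upscaled, (TARGET_W, TARGET_H),
                   interpolation=cv2.INTER_LANCZOS4)
cv2.imwrite("frame_final.png", final)
```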
With that out of the way, here are a few models that I recommend you try, in no particular order. There are lots of other good ones, so feel free to try out others as well.
Model Name - Description
(2x) Futsuu_Anime - Upscales while doing some sharpening and line darkening. Can also clean up some minor artifacts of various types. Pretty good general purpose model.
(2x) AnimeClassics UltraLite - Handles rainbows, dot crawl, and MPEG/H.264 compression artifacts, and may also help remove halos and fix blurriness in some cases. Best used on old, grainy anime.
(2x) LD-Anime_Compact - Upscales while fixing numerous video problems, including: noise/grain, compression artifacts, rainbows, dot crawl, halos and color bleed. Can over-smooth some textures though.
(2x) Digitoon Lite - Meant as a versatile model for upscaling high detail digital anime and cartoons. Has debanding, MPEG-2 correction, and halo reduction.
(1x) HurrDeblur SuperUltraCompact - Very fast 1x sharpening/deblurring model.
(1x) AnimeUndeint Compact - Corrects jagged lines on animation that has been deinterlaced.
(1x) BleedOut Compact - Helps repair color bleed and heavy chroma noise that may be present on some older footage.
(1x) Dotzilla Compact - Wipes out dot crawl and rainbows in animation.
Of course, if none of these work well for your source, feel free to browse the model database and try others to see if any give better results.
If you don’t have an NVIDIA GPU
Most models are distributed as .pth (PyTorch) files. These work best with NVIDIA GPUs.
If you have an AMD GPU, you will get best results by converting the model to NCNN format. You can do this as follows:
Drag in the orange Load Model node (orange indicates you are loading a PyTorch model), the orange Convert To NCNN node, and the pink Save Model node, then connect them in that order.