CFNet: A Synthesis for Video Colorization

Ziyang Tang, Purdue University

Abstract

Image-to-image translation has attracted huge interest across many topics in deep learning in recent years. It provides a mapping function that encodes noisy input images into a high-dimensional signal and translates it into the desired output images. The mapping can be one-to-one, many-to-one, or one-to-many. Because of the uncertainty in these mapping functions, flickering problems emerge when such methods are extended to video: even a slight change between frames may cause an obvious change in the output images. In this thesis, we propose CFNet, a two-stream solution to the flickering problem in video colorization. Compared with the frame-by-frame methods of previous work, CFNet greatly alleviates flickering in video colorization, especially for video clips with large objects and a still background. Against a frame-by-frame baseline, CFNet improves the PSNR from 27 to 30, a substantial gain.
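As a rough reference for the PSNR figures above, the metric is computed from the mean squared error between a predicted frame and its ground-truth frame and then averaged over the clip. The sketch below assumes 8-bit frames with a peak value of 255 and uses NumPy; the function and variable names are illustrative and not taken from the thesis code.

    import numpy as np

    def psnr(pred: np.ndarray, target: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio between two 8-bit frames (higher is better)."""
        mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical frames
        return 10.0 * np.log10(peak ** 2 / mse)

    # Hypothetical usage: average PSNR over a colorized clip
    # (pred_frames and gt_frames are lists of HxWx3 uint8 arrays)
    # clip_psnr = np.mean([psnr(p, t) for p, t in zip(pred_frames, gt_frames)])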

Degree

M.Sc.

Advisors

Gusev, Purdue University.

Subject Area

Computer science
