Color difference weighted adaptive residual preprocessing using perceptual modeling for video compression

Mark Q. Shaw, Purdue University


In this work, we investigate a method for selectively modifying a video stream using a color contrast sensitivity model based on the human visual system. The model identifies regions of high variance whose frame-to-frame differences are visually imperceptible to a human observer. The model is based on the CIELAB color appearance model and the CIE ΔE94 color difference formula, taking advantage of the nature of frame-based progressive video coding. A color contrast sensitivity model alone was not sufficient; it was therefore important to also incorporate perceptual saliency and spatial activity information from the scene. To correct for the effects of temporal drift, a drift control algorithm was implemented to minimize the propagation of errors. The method has been implemented in the JM 18.0 H.264/AVC encoder reference software and yields an average 14% improvement in compression without visibly degrading the video quality. Further compression gains (averaging up to 43%) are achievable if the color difference attenuation allowed in the encoding process is changed dynamically. As expected, the amount of compression improvement obtained depends on the type of video content being compressed. The imperceptibility of the changes was confirmed through psychophysical evaluation.
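For reference, the standard CIE ΔE*94 color difference mentioned above can be sketched as follows. This is a generic implementation of the published formula with the common graphic-arts weights (kL = kC = kH = 1); the function name and default parameters are illustrative, and the dissertation's exact parameterization is not specified here.

```python
import math

def delta_e94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """CIE DeltaE*94 difference between two CIELAB colors (L*, a*, b*).

    Uses the graphic-arts weighting factors by default. The first color
    is treated as the reference when computing the chroma weights.
    """
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)          # chroma of the reference color
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    # dH^2 is derived from da, db, dC; clamp tiny negative rounding error.
    dH2 = max(da * da + db * db - dC * dC, 0.0)
    sL = 1.0                          # lightness weighting function
    sC = 1.0 + 0.045 * C1             # chroma weighting function
    sH = 1.0 + 0.015 * C1             # hue weighting function
    return math.sqrt((dL / (kL * sL)) ** 2
                     + (dC / (kC * sC)) ** 2
                     + dH2 / (kH * sH) ** 2)
```

A residual-preprocessing scheme of the kind described would compare each candidate pixel (or block) change against a ΔE94 threshold and suppress residuals that fall below it, on the assumption that such differences are imperceptible.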




Advisor: Delp, Purdue University.

Subject Area

Electrical engineering
