Adaptive lossless video compression

Sahng-Gyu Park, Purdue University


In this thesis, a new adaptive lossless compression algorithm for color video sequences is described. Color video sequences contain three types of redundancy: spatial, spectral, and temporal. Our first approach is a new backward-adaptive temporal prediction technique that reduces temporal redundancy in a video sequence. The technique is similar in concept to motion-vector-based prediction, but requires no motion vectors to be sent to the receiver. A second approach exploits both spatial and temporal redundancy. When a sequence contains a great deal of motion, temporal prediction does not perform well with respect to compression efficiency, and only spatial prediction is used; in other cases, temporal prediction may work better than spatial prediction. Adaptively selecting between spatial and temporal prediction improves compression performance. An adaptive integer wavelet transform is also investigated. Using the new backward-adaptive temporal prediction together with the adaptive selection between spatial and temporal prediction, we show that our scheme outperforms state-of-the-art lossless compression algorithms.
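The key property of backward adaptation is that the predictor choice is derived only from already-decoded data, so the decoder can repeat the encoder's decision and no selection bits or motion vectors need to be transmitted. The following is a minimal illustrative sketch of that idea, not the thesis's actual algorithm: frames are modeled as 1-D pixel lists, the spatial predictor is simply the left neighbor, the temporal predictor is the co-located pixel in the previous frame, and the selection rule (running sum of absolute causal errors) is an assumption chosen for simplicity.

```python
# Hedged sketch of backward-adaptive selection between a spatial and a
# temporal predictor. Both encoder and decoder score each predictor on
# causal (already-coded) pixels, so no side information is transmitted.

def predict_frame(prev_frame, cur_frame):
    """Encoder: return lossless prediction residuals for cur_frame."""
    residuals = []
    spatial_err = temporal_err = 0  # running absolute errors on past pixels
    for i, x in enumerate(cur_frame):
        left = cur_frame[i - 1] if i > 0 else 0   # spatial predictor
        temporal = prev_frame[i]                  # temporal predictor
        # Pick whichever predictor has performed better so far.
        pred = left if spatial_err <= temporal_err else temporal
        residuals.append(x - pred)
        # Update causal statistics; the decoder can do exactly the same.
        spatial_err += abs(x - left)
        temporal_err += abs(x - temporal)
    return residuals

def reconstruct_frame(prev_frame, residuals):
    """Decoder: mirror the encoder's adaptive choice to recover the frame."""
    cur = []
    spatial_err = temporal_err = 0
    for i, r in enumerate(residuals):
        left = cur[i - 1] if i > 0 else 0
        temporal = prev_frame[i]
        pred = left if spatial_err <= temporal_err else temporal
        x = pred + r
        cur.append(x)
        spatial_err += abs(x - left)
        temporal_err += abs(x - temporal)
    return cur
```

Because the residuals carry all the information and the selection is reproduced at the decoder, reconstruction is exact, which is the requirement for lossless compression.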




Major Professor: Edward J. Delp, Purdue University.

Subject Area

Engineering, Electronics and Electrical
