Abstract

Quantization can effectively reduce a huge amount of data with possibly small error (called quantization error). In designing a quantizer using a portion of the data as training data, the training algorithm tries to find a codebook that minimizes the quantization error measured on the training data. It is known that, under several conditions, the minimized quantization error approaches the optimal error for the underlying distribution of the training data as the training data size increases. In this report, an upper bound on the minimized quantization error from the training data is derived as a function of the ratio of the training data size to the codebook size. This bound enables us to observe the convergence behavior of trained quantizers as the training data size increases.
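
For concreteness, below is a minimal sketch of the codebook-training setup the abstract describes, assuming Lloyd's algorithm (the k-means iteration) as the training algorithm and mean squared error as the distortion measure. The function names, parameters, and synthetic data are illustrative assumptions and are not taken from the report.

```python
# Sketch: train a codebook on a finite training set and measure the
# resulting (training) quantization error. Assumes Lloyd's algorithm
# and squared-error distortion; names and data are illustrative.
import numpy as np

def train_codebook(train_data, codebook_size, n_iters=50, seed=0):
    """Find a codebook that locally minimizes the mean squared
    quantization error measured on the training data."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen training vectors.
    idx = rng.choice(len(train_data), codebook_size, replace=False)
    codebook = train_data[idx]
    for _ in range(n_iters):
        # Nearest-neighbor step: assign each training vector to its
        # closest codeword under squared Euclidean distance.
        dists = ((train_data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Centroid step: move each codeword to the mean of its cell.
        for k in range(codebook_size):
            members = train_data[labels == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return codebook

def quantization_error(data, codebook):
    """Mean squared error of quantizing `data` with `codebook`."""
    dists = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.min(axis=1).mean()

# With few training vectors per codeword, the training error can be
# driven far below the optimal error for the underlying distribution;
# as the ratio of training data size to codebook size grows, the
# minimized training error approaches that optimal error.
rng = np.random.default_rng(1)
for n_train in (64, 256, 1024, 4096):
    train = rng.standard_normal((n_train, 2))
    cb = train_codebook(train, codebook_size=16)
    print(n_train, quantization_error(train, cb))
```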

Date of this Version

April 1998
