Summary
The basic purpose of data compression is to process a data stream so as to reduce the average bit rate required for transmission or storage, by removing unwanted redundancy and/or unnecessary precision. A mathematical formulation of data compression, providing figures of merit and bounds on optimal performance, was developed by Shannon [1,2], both for the case where a perfect compressed reproduction is required and for the case where a specified average distortion is allowable. Unfortunately, Shannon's probabilistic approach requires precise advance knowledge of the statistical description of the process to be compressed, a demand rarely met in practice. Moreover, the coding theorems apply, or are even meaningful, only when the source is stationary and ergodic.
We here present a tutorial description of numerous recent approaches and results generalizing the Shannon approach to unknown statistical environments. Simple examples and empirical results are given to illustrate the essential ideas.
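To make Shannon's noiseless figure of merit concrete: for a memoryless source, the entropy of the symbol distribution lower-bounds the average number of bits per symbol achievable by any uniquely decodable code. The following short Python sketch (not from the chapter; function and sample names are illustrative) estimates that bound from a stream's empirical symbol frequencies:

```python
from collections import Counter
from math import log2

def empirical_entropy(data):
    """First-order empirical entropy in bits per symbol:
    Shannon's lower bound on the average codeword length of any
    uniquely decodable code for a memoryless source with these
    symbol frequencies."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A redundant stream is compressible well below 8 bits/byte:
print(empirical_entropy(b"aaaaabbbc"))  # about 1.35 bits/symbol
```

Note that this bound is computed from the stream itself rather than from a known source distribution, which is exactly the spirit of the universal coding approaches surveyed here: the statistics must be learned from the data.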
References
SHANNON, C.E., "The Mathematical Theory of Communication", Urbana, Illinois: University of Illinois Press, 1949.
SHANNON, C.E., "Coding Theorems for a Discrete Source with a Fidelity Criterion", in IRE Nat. Conv. Rec., pt. 4, pp. 142–163, 1959.
ROZANOV, YU., “Stationary Random Processes”, Holden-Day, San Francisco, 1967.
GRAY, R.M., and DAVISSON, L.D., "Source Coding Theorems without the Ergodic Assumption", IEEE Trans. Inform. Theory, Vol. IT-20, July 1974.
GRAY, R.M., and DAVISSON, L.D., "The Ergodic Decomposition of Discrete Stationary Sources", IEEE Trans. Inform. Theory, Vol. IT-20, September 1974.
DAVISSON, L.D., “Universal Noiseless Coding”, IEEE Trans. Inform. Theory, Vol. IT-19, pp. 783–795, November 1973.
GRAY, R.M., NEUHOFF, D., and SHIELDS, P., "A Generalization of Ornstein's d̄ Distance with Applications to Information Theory", Annals of Probability (to be published).
NEUHOFF, D., GRAY, R.M., and DAVISSON, L.D., "Fixed Rate Universal Source Coding with a Fidelity Criterion", submitted to IEEE Trans. Inform. Theory.
PURSLEY, M.B., “Coding Theorems for Non-Ergodic Sources and Sources with Unknown Parameters”, USC Technical Report, February 1974.
ZIV, J., "Coding of Sources with Unknown Statistics, Part I: Probability of Encoding Error; Part II: Distortion Relative to a Fidelity Criterion", IEEE Trans. Inform. Theory, Vol. IT-18, No. 3, pp. 384–394, May 1972.
BLAHUT, R.E., "Computation of Channel Capacity and Rate-Distortion Functions", IEEE Trans. Inform. Theory, Vol. IT-18, No. 4, pp. 460–473, July 1972.
GALLAGER, R.G., “Information Theory and Reliable Communication”, New York, Wiley, 1968, ch. 9.
BERGER, T., "Rate Distortion Theory: A Mathematical Basis for Data Compression", Englewood Cliffs, New Jersey: Prentice-Hall, 1971.
NEUHOFF, D., Ph.D. Research, Stanford University, 1973.
GRAY, R.M., and DAVISSON, L.D. “A Mathematical Theory of Data Compression (?)”, USCEE Report, September 1974.
Copyright information
© 1975 Springer-Verlag Wien
Cite this chapter
Davisson, L.D. (1975). Universal Source Coding. In: Advances in Source Coding. International Centre for Mechanical Sciences, vol 166. Springer, Vienna. https://doi.org/10.1007/978-3-7091-2928-9_2
Print ISBN: 978-3-211-81302-7
Online ISBN: 978-3-7091-2928-9