
Pattern Recognition and Density Estimation under the General i.i.d. Assumption

  • Conference paper
  • Computational Learning Theory (COLT 2001)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2111)

Abstract

Statistical learning theory considers three main problems: pattern recognition, regression, and density estimation. This paper studies the solvability of these problems (concentrating mainly on pattern recognition and density estimation) in the “high-dimensional” case, where the patterns in the training and test sets are never repeated. We show that, assuming an i.i.d. data source but making no further assumptions, the problems of pattern recognition and regression can often be solved, and there are practically useful algorithms for solving them. On the other hand, the problem of density estimation, as we formalize it, cannot be solved under the general i.i.d. assumption alone; additional assumptions are required.
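
One family of "practically useful algorithms" of the kind the abstract refers to is conformal (transductive confidence) prediction, whose validity rests only on the i.i.d. assumption (more precisely, on exchangeability). Below is a minimal Python sketch of such a predictor; the 1-nearest-neighbour nonconformity measure and all function names are illustrative assumptions on our part, not necessarily the construction used in the paper.

    # A minimal sketch of a conformal-style predictor (illustrative, not the
    # paper's exact algorithm). The p-values computed below are valid
    # whenever the data are exchangeable, e.g. i.i.d.
    import numpy as np

    def nn_nonconformity(X, y, i):
        # 1-NN nonconformity: distance to the nearest example with the same
        # label, divided by the distance to the nearest differently labelled
        # example (a common illustrative choice of measure).
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf  # exclude the example itself
        return d[y == y[i]].min() / d[y != y[i]].min()

    def conformal_p_values(X_train, y_train, x_new, labels):
        # For each candidate label, extend the training set with
        # (x_new, label) and compute the fraction of examples that are at
        # least as nonconforming as the new one; under exchangeability this
        # fraction is a valid p-value for that label.
        p = {}
        for label in labels:
            X = np.vstack([X_train, x_new])
            y = np.append(y_train, label)
            scores = np.array([nn_nonconformity(X, y, i)
                               for i in range(len(y))])
            p[label] = float(np.mean(scores >= scores[-1]))
        return p

    # Example: predict the label with the largest p-value; the confidence of
    # the prediction is one minus the second-largest p-value.
    X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
    y_train = np.array([0, 0, 1, 1])
    print(conformal_p_values(X_train, y_train, np.array([0.95, 1.0]), [0, 1]))

Note that nothing in this sketch estimates a density: the p-values are valid with no assumptions beyond i.i.d. data, which is exactly the contrast the abstract draws between pattern recognition and density estimation.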

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Nouretdinov, I., Vovk, V., Vyugin, M., Gammerman, A. (2001). Pattern Recognition and Density Estimation under the General i.i.d. Assumption. In: Helmbold, D., Williamson, B. (eds) Computational Learning Theory. COLT 2001. Lecture Notes in Computer Science (LNAI), vol 2111. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44581-1_22

  • DOI: https://doi.org/10.1007/3-540-44581-1_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42343-0

  • Online ISBN: 978-3-540-44581-4

