Rademacher and Gaussian Complexities: Risk Bounds and Structural Results

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2111)

Abstract

We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision-theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and Gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes. We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines.
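
For context, the quantity the abstract refers to can be stated directly: the empirical Rademacher complexity of a class F on a sample x_1, ..., x_n is

    R_n(F) = E_sigma [ sup_{f in F} (1/n) sum_{i=1}^{n} sigma_i f(x_i) ],

where sigma_1, ..., sigma_n are independent uniform {-1, +1} random variables; the Gaussian complexity replaces the sigma_i with independent standard normal variables. Normalisations vary across the literature (some definitions include a factor of 2 or an absolute value inside the supremum). Risk bounds of the kind the abstract describes state, roughly, that with probability at least 1 - delta the true risk of every f in F exceeds its empirical risk by at most a term proportional to this complexity plus a confidence term of order sqrt(ln(1/delta)/n); exact constants differ between versions of the result. The following is a minimal Monte Carlo sketch of the empirical Rademacher complexity for a finite function class; the helper empirical_rademacher and the threshold-classifier example are illustrative assumptions, not constructions from the paper.

    # Monte Carlo estimate of the empirical Rademacher complexity
    #   R_n(F) = E_sigma [ sup_{f in F} (1/n) sum_i sigma_i f(x_i) ]
    # for a finite class F evaluated on a fixed sample. Illustrative
    # sketch only; the class and data below are assumptions, not taken
    # from the paper.
    import numpy as np

    def empirical_rademacher(outputs, n_draws=10_000, seed=0):
        """outputs: (|F|, n) array; row f holds (f(x_1), ..., f(x_n))."""
        rng = np.random.default_rng(seed)
        n = outputs.shape[1]
        # Independent uniform {-1, +1} signs; substituting
        # rng.standard_normal((n_draws, n)) estimates the Gaussian
        # complexity instead.
        sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))
        # corr[d, f] = (1/n) sum_i sigma[d, i] * f(x_i)
        corr = sigma @ outputs.T / n
        # Supremum over the class for each sign draw, averaged over draws.
        return corr.max(axis=1).mean()

    # Hypothetical example: 40 threshold classifiers f_t(x) = sign(x - t)
    # evaluated on n = 50 standard-normal sample points.
    x = np.random.default_rng(1).normal(size=50)
    thresholds = np.linspace(-2.0, 2.0, 40)
    outputs = np.sign(x[None, :] - thresholds[:, None])
    print(f"estimated Rademacher complexity: {empirical_rademacher(outputs):.3f}")

As a sanity check, enlarging the sample while keeping the same class should drive the estimate down at roughly a 1/sqrt(n) rate, which is the behaviour the risk bounds exploit.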

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bartlett, P.L., Mendelson, S. (2001). Rademacher and Gaussian Complexities: Risk Bounds and Structural Results. In: Helmbold, D., Williamson, B. (eds) Computational Learning Theory. COLT 2001. Lecture Notes in Computer Science (LNAI), vol 2111. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44581-1_15

  • DOI: https://doi.org/10.1007/3-540-44581-1_15

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42343-0

  • Online ISBN: 978-3-540-44581-4
