Abstract
In this paper, we elaborate on the well-known relationship between Gaussian processes (GPs) and support vector machines (SVMs) under convexity assumptions on the loss function. We concentrate on deriving the evidence and an error-bar approximation for regression problems, and we obtain an error-bar formula based on the ε-insensitive loss function.
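The error-bar derivation rests on the ε-insensitive loss used in SVM regression, which is zero inside a tube of half-width ε around the prediction and grows linearly outside it. A minimal sketch of that loss (the function name and the default ε are illustrative, not from the paper):

```python
import numpy as np

def eps_insensitive_loss(y, f, eps=0.1):
    """epsilon-insensitive loss: zero inside the eps-tube around the
    prediction f, linear in the residual magnitude outside it."""
    return np.maximum(0.0, np.abs(np.asarray(y) - np.asarray(f)) - eps)

# Residuals smaller than eps incur no loss; larger ones are penalized linearly.
print(eps_insensitive_loss(1.0, 1.05))        # inside the tube -> 0.0
print(eps_insensitive_loss(2.0, 1.0))         # outside: |2-1| - 0.1 = 0.9
```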
Gao, J., Gunn, S., Harris, C. et al. A Probabilistic Framework for SVM Regression and Error Bar Estimation. Machine Learning 46, 71–89 (2002). https://doi.org/10.1023/A:1012494009640