
Efficient Parallel Computation of the Stochastic MV-PURE Estimator by the Hybrid Steepest Descent Method

  • Conference paper
Artificial Intelligence and Soft Computing (ICAISC 2012)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7268)


Abstract

In this paper we consider the efficient computation of the stochastic MV-PURE estimator, a reduced-rank estimator designed for robust linear estimation in ill-conditioned inverse problems. Our motivation stems from the fact that reduced-rank estimation by the stochastic MV-PURE estimator, while avoiding the regularization-parameter selection problem that arises in a common regularization technique used in inverse problems and machine learning, poses a computational challenge due to the nonconvexity induced by the rank constraint. To address this problem, we propose a recursive scheme for computing the general form of the stochastic MV-PURE estimator that requires no matrix inversion and utilizes the inherently parallel hybrid steepest descent method. We verify the efficiency of the proposed scheme in numerical simulations.
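The hybrid steepest descent method mentioned in the abstract minimizes a smooth convex cost over the fixed-point set of a nonexpansive mapping using only inversion-free gradient-type iterations, which is what makes it attractive for parallel computation. The following is a minimal illustrative sketch of the generic iteration on a toy problem (projecting a point onto the unit ball), not the paper's MV-PURE computation; the function names, the choice of mapping, and the diminishing step rule `mu/n` are all assumptions for illustration.

```python
import numpy as np

# Hybrid steepest descent method (HSDM) sketch: minimize a smooth convex
# cost Theta over Fix(T), the fixed-point set of a nonexpansive mapping T,
# via the iteration
#     x_{n+1} = T(x_n) - lam_n * mu * grad_Theta(T(x_n)),
# with diminishing steps lam_n -> 0 satisfying sum(lam_n) = inf.
# Toy instance: T is the metric projection onto the Euclidean unit ball
# (so Fix(T) is the ball) and Theta(x) = 0.5 * ||x - a||^2.

def project_unit_ball(x):
    """Metric projection onto the unit ball (a nonexpansive mapping)."""
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

def hsdm(grad, T, x0, mu=1.0, n_iter=1000):
    """Run HSDM with step sizes lam_n = 1/n (diminishing, non-summable)."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        y = T(x)                    # drive the iterate toward Fix(T)
        x = y - (mu / n) * grad(y)  # inversion-free gradient correction
    return x

a = np.array([2.0, 0.0])            # unconstrained minimizer, outside the ball
x_star = hsdm(lambda x: x - a, project_unit_ball, np.zeros(2))
# x_star approaches [1, 0], the projection of a onto the unit ball
```

Note that each iteration uses only a mapping evaluation and a gradient step, with no matrix inversion; this is the property the paper exploits when computing the stochastic MV-PURE estimator recursively.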




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg


Cite this paper

Piotrowski, T., Yamada, I. (2012). Efficient Parallel Computation of the Stochastic MV-PURE Estimator by the Hybrid Steepest Descent Method. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2012. Lecture Notes in Computer Science, vol 7268. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29350-4_49


  • DOI: https://doi.org/10.1007/978-3-642-29350-4_49

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-29349-8

  • Online ISBN: 978-3-642-29350-4

