Abstract
In this paper we consider the efficient computation of the stochastic MV-PURE estimator, a reduced-rank estimator designed for robust linear estimation in ill-conditioned inverse problems. Our motivation stems from the fact that reduced-rank estimation by the stochastic MV-PURE estimator, while avoiding the regularization-parameter selection required by the regularization techniques commonly used in inverse problems and machine learning, poses a computational challenge due to the nonconvexity induced by the rank constraint. To address this problem, we propose a recursive scheme for computing the general form of the stochastic MV-PURE estimator that requires no matrix inversion and employs the inherently parallel hybrid steepest descent method. We verify the efficiency of the proposed scheme in numerical simulations.
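The paper's own recursion for the stochastic MV-PURE estimator is not reproduced on this page. As a minimal illustrative sketch of the hybrid steepest descent (HSD) iteration the scheme builds on, the NumPy snippet below runs u_{n+1} = T(u_n) − λ_n ∇f(T(u_n)), which minimizes a smooth strongly convex f over the fixed point set of a nonexpansive mapping T, using only matrix-vector products and no matrix inversion. Here T is a composition of two projections, so its fixed point set is the intersection of the two constraint sets; all problem data (A, b, the box, the halfspace) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hedged sketch of the hybrid steepest descent (HSD) iteration
#   u_{n+1} = T(u_n) - lam_n * grad_f(T(u_n)),
# which minimizes a strongly convex f over Fix(T) for a nonexpansive T.
# Toy problem (an assumption, not the paper's estimator): minimize
# f(x) = 0.5 * ||A x - b||^2 over a box intersected with a halfspace.

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))   # full column rank w.h.p., so f is strongly convex
b = rng.standard_normal(8)

def grad_f(x):
    # gradient of f(x) = 0.5 * ||A x - b||^2; only matrix-vector products
    return A.T @ (A @ x - b)

def proj_box(x, lo=-1.0, hi=1.0):
    # projection onto the box [lo, hi]^n (nonexpansive)
    return np.clip(x, lo, hi)

def proj_halfspace(x, a, c):
    # projection onto the halfspace {x : a^T x <= c} (nonexpansive)
    viol = a @ x - c
    return x if viol <= 0 else x - (viol / (a @ a)) * a

a = np.ones(4)
c = 1.0
# Composition of projections: nonexpansive, and Fix(T) equals the
# (nonempty) intersection of the box and the halfspace.
T = lambda x: proj_halfspace(proj_box(x), a, c)

mu = 1.0 / np.linalg.norm(A, 2) ** 2   # damping tied to the gradient's Lipschitz constant
x = np.zeros(4)
for n in range(1, 5001):
    lam = mu / n                       # lam_n -> 0 and sum of lam_n diverges, as HSD requires
    y = T(x)
    x = y - lam * grad_f(y)

print("solution:", x)
print("feasible:", np.all(np.abs(x) <= 1 + 1e-6) and a @ x <= c + 1e-6)
```

Because every step uses only projections and gradient evaluations, the two projections (and, in larger problems, the per-coordinate gradient work) can be computed in parallel, which is the property the abstract refers to as "inherently parallel".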