Fast computation of low-rank matrix approximations

Published: 01 April 2007

Abstract

Given a matrix A, it is often desirable to find a good low-rank approximation to A. We introduce a simple technique for accelerating the computation of such approximations when A has strong spectral features, that is, when the singular values of interest are significantly greater than those of a random matrix of comparable size and entry magnitude. Our technique amounts to independently sampling and/or quantizing the entries of A, thus speeding up computation by reducing the number of nonzero entries and/or the length of their representation. Our analysis rests on the observation that sampling and quantization can be viewed as adding to A a random matrix N whose entries are independent random variables with zero mean and bounded variance. Since, with high probability, N has very weak spectral features, we can prove that the effect of sampling and quantization nearly vanishes when a low-rank approximation to A + N is computed. We give high-probability bounds on the quality of our approximation in both the Frobenius norm and the 2-norm.
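To make the idea concrete, the sketch below (Python with NumPy) shows one plausible instantiation of the sampling step: each entry of A is kept independently with probability p and rescaled by 1/p, so that the perturbation N = Â − A has independent, zero-mean entries with bounded variance, as described in the abstract. The function names, the specific sampling rule, and the parameter p are illustrative assumptions rather than details taken from the paper; in practice the speedup would come from passing the sparsified matrix to an SVD routine that exploits sparsity (e.g., a Lanczos-type solver) instead of the dense SVD used here for brevity.

    import numpy as np

    def sparsify(A, p, rng=None):
        # Keep each entry of A independently with probability p and rescale by 1/p.
        # The error matrix N = sparsify(A, p) - A then has independent, zero-mean
        # entries with variance a_ij^2 * (1 - p) / p, i.e. bounded variance.
        rng = np.random.default_rng() if rng is None else rng
        mask = rng.random(A.shape) < p
        return np.where(mask, A / p, 0.0)

    def best_rank_k(A, k):
        # Best rank-k approximation of A via the truncated SVD.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k]

    # Usage: for a matrix with strong spectral features, compute the rank-k
    # approximation of the sparsified (hence much cheaper) matrix A + N.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 30)) @ rng.standard_normal((30, 400))
    A_k = best_rank_k(sparsify(A, p=0.1, rng=rng), k=10)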

