
New Routes from Minimal Approximation Error to Principal Components

Neural Processing Letters

Abstract

We introduce two new methods of deriving classical principal component analysis (PCA) within the framework of minimizing the mean square error of a lower-dimensional approximation of the data. These methods are based on two forms of the mean square error function. A novelty of the presented methods is that the commonly employed subtraction of the mean of the data becomes part of the solution of the optimization problem rather than a pre-analysis heuristic. We also derive the optimal basis and the minimum approximation error in this framework, and demonstrate the elegance of our solution in comparison with a recent solution in the same framework.
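The paper's own derivations are not reproduced on this page, but the optimization the abstract describes can be illustrated numerically. The NumPy sketch below (the variable names and the random test data are ours, not the paper's) fits the best k-dimensional affine approximation by minimizing the mean square reconstruction error: the optimal offset comes out as the sample mean, the optimal basis as the leading principal directions, and the minimum error as the sum of the discarded eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data with correlated features and a nonzero mean.
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5)) + rng.normal(size=5)

k = 2  # target dimension of the approximation

# The offset minimizing the mean square error is the sample mean, so
# centering emerges from the optimization, not as a pre-processing step.
mu = X.mean(axis=0)
Xc = X - mu

# The optimal k-dimensional basis spans the top-k eigenvectors of the
# sample covariance, i.e. the leading right singular vectors of Xc.
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:k].T                      # orthonormal basis, shape (5, k)

X_hat = mu + (Xc @ W) @ W.T       # project onto the subspace and reconstruct
mse = np.mean(np.sum((X - X_hat) ** 2, axis=1))

# The minimum error equals the sum of the discarded covariance
# eigenvalues (squared singular values divided by the sample count).
print(mse, (s[k:] ** 2).sum() / X.shape[0])
```

The two printed numbers agree, confirming that the residual mean square error of the optimal approximation is exactly the energy in the discarded principal directions.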



Author information

Correspondence to Abhilash Alexander Miranda.


About this article

Cite this article

Miranda, A.A., Le Borgne, Y.A. & Bontempi, G. New Routes from Minimal Approximation Error to Principal Components. Neural Process Lett 27, 197–207 (2008). https://doi.org/10.1007/s11063-007-9069-2
