Iterative computation of negative curvature directions in large scale optimization

Computational Optimization and Applications

Abstract

In this paper we deal with the iterative computation of negative curvature directions of an objective function within large scale optimization frameworks. In particular, suitable directions of negative curvature of the objective function represent an essential tool to guarantee convergence to second order critical points. However, an “adequate” negative curvature direction is often required to closely resemble an eigenvector corresponding to the smallest eigenvalue of the Hessian matrix, so that its computation may be a very difficult task on large scale problems. Several strategies proposed in the literature compute such a direction by relying on matrix factorizations, and hence may be inefficient or even impracticable in a large scale setting. The iterative methods proposed so far, on the other hand, either need to store a large matrix or need to rerun the recurrence.
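As a minimal numerical illustration (not taken from the paper), the sketch below checks the defining property of a negative curvature direction, namely d^T H d < 0 for the Hessian H, and shows that the eigenvector of the smallest (negative) eigenvalue attains the most negative Rayleigh quotient. The matrix H used here is an arbitrary toy example.

```python
# Toy illustration (assumed example, not from the paper): a direction d has
# negative curvature for the Hessian H if d' H d < 0; the eigenvector of the
# smallest eigenvalue minimizes the Rayleigh quotient d' H d / (d' d).
import numpy as np

H = np.diag([2.0, 1.0, -3.0])            # indefinite toy Hessian, smallest eigenvalue -3

eigvals, eigvecs = np.linalg.eigh(H)     # eigh returns eigenvalues in ascending order
d_best = eigvecs[:, 0]                   # eigenvector of the smallest eigenvalue

rayleigh = lambda d: (d @ H @ d) / (d @ d)
print(rayleigh(np.array([1.0, 0.0, 1.0])))   # -0.5: a weak negative curvature direction
print(rayleigh(d_best))                      # -3.0: attains the smallest eigenvalue
```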

Following this guideline, in this paper we propose the use of an iterative method based on a planar Conjugate Gradient scheme. Under mild assumptions, we provide the theory for using the latter method to compute adequate negative curvature directions within optimization frameworks. In our proposal, any matrix storage is avoided, along with any additional rerun.
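For concreteness, the following is a hedged sketch of the simpler matrix-free Lanczos baseline alluded to above: it accesses the Hessian only through Hessian-vector products and extracts an approximate negative curvature direction from the projected tridiagonal matrix. Unlike the planar Conjugate Gradient proposal of the paper, this baseline stores the Lanczos basis Q to map the small Ritz vector back to R^n, which is exactly the memory cost the paper aims to avoid; the function and parameter names are illustrative.

```python
# Hedged sketch (illustrative, NOT the planar CG method of the paper): a basic
# Lanczos iteration that touches the Hessian only via matrix-vector products
# and extracts an approximate negative curvature direction from the projected
# tridiagonal matrix.
import numpy as np

def lanczos_negative_curvature(hess_vec, n, k=20, tol=1e-10, seed=0):
    """Return d with d' H d < 0 if negative curvature is detected, else None."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q, alphas, betas = [q], [], []
    beta, q_prev = 0.0, np.zeros(n)
    for _ in range(k):
        w = hess_vec(q) - beta * q_prev           # three-term Lanczos recurrence
        alpha = q @ w
        w -= alpha * q
        alphas.append(alpha)
        beta = np.linalg.norm(w)
        if beta < tol:                            # invariant subspace found
            break
        betas.append(beta)
        q_prev, q = q, w / beta
        Q.append(q)
    m = len(alphas)
    T = np.diag(alphas) + np.diag(betas[:m - 1], 1) + np.diag(betas[:m - 1], -1)
    evals, evecs = np.linalg.eigh(T)              # Ritz values/vectors on the Krylov subspace
    if evals[0] >= 0.0:
        return None                               # no negative curvature detected
    d = np.column_stack(Q[:m]) @ evecs[:, 0]      # map the Ritz vector back to R^n
    return d / np.linalg.norm(d)

# Usage on a small indefinite quadratic, where hess_vec(v) = H v:
H = np.diag([2.0, 1.0, -3.0, 0.5])
d = lanczos_negative_curvature(lambda v: H @ v, n=4, k=4)
print(d @ H @ d)    # close to -3: a strong negative curvature direction
```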

References

  1. Bank, R., Chan, T.: A composite step bi-conjugate gradient algorithm for nonsymmetric linear systems. Numer. Algorithms 7, 1–16 (1994)

  2. Boman, E., Murray, W.: An iterative approach to computing a direction of negative curvature. Presented at the Copper Mountain Conference, March 1998. Available at: www-sccm.stanford.edu/students/boman/papers.shtml

  3. Conn, A.R., Gould, N.I.M., Toint, P.L.: Trust-Region Methods. MPS–SIAM Series on Optimization. SIAM, Philadelphia (2000)

  4. Cullum, J., Willoughby, R.: Lanczos Algorithms for Large Symmetric Eigenvalue Computations. Birkhäuser, Boston (1985)

  5. Dixon, L., Ducksbury, P., Singh, P.: A new three-term conjugate gradient method. Technical report 130, Numerical Optimization Centre, Hatfield Polytechnic, Hatfield, Hertfordshire, UK (1985)

  6. Facchinei, F., Lucidi, S.: Convergence to second order stationary points in inequality constrained optimization. Math. Oper. Res. 23, 746–766 (1998)

  7. Fasano, G.: Use of conjugate directions inside Newton-type algorithms for large scale unconstrained optimization. PhD thesis, Università di Roma “La Sapienza”, Roma, Italy (2001)

  8. Fasano, G.: Lanczos conjugate-gradient method and pseudoinverse computation on indefinite and singular systems. J. Optim. Theory Appl. DOI 10.1007/s10957-006-9119-3

  9. Fasano, G.: Planar-conjugate gradient algorithm for large-scale unconstrained optimization, part 1: theory. J. Optim. Theory Appl. 125, 523–541 (2005)

  10. Fasano, G.: Planar-conjugate gradient algorithm for large-scale unconstrained optimization, part 2: application. J. Optim. Theory Appl. 125, 543–558 (2005)

  11. Fasano, G., Roma, M.: Iterative computation of negative curvature directions in large scale optimization: theory and preliminary numerical results. Technical report 12-05, Dipartimento di Informatica e Sistemistica “A. Ruberti”, Roma, Italy (2005)

  12. Ferris, M., Lucidi, S., Roma, M.: Nonmonotone curvilinear linesearch methods for unconstrained optimization. Comput. Optim. Appl. 6, 117–136 (1996)

  13. Golub, G., Van Loan, C.: Matrix Computations, 3rd edn. The Johns Hopkins University Press, Baltimore (1996)

  14. Gould, N.I.M., Lucidi, S., Roma, M., Toint, P.L.: Solving the trust-region subproblem using the Lanczos method. SIAM J. Optim. 9, 504–525 (1999)

  15. Gould, N.I.M., Lucidi, S., Roma, M., Toint, P.L.: Exploiting negative curvature directions in linesearch methods for unconstrained optimization. Optim. Methods Softw. 14, 75–98 (2000)

  16. Gould, N.I.M., Orban, D., Toint, P.L.: CUTEr (and SifDec), a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29, 373–394 (2003)

  17. Hestenes, M.: Conjugate Direction Methods in Optimization. Springer, New York (1980)

  18. Liu, Y., Storey, C.: Efficient generalized conjugate gradient algorithm, part 1. J. Optim. Theory Appl. 69, 129–137 (1991)

  19. Lucidi, S., Rochetich, F., Roma, M.: Curvilinear stabilization techniques for truncated Newton methods in large scale unconstrained optimization. SIAM J. Optim. 8, 916–939 (1998)

  20. Lucidi, S., Roma, M.: Numerical experiences with new truncated Newton methods in large scale unconstrained optimization. Comput. Optim. Appl. 7, 71–87 (1997)

  21. McCormick, G.: A modification of Armijo’s step-size rule for negative curvature. Math. Program. 13, 111–115 (1977)

  22. Miele, A., Cantrell, J.: Study on a memory gradient method for the minimization of functions. J. Optim. Theory Appl. 3, 459–470 (1969)

  23. Moré, J., Sorensen, D.: On the use of directions of negative curvature in a modified Newton method. Math. Program. 16, 1–20 (1979)

  24. Moré, J., Sorensen, D.: Computing a trust region step. SIAM J. Sci. Stat. Comput. 4, 553–572 (1983)

  25. Nash, S.: A survey of truncated-Newton methods. J. Comput. Appl. Math. 124, 45–59 (2000)

  26. Paige, C., Saunders, M.: Solution of sparse indefinite systems of linear equations. SIAM J. Numer. Anal. 12, 617–629 (1975)

  27. Parlett, B.: The Symmetric Eigenvalue Problem. Prentice-Hall Series in Computational Mathematics. Prentice-Hall, Englewood Cliffs (1980)

  28. Shultz, G., Schnabel, R., Byrd, R.: A family of trust-region-based algorithms for unconstrained minimization. SIAM J. Numer. Anal. 22, 47–67 (1985)

  29. Stoer, J.: Solution of large linear systems of equations by conjugate gradient type methods. In: Bachem A., Grötschel M., Korte B. (eds.) Mathematical Programming. The State of the Art, pp. 540–565. Springer, Berlin/Heidelberg (1983)

  30. Trefethen, L., Bau, D.: Numerical Linear Algebra. SIAM, Philadelphia (1997)

Author information

Corresponding author

Correspondence to Giovanni Fasano.

About this article

Cite this article

Fasano, G., Roma, M. Iterative computation of negative curvature directions in large scale optimization. Comput Optim Appl 38, 81–104 (2007). https://doi.org/10.1007/s10589-007-9034-z
