
Superlinearly convergent quasi-Newton algorithms for nonlinearly constrained optimization problems

Published in: Mathematical Programming

Abstract

A class of algorithms for nonlinearly constrained optimization problems is proposed. The subproblems of the algorithms are linearly constrained quadratic minimization problems that contain an updated estimate of the Hessian of the Lagrangian. Under suitable conditions and updating schemes, local convergence and a superlinear rate of convergence are established. The convergence proofs require, among other things, twice differentiable objective and constraint functions, while the calculations use only first derivative data. Rapid convergence has been obtained on a number of test problems with a program based on the algorithms proposed here.
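For concreteness, the subproblem structure described in the abstract can be sketched in standard SQP notation; this is an illustrative sketch only, and the symbols f, g, h, x_k, B_k as well as the exact constraint handling are assumptions that may differ from the paper's formulation:

$$
\begin{aligned}
\min_{d}\quad & \nabla f(x_k)^{\mathsf T} d + \tfrac{1}{2}\, d^{\mathsf T} B_k\, d \\
\text{subject to}\quad & g(x_k) + \nabla g(x_k)^{\mathsf T} d \le 0, \\
& h(x_k) + \nabla h(x_k)^{\mathsf T} d = 0,
\end{aligned}
$$

where x_k is the current iterate and B_k is the quasi-Newton estimate of the Hessian of the Lagrangian, maintained from first derivative data only. Each iteration solves a linearly constrained quadratic program of this type and uses its solution d to form the next iterate.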




Additional information

Research supported by NSF Grant GJ-35292 at the University of Wisconsin.



Cite this article

Palomares, U.M.G., Mangasarian, O.L. Superlinearly convergent quasi-Newton algorithms for nonlinearly constrained optimization problems. Mathematical Programming 11, 1–13 (1976). https://doi.org/10.1007/BF01580366
