A two-stage feasible directions algorithm for nonlinear constrained optimization

Published in Mathematical Programming

Abstract

We present a feasible directions algorithm, based on Lagrangian concepts, for the solution of the nonlinear programming problem with equality and inequality constraints. At each iteration a descent direction is defined; by modifying it, we obtain a feasible descent direction. The line search procedure ensures the global convergence of the method and the feasibility of all the iterates.

We prove the global convergence of the algorithm and apply it to the solution of some test problems. Although the present version of the algorithm does not include any second-order information, as quasi-Newton methods do, the numerical results exhibit behavior comparable to that of the best methods currently known for nonlinear programming.
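
The abstract above outlines the mechanism at a high level; the following is a minimal, hypothetical sketch of a generic feasible-directions iteration for inequality constraints only. It is not the paper's two-stage algorithm: the deflection rule, the line-search parameters and all function names are illustrative assumptions.

```python
import numpy as np

def feasible_directions_step(x, f, grad_f, g, grad_g, eps=0.1, beta=0.5, sigma=1e-4):
    """One generic feasible-directions iteration (illustrative sketch only).

    f, grad_f : objective and its gradient
    g, grad_g : inequality constraints g(x) <= 0 and their Jacobian (rows = constraints)
    eps       : activity tolerance; beta, sigma : Armijo line-search parameters
    """
    active = g(x) > -eps                     # constraints that are nearly active at x
    d = -grad_f(x)                           # an ordinary descent direction
    # Deflect d so it no longer points out of the nearly active constraints
    # (a simple projection rule used here purely for illustration).
    for is_active, grad_gi in zip(active, grad_g(x)):
        if is_active and d @ grad_gi > 0.0:
            d -= (d @ grad_gi) / (grad_gi @ grad_gi + 1e-12) * grad_gi
    # Armijo-type line search that also rejects any infeasible trial point,
    # so every iterate produced by the method stays feasible.
    t, fx, slope = 1.0, f(x), grad_f(x) @ d
    while np.any(g(x + t * d) > 0.0) or f(x + t * d) > fx + sigma * t * slope:
        t *= beta
        if t < 1e-12:                        # safeguard against stalling
            break
    return x + t * d

# Tiny usage example: minimize (x0 - 2)^2 + (x1 - 2)^2 subject to x0 + x1 <= 2.
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2
grad_f = lambda x: np.array([2 * (x[0] - 2), 2 * (x[1] - 2)])
g = lambda x: np.array([x[0] + x[1] - 2])
grad_g = lambda x: np.array([[1.0, 1.0]])

x = np.zeros(2)                              # strictly feasible starting point
for _ in range(50):
    x = feasible_directions_step(x, f, grad_f, g, grad_g)
print(x)                                     # approaches the constrained minimizer (1, 1)
```

The point the sketch tries to mirror is the one stressed in the abstract: the search direction is first computed as a descent direction and then modified, and the line search only accepts steps that keep the iterate feasible.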


Additional information

Research performed while the author was on a two-year appointment at INRIA, Rocquencourt, France, and partially supported by the Brazilian Research Council (CNPq).

About this article

Cite this article

Herskovits, J. A two-stage feasible directions algorithm for nonlinear constrained optimization. Mathematical Programming 36, 19–38 (1986). https://doi.org/10.1007/BF02591987
