Open Access
RAIRO-Oper. Res., Volume 58, Number 1, January-February 2024, pp. 775–802
DOI: https://doi.org/10.1051/ro/2023161
Published online: 22 February 2024
  • N. Andrei, An unconstrained optimization test functions collection. Adv. Model. Optim. 10 (2008) 147–161.
  • N. Andrei, Open problems in nonlinear conjugate gradient algorithms for unconstrained optimization. Bull. Malays. Math. Sci. Soc. 34 (2011) 319–330.
  • N. Andrei, An accelerated subspace minimization three-term conjugate gradient algorithm for unconstrained optimization. Numer. Algorithms 65 (2014) 859–874.
  • N. Andrei, Nonlinear Conjugate Gradient Methods for Unconstrained Optimization. Springer Optimization and Its Applications 158. Springer, Cham (2020).
  • K.A. Ariyawansa, Deriving collinear scaling algorithms as extensions of quasi-Newton methods and the local convergence of DFP- and BFGS-related collinear scaling algorithms. Math. Program. 49 (1990) 23–48.
  • J. Bai, W.W. Hager and H. Zhang, An inexact accelerated stochastic ADMM for separable convex optimization. Comput. Optim. Appl. 81 (2022) 479–518.
  • J. Barzilai and J.M. Borwein, Two-point step size gradient methods. IMA J. Numer. Anal. 8 (1988) 141–148.
  • E.M.L. Beale, A derivation of conjugate gradients. In: Numerical Methods for Nonlinear Optimization. Academic Press, London (1972) 39–43.
  • A.I. Cohen, Rate of convergence of several conjugate gradient algorithms. SIAM J. Numer. Anal. 9 (1972) 248–259.
  • H. Crowder and P. Wolfe, Linear convergence of the conjugate gradient method. IBM J. Res. Dev. 16 (1972) 431–433.
  • Y.H. Dai and Y.X. Yuan, A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10 (1999) 177–182.
  • Y.H. Dai and C.X. Kou, A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search. SIAM J. Optim. 23 (2013) 296–320.
  • Y.H. Dai and C.X. Kou, A Barzilai-Borwein conjugate gradient method. Sci. China Math. 59 (2016) 1511–1524.
  • Y.H. Dai, L.Z. Liao and D. Li, On restart procedures for the conjugate gradient method. Numer. Algorithms 35 (2004) 249–260.
  • Y.H. Dai, J.Y. Yuan and Y.X. Yuan, Modified two-point stepsize gradient methods for unconstrained optimization problems. Comput. Optim. Appl. 22 (2002) 103–109.
  • W.C. Davidon, Conic approximations and collinear scalings for optimizers. SIAM J. Numer. Anal. 17 (1980) 268–281.
  • S. Di and W. Sun, A trust region method for conic model to solve unconstrained optimizations. Optim. Methods Softw. 6 (1996) 237–263.
  • E.D. Dolan and J.J. Moré, Benchmarking optimization software with performance profiles. Math. Program. 91 (2002) 201–213.
  • R. Fletcher, Practical Methods of Optimization. Vol. 1: Unconstrained Optimization. Wiley-Interscience, New York (1980).
  • R. Fletcher and C.M. Reeves, Function minimization by conjugate gradients. Comput. J. 7 (1964) 149–154.
  • W.W. Hager and H. Zhang, Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32 (2006) 113–137.
  • M.R. Hestenes and E.L. Stiefel, Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49 (1952) 409–436.
  • M. Li, H. Liu and Z. Liu, A new subspace minimization conjugate gradient method with nonmonotone line search for unconstrained optimization. Numer. Algorithms 79 (2018) 195–219.
  • Y. Li, Z. Liu and H. Liu, A subspace minimization conjugate gradient method based on conic model for unconstrained optimization. Comput. Appl. Math. 38 (2019).
  • Y.L. Liu and C.S. Storey, Efficient generalized conjugate gradient algorithms. I: Theory. J. Optim. Theory Appl. 69 (1991) 129–137.
  • Z. Liu and H. Liu, An efficient gradient method with approximate optimal stepsize for large-scale unconstrained optimization. Numer. Algorithms 78 (2018) 21–39.
  • H. Liu and Z. Liu, An efficient Barzilai-Borwein conjugate gradient method for unconstrained optimization. J. Optim. Theory Appl. 180 (2019) 879–906.
  • G.P. McCormick and K. Ritter, Alternative proofs of the convergence properties of the conjugate-gradient method. J. Optim. Theory Appl. 13 (1975) 497–518.
  • J. Nocedal and S.J. Wright, Numerical Optimization. Springer, New York (2006).
  • B.T. Polyak, The conjugate gradient method in extremal problems. USSR Comput. Math. Math. Phys. 9 (1969) 94–112.
  • M.J.D. Powell, Restart procedures for the conjugate gradient method. Math. Program. 12 (1977) 241–254.
  • S. Sheng, Interpolation by conic model for unconstrained optimization. Computing 54 (1995) 83–98.
  • D.C. Sorensen, The Q-superlinear convergence of a collinear scaling algorithm for unconstrained optimization. SIAM J. Numer. Anal. 17 (1980) 84–114.
  • W. Sun, On nonquadratic model optimization methods. Asia-Pac. J. Oper. Res. 13 (1996) 43–63.
  • W. Sun and Y. Yuan, Optimization Theory and Methods: Nonlinear Programming. Springer, New York (2006).
  • W. Sun, Y. Li, T. Wang and H. Liu, A new subspace minimization conjugate gradient method based on conic model for large-scale unconstrained optimization. Comput. Appl. Math. 41 (2022).
  • P. Wolfe, Convergence conditions for ascent methods. SIAM Rev. 11 (1969) 226–235.
  • P. Wolfe, Convergence conditions for ascent methods. II: Some corrections. SIAM Rev. 13 (1971) 185–188.
  • Y. Yang, Y. Chen and Y. Lu, A subspace conjugate gradient algorithm for large-scale unconstrained optimization. Numer. Algorithms 76 (2017) 813–828.
  • Y.X. Yuan, A modified BFGS algorithm for unconstrained optimization. IMA J. Numer. Anal. 11 (1991) 325–332.
  • Y.X. Yuan, Analysis on the conjugate gradient method. Optim. Methods Softw. 2 (1993) 19–29.
  • Y.X. Yuan, Subspace techniques for nonlinear optimization. In: Some Topics in Industrial and Applied Mathematics. Higher Education Press, Beijing (2007) 206–218.
  • Y.X. Yuan, A review on subspace methods for nonlinear optimization. In: Proc. of the International Congress of Mathematicians, Seoul, Korea, 13–21 August (2014) 807–827.
  • Y.X. Yuan and J. Stoer, A subspace study on conjugate gradient algorithms. Z. Angew. Math. Mech. 75 (1995) 69–77.
  • H. Zhang and W.W. Hager, A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim. 14 (2004) 1043–1056.