Open Access
RAIRO-Oper. Res., Volume 56, Number 4, July-August 2022
Pages 2403–2424
DOI: https://doi.org/10.1051/ro/2022107
Published online: 1 August 2022