Open Access
RAIRO-Oper. Res.
Volume 58, Number 4, July-August 2024
Page(s) 2783 - 2795
DOI https://doi.org/10.1051/ro/2024030
Published online 02 July 2024
  • M. Ahookhosh, K. Amini and M. Peyghami, A nonmonotone trust region line search method for large-scale unconstrained optimization. Appl. Math. Model. 36 (2012) 478–487.
  • Z. Aminifard, S. Babaie-Kafaki and F. Dargahi, Nonmonotone quasi-Newton-based conjugate gradient methods with application to signal processing. Numer. Algorithms 93 (2023) 1527–1541.
  • N. Andrei, An acceleration of gradient descent algorithm with backtracking for unconstrained optimization. Numer. Algorithms 42 (2006) 63–73.
  • N. Andrei, Another hybrid conjugate gradient algorithm for unconstrained optimization. Numer. Algorithms 47 (2008) 143–156.
  • M.A.T. Ansary and G. Panda, A modified quasi-Newton method for vector optimization problem. Optimization 64 (2015) 2289–2306.
  • E. Birgin, J.M. Martínez and M. Raydan, Nonmonotone spectral projected gradient methods on convex sets. SIAM J. Optim. 10 (2000) 1196–1211.
  • E. Carrizosa and J.B.G. Frenk, Dominating sets for convex functions with some applications. J. Optim. Theory Appl. 96 (1998) 281–295.
  • G. Chiandussi, M. Codegone, S. Ferrero and F.E. Varesio, Comparison of multi-objective optimization methodologies for engineering applications. Comput. Math. Appl. 63 (2012) 912–942.
  • T.D. Chuong, Newton-like methods for efficient solutions in vector optimization. Comput. Optim. Appl. 54 (2013) 495–516.
  • Z. Cui and B. Wu, A new modified nonmonotone adaptive trust region method for unconstrained optimization. Comput. Optim. Appl. 53 (2012) 795–806.
  • Y.H. Dai, A nonmonotone conjugate gradient algorithm for unconstrained optimization. J. Syst. Sci. Complex. 15 (2002) 139–145.
  • E.D. Dolan and J.J. Moré, Benchmarking optimization software with performance profiles. Math. Program. 91 (2002) 201–213.
  • L.M.G. Drummond and A.N. Iusem, A projected gradient method for vector optimization problems. Comput. Optim. Appl. 28 (2004) 5–29.
  • L.M.G. Drummond and B.F. Svaiter, A steepest descent method for vector optimization. J. Comput. Appl. Math. 175 (2005) 395–414.
  • G.W. Evans, An overview of techniques for solving multiobjective mathematical programs. Manage. Sci. 30 (1984) 1268–1282.
  • N.S. Fazzio and M.L. Schuverdt, Convergence analysis of a nonmonotone projected gradient method for multiobjective optimization problems. Optim. Lett. 13 (2019) 1365–1379.
  • J. Fliege and B.F. Svaiter, Steepest descent methods for multicriteria optimization. Math. Methods Oper. Res. 51 (2000) 479–494.
  • J. Fliege, L.M.G. Drummond and B.F. Svaiter, Newton’s method for multiobjective optimization. SIAM J. Optim. 20 (2009) 602–626.
  • E.H. Fukuda and L.M.G. Drummond, On the convergence of the projected gradient method for vector optimization. Optimization 60 (2011) 1009–1021.
  • E.H. Fukuda and L.M.G. Drummond, A survey on multiobjective descent methods. Pesqui. Oper. 34 (2014) 585–620.
  • L. Grippo, F. Lampariello and S. Lucidi, A nonmonotone line search technique for Newton’s method. SIAM J. Numer. Anal. 23 (1986) 707–716.
  • L. Grippo, F. Lampariello and S. Lucidi, A truncated Newton method with nonmonotone line search for unconstrained optimization. J. Optim. Theory Appl. 60 (1989) 401–419.
  • W.W. Hager and H. Zhang, Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32 (2006) 113–137.
  • S. Huband, P. Hingston, L. Barone and L. While, A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans. Evol. Comput. 10 (2006) 477–506.
  • R.M. Johnstone, C.R. Johnson Jr., R.R. Bitmead and B.D. Anderson, Exponential convergence of recursive least squares with exponential forgetting factor. Syst. Control Lett. 2 (1982) 77–82.
  • M. Laumanns, L. Thiele, K. Deb and E. Zitzler, Combining convergence and diversity in evolutionary multiobjective optimization. Evol. Comput. 10 (2002) 263–282.
  • S.H. Leung and C.F. So, Gradient-based variable forgetting factor RLS algorithm in time-varying environments. IEEE Trans. Signal Process. 53 (2005) 3141–3150.
  • G.H. Liu, L.L. Jing, L.X. Han and D. Han, A class of nonmonotone conjugate gradient methods for unconstrained optimization. J. Optim. Theory Appl. 101 (1999) 127–140.
  • L.R. Lucambio Pérez and L.F. Prudente, Nonlinear conjugate gradient methods for vector optimization. SIAM J. Optim. 28 (2018) 2690–2720.
  • E. Miglierina, E. Molho and M.C. Recchioni, Box-constrained multi-objective optimization: a gradient-like method without “a priori” scalarization. Eur. J. Oper. Res. 188 (2008) 662–682.
  • K. Mita, E.H. Fukuda and N. Yamashita, Nonmonotone line searches for unconstrained multiobjective optimization problems. J. Global Optim. 75 (2019) 63–90.
  • C. Paleologu, J. Benesty and S. Ciochina, A robust variable forgetting factor recursive least squares algorithm for system identification. IEEE Signal Process. Lett. 15 (2008) 597–600.
  • S. Qu, Y. Ji, J. Jiang and Q. Zhang, Nonmonotone gradient methods for vector optimization with a portfolio optimization application. Eur. J. Oper. Res. 263 (2017) 356–366.
  • S. Rezaee and S. Babaie-Kafaki, A modified nonmonotone trust region line search method. J. Appl. Math. Comput. 57 (2018) 421–436.
  • O. Schütze, A. Lara and C.A. Coello Coello, The directed search method for unconstrained multi-objective optimization problems, in Proceedings of the EVOLVE – A Bridge Between Probability, Set Oriented Numerics, and Evolutionary Computation. (2011) 1–4.
  • M. Sefrioui and J. Périaux, Nash genetic algorithms: examples and applications, in Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No. 00TH8512). Vol. 1. IEEE (2000) 509–516.
  • A.M.J. Skulimowski, Methods of multicriteria decision support based on reference sets, in Advances in Multiple Objective and Goal Programming: Proceedings of the Second International Conference on Multi-Objective Programming and Goal Programming, Torremolinos, Spain, May 16–18, 1996. Springer (1997) 282–290.
  • W. Sun, Nonmonotone trust region method for solving optimization problems. Appl. Math. Comput. 156 (2004) 159–174.
  • W. Sun and Y.X. Yuan, Optimization Theory and Methods: Nonlinear Programming. Springer, New York (2006).
  • R. Tanabe and H. Ishibuchi, A review of evolutionary multimodal multiobjective optimization. IEEE Trans. Evol. Comput. 24 (2019) 193–200.
  • P.L. Toint, An assessment of nonmonotone line search techniques for unconstrained optimization. SIAM J. Sci. Comput. 17 (1996) 725–739.
  • Z. Yu, J. Zang and J. Liu, A class of nonmonotone spectral memory gradient method. J. Korean Math. Soc. 47 (2010) 63–70.
  • H. Zhang and W.W. Hager, A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim. 14 (2004) 1043–1056.
