Open Access
RAIRO-Oper. Res.
Volume 57, Number 3, May-June 2023
Page(s) 1219 - 1238
Published online 14 June 2023