A scaled descent modification of the Hestenes-Stiefel conjugate gradient method with application to compressed sensing
Subject Areas: Operations Research
Ali Ebrahimnejad 1, Zohre Aminifard 2, Saman Babaie-Kafaki 3
1 - Department of Mathematics, Qaemshahr Branch, Islamic Azad University, Qaemshahr, Iran
2 - Faculty of Mathematics, Statistics and Computer Science, Semnan University, Semnan, Iran
3 - Faculty of Mathematics, Statistics and Computer Science, Semnan University, Semnan, Iran
Keywords: unconstrained optimization, conjugate gradient method, sufficient descent property, global convergence, compressed sensing
Abstract:
To improve the classical Hestenes-Stiefel conjugate gradient method, Shengwei et al. proposed an efficient conjugate gradient method that possesses the sufficient descent property whenever the line search fulfills the strong Wolfe conditions (with suitably restricted line search parameters). Inspired by the scaled extension of the Hestenes-Stiefel method recently presented by Dong et al., a scaled modification of the conjugate gradient method of Shengwei et al. is proposed that satisfies the sufficient descent condition independently of both the line search technique and any convexity assumption on the objective function. Furthermore, the global convergence of the suggested method is established under standard assumptions. In addition, a smooth approximation of the compressed sensing optimization problem is put forward. Numerical experiments are conducted on a set of classical test problems from the CUTEr library as well as on a compressed sensing problem. The comparisons illustrate the superiority of the proposed approach.
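For reference, and using standard notation rather than the paper's exact formulas, a nonlinear conjugate gradient method generates search directions

\[ d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad \beta_k^{HS} = \frac{g_{k+1}^{T} y_k}{d_k^{T} y_k}, \qquad y_k = g_{k+1} - g_k, \]

where g_k denotes the gradient of the objective at the k-th iterate and \beta_k^{HS} is the classical Hestenes-Stiefel parameter. The sufficient descent property mentioned above requires a constant c > 0 such that

\[ g_k^{T} d_k \le -c \, \|g_k\|^2 \quad \text{for all } k, \]

which the proposed scaled modification guarantees regardless of the line search.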
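The compressed sensing experiments rest on a smooth reformulation of the sparse recovery problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, whose l1 term is nondifferentiable. The sketch below is a minimal illustration, not the paper's exact smoothing: it uses the common surrogate |t| ~ sqrt(t^2 + eps) and hands the resulting differentiable objective to SciPy's built-in nonlinear conjugate gradient routine as a stand-in for the proposed method.

import numpy as np
from scipy.optimize import minimize

def smoothed_objective(x, A, b, lam, eps=1e-6):
    """Value and gradient of 0.5*||Ax - b||^2 + lam * sum_i sqrt(x_i^2 + eps)."""
    r = A @ x - b
    s = np.sqrt(x * x + eps)        # smooth surrogate of |x_i|
    f = 0.5 * (r @ r) + lam * s.sum()
    g = A.T @ r + lam * (x / s)     # gradient of the smoothed l1 term
    return f, g

# Toy usage: recover a sparse vector from random Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                # signal length, measurements, nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

res = minimize(smoothed_objective, np.zeros(n), args=(A, b, 0.01),
               jac=True, method="CG")   # SciPy's nonlinear CG
x_hat = res.x                           # approximately sparse reconstruction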
[8] X. Li, W. Zhang and X. Dong, "A class of modified FR conjugate gradient method and applications to non-negative matrix factorization," Computers & Mathematics with Applications, vol. 73, pp. 270-276, 2017.
[9] X. Y. Wang, S. J. Li and X. P. Kou, "A self-adaptive three-term conjugate gradient method for monotone nonlinear equations with convex constraints," Calcolo, vol. 53, pp. 133-145, 2016.
[10] R. Dehghani and N. Mahdavi-Amiri, "Scaled nonlinear conjugate gradient methods for nonlinear least squares problems," Numerical Algorithms, vol. 82, pp. 1-20, 2019.
[11] Y. Dai, J. Han, G. Liu, D. Sun, H. Yin and Y. X. Yuan, "Convergence properties of nonlinear conjugate gradient methods," SIAM Journal on Optimization, vol. 10, pp. 345-358, 2000.
[12] Z. Aminifard and S. Babaie-Kafaki, "A modified descent Polak–Ribiére–Polyak conjugate gradient method with global convergence property for nonconvex functions," Calcolo, vol. 56, pp. 1-11, 2019.
[13] Z. Aminifard and S. Babaie-Kafaki, "An optimal parameter choice for the Dai–Liao family of conjugate gradient methods by avoiding a direction of the maximum magnification by the search direction matrix," 4OR, vol. 17, pp. 317-330, 2019.
[14] Y. Shengwei, Z. Wei and H. Huang, "A note about WYL’s conjugate gradient method and its applications," Applied Mathematics and computation, vol. 191, pp. 381-388, 2007.
[15] X. Dong, H. W. Liu, Y. B. He and . X. M. Yang, "A modified Hestenes–Stiefel conjugate gradient method with sufficient descent condition and conjugacy condition," Journal of Computational and Applied Mathematics, pp. 239-249, 2015.
[16] M. R. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems," Journal of Research of the National Institute of Standards and Technology, vol. 49, p. 409–436, 1952.
[17] Z. Aminifard and S. Babaie-Kafaki, "Dai–Liao extensions of a descent hybrid nonlinear conjugate gradient method with application in signal processing," Numerical Algorithms, pp. 1-19, 2021.
[18] M. Li, "A modified Hestense–Stiefel conjugate gradient method close to the memoryless BFGS quasi-Newton method," Optimization Methods and Software, vol. 33, pp. 336-353, 2018.
[19] K. Amini, P. Faramarzi and N. Pirfalah, "A modified Hestenes–Stiefel conjugate gradient method with an optimal property," Optimization Methods and Software, vol. 34, pp. 770-782, 2019.
[20] Y. H. Dai and L. Z. Liao, "New conjugacy conditions and related nonlinear conjugate gradient methods," Applied Mathematics and optimization, vol. 43, pp. 87-101, 2001.
[21] K. Sugiki, Y. Narushima and H. Yabe, "Globally convergent three-term conjugate gradient methods that use secant conditions and generate descent search directions for unconstrained optimization," Journal of Optimization Theory and Applications, vol. 153, pp. 733-757, 2012.
[22] W. Sun and Y. X. Yuan, Optimization Theory and Methods: Nonlinear Programming, Springer Science & Business Media, 2006.
[23] E. Hale, W. Yin and Y. Zhang, "Fixed-point continuation applied to compressed sensing: implementation and numerical experiments," Journal of Computational Mathematics, vol. 282, pp. 170-194, 2010.
[24] M. Roozbeh, S. Babaie-Kafaki and M. Arashi, "A class of biased estimators based on QR decomposition," Linear Algebra and its Applications, vol. 508, pp. 190-205, 2015.
[25] Y. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications, Cambridge university press, 2012.
[26] M. Elad, Sparse and Redundant Representations: from Theory to Applications in Signal and Image Processing, Springer Science & Business Media, 2010.
[27] S. Foucart and H. Rauhut, "A mathematical introduction to compressive sensing," Bulletin of the American Mathematical Society, vol. 54, pp. 151-165, 2017.
[28] Y. Nesterov, "Excessive gap technique in nonsmooth convex minimization.," SIAM Journal on Optimization, vol. 16, pp. 235-249, 2005.
[29] M. J. Black and A. Rangarajan, "On the unification of line processes, outlier rejection, and robust statistics with applications in early vision," International Journal of Computer Vision, vol. 19, pp. 57-91, 1996.
[30] Z. Aminifard, A. Hosseini and S. Babaie-Kafaki, "Modified conjugate gradient method for solving sparse recovery problem with nonconvex penalty," Signal Processing, Vols. 193, 108424, 2022.
[31] H. J. Xing, Y. J. Liu and Z. C. He, "Robust sparse coding for one-class classification based on correntropy and logarithmic penalty function," Pattern Recognition, vol. 111, p. 107685, 2021.
[32] I. Selesnick, and M. Farshchian, "Sparse signal approximation via nonseparable regularization," IEEE Transactions on Signal Processing, vol. 65, pp. 2561-2575, 2017.
[33] N. I. Gould, D. Orban and P. L. Toint, "CUTEr and SifDec: A constrained and unconstrained testing environment, revisited," ACM Transactions on Mathematical Software (TOMS), vol. 29, pp. 373-394, 2003.
[34] Z. Aminifard and S. Babaie-Kafaki, "Matrix analyses on the Dai-Liao conjugate gradient method," The ANZIAM Journal, vol. 61, pp. 195-203, 2019.
[35] J. Nocedal and S. Wright, Numerical Optimization, Springer Science & Business Media, 2006.
[36] W. W. Hager and H. Zhang, "Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent," ACM Transactions on Mathematical Software (TOMS), vol. 32, pp. 113-137, 2006.
[37] E. D. Dolan and J. J. Moré, "Benchmarking optimization software with performance profiles," Mathematical programing, vol. 91, pp. 201-213, 2002.