An International Journal of Optimization and Control: Theories & Applications
ISSN: 2146-0957 eISSN: 2146-5703
Vol.15, No.1, pp.25-34 (2025)
https://doi.org/10.36922/ijocta.1543
RESEARCH ARTICLE
Global convergence property with inexact line search for a new
conjugate gradient method
Sabrina Ben Hanachi*, Badreddine Sellami, Mohammed Belloufi
Department of Mathematics and Computer Science, University of Mohamed-Cherif Messaadia, Algeria
s.benhanachi@univ-soukahras.dz; bsellami@univ-soukahras.dz; m.belloufi@univ-soukahras.dz
ARTICLE INFO

Article History:
Received 22 February 2024
Accepted 4 January 2025
Available Online 20 January 2025

Keywords:
Nonlinear unconstrained optimization
Conjugate gradient
Line search
Global convergence

AMS Classification 2010: 90C06; 65K05; 90C26

ABSTRACT

To develop new conjugate gradient (CG) methods that are both theoretically robust and practically effective for solving unconstrained optimization problems, we propose novel hybrid conjugate gradient algorithms. In these algorithms, the scale parameter $\beta_k$ is defined as a convex combination of $\beta_k^{HZ}$ (from Hager and Zhang's method) and $\beta_k^{BA}$ (from Al-Bayati and Al-Assady's method). In one hybrid algorithm, the parameter in the convex combination is determined so as to satisfy the conjugacy condition, independently of the line search. In the other algorithm, the parameter is computed to ensure that the conjugate gradient direction aligns with the Newton direction. Under certain conditions, the proposed methods guarantee sufficient descent at each iteration and exhibit global convergence properties. Furthermore, numerical results demonstrate that the hybrid computational scheme based on the conjugacy condition is efficient and performs favorably compared to some well-known algorithms.
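For orientation, the generic form of such a convex combination can be sketched as follows; the symbol $\theta_k$ for the combination parameter is our illustrative notation, and the paper determines this parameter either via the conjugacy condition or via the Newton-direction condition:

$$\beta_k = (1 - \theta_k)\,\beta_k^{HZ} + \theta_k\,\beta_k^{BA}, \qquad \theta_k \in [0, 1].$$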
1. Introduction

Optimization involves minimizing or maximizing an objective function. Unconstrained optimization, a subset of optimization, focuses on minimizing a function of real variables without constraints. The general unconstrained optimization problem can be expressed as

$$\min\{f(x),\; x \in \mathbb{R}^n\}, \qquad (1)$$

where $f$ is a smooth function and its gradient is available [1].

Over time, several numerical methods have been developed for solving such problems, including the Steepest Descent (SD) method, Newton's method, Conjugate Gradient (CG) methods, and Quasi-Newton (QN) methods. This paper focuses on CG methods.
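To make the CG framework concrete before surveying its applications, the following Python sketch shows a generic nonlinear CG iteration for problem (1). It is illustrative only and is not the hybrid method proposed in this paper: it uses the classical Hestenes-Stiefel parameter and a simple Armijo backtracking line search, and the function names and numerical safeguards are our own assumptions.

import numpy as np

def cg_minimize(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic nonlinear CG sketch for min f(x), x in R^n.

    Illustrative only: classical Hestenes-Stiefel beta and a simple
    Armijo backtracking line search, not this paper's hybrid method.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:        # gradient small enough: stop
            break
        # Armijo backtracking: shrink alpha until sufficient decrease
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        # Hestenes-Stiefel conjugacy parameter (one classical choice);
        # the denominator is safeguarded against division by zero
        beta = g_new.dot(y) / max(d.dot(y), 1e-12)
        d = -g_new + beta * d               # new conjugate direction
        if g_new.dot(d) >= 0:               # safeguard: restart if not descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Example: minimize f(x) = ||x||^2; the minimizer is the origin
x_star = cg_minimize(lambda x: x.dot(x), lambda x: 2.0 * x,
                     np.array([1.0, 2.0]))
print(x_star)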
These methods are widely utilized in various fields due to their efficiency and scalability, especially for large-scale optimization problems. In engineering [2], CG algorithms are employed for solving challenges in structural design, fluid dynamics, and control systems. In data science and machine learning [3,4], they play a crucial role in optimizing loss functions for regression, classification, and neural network training. Additionally, in signal and image processing [5], CG methods are used for tasks such as image reconstruction, denoising, and signal recovery. Scientific computing also heavily relies on CG algorithms for solving large sparse linear systems, particularly in finite element analysis and computational physics.

The key advantages of CG methods include their low memory requirements [6], which make them suitable for high-dimensional problems, and their rapid convergence for specific classes of functions.

The foundation of CG methods was laid in 1952 by Hestenes and Stiefel, who introduced the CG method [7] for unconstrained linear optimization, as it is applied to quadratic functions. Then, in
*Corresponding Author