1964, Fletcher and Reeves [8] extended the CG method for solving nonlinear unconstrained minimization problems. The basis of all CG methods is to generate a sequence $\{x_k\}$ starting from an initial estimate $x_0 \in \mathbb{R}^n$, using the following recurrence:

$$x_{k+1} = x_k + \alpha_k d_k, \qquad k = 0, 1, 2, \ldots \tag{2}$$

The step size $\alpha_k$ is determined using a one-dimensional search, referred to as a line search [9]. Line searches can be either exact or inexact. Inexact line searches are generally preferred because performing an exact line search at each iteration is computationally expensive in terms of both time and memory [10,11]. In our research, we employ the strong Wolfe line search, defined by the following conditions [12,13]:

$$f(x_k + \alpha_k d_k) - f(x_k) \le \delta \alpha_k g_k^T d_k, \tag{3}$$

$$|g_{k+1}^T d_k| \le -\sigma g_k^T d_k, \tag{4}$$

where $0 < \delta \le \sigma < 1$.
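For illustration, a minimal Python sketch of a routine verifying conditions (3) and (4) for a trial step; the function and argument names are illustrative, not part of the paper:

```python
def satisfies_strong_wolfe(f, grad, x_k, d_k, alpha, delta=1e-4, sigma=0.9):
    """Check the strong Wolfe conditions (3)-(4) for a trial step alpha,
    assuming 0 < delta <= sigma < 1 and NumPy-array inputs."""
    g_k = grad(x_k)
    slope = float(g_k @ d_k)        # g_k^T d_k, negative for a descent direction
    x_new = x_k + alpha * d_k
    sufficient_decrease = f(x_new) - f(x_k) <= delta * alpha * slope   # condition (3)
    curvature = abs(float(grad(x_new) @ d_k)) <= -sigma * slope        # condition (4)
    return sufficient_decrease and curvature
```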
The search direction $d_k$ is generated by the following rule:

$$d_k = \begin{cases} -g_0, & \text{if } k = 0, \\ -g_k + \beta_{k-1} d_{k-1}, & \text{if } k \ge 1, \end{cases} \tag{5}$$

where $g_k$ is the gradient of $f$ at the point $x_k$ and $\beta_k$ is known as the CG parameter.
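A minimal sketch of the generic scheme (2) and (5), assuming the objective `f` and gradient `grad` are NumPy-compatible callables; SciPy's `line_search` computes a step satisfying the strong Wolfe conditions, here with `c1` = $\delta$ and `c2` = $\sigma$. The names `cg_minimize` and `beta_rule` are illustrative:

```python
import numpy as np
from scipy.optimize import line_search

def cg_minimize(f, grad, x0, beta_rule, tol=1e-6, max_iter=1000,
                delta=1e-4, sigma=0.1):
    """Generic CG iteration (2) with direction update (5);
    beta_rule(g_new, g_old, d) returns the scalar beta_k."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                       # d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g, c1=delta, c2=sigma)[0]
        if alpha is None:                        # line search failed to converge
            break
        x = x + alpha * d                        # recurrence (2)
        g_new = grad(x)
        d = -g_new + beta_rule(g_new, g, d) * d  # update rule (5), k >= 1
        g = g_new
    return x
```

For example, passing `beta_rule=lambda g_new, g, d: (g_new @ g_new) / (g @ g)` recovers the FR formula listed below.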
Different choices of the parameter $\beta_k$ correspond to various conjugate gradient methods. Below are some well-known formulas:

• $\beta_k^{FR} = \dfrac{\|g_{k+1}\|^2}{\|g_k\|^2}$, Fletcher and Reeves (FR);

• $\beta_k^{DY} = \dfrac{\|g_{k+1}\|^2}{y_k^T d_k}$, Dai and Yuan (DY) [14];

• $\beta_k^{HS} = \dfrac{g_{k+1}^T y_k}{y_k^T d_k}$, Hestenes and Stiefel (HS);

• $\beta_k^{LS} = \dfrac{g_{k+1}^T y_k}{-g_k^T d_k}$, Liu and Storey (LS) [15].
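In code, these four parameters could be evaluated as follows (a sketch with an illustrative function name; $y_k = g_{k+1} - g_k$ as defined below eq. (7)):

```python
def classic_betas(g_new, g_old, d):
    """Evaluate the FR, DY, HS, and LS parameters from the NumPy vectors
    g_{k+1} (g_new), g_k (g_old), and d_k (d), with y = g_{k+1} - g_k."""
    y = g_new - g_old
    return {
        "FR": (g_new @ g_new) / (g_old @ g_old),
        "DY": (g_new @ g_new) / (y @ d),
        "HS": (g_new @ y) / (y @ d),
        "LS": (g_new @ y) / -(g_old @ d),
    }
```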
These methods are identical when $f$ is a strongly convex quadratic function and $\alpha_k$ is obtained by an exact line search, as the parameters $\beta_k$ of these methods are then equal, generating the same sequence $\{x_k\}_{k=0}^{\infty}$. However, when applied to a general nonlinear function with inexact line searches, these methods generate different sequences $\{x_k\}_{k=0}^{\infty}$, and thus behave as different methods [16-18].

The most important characteristic shared by the FR and DY methods is their strong convergence properties. However, they may not perform well in practice due to jamming. On the other hand, although the HS and LS methods may not converge in general, they often perform better in practice. To achieve good computational performance while maintaining strong global convergence, researchers have paid special attention to hybridizing CG methods from the two categories. Various hybrid methods have been proposed in this context. For example:

• $\beta_k^{wylcd} = (1-\theta_k)\beta_k^{WYL} + \theta_k \beta_k^{CD}$, Hammel [19];

• $\beta_k^{hHZDY} = (1-\theta_k)\beta_k^{HZ} + \theta_k \beta_k^{DY}$, Hallal [20];

• $\beta_k^{HMLSFR} = (1-\theta_k)\beta_k^{MLS} + \theta_k \beta_k^{FR}$, Guefassa [21];

• $\beta_k^{hPRPHZ} = (1-\theta_k)\beta_k^{PRP} + \theta_k \beta_k^{HZ}$, Delladji [22];

• $\beta_k^{hFRBA} = (1-\theta_k)\beta_k^{FR} + \theta_k \beta_k^{BA}$, Delladji [23];

• $\beta_k^{hyb} = (1-\theta_k)\beta_k^{HS} + \theta_k \beta_k^{FR}$, Djordjević [24].

From the literature, the Al-Bayati and Al-Assady (BA) method [25] often performs better in practice than the Hager and Zhang (HZ) method [26], while the HZ method exhibits stronger convergence properties than the BA method. To combine the advantages of both the BA and HZ methods and develop a more efficient and robust algorithm, we propose a new hybrid CG method based on a convex combination of these two methods for solving unconstrained optimization problems under suitable conditions. The corresponding parameters are defined as follows:

$$\beta_k^{HZ} = \frac{1}{d_k^T y_k}\left(y_k - 2 d_k \frac{\|y_k\|^2}{d_k^T y_k}\right)^T g_{k+1}, \tag{6}$$

$$\beta_k^{BA} = \frac{\|y_k\|^2}{d_k^T y_k}, \tag{7}$$

where $y_k = g_{k+1} - g_k$ and $\|\cdot\|$ denotes the Euclidean norm.
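A sketch of how (6), (7), and their convex combination could be evaluated. The rule defining $\theta_k$, and the exact weighting of the combination, are derived in Section 2; here `theta` is simply assumed given in $[0, 1]$, and all function names are illustrative:

```python
def beta_hz(g_new, d, y):
    """Hager-Zhang parameter, eq. (6), from NumPy vectors."""
    dy = float(d @ y)
    return float((y - 2.0 * d * (y @ y) / dy) @ g_new) / dy

def beta_ba(d, y):
    """Al-Bayati and Al-Assady parameter, eq. (7)."""
    return float(y @ y) / float(d @ y)

def beta_hybrid(g_new, g_old, d, theta):
    """Convex combination of the BA and HZ parameters.
    NOTE: which term receives theta vs. (1 - theta) is an assumption here;
    the paper fixes this, and the rule for theta_k, in Section 2."""
    y = g_new - g_old                      # y_k = g_{k+1} - g_k
    return (1.0 - theta) * beta_ba(d, y) + theta * beta_hz(g_new, d, y)
```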
This paper is organized as follows. In Section 2, we present the new hybrid conjugate gradient method, detail the corresponding algorithm, and derive the parameter $\theta_k$ using specific techniques; furthermore, we establish the sufficient descent condition under the strong Wolfe line search. In Section 3, we analyze the convergence properties of the proposed method. In Section 4, we demonstrate the efficiency of our method through