AN EFFICIENT NEW HYBRID CG METHOD AS A CONVEX COMBINATION OF THE DY, CD, AND HS ALGORITHMS

In this paper, we propose a new hybrid conjugate gradient algorithm for solving unconstrained optimization problems, constructed as a convex combination of the Dai-Yuan, conjugate-descent, and Hestenes-Stiefel algorithms. Under the strong Wolfe line search conditions, the new algorithm satisfies the sufficient descent condition and is globally convergent. The numerical results show that the proposed nonlinear hybrid conjugate gradient algorithm is efficient and robust.

The classical conjugate gradient methods include the Hestenes-Stiefel (HS) method (1952), the Fletcher-Reeves (FR) method (1964), the Polak-Ribière-Polyak (PRP) method (1969), the conjugate-descent (CD) method (1987), the Liu-Storey (LS) method (1991), and the Dai-Yuan (DY) method (1999) (see [7,15,16,18,21,24,25]), which are given by

$$\beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}, \qquad \beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \qquad \beta_k^{PRP} = \frac{g_{k+1}^T y_k}{\|g_k\|^2},$$

$$\beta_k^{CD} = \frac{\|g_{k+1}\|^2}{-d_k^T g_k}, \qquad \beta_k^{LS} = \frac{g_{k+1}^T y_k}{-d_k^T g_k}, \qquad \beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^T y_k}.$$

Here $y_k = g_{k+1} - g_k$ and $\|\cdot\|$ denotes the Euclidean norm. The FR [16], CD [15], and DY [7] methods are globally convergent, but they may not perform well in practice. On the other hand, the HS [18], PRP [24,25], and LS [21] methods are more efficient, with excellent numerical performance, but they may not always be convergent without some modifications. A survey of these methods, with special attention to their global convergence, is given by Hager and Zhang [17]. Naturally, researchers have tried to develop new methods that share the advantages of both categories. One of the most effective approaches is the hybrid CG method, which combines different classical CG methods in order to obtain both global convergence and excellent numerical performance. Recently, several hybrid CG methods have been proposed. For example, the HSDY method is a convex combination of the HS and DY methods (see [2]), where the parameter $\theta_k$ has been introduced so that $\beta_k^{HSDY} = (1 - \theta_k)\beta_k^{HS} + \theta_k \beta_k^{DY}$. The LSCDCC method is a convex combination of the LS and CD methods (see [12]). The authors of [27] proposed a family of globally convergent conjugate gradient methods. Motivated by the work of [2], we present a new hybrid conjugate gradient method for solving problem (1), based on the Dai-Yuan, conjugate-descent, and Hestenes-Stiefel methods. In Section 2, we construct a new scalar $\beta_k^{HDYCDHS}$ computed as a convex combination of the $\beta_k^{DY}$, $\beta_k^{CD}$, and $\beta_k^{HS}$ formulas. The sufficient descent property of the suggested method is proved in Section 3, and its global convergence is established in Section 4. In Section 5, we discuss the numerical results and comparisons. Finally, we present some conclusions.
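For concreteness, these classical parameters can be evaluated as follows; this is an illustrative sketch (not from the paper), assuming NumPy vectors `g_new` $= g_{k+1}$, `g` $= g_k$, and `d` $= d_k$:

```python
# Illustrative computation of the classical CG parameters (not from the paper).
import numpy as np

def classical_betas(g_new, g, d):
    """Return the HS, FR, PRP, CD, LS, and DY parameters for one iteration."""
    y = g_new - g                       # y_k = g_{k+1} - g_k
    return {
        "HS":  (g_new @ y) / (d @ y),
        "FR":  (g_new @ g_new) / (g @ g),
        "PRP": (g_new @ y) / (g @ g),
        "CD":  (g_new @ g_new) / -(d @ g),
        "LS":  (g_new @ y) / -(d @ g),
        "DY":  (g_new @ g_new) / (d @ y),
    }
```

All six formulas share the same numerator and denominator building blocks, which is what makes convex hybrids of them, such as the one introduced below, inexpensive to compute.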

The new conjugate gradient method
In this section, we present a new hybrid conjugate gradient formula. Our new parameter, denoted $\beta_k^{HDYCDHS}$, is defined as

$$\beta_k^{HDYCDHS} = \theta_k \beta_k^{DY} + \mu_k \beta_k^{CD} + (1 - \theta_k - \mu_k) \beta_k^{HS}. \qquad (13)$$

So, we may actually write the search direction as

$$d_{k+1} = -g_{k+1} + \beta_k^{HDYCDHS} d_k.$$

The parameters satisfy $\theta_k, \mu_k \in [0, 1]$ and $0 \le \theta_k + \mu_k \le 1$.
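As a minimal illustration (not the authors' code), the hybrid parameter and the corresponding direction update can be computed as follows, assuming NumPy vectors and given weights `theta` and `mu`:

```python
# Sketch of the hybrid HDYCDHS parameter and direction update, assuming the
# weights theta and mu (derived in the paper from the D-L condition) are given.
import numpy as np

def hdycdhs_direction(g_new, g, d, theta, mu):
    assert 0.0 <= theta <= 1.0 and 0.0 <= mu <= 1.0 and theta + mu <= 1.0
    y = g_new - g
    # Denominators d @ y and -(d @ g) are positive when d is a descent
    # direction and the step satisfies the strong Wolfe conditions.
    beta_dy = (g_new @ g_new) / (d @ y)
    beta_cd = (g_new @ g_new) / -(d @ g)
    beta_hs = (g_new @ y) / (d @ y)
    beta = theta * beta_dy + mu * beta_cd + (1.0 - theta - mu) * beta_hs
    return -g_new + beta * d            # d_{k+1} = -g_{k+1} + beta_k * d_k
```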
The following strong Wolfe line search conditions are used to find the step size $\alpha_k$:

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \qquad (16)$$

$$|g(x_k + \alpha_k d_k)^T d_k| \le \sigma |g_k^T d_k|, \qquad (17)$$

where $0 < \delta \le \sigma < \frac{5}{11}$. The values of $\theta_k$ and $\mu_k$ are determined in such a way that the search direction fulfills the famous Dai-Liao (D-L) conjugacy condition [8]:

$$d_{k+1}^T y_k = -t\, g_{k+1}^T s_k, \qquad t > 0.$$

Substituting the search direction into this condition leads to equation (20); solving (20) yields the values of $\theta_k$ and $\mu_k$, and, having in view relation (13), we define the hybrid parameter $\beta_k^{HDYCDHS}$. The following is the HDYCDHS method algorithm:

Algorithm 1
Step 1: Select $x_0 \in \mathbb{R}^n$ and $\varepsilon > 0$; compute $f_0$ and $g_0$; set $d_0 = -g_0$ and $\alpha_0 = \frac{1}{\|g_0\|}$.
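Only the initialization step of Algorithm 1 is reproduced above; the following is a hypothetical sketch of the full HDYCDHS loop under standard CG conventions. The weights `theta` and `mu` are fixed placeholders standing in for the paper's values derived from the D-L condition, and `scipy.optimize.line_search` (a strong Wolfe search) plays the role of conditions (16)-(17), with $c_2 = 0.4 < \frac{5}{11}$.

```python
# Hypothetical sketch of the full HDYCDHS loop (not the authors' reference code).
# theta and mu are fixed placeholders for the paper's D-L-derived weights.
import numpy as np
from scipy.optimize import line_search

def hdycdhs(f, grad, x0, eps=1e-6, max_iter=5000, theta=1/3, mu=1/3):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:
            break
        # Strong Wolfe search: c2 = 0.4 < 5/11, matching the theory's bound on sigma.
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.4)[0]
        if alpha is None:               # line search failed; take a small safe step
            alpha = 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        dy, dg = d @ y, d @ g
        beta_dy = (g_new @ g_new) / dy if dy != 0 else 0.0
        beta_cd = (g_new @ g_new) / -dg if dg != 0 else 0.0
        beta_hs = (g_new @ y) / dy if dy != 0 else 0.0
        beta = theta * beta_dy + mu * beta_cd + (1 - theta - mu) * beta_hs
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

For instance, on the quadratic $f(x) = \|x\|^2$ with gradient $2x$, this sketch drives $\|g_k\|$ below the tolerance within a few iterations.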

The sufficient descent property
It is well known that the descent property is important for iterative methods to be globally convergent. The search direction $d_k$ of the new method satisfies the sufficient descent condition under an inexact line search.

Theorem 3.1. Let $\{x_k\}_{k \in \mathbb{N}}$ be generated by Algorithm 1, where the step size $\alpha_k$ satisfies (16) and (17) with $\sigma < \frac{5}{11}$. Then there exists $c > 0$ such that

$$g_k^T d_k \le -c \|g_k\|^2. \qquad (23)$$
Proof. We proceed by induction to show (23).
If (22) holds, then the search direction immediately satisfies the sufficient descent condition. If (22) does not hold, then, using (17), we bound the relevant quantities. Multiplying both sides of the direction update (18) by $g_{k+1}^T$, we get

$$g_{k+1}^T d_{k+1} = -\|g_{k+1}\|^2 + \beta_k^{HDYCDHS}\, g_{k+1}^T d_k,$$

and estimating each resulting term yields (23). In total, seven cases are examined, which completes the proof.
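As an aside, the sufficient descent property of Theorem 3.1 is easy to monitor numerically: the ratio $g_k^T d_k / \|g_k\|^2$ should remain bounded away from zero by some $-c < 0$. A small hypothetical helper:

```python
# Hypothetical monitor for the sufficient descent condition of Theorem 3.1:
# g_k^T d_k <= -c ||g_k||^2 means this ratio must stay below some -c < 0.
import numpy as np

def descent_ratio(g, d):
    return float(g @ d) / float(g @ g)
```

Logging this ratio after each direction update in the loop sketched earlier gives an empirical estimate of the constant $c$.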

The convergence analysis
The following two assumptions are required to establish the convergence of our algorithm [2].

H1. The level set $\mathcal{H} = \{x \in \mathbb{R}^n \mid f(x) \le f(x_0)\}$ is bounded, where $x_0$ is the initial point.

H2. In a neighborhood $\mathcal{N}$ of $\mathcal{H}$, the function $f$ is continuously differentiable and its gradient $\nabla f(x)$ is Lipschitz continuous; that is, there exists $L > 0$ such that, for all $x, y \in \mathcal{N}$,

$$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|.$$

Under these hypotheses, there exists $\Gamma \ge 0$ such that $\|\nabla f(x)\| \le \Gamma$ for all $x \in \mathcal{H}$. The following lemma holds for any CG method using the strong Wolfe conditions, as shown in [6].

Lemma 4.1. Under assumptions H1 and H2,

$$\sum_{k \ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty.$$

Theorem 4.1. Under assumptions H1 and H2, the sequence $\{x_k\}$ generated by Algorithm 1 satisfies

$$\liminf_{k \to \infty} \|g_k\| = 0. \qquad (33)$$

Proof. We argue by contradiction. Assume that (33) is false, i.e., $\liminf_{k \to \infty} \|g_k\| \ne 0$; then there exists a constant $\bar{A} > 0$ such that $\|g_k\| \ge \bar{A}$ for all $k$. Let $D$ be the diameter of the level set $\mathcal{H}$ and $s_k = x_{k+1} - x_k$. Combining (25) with the Cauchy-Schwarz inequality and H2, we obtain a bound on $\|d_{k+1}\|$; it then follows, using (23), (31), (34), and (35), that

$$\sum_{k \ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} = +\infty,$$

which contradicts Lemma 4.1. Hence (33) holds.

The numerical results
In this section, we discuss the numerical performance of our method HDYCDHS on a set of test problems from [3,20].
The methods are compared as follows. Let $f_1$ and $f_2$ be the optimal values determined by ALG1 and ALG2, respectively. We say that, on a given problem, the performance of ALG1 was superior to the performance of ALG2 if $f_1$ and $f_2$ agree to within the prescribed tolerance and the CPU time, the number of function evaluations, or the number of iterations of ALG1 was less than that of ALG2. These measures were evaluated using the performance profiles of Dolan and Moré [14].
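For reference, a minimal sketch of the Dolan-Moré performance profiles [14], assuming a hypothetical `costs` array of CPU times with one row per test problem and one column per solver (`np.inf` marking failures):

```python
# Minimal sketch of Dolan-More performance profiles [14]. `costs` is a
# hypothetical (n_problems x n_solvers) array of CPU times, np.inf = failure;
# each problem is assumed to be solved by at least one solver.
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(costs, labels, tau_max=10.0):
    ratios = costs / costs.min(axis=1, keepdims=True)        # r_{p,s}
    taus = np.linspace(1.0, tau_max, 200)
    for s, label in enumerate(labels):
        rho = [(ratios[:, s] <= t).mean() for t in taus]     # fraction within factor t
        plt.step(taus, rho, where="post", label=label)
    plt.xlabel("tau"); plt.ylabel("rho_s(tau)"); plt.legend()
    plt.show()
```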
Figure 1 presents the performance profiles based on CPU time for HDYCDHS versus HSDY, DY, and CD. Figure 2 shows the profiles based on the number of function evaluations, where HDYCDHS performs better than HSDY, DY, and CD. Figure 3 shows the same comparison in terms of the number of iterations, where HDYCDHS is again superior.

Conclusions
In this research, we presented a new hybrid conjugate gradient method for unconstrained optimization problems. The new CG parameter is defined as

$$\beta_k^{HDYCDHS} = \theta_k \beta_k^{DY} + \mu_k \beta_k^{CD} + (1 - \theta_k - \mu_k) \beta_k^{HS},$$

where the convex combination scalars $\theta_k$ and $\mu_k$ are chosen in such a way that the search direction satisfies the D-L conjugacy condition. The suggested method provides sufficient descent directions with an inexact line search. The global convergence of our hybrid technique was proved, and the numerical results demonstrate its usefulness for unconstrained minimization problems.

Figure 1. Performance profiles using the CPU time.

Figure 2. Performance profiles using the number of function evaluations.

Figure 3. Performance profiles using the iteration number.