COMPLEXITY ANALYSIS OF PRIMAL-DUAL INTERIOR-POINT METHODS FOR LINEAR OPTIMIZATION BASED ON A NEW EFFICIENT BI-PARAMETERIZED KERNEL FUNCTION WITH A TRIGONOMETRIC BARRIER TERM

In this paper, we generalize the efficient kernel function with a trigonometric barrier term given by (M. Bouafia, D. Benterki and A. Yassine, J. Optim. Theory Appl. 170 (2016) 528–545). Using an elegant and simple analysis, and under some easy-to-check conditions, we establish the best known complexity result for large-update primal-dual interior-point methods for linear optimization. This complexity estimate improves the results obtained in (X. Li and M. Zhang, Oper. Res. Lett. 43 (2015) 471–475; M.R. Peyghami and S.F. Hafshejani, Numer. Algorithms 67 (2014) 33–48; M. Bouafia, D. Benterki and A. Yassine, J. Optim. Theory Appl. 170 (2016) 528–545). Our comparative numerical experiments on some test problems consolidate and confirm our theoretical results, according to which the new kernel function has promising applications compared with the kernel function given by (M. Bouafia and A. Yassine, Optim. Eng. 21 (2020) 651–672). Moreover, the comparative numerical study that we have established favors our new kernel function over the other best trigonometric kernel functions (M. Bouafia, D. Benterki and A. Yassine, J. Optim. Theory Appl. 170 (2016) 528–545; M. Bouafia and A. Yassine, Optim. Eng. 21 (2020) 651–672).


Introduction
Polynomial-time interior-point methods (IPMs) for solving linear programming were first proposed by Karmarkar [7]. This method, and the variants developed subsequently, are now called interior-point methods (IPMs). For a survey, we refer to recent books on the subject, such as Bai et al. [1], Peng et al. [10], Roos et al. [13] and Ye [15]. In order to describe the idea of this paper, we need to recall some ideas underlying new primal-dual IPMs. The purpose of this work is to present primal-dual IPMs based on a generalized trigonometric barrier function for solving the standard linear optimization problem (P) min{c^T x : Ax = b, x ≥ 0}, where A ∈ R^{m×n} with rank(A) = m, b ∈ R^m and c ∈ R^n, and its dual problem (D) max{b^T y : A^T y + s = c, s ≥ 0}.
Kernel functions play an important role in the design and analysis of IPMs. They are used not only for determining the search directions but also for measuring the distance between the given iterate and the μ-center of the algorithm. Currently, IPMs based on kernel functions are among the most effective methods for solving linear optimization (LO) and other convex optimization problems, and they form a very active research area in mathematical programming.
In 2005, Bai et al. [2] proposed a new kernel function with an exponential barrier term. The same paper introduced the first kernel function with a trigonometric barrier term.
In 2012, El Ghami et al. [5] analyzed the first kernel function with a trigonometric barrier term given by Bai et al. [2]. They obtained an O(n^{3/4} log(n/ε)) iteration bound for large-update methods. Since then, research has focused on developing new kernel functions with trigonometric barrier terms that improve the complexity bound obtained by El Ghami et al. [5].
In 2014, Peyghami et al. [11] proposed a new kernel function with an exponential-trigonometric term for LO. They obtained an O(√n (log n)² log(n/ε)) iteration bound for large-update methods.
In 2015, Li and Zhang [8] presented another trigonometric barrier function, which yields an O(n^{2/3} log(n/ε)) iteration bound for large-update methods. This result improves the complexity bound obtained by El Ghami et al. [5].
In 2016, in Bouafia et al. [4], we proposed the first parameterized kernel function with trigonometric barrier terms for interior-point methods in LO. We generalized and improved the complexity bounds based on kernel functions with trigonometric barrier terms obtained in [5, 8, 11, 12], and we obtained the best known complexity results for large- and small-update methods.
In 2018, Fathi-Hafshejani et al. [6] presented a large-update primal-dual interior-point algorithm for linear optimization problems based on a new kernel function with a trigonometric growth term. They obtained the best known complexity result for large-update methods, namely O(√n log n log(n/ε)). This result improves the complexity bounds obtained in [5, 8, 11, 12].
Recently, in 2020, Bouafia and Yassine [3] investigated a new efficient twice-parameterized kernel function that combines the parametric classical function with the parametric trigonometric-barrier kernel function given by Bouafia et al. [4], in order to develop primal-dual interior-point algorithms for solving linear programming problems. We obtained the best known complexity results for large- and small-update methods.
In this paper, we introduce a new kernel function depending on two real parameters, defined in Section 3. This function is a parameterized version which generalizes the kernel function given by Bouafia et al. [4]; the new kernel function has two trigonometric terms and two parameters. Using some mild and standard conditions, the worst-case iteration complexity bound of large-update primal-dual IPMs based on the new kernel function is derived. As usual, the so-called exponential convexity property plays an important role in this regard. Our analysis shows that the worst-case iteration complexity of large-update IPMs for solving LO problems based on the new kernel function meets the best known iteration complexity so far, i.e., O(√n log n log(n/ε)). The paper is organized as follows. In Section 2, we recall some basic concepts of interior-point methods and the central path for LO. Some interesting and useful properties of the new kernel function are provided in Section 3. Section 4 is devoted to describing the proximity reduction during an inner iteration. The step size is discussed in Section 5. In Section 6, we derive the inner iteration bound and the total iteration bound of the algorithm. In Section 7, we present a comparison of the algorithm of [3] with our new results in this paper. Finally, we finish the paper with some remarks and a general conclusion showing the added value of our work.
We use the following notation throughout the paper. R^n_+ and R^n_{++} denote the sets of n-dimensional nonnegative and positive vectors, respectively. For x, s ∈ R^n, x_min and xs denote the smallest component of the vector x and the componentwise product of the vectors x and s, respectively. We denote by X = diag(x) the n × n diagonal matrix whose diagonal entries are the components of the vector x ∈ R^n; finally, e denotes the n-dimensional vector of ones. Throughout the paper, ‖·‖ denotes the 2-norm of a vector.
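The notation above can be made concrete with a small numerical illustration (the vectors below are toy data of our own choosing, not from the paper):

```python
import numpy as np

# Illustration of the notation: xs is the componentwise product, x_min the
# smallest component, X = diag(x), e the all-ones vector, ||.|| the 2-norm.
x = np.array([3.0, 1.0, 2.0])
s = np.array([0.5, 4.0, 1.0])
e = np.ones(3)
X = np.diag(x)

assert (x * s).tolist() == [1.5, 4.0, 2.0]  # componentwise product xs
assert x.min() == 1.0                       # x_min
assert np.allclose(X @ s, x * s)            # diag(x) s equals xs
assert np.isclose(np.linalg.norm(e), np.sqrt(3.0))
print("notation checks passed")
```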

Preliminaries
In this section, we briefly describe the idea behind interior-point methods based on kernel functions. We also provide the structure of generic primal-dual IPMs when kernel functions are used to induce the proximity measure. Without loss of generality, we assume that (P) and (D) satisfy the interior-point condition (IPC), i.e., there exists (x⁰, y⁰, s⁰) such that Ax⁰ = b, x⁰ > 0, A^T y⁰ + s⁰ = c, s⁰ > 0. Therefore, an optimal solution of (P) and (D) can be found by solving the following system:

Ax = b, x ≥ 0,
A^T y + s = c, s ≥ 0,
xs = 0. (2.2)

The key idea behind primal-dual IPMs for solving LO problems is to replace the third equation in (2.2) by the parameterized nonlinear equation xs = μe, where μ > 0. Therefore, the system (2.2) can be rewritten as

Ax = b, x ≥ 0,
A^T y + s = c, s ≥ 0,
xs = μe. (2.3)

Note that this system has a unique solution (x(μ), y(μ), s(μ)), where x(μ) is called the μ-center of (P) and (y(μ), s(μ)) the μ-center of (D). The set of μ-centers (with μ running through all positive real numbers) forms a homotopy path, which is called the central path of (P) and (D). Applying Newton's method to the system (2.3), we obtain the following Newton system:

AΔx = 0,
A^T Δy + Δs = 0,
sΔx + xΔs = μe − xs.
(2.4) Note that this system has a unique solution. Now we can derive the new point as

x_+ = x + αΔx,  y_+ = y + αΔy,  s_+ = s + αΔs, (2.5)

where the step size α satisfies 0 < α ≤ 1. Next, we introduce the scaled vector v defined by v = √(xs/μ). System (2.4) can then be rewritten in scaled form (2.7), where the logarithmic barrier function Φ(v) : R^n_{++} → R_+ is defined as

Φ(v) = Σ_{i=1}^n ψ(v_i). (2.9)

Now, we introduce the scaled search directions d_x and d_s as follows:

d_x = vΔx/x,  d_s = vΔs/s. (2.10)

System (2.7) can be rewritten as follows:

Ā d_x = 0,
Ā^T Δy + d_s = 0,
d_x + d_s = −∇Φ(v),

where Ā = (1/μ)AV^{−1}X, V = diag(v), X = diag(x). We use Φ(v) as the proximity function to measure the distance between the current iterate and the μ-center for given μ > 0. We also define the norm-based proximity measure δ(v) : R^n_{++} → R_+ as

δ(v) = (1/2)‖∇Φ(v)‖ = (1/2)‖d_x + d_s‖.

The relevance of the central path for LO was first recognized by Megiddo [9] and Sonnevend [14]. If μ → 0, then the limit of the central path exists, and since the limit points satisfy the complementarity condition, the limit yields optimal solutions for (P) and (D). From a theoretical point of view, the IPC can be assumed without loss of generality. In fact we may, and will, assume that x⁰ = s⁰ = e. In practice, this can be realized by embedding the given problems (P) and (D) into a homogeneous self-dual problem which has two additional variables and two additional constraints. For this and the other properties mentioned above, see Roos et al.
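The Newton system (2.4) can be assembled and solved directly. The following minimal sketch (not the paper's implementation; the strictly feasible instance below is hypothetical toy data, with b := Ax and c := A^T y + s feasible by construction) performs one Newton step and verifies the defining equations:

```python
import numpy as np

def newton_step(A, x, y, s, mu):
    """Solve the Newton system (2.4):
       A dx = 0,  A^T dy + ds = 0,  s dx + x ds = mu e - x s."""
    m, n = A.shape
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    rhs = np.concatenate([np.zeros(m), np.zeros(n), mu * np.ones(n) - x * s])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n + m], d[n + m:]

# Hypothetical strictly feasible toy instance (our own choice of data):
A = np.array([[1.0, 1.0, 1.0]])
x = np.array([0.5, 1.0, 1.5])
y = np.array([0.0])
s = np.array([1.0, 2.0, 3.0])
mu = float(np.mean(x * s))
dx, dy, ds = newton_step(A, x, y, s, mu)
print(np.allclose(A @ dx, 0), np.allclose(s * dx + x * ds, mu - x * s))
```

Since A has full row rank and x, s > 0, the Newton matrix is nonsingular and the system has the unique solution noted in the text.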
[13]. IPMs follow the central path approximately. We briefly describe the usual approach. Without loss of generality, we assume that (x(μ), y(μ), s(μ)) is known for some positive μ. For example, due to the above assumption, we may assume this for μ = 1, with x(1) = s(1) = e. We then decrease μ to μ_+ = (1 − θ)μ for some fixed θ ∈ ]0, 1[. We call ψ(t) the kernel function of the logarithmic barrier function Φ(v). Throughout the paper, we replace ψ(t) by the new kernel function and Φ(v) by the corresponding new barrier function. If the barrier value satisfies Φ(v) ≤ τ, then we start a new outer iteration by performing a μ-update; otherwise, we enter an inner iteration by computing the search directions at the current iterates with respect to the current value of μ and apply (2.5) to get new iterates. If necessary, we repeat the procedure until we find iterates that are in the neighborhood of (x(μ), s(μ)). Then μ is again reduced by the factor 1 − θ with 0 < θ < 1, and we apply Newton's method targeting the new μ-centers, and so on. This process is repeated until μ is small enough, say until nμ < ε; at this stage we have found an ε-approximate solution of LO. The parameters τ, θ and the step size α should be chosen in such a way that the algorithm is optimized in the sense that the number of iterations required by the algorithm is as small as possible. The generic primal-dual algorithm for the LO problem is as follows (see Fig. 1).
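The outer/inner loop structure just described can be sketched as follows. This is a structural sketch only: the recentering routine is stubbed out (one inner iteration per μ-update, purely to exercise the control flow), and the parameter values are illustrative choices of our own, not those of the paper.

```python
import math

def generic_ipm(n, mu0, theta, eps, recenter):
    """Skeleton of the generic primal-dual IPM: the outer loop shrinks mu by
    the factor (1 - theta); after each update the inner loop (delegated to
    `recenter`) restores the proximity condition Phi(v) <= tau."""
    mu, outer, inner = mu0, 0, 0
    while n * mu >= eps:
        mu *= (1 - theta)       # mu-update (v is divided by sqrt(1 - theta))
        outer += 1
        inner += recenter(mu)   # inner iterations until Phi(v) <= tau again
    return mu, outer, inner

# With the stub recentering routine, the outer-iteration count stays within
# the (1/theta) log(n mu0 / eps) bound used in Section 6.
mu, outer, inner = generic_ipm(n=100, mu0=1.0, theta=0.5, eps=1e-6,
                               recenter=lambda mu: 1)
bound = math.log(100 * 1.0 / 1e-6) / 0.5
print(outer, math.ceil(bound))
```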

Properties of the kernel function
In this section, we investigate some properties of the new kernel function with trigonometric barrier terms which are essential to our complexity analysis. We call ψ : R_{++} → R_+ a kernel function if ψ is twice differentiable and satisfies the following conditions: ψ'(1) = ψ(1) = 0; ψ''(t) > 0 for all t > 0; and lim_{t→0+} ψ(t) = lim_{t→+∞} ψ(t) = +∞. Now, for two real parameters, we define the new kernel function as in (3.1)–(3.2); for convenience of reference, we also give its first three derivatives with respect to t. Lemma 3.1. For the function defined in (3.2) and t ≥ 2, the stated bounds hold. For (3.4b), using (3.4a), we get that the tangent term is positive for t > 0.
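As a concrete point of comparison, the classical logarithmic kernel ψ(t) = (t² − 1)/2 − log t satisfies all three conditions; the trigonometric kernels studied here modify its barrier term. A quick numerical check (our own illustration, not part of the paper's analysis):

```python
import math

# psi(t) = (t^2 - 1)/2 - log t : the classical logarithmic-barrier kernel.
def psi(t):   return (t * t - 1) / 2 - math.log(t)
def dpsi(t):  return t - 1 / t        # psi'(t)
def d2psi(t): return 1 + 1 / t ** 2   # psi''(t) > 0: strict convexity

assert psi(1) == 0 and dpsi(1) == 0               # psi(1) = psi'(1) = 0
assert all(d2psi(t) > 0 for t in (0.1, 1.0, 10))  # psi''(t) > 0
assert psi(1e-300) > 100 and psi(1e9) > 1e17      # psi -> +inf at 0+ and +inf
print("kernel conditions hold")
```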

The proximity reduction during an inner iteration
Note that at the start of each outer iteration of the algorithm, just before the update of μ with the factor 1 − θ, we have Φ(v) ≤ τ. Due to the update of μ, the vector v is divided by the factor √(1 − θ), with 0 < θ < 1, which in general leads to an increase in the value of Φ(v). Then, during the subsequent inner iterations, Φ(v) decreases until it passes the threshold τ again. Hence, during the course of the algorithm, the value of Φ(v) increases only after a μ-update. Therefore, the largest values of Φ(v) occur just after the updates of μ, which is why in this section we derive an estimate for the effect of a μ-update on the value of Φ(v). We start with some important lemmas. Lemma 4.1. For the new kernel function, the bounds (3.1f)–(3.1h) hold. Proof. For (3.1f), we use (3.1a) and (3.1b). For (3.1g), since the kernel and its first derivative vanish at 1, its third derivative is negative, and its second derivative at 1 is a constant depending on the two parameters, Taylor's theorem gives the bound for some ξ with 1 ≤ ξ ≤ t. This completes the proof. Lemma 4.2. For the new kernel function, the bounds (3.1i)–(3.1j) hold. Proof. The first bound follows from (3.1f) and (3.1g). For (3.1j), let ϱ denote the inverse function appearing there, so that ψ(ϱ(s)) = s with ϱ(s) ∈ ]0, 1].
Then, comparing the derivatives of the two kernel functions, we obtain the stated inequality. This completes the proof.
Proof. Using Lemma 3.1 (3.1e) and Theorem 3.2 in [2], we get the result. This completes the proof.
Here α̃ denotes the default step size, defined as in [13]. We can get the following lemma.
Lemma 5.7. Suppose that h(t) is a twice differentiable convex function with h(0) = 0 and h'(0) < 0, that h(t) attains its global minimum at t* > 0, and that h''(t) is increasing with respect to t. Then, for any t ∈ [0, t*], we have h(t) ≤ t h'(0)/2. Let the univariate function h be such that it bounds the decrease of the barrier function along the step. Lemma 5.8. Let α̃ be the default step size as defined in (3.1o). Then the decrease in the proximity satisfies the stated bound. Proof. Using Lemma 4.5 in [2], if the step size α satisfies α ≤ α̃, then f(α) ≤ −αδ². So, for the default step size α̃, we obtain the stated decrease. This completes the proof.
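A quick numerical sanity check of Lemma 5.7 on a function of our own choosing: h(t) = e^t − 1 − 2t satisfies the hypotheses, with h(0) = 0, h'(0) = −1 < 0, h''(t) = e^t increasing, and global minimizer t* = log 2 (where h'(t) = e^t − 2 vanishes).

```python
import math

# Verify the lemma's conclusion h(t) <= t h'(0)/2 on [0, t*] for
# h(t) = exp(t) - 1 - 2t, a function satisfying all the hypotheses.
h = lambda t: math.exp(t) - 1 - 2 * t
t_star = math.log(2)

for k in range(101):
    t = t_star * k / 100
    assert h(t) <= t * (-1) / 2 + 1e-12   # h'(0) = -1
print("h(t) <= t h'(0)/2 verified on [0, t*]")
```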

Iteration bound
In this section, the worst-case iteration complexity bounds for primal-dual IPMs based on the proposed kernel function are computed.

Inner iteration bound
After the update of μ to (1 − θ)μ, we need to count how many inner iterations are required to return to the situation where Φ(v) ≤ τ. We denote the value of Φ(v) just after the μ-update by (Φ)_0; the subsequent values in the same outer iteration are denoted by (Φ)_k, k = 1, 2, …, K, where K denotes the total number of inner iterations in the outer iteration. The decrease in each inner iteration is given by (3.1p). Following [2], we can find appropriate values of the constants β > 0 and γ ∈ ]0, 1].
Lemma 6.1. Let K be the total number of inner iterations in an outer iteration. Then we have K ≤ (Φ)_0^γ/(βγ). Proof. This follows from Lemma 1.3.2 in [10]. This completes the proof.
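The recurrence behind Lemma 6.1 can be simulated directly. The sketch below (with illustrative parameter values of our own choosing) iterates the extremal case t_{k+1} = t_k − β t_k^{1−γ} and checks the resulting count against the bound t_0^γ/(βγ) from Lemma 1.3.2 in [10]:

```python
import math

def inner_iterations(t0, beta, gamma):
    """Iterate the extremal recurrence t_{k+1} = t_k - beta * t_k^(1 - gamma)
    and count the steps until t_k drops to 0."""
    k, t = 0, t0
    while t > 0:
        t -= beta * t ** (1 - gamma)
        k += 1
    return k

# Illustrative values (beta > 0, gamma in (0, 1]):
t0, beta, gamma = 100.0, 0.05, 0.5
K = inner_iterations(t0, beta, gamma)
bound = t0 ** gamma / (beta * gamma)   # Lemma 1.3.2 bound: t_0^gamma / (beta gamma)
print(K, math.ceil(bound))
```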

Total iteration bound
The number of outer iterations is bounded above by (1/θ) log(n/ε) (see [13], Lemma 3.2.17, page 116). Multiplying the number of outer iterations by the number of inner iterations, we get an upper bound for the total number of iterations. For large-update methods with τ = O(n) and θ = Θ(1), we obtain an O(√n log n log(n/ε)) iteration bound. In the case of small-update methods, we have τ = O(1) and θ = Θ(1/√n). Substitution of these values into the product above does not give the best possible bound; a better bound is obtained as follows.
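The bookkeeping above can be summarized as follows (a sketch with constants suppressed; the small-update entry is the standard best known target bound rather than a derivation):

```latex
\text{total iterations} \;\le\;
\underbrace{\frac{1}{\theta}\log\frac{n}{\varepsilon}}_{\text{outer iterations}}
\times
\underbrace{K}_{\text{inner iterations per outer iteration}}
\qquad\text{with}\qquad
\begin{cases}
\tau = O(n),\; \theta = \Theta(1) &\Rightarrow\; O\!\left(\sqrt{n}\,\log n\,\log\tfrac{n}{\varepsilon}\right)\ \text{(large-update)},\\
\tau = O(1),\; \theta = \Theta(1/\sqrt{n}) &\Rightarrow\; O\!\left(\sqrt{n}\,\log\tfrac{n}{\varepsilon}\right)\ \text{(small-update)}.
\end{cases}
```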
By (3.1g), with these parameter values, we obtain the better bound.

Comparison of algorithms
In this section, we present a comparison of the algorithm of [3] with the new results given in this paper. To prove the effectiveness of our new kernel function and evaluate its effect on the behavior of the algorithm, we offer a comparative study between the results obtained by the considered algorithms, based essentially on the two following kernel functions.
1) The first kernel function, given by M. Bouafia and A. Yassine in [3], together with its default step size. 2) Our new kernel function, defined in (3.1), together with its default step size.

Numerical Tests
Consider the following problem. To prove the effectiveness of our new kernel function and evaluate its effect on the behavior of the algorithm, we conducted comparative numerical tests between the two previous kernel functions.
In the tables of results, ex(m, n) indicates that m is the number of constraints and n is the number of variables, and Med = (total iterations)/(outer iterations) represents the median number of inner iterations per outer iteration for the corresponding kernel function. We summarize this numerical study in Tables 1, 2, 3, 4, 5 and 6.

Comments.
The numerical experiments show the effectiveness of our new kernel function on all the instances used. We note that when the dimension of the problem becomes large, the difference between our new kernel function and the kernel function of [3] becomes large in terms of number of iterations. Moreover, in the case of the kernel function of [3], the algorithm requires a huge number of iterations to obtain the optimal solution for dimensions (75, 150) and (100, 200) in all of Tables 1-5; in these cases, the total numbers of iterations necessary to obtain the optimal solution are not reported in the tables. In Table 6, for the kernel function of [3], the algorithm requires fewer iterations to obtain the optimal solution than our new kernel function. This confirms the theoretical result obtained in [3], which states that if both parameters are taken equal to log n, we obtain the best known complexity bound for large-update methods, namely O(√n log n log(n/ε)). Comparing Tables 1 and 2 for our new kernel function, the algorithm requires fewer iterations to obtain the optimal solution in Table 2; this result shows the advantage provided by the second parameter of our new function, because when this parameter equals 2, the function coincides with the function given by Bouafia et al. [4]. In Table 5, for our new kernel function, the algorithm requires fewer iterations to obtain the optimal solution. This confirms the theoretical results obtained, which state that for the parameter choice made there, we obtain the best known complexity bound for large-update methods, namely O(√n log n log(n/ε)). These numerical results consolidate and confirm our theoretical results.

Concluding remarks
In this paper, we used some simple analysis tools and built on the existing theoretical results of [4]. We introduced a generalized efficient kernel function with a bi-trigonometric barrier term, and we analyzed large- and small-update primal-dual interior-point algorithms based on this generalized efficient kernel function with trigonometric barrier terms. In particular, for a suitable choice of the two parameters, we obtain the best known complexity bound for large-update methods, namely O(√n log n log(n/ε)) iterations. These results are an important contribution to improving the computational complexity of the problem under study.

Table 6.
Comparison of examples with both parameters taken equal to log n.