LINEAR PROGRAMMING WITH A FEASIBLE DIRECTION INTERIOR POINT TECHNIQUE FOR SMOOTH OPTIMIZATION

Abstract. We propose an adaptation of the Feasible Direction Interior Point Algorithm (FDIPA) of J. Herskovits for solving large-scale linear programs. At each step, two linear systems with the same coefficient matrix must be solved, and this step accounts for a significant part of the computational effort; reducing the solution time of these linear systems is therefore a way to improve the performance of the method. The linear systems to be solved are associated with symmetric positive definite matrices. Therefore, we solve them with the Split Preconditioned Conjugate Gradient (SPCG) method, together with an incomplete Cholesky preconditioner computed by MATLAB's ICHOL function. We also propose to use only the first iteration of the conjugate gradient method, and to presolve the problem before applying the algorithm, in order to reduce the computational cost. We then provide mathematical proofs showing that the iterates approach Karush-Kuhn-Tucker points of the problem under reasonable assumptions. Finally, numerical evidence shows that the method not only works in theory but is also competitive with more advanced methods.

The first is the ellipsoid method; although it has polynomial complexity, it proved not to be competitive with the simplex method because of the possibility of numerical instability in the calculations and the expensive computational steps it requires.
The second is the interior point method, which arises with Karmarkar's algorithm [5]: starting from a strictly interior point, it applies repeated projective transformations on an inscribed sphere to create a sequence of points that converges to the optimal solution with polynomial complexity. From Karmarkar's algorithm, different types of interior point algorithms have been derived, depending on the transformations and search directions used: for example, the affine scaling method studied by Dikin [6], Barnes [7], and Cavalier and Soyster [8], the path-following methods of Gonzaga [9,10], and Ye's potential reduction method [11]. For a better understanding of interior point theory, we suggest the books [12,13].
In this article, the FDIPA algorithm proposed by Herskovits [14] for nonlinear programming problems is adapted to solve large-scale linear programming problems (FDIPA-LP) efficiently. Starting from a strictly feasible initial point, FDIPA performs a Newton-like iteration to solve the Karush-Kuhn-Tucker (KKT) optimality conditions. During this procedure a descent direction for the objective function is obtained, and a perturbation is introduced in the complementarity condition so that the descent direction is also feasible. The algorithm maintains primal feasibility, while the update of the dual variables ensures dual feasibility. This is relevant especially in engineering, economics and finance, where physically inadmissible solutions cannot be accepted.
FDIPA has already been adapted to various types of problems, see e.g. [15-20], with applications especially in mechanical engineering, aerodynamics and electromagnetism, proving to be easy to implement, robust and efficient.
A first adaptation of FDIPA to LP appears in Tits and Zhou [21], where it resembles the affine scaling method: convergence is demonstrated using as search direction the one obtained by solving the first FDIPA system (that is, FDIPA without perturbation), together with a step that guarantees strict feasibility.
We will see that both FDIPA-LP and the unperturbed FDIPA work well in theory. Computationally, FDIPA-LP has to solve two linear systems per iteration, which implies a significant computational effort. To reduce the size of these systems, before starting the algorithm we apply a presolver [22] and scaling techniques. We also use the Split Preconditioned Conjugate Gradient (SPCG) method to solve the systems, along with an incomplete Cholesky preconditioner computed by MATLAB's ICHOL function. We observed that the iterates generated by SPCG are candidates for the search direction; therefore, we use only the first SPCG iteration.
In the next section, we present the adaptation of the FDIPA algorithm to the LP case with inequality constraints. Section 3 studies the convergence, and the implementation details are presented in Section 4. In Section 5, we present the results of the FDIPA-LP algorithm applied to problems from NETLIB, and we compare the performance of FDIPA with and without perturbation, following [21]. Finally, in the last section, we present some conclusions.

The feasible direction interior point algorithm for linear programming
We consider the linear programming problem in inequality form:

    minimize $c^T x$  subject to  $Ax \le b$,        (1)

where $A \in \mathbb{R}^{m \times n}$ is the constraint matrix, $b \in \mathbb{R}^m$ is the right-hand-side vector, $c \in \mathbb{R}^n$ is the cost vector and $x \in \mathbb{R}^n$ is the vector of variables, with $m \ge n \ge 1$ and $\mathrm{rank}(A) = n$. Let $I = \{1, \dots, m\}$; for $i \in I$, let $a_i$ denote the $i$-th row of $A$ and $b_i$ the $i$-th component of $b$. The constraint function is given by $g(x) = Ax - b$.

Define $f(x) = c^T x$, where $f$ is the objective function to be minimized. The set of feasible points of (1), denoted by $\Omega$, is defined as

    $\Omega = \{x \in \mathbb{R}^n \mid g(x) \le 0\}$.        (2)

By the interior set of $\Omega$ we mean

    $\mathrm{int}(\Omega) = \{x \in \mathbb{R}^n \mid g(x) < 0\}$.        (3)

The Lagrangian of problem (1) is

    $L(x, \lambda) = f(x) + \lambda^T g(x)$,        (4)

with its gradient given by $\nabla_x L(x, \lambda) = c + A^T \lambda$. A point $x \in \Omega$ is a stationary point of problem (1) if there exists $\lambda \in \mathbb{R}^m$ such that:

    $c + A^T \lambda = 0$,        (5)
    $G(x)\lambda = 0$,            (6)

where $\Lambda$ denotes $\mathrm{diag}(\lambda)$ and $G(x)$ denotes $\mathrm{diag}(g(x))$; we will refer to such $\lambda$ as a multiplier vector associated with $x$. If $\lambda \ge 0$, then $x$ is a Karush-Kuhn-Tucker (KKT) point of problem (1).
Next, we adapt the basic ideas of the feasible direction interior point algorithm (FDIPA) developed by Herskovits [14]. Let $\Omega_p = \{x \in \Omega \mid f(x) \le p\}$, where $p$ is a real number, with the interior set $\mathrm{int}(\Omega_p) = \{x \in \mathrm{int}(\Omega) \mid f(x) < p\}$. We now make the following assumptions: Assumption 1. There exists a real number $p$ such that $\Omega_p$ is compact and has a nonempty interior $\mathrm{int}(\Omega_p)$.
The FDIPA algorithm looks for feasible primal and dual solutions by applying a Newton-like iteration to the KKT equations, in such a way that it produces a sequence of strictly feasible points along which the objective function decreases. The Newton-like iteration that solves (5) and (6) is given by the system

    $\begin{bmatrix} B & A^T \\ \Lambda A & G(x) \end{bmatrix}\begin{bmatrix} \bar{x} - x \\ \bar{\lambda} - \lambda \end{bmatrix} = -\begin{bmatrix} c + A^T \lambda \\ G(x)\lambda \end{bmatrix}$,        (7)

where $B$ is a symmetric positive definite matrix that replaces the (identically zero) Hessian of the Lagrangian, $(x, \lambda) \in \mathrm{int}(\Omega) \times \mathbb{R}^m_{++}$ is the current point, and $(\bar{x}, \bar{\lambda})$ is the new estimated point. We denote the matrix of the system (7) by $M(x, \lambda)$.
We define a direction $d_\alpha = \bar{x} - x$. So, we have:

    $B d_\alpha + A^T \lambda_\alpha = -c$,
    $\Lambda A d_\alpha + G(x)\lambda_\alpha = 0$.        (8)

We perturb the right-hand side of the complementarity block of system (8), generating a new feasible direction $d$ that points into $\mathrm{int}(\Omega)$:

    $B d + A^T \bar{\lambda} = -c$,
    $\Lambda A d + G(x)\bar{\lambda} = -\rho\lambda$,        (9)

where $\rho > 0$ is an appropriate positive real number chosen to ensure that $d$ is a descent direction, and $\bar{\lambda}$ is the new estimate of $\lambda$.
On the other hand, the pair $(d, \bar{\lambda})$ obtained from system (9) can be computed from (8) together with the following system:

    $B d_\beta + A^T \lambda_\beta = 0$,
    $\Lambda A d_\beta + G(x)\lambda_\beta = -\lambda$,        (10)

where $d = d_\alpha + \rho d_\beta$ and $\bar{\lambda} = \lambda_\alpha + \rho\lambda_\beta$. Once we have computed a feasible descent direction $d$, we can determine the next iterate. A full step in this direction is generally not allowed, as it would violate (2). To avoid this difficulty, we perform a line search along the direction $d$ to preserve feasibility, $g(x + td) < 0$, and to obtain an appropriate objective function reduction, $f(x + td) < f(x)$, so that the new iterate is $x + td$ for some line search parameter $t \in (0, 1]$, guaranteeing that the sequence of points stays in $\mathrm{int}(\Omega)$. Evaluating $f$ at the point $x + td$, since $f(x + td) = f(x) + t\,c^T d$, if $c^T d < 0$ we obtain $f(x + td) < f(x)$ for all $t > 0$.
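To fix ideas, the following is a minimal MATLAB sketch of the direction computation under the illustrative assumption $B = I$ (any symmetric positive definite $B$ fits the framework; the variable names are ours). Eliminating the multiplier block reduces both systems (8) and (10) to a single symmetric positive definite coefficient matrix, which is what the conjugate gradient solver of Section 4 exploits:

    % A (m x n), b, c: LP data; x strictly feasible (g = A*x - b < 0), lambda > 0.
    [m, n] = size(A);
    g = A*x - b;
    D = spdiags(-lambda./g, 0, m, m);      % positive diagonal, since g < 0
    W = speye(n) + A'*D*A;                 % SPD reduction of M(x,lambda) with B = I
    d_a  = W \ (-c);                       % system (8): unperturbed direction
    la_a = -(lambda./g) .* (A*d_a);        %   recovered multiplier lambda_alpha
    d_b  = W \ (A'*(lambda./g));           % system (10): deflection direction
    la_b = -(lambda + lambda.*(A*d_b))./g; %   recovered multiplier lambda_beta

Both solves share the matrix W, so a factorization or preconditioner built once per iteration serves the two systems.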
In the following section, we show that $d$ is a feasible descent direction ($c^T d < 0$). Evaluating $g_i$ at the point $x + td$, for all $i \in I$,

    $g_i(x + td) = g_i(x) + t\,a_i d$.

We look at the possible values of $a_i d$: if $a_i d \le 0$, the constraint remains satisfied for every $t > 0$; if $a_i d > 0$, feasibility requires $t < -g_i(x)/(a_i d)$. The value of $\bar{t}$ has to be the minimum

    $\bar{t} = \min\{-g_i(x)/(a_i d) \mid a_i d > 0,\ i \in I\}$.
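In MATLAB, with the quantities of the previous sketch, the maximum feasible step can be computed as follows (the final damping keeps the iterate strictly interior; the factor 0.9995 anticipates the value used in the experiments of Section 5 and is illustrative here):

    Ad = A*d;
    blocking = Ad > 0;                       % only rows with a_i*d > 0 limit t
    if any(blocking)
        tbar = min(-g(blocking) ./ Ad(blocking));
    else
        tbar = Inf;                          % d never leaves Omega
    end
    t = min(1, 0.9995*tbar);                 % damped step: x + t*d stays interior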

The statement of the algorithm
The proposed FDIPA-LP is presented as follows.
Step 1: Determination of the feasible descent direction.
(i) Solve the linear system (8) for $(d_\alpha, \lambda_\alpha)$.
(ii) Solve the linear system (10) for $(d_\beta, \lambda_\beta)$.
(iii) Compute the positive scalar $\rho$: if $c^T d_\beta > 0$, then it is defined as $\rho = \min\{\varphi\|d_\alpha\|^2,\ (\xi - 1)\,c^T d_\alpha / c^T d_\beta\}$; otherwise, $\rho = \varphi\|d_\alpha\|^2$.
(iv) Compute the search direction $d = d_\alpha + \rho\,d_\beta$, as sketched below.
Step 2: Line search. Compute the maximum feasible step $\bar{t}$ and choose $t \in (0, 1]$ so that $x + td$ is strictly feasible and $f$ decreases.
Step 3: Update. Set $x := x + td$ and update the multiplier vector $\lambda$.
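Step 1(iii)-(iv), continuing the previous sketches, looks as follows in MATLAB; $\varphi > 0$ and $\xi \in (0,1)$ are algorithm parameters and the values below are merely illustrative:

    phi = 1;  xi = 0.8;                       % illustrative parameter values
    if c'*d_b > 0
        rho = min(phi*norm(d_a)^2, (xi - 1)*(c'*d_a)/(c'*d_b));
    else
        rho = phi*norm(d_a)^2;
    end
    d = d_a + rho*d_b;                        % then c'*d <= xi*(c'*d_a) < 0

The cap on $\rho$ is what guarantees the descent bound $c^T d \le \xi\,c^T d_\alpha < 0$ used in the convergence analysis.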
The algorithm performs all iterations within the feasible region. In step 1(ii) we obtain the deflection $d_\beta$ that, combined with the scalar $\rho$, guarantees a descent direction $d$. In step 2, $\bar{t}$ is the maximum step that guarantees feasibility, and with expression (28) we obtain strict feasibility. In step 3(ii), we use the same rule as in FDIPA for nonlinear programming to update the Lagrange multipliers $\lambda$.

Study of convergence
In this section we prove, under certain nondegeneracy assumptions, that for any starting point $x^0 \in \mathrm{int}(\Omega)$ the sequence $(x^k, \lambda^k)$ generated by the algorithm converges to $(x^*, \lambda^*)$ under Assumptions 1 and 2, where $x^*$ is a KKT point.
First, let

    $M(x, \lambda) = \begin{bmatrix} B & A^T \\ \Lambda A & G(x) \end{bmatrix}$,

where $x \in \Omega$ and $\lambda_i > 0$ for all $i \in I$. The matrix $M(x, \lambda)$ is nonsingular whenever Assumption 2 holds; for a proof see Lemma 3.1 of Tits and Zhou [21]. Consequently, $(d_\alpha, \lambda_\alpha)$, $(d_\beta, \lambda_\beta)$ and $d$ obtained in step 1 of the algorithm are well defined.
Note that, if the algorithm stops at iteration $k \in \mathbb{N}$ with $d_\alpha^k = 0$, then $(x^k, \lambda_\alpha^k)$ solves equation (19). Now, looking at equation (18), we have $\rho^k = \varphi\|d_\alpha^k\|^2 = 0$ and consequently $d^k = 0$. Therefore, when $d_\alpha^k \ne 0$, the algorithm generates an infinite sequence of iterates. From now on, we study the case $d_\alpha^k \ne 0$ at every iteration. The iterations of the FDIPA-LP algorithm generate $x^k \in \mathrm{int}(\Omega)$ and $\lambda^k > 0$. The following lemma shows that $d$ is a descent direction.
On the other hand, with the choice of $\rho$ in step 1 of the algorithm, we ensure that $c^T d \le \xi\,c^T d_\alpha < 0$. Now we are going to show the feasibility of $d$. Proof. The step length $t$ is defined in equation (28) of the algorithm. Since the constraints are linear, to satisfy the line search condition the following inequalities must hold for all $i \in I$:

    $g_i(x + td) = g_i(x) + t\,a_i d \le 0$.

If $a_i d \le 0$, the above inequality is satisfied for any $t > 0$. Otherwise, it follows from (9) that

    $a_i d = -\rho - g_i(x)\bar{\lambda}_i / \lambda_i$.

Obviously $\lambda_i > 0$ and $g_i(x) < 0$. Thus, the inequality is satisfied if $t \le -g_i(x)/(a_i d)$. Now, $\rho$ is bounded by (30) and, since $d$, $\lambda$ and $\bar{\lambda}$ are bounded from above, $a_i d$ is also bounded from above. Thus, there exists $\tau > 0$ such that $-g_i(x)/(a_i d) > \tau$. Therefore, for all $i \in I$ and for all $t \in [0, \tau]$, we have $g_i(x + td) \le 0$.
Taking limits on both sides as $k \to \infty$, and considering the result of Lemma 3.1 in the limit, it follows from Lemma 3.1 that $d^* = 0$. Taking limits in (18) and (19), we have $c + A^T \lambda^* = 0$ and $G(x^*)\lambda^* = 0$. Thus, $x^*$ is stationary, with multiplier vector $\lambda^*$. From equation (47) and Assumption 2 we obtain the uniqueness of $\lambda^*$. Proof. It is analogous to the proofs of Lemmas A.5 and 3.6 in Tits and Zhou [21]: Lemma A.5 proves that the set of accumulation points is a connected compact set, and Lemma 3.6 proves that $x^*$ possesses a unique $\lambda^*$ associated with it.
Lemma 3.4 appears as a hypothesis in FDIPA for nonlinear programming [14]; it helps in the proof of the convergence of the sequence of points generated by the algorithm to a KKT point.
Next, we state a technical lemma needed to understand the convergence.
The constraint functions associated with negative multiplier estimates are decreasing ($g_i(x + td) < g_i(x)$). This ensures that stationary points that are not KKT points will be avoided. Finally, Theorem 3.6 proves that the presented algorithm converges to the solution of the problem.
Theorem 3.6. Any accumulation point $x^*$ of any sequence $\{x^k\}$ generated by the algorithm is a Karush-Kuhn-Tucker point of the problem.
Proof. As $x^*$ is a stationary point of problem (1), it is only necessary to prove that the Lagrange multipliers $\lambda^*$ are nonnegative. We have that $d^* = 0$ and $\rho^* = 0$ at a stationary point. Then, it follows from (19) that $G(x^*)\lambda^* = 0$, where $\lambda_i^* = 0$ in the case when $g_i(x^*) < 0$. Consider now a constraint with $g_i(x^*) = 0$, yielding the following consequences. As the method is strictly feasible, $g_i(x^k) < 0$ for all $k$. Then, we can define a sequence $\{x^k\}$, $k \in K \subset \mathbb{N}$, converging to $x^*$ such that $g_i(x^k) \to 0$. Note that $k - 1$ may not belong to $K$ and, from Lemma 3.1, we get $\bar{\lambda}_i^{k-1} \ge 0$. Now, we proceed by contradiction and assume that $\lambda_i^* < 0$. It follows from (41) that $\bar{\lambda}_i^* = 0$ and from (12) that the sequence $\{x^{k-1}\}$, $k \in K$, has accumulation points of its own. Let $\bar{x}$ be one of these points and $\{x^{k-1}\}$, $k \in \bar{K} \subset K$, a sequence converging to $\bar{x}$; therefore we have that $\|d^{k-1}\| \to 0$. Then, for $k$ large enough, $\|d^{k-1}\| < \varepsilon$ for any $k > N$, $k \in \bar{K}$ and, in consequence of the line search, $g_i(x^k) < g_i(x^{k-1})$. This result is in contradiction with the fact that $\bar{x} \notin \mathrm{int}(\Omega)$, proving the theorem.

Implementation of the feasible direction interior point algorithm for linear programming
First, we present the split preconditioned conjugate gradient method, which is needed to solve the internal linear systems.

Split preconditioned conjugate gradient method (SPCG)
The SPCG method is an iterative algorithm for the numerical solution of systems of linear equations whose matrices are symmetric positive definite. Given a preconditioner in factored form $M = LL^T$, it applies the conjugate gradient method to the symmetrically preconditioned system $L^{-1}WL^{-T}y = L^{-1}r$, with $y = L^T d$, so that symmetry and positive definiteness are preserved.
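As an illustration, a minimal MATLAB sketch of this solver applied to the reduced SPD matrix W of Section 2, for a generic right-hand side r (for instance $-c$ for system (8)); the drop tolerance is an illustrative choice:

    % Incomplete Cholesky factor: W is approximated by Lc*Lc'.
    Lc = ichol(W, struct('type','ict','droptol',1e-3));
    % pcg with factors M1 = Lc and M2 = Lc' realizes split preconditioning:
    % the iterates coincide with CG applied to the symmetrized system.
    [dsol, flag, relres, iters] = pcg(W, r, 1e-8, 200, Lc, Lc');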

Solving internal linear systems
The internal linear systems of FDIPA-LP can be solved by various numerical methods, depending on the form and characteristics they acquire when manipulated.
We observe, in the development of the FDIPA-LP algorithm, that the iterates generated by SPCG are candidates for a feasible descent direction. Therefore, we stop the SPCG method at an early iteration (SPCG-T), which represents an attractive and economical alternative for large-scale problems.
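Under the same assumptions as the sketch above, truncation simply amounts to capping the iteration count, here at a single iteration:

    % SPCG-T: stop after the first preconditioned CG iteration; the returned
    % iterate is used as the search-direction candidate (flag = 1 only
    % signals that the tolerance was not reached, which is expected here).
    [dT, flag] = pcg(W, r, 1e-8, 1, Lc, Lc');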
In Table 1 we compare the number of SPCG iterations (Iter SPCG) against the number of truncated SPCG iterations (Iter SPCG-T) needed to solve the internal FDIPA-LP systems for test problems from the NETLIB collection [24]. The first column contains the name of the problem.

Presolve
The algorithm begins by trying to simplify the problem, eliminating redundancies and simplifying the constraints. With this we reduce the size of the problem and the execution time; the presolve methods used are described in [22].
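As a small illustration, here is a sketch of two of the simplest reductions of the kind catalogued in [22], not the full presolve analysis:

    % Drop empty rows: a zero row of A encodes 0 <= b_i, redundant when b_i >= 0.
    keep = any(A, 2);
    A = A(keep, :);  b = b(keep);
    % Drop exactly duplicated constraints (identical rows of [A b]);
    % full() is for illustration only, a real presolver hashes sparse rows.
    [~, iu] = unique(full([A, b]), 'rows', 'stable');
    A = A(iu, :);  b = b(iu);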
In Table 2 we present some computational results that clearly advocate the use of an involved presolve analysis. The columns "variables" and "constraints" show the number of columns and rows of $A$, respectively; the following pair of columns shows the same numbers after presolve. The results collected in Table 2 also show that some problems are almost irreducible, for example Afiro, Degen2, Degen3, Israel, Qap8 and Qap12.
We also use scaling techniques, since they improve the computational properties of the LP problem. Scaling is applied before the FDIPA-LP algorithm to reduce the condition number of the constraint matrix, improve the numerical behavior of the algorithm, reduce the number of iterations required to solve the LP, and simplify the setting of tolerances.
Eleven scaling techniques that can be applied before executing an LP algorithm are presented in [22], where a computational study compares the execution times of the scaling techniques and investigates the impact of scaling prior to the application of LP algorithms. We use the 6 types of scaling techniques listed in Table 3.
For more details on the mathematical formulation, illustrative numerical examples and the MATLAB implementation of each type of scaling technique, see [22].
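For concreteness, the following is a sketch of one classical technique of this family, equilibration by the largest absolute entry; the particular choice is illustrative, and [22] covers several others:

    % Row scaling: divide every row of A (and b) by its largest absolute entry.
    r = 1 ./ full(max(abs(A), [], 2));   R = spdiags(r, 0, m, m);
    % Column scaling of the row-scaled matrix.
    s = 1 ./ full(max(abs(R*A), [], 1))';  S = spdiags(s, 0, n, n);
    As = R*A*S;  bs = R*b;  cs = S*c;    % scaled data, nonzeros of As in [-1,1]
    % If xs solves min cs'*xs s.t. As*xs <= bs, then x = S*xs solves the
    % original problem, since R > 0 and cs'*xs = c'*(S*xs).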

Starting point
In the algorithm, we have assumed that we start from a feasible interior point $x^0$. This point is found by solving the following auxiliary problem:

    $\min_{x, z}\; z$  subject to  $Ax - b \le z e$,        (59)

where $e = (1, \dots, 1)^T$; we obtain a strictly feasible point as soon as $z < 0$.
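A strictly feasible starting pair for the auxiliary problem itself is available in closed form; a minimal sketch, where the choice x0 = 0 is arbitrary:

    % Any x paired with a large enough z is strictly interior for (59).
    x0 = zeros(n, 1);
    z0 = max(A*x0 - b) + 1;   % then g_i(x0) - z0 <= -1 < 0 for every i
    % Run FDIPA-LP on (59) and stop once the objective value z goes negative;
    % the corresponding x is then strictly feasible for the original problem.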

Stopping criteria
We must establish criteria to determine when the current point is close enough to $x^*$. We stop when the relative improvement in the objective function is small and $\|d\| \le \varepsilon$ for a prescribed tolerance $\varepsilon$.

Numerical results
In this section, we report experimental results on the NETLIB collection [24], a library that gathers LP problems from different areas.
Every primal linear programming problem has an associated dual problem, and at the optimal point both have the same objective value; duality theory relates the two problems [12,25]. Since the NETLIB problems are stated in standard (equality) form, their duals are inequality constrained, and this dual form fits well the FDIPA-LP proposed in this article.
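Concretely, ignoring variable bounds for simplicity, the primal-dual pair reads (a standard fact of duality theory, cf. [12,25]):

    \[
    \min_{x}\; c^{T}x \ \ \text{s.t.}\ \ Ax = b,\ x \ge 0
    \qquad\text{and}\qquad
    \max_{y}\; b^{T}y \ \ \text{s.t.}\ \ A^{T}y \le c .
    \]

After rewriting the maximization as $\min\,(-b)^{T}y$, the dual has exactly the inequality form (1), which is the form FDIPA-LP solves.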
Our implementation was done in the MATLAB environment. All computational experiments were performed on a microcomputer with an Intel Core i5 processor (2.40 GHz) and 8 GB of RAM, running on the Windows 10 platform.
The first step in solving a test problem is the presolve described in Section 4.3, followed by one of the scaling techniques from Table 3. Next, FDIPA-LP needs as input a point $x^0$ in the interior of the feasible region, but the NETLIB library does not provide feasible points for its problems. Therefore, a strictly feasible solution $x^0 \in \mathrm{int}(\Omega)$ is sought: FDIPA-LP is first applied to the auxiliary problem (59), and the optimization process is interrupted as soon as the objective function reaches a negative value, which yields $x^0$.
Tables 4 and 5 present 52 linear programming problems solved with FDIPA-LP. The seven columns give, respectively, the name of the problem, the primal objective value, the dual objective value obtained with FDIPA, the number of iterations required for convergence, the total number of truncated SPCG iterations needed to solve the two systems, the number of iterations required to compute the starting point, and the type of scaling technique.

Computational results: FDIPA-LP vs FDIPA without perturbation
Herskovits [14] noted that the original FDIPA solves LP problems using only system (8); in other words, it is not necessary to apply the perturbation $\rho$ to the complementarity condition.
Tits and Zhou [21] then developed the unperturbed FDIPA for LP (where the direction is $d_\alpha$), proving global convergence and establishing its rate of convergence. Some computational results are reported in Celis [26].
In theory, both algorithms work; let us compare their efficiency. Table 6 shows the computational results obtained. In the tests, the following parameter values were used: 0.7, 1, $10^{-3}$, 0.9995 and $10^{15}$. The first column of Table 6 contains the name of the problem. The next two columns show the number of iterations that FDIPA-LP performs to obtain the solution, with the corresponding duality gap. Columns 4 and 5 contain the number of iterations of FDIPA solving only the first, unperturbed system, with the corresponding gap.

Conclusion
In this paper we proposed a feasible direction algorithm for linear programming, FDIPA-LP, inspired by the iteration due to Herskovits [14]. The method starts by applying a Newton-like iteration to the KKT conditions, generating a linear system, and then perturbs the complementarity conditions. The linear system solved at each iteration is identical to that of the primal-dual logarithmic barrier method [27,28]. However, FDIPA-LP is not a penalty or barrier method; the perturbation strategy of the new algorithm is drastically different.
Under suitable assumptions, a theoretical study proving convergence to a KKT point was presented. Important aspects of an efficient computational implementation were also considered, such as the use of the conjugate gradient method for the internal systems, stopped at an early stage to obtain the search direction, as well as the use of a presolver and of scaling, all of which reduce the computational cost.
The algorithm was tested on 52 problems. The fact that all tests were solved with the same parameter values indicates that the algorithm is robust and efficient compared with alternative implementations.