REGULARIZATION ALGORITHMS FOR LINEAR COPOSITIVE PROBLEMS

The paper is devoted to the regularization of linear Copositive Programming problems, which consists of transforming a problem into an equivalent form where the Slater condition is satisfied and, therefore, strong duality holds. We describe regularization algorithms based on the concept of immobile indices and on the understanding of the important role that these indices play in the characterization of the feasible sets. These algorithms are compared to regularization procedures developed for the more general case of convex problems and based on a facial reduction approach. We show that the immobile-index-based approach, combined with the specifics of copositive problems, allows us to construct more explicit and detailed regularization algorithms for linear Copositive Programming problems than those already available.


Introduction
Conic optimization is a subfield of convex optimization that studies the problems of minimizing a convex function over the intersection of an affine subspace and a convex cone. For a gentle introduction to conic optimization and a survey of its applications in Operations Research and related areas, we refer interested readers to [15] and the references therein.
Copositive Programming (CoP) problems form a special class of conic problems and can be considered as optimization over the convex cone of so-called copositive matrices (i.e. matrices which are positive semidefinite on the non-negative orthant). Copositive models arise in many important applications, including NP-hard problems. For references on the motivation and applications of CoP see, e.g. [3,7,9].
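As a concrete numerical companion to this definition (a purely illustrative sketch of our own, not taken from the paper), one can probe copositivity on a uniform grid of the standard simplex: a negative grid value disproves copositivity, while a nonnegative minimum is only necessary supporting evidence, not a certificate. The Horn matrix used below is a classical example of a copositive matrix that is neither positive semidefinite nor entrywise nonnegative.

```python
def simplex_grid(p, k):
    """Integer compositions (n1,...,np) with sum k; t = (n1/k,...,np/k) then runs
    over a uniform grid on the standard simplex {t >= 0, sum(t) = 1}."""
    if p == 1:
        yield (k,)
        return
    for i in range(k + 1):
        for rest in simplex_grid(p - 1, k - i):
            yield (i,) + rest

def min_on_simplex_grid(A, k):
    """Minimum of t^T A t over the grid: a negative value shows A is NOT copositive;
    a nonnegative value is only necessary evidence of copositivity."""
    p = len(A)
    best = float("inf")
    for comp in simplex_grid(p, k):
        t = [c / k for c in comp]
        val = sum(A[i][j] * t[i] * t[j] for i in range(p) for j in range(p))
        best = min(best, val)
    return best

# Horn matrix: copositive, but neither positive semidefinite nor entrywise nonnegative
HORN = [[ 1, -1,  1,  1, -1],
        [-1,  1, -1,  1,  1],
        [ 1, -1,  1, -1,  1],
        [ 1,  1, -1,  1, -1],
        [-1,  1,  1, -1,  1]]
```

On this grid the minimum for `HORN` is zero (attained, e.g., at t = (1/2, 1/2, 0, 0, 0)), whereas a matrix such as [[1, -3], [-3, 1]] yields a negative value, ruling copositivity out.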
In linear CoP, the objective function is linear and the constraints are formulated with the help of linear matrix functions. Linear copositive problems are closely related to those of linear Semi-Infinite Programming (SIP) and Semidefinite Programming (SDP). Copositive and semidefinite problems are particular cases of SIP problems, but CoP deals with more challenging and less studied problems than SDP. The literature on the theory and methods of SIP, CoP, and SDP is quite extensive. We refer the interested readers to [1-3, 9, 25, 26] and the references in these works.
In convex and conic optimization, optimality conditions and duality results are usually formulated under certain regularity conditions, so-called constraint qualifications (CQ) (see, e.g. [2,10,22,26]). Such conditions should guarantee the fulfillment of the Karush-Kuhn-Tucker (KKT)-type optimality conditions and the strong duality property, which consists in the fact that the optimal values of the primal problem and the corresponding Lagrangian dual one are equal and the dual problem attains its maximum. Strong duality is a cornerstone of convex optimization, playing a particularly important role in the stability of numerical methods.
Unfortunately, even in convex optimization, many problems cannot be classified as regular (i.e. satisfying some regularity condition such as, for example, strict feasibility). In [8], we read: ". . . new optimization modeling techniques and convex relaxations for hard nonconvex problems have shown that the loss of strict feasibility is a more pronounced phenomenon than has previously been realized". This phenomenon can occur because of either a poor choice of the functions that describe the feasible set or the degeneracy of the feasible set itself. According to [23], sometimes the loss of a certain CQ ". . . is a modeling issue rather than inherent to the problem instance. . .", which ". . . justifies the pleasing paradigm: efficient modeling provides for a stable program".
Thus, the idea of regularization arises quite naturally: the aim is to obtain an equivalent and more convenient reformulation of the problem with certain required properties, one of which is that the regularized problem must satisfy the generalized Slater condition.
The first works on the regularization of abstract convex problems (regularization procedures are called preprocessing there) appeared in the 1980s, followed by various publications on special classes of conic problems (see, e.g. [5,6]). Nevertheless, as Drusvyatskiy and Wolkowicz wrote in their paper [8] published in 2017, for conic optimization in general, research in the field of regularization algorithms is still in its infancy. At the same time, the authors of [8] confirm that, in order to make a regularization algorithm viable, it is necessary to actively exploit the structure of the problem, since for some specific applications of conic optimization a rich basic structure makes regularization quite possible and leads to significantly simplified models and enhanced algorithms.
Several approaches to the regularization of conic optimization problems have been proposed in the literature. In [5,6], the concept of the minimal cone of the constraints was used by Borwein and Wolkowicz to regularize abstract convex and conic convex problems for which any CQ fails. The algorithm proposed there to describe the minimal cone is based on the sequential reduction of the cone's faces and was named by the authors the Facial Reduction Algorithm (FRA).
Another approach, called dual regularization or conic expansion, was proposed by Luo, Sturm, and Zhang (see [17] and the references therein). This approach tries to close the duality gap (the difference between the primal and dual optimal values) of the regularized problems by expanding the cone of the dual constraints.
In [24], Waki and Muramatsu applied the facial reduction approach to a conic optimization problem in such a way that each primal reduced cone is dual to the cone generated by the conic expansion approach.
The facial reduction approach has been successfully applied to SDP and second-order cone programming problems, as well as to certain classes of optimization problems over symmetric (i.e. self-dual and homogeneous) and nice cones (see, e.g. [18-21]). At the same time, the question of an effective constructive application of this approach to other classes of problems remains open. This is because the known FRAs are more conceptual than practical.
In this paper, based on the results from [11,12,14], we develop a different approach to the regularization of linear CoP problems. This approach is based on the concept of immobile indices, i.e. indices of the constraints that are active for all feasible solutions.
The purpose of the paper is to (a) describe in detail a finite algorithm for the regularization of linear CoP problems that is based on the concept of immobile indices but does not require any additional information about them; (b) analyze two approaches to the regularization of linear CoP problems, one based on facial reduction and the other on the concept of immobile indices, and compare the corresponding regularized problems constructed using these approaches.
To the best of our knowledge, in CoP there has never been an attempt to develop detailed and easy-to-use algorithms based on the minimal cone representation (see, e.g. the FRA in [5,6] and the modified FRA in [24]).
Nor do we have any information about other attempts to describe constructive regularization procedures for linear copositive problems. The regularization algorithms presented in the paper are new, original, and timely due to the growing number of important applications of CoP.
The paper is organized as follows. Section 2 contains equivalent formulations of the linear CoP problem and the basic definitions. In Section 3, we consider two different approaches to the regularization of copositive problems. In Section 3.1, we show how the minimal face regularization from [5,6] can be applied to linear CoP problems; in Section 3.2, we briefly describe the one-step regularization proposed in [14] and based on the concept of immobile indices, and compare the regularized problems obtained in this subsection with the problem in Section 3.1. Section 4 is devoted to iterative algorithms for the regularization of linear copositive problems. Waki and Muramatsu's facial reduction algorithm is described in Section 4.1; a new regularization algorithm REG-LCoP based on the immobile index set, together with its compressed modification, is introduced, justified, and compared with the Waki and Muramatsu FRA in Section 4.2. A small clarifying example is provided. We conclude Section 4 with a brief discussion of the algorithms considered there. Section 5 contains some conclusions.

Linear copositive programming problem: equivalent formulations and basic definitions
Given an integer p > 1, denote by R^p_+ the set of all p-vectors with non-negative components, by 𝒮(p) and 𝒫(p) the space of real symmetric p × p matrices and the cone of symmetric positive semidefinite p × p matrices, respectively, and let COP^p stand for the cone of symmetric copositive p × p matrices:

COP^p := {D ∈ 𝒮(p) : t^⊤ D t ≥ 0 for all t ∈ R^p_+}.

The space 𝒮(p) is considered here as a vector space with the trace inner product ⟨A, B⟩ := trace(AB). Consider a linear copositive programming problem in the form

min c^⊤ x  s.t. 𝒜(x) ∈ COP^p,   (2.1)

where x = (x_1, ..., x_n)^⊤ is the vector of decision variables. The data of the problem are given by a vector c ∈ R^n and the constraint matrix function 𝒜(x) defined in the form

𝒜(x) := A_0 + Σ_{i=1}^n x_i A_i

with given matrices A_i ∈ 𝒮(p), i = 0, 1, ..., n. It is well known (see e.g. [1]) that the copositive problem (2.1) is equivalent to the following convex SIP problem:

min c^⊤ x  s.t. t^⊤ 𝒜(x) t ≥ 0 for all t ∈ T,   (2.3)

with a p-dimensional compact index set in the form of a simplex

T := {t ∈ R^p_+ : e^⊤ t = 1},

where e = (1, 1, ..., 1)^⊤ ∈ R^p. Denote by X the feasible set of the equivalent problems (2.1) and (2.3):

X := {x ∈ R^n : 𝒜(x) ∈ COP^p}.   (2.5)

In what follows, we will suppose that X ̸= ∅. Evidently, the set X is convex.
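The link between the conic constraint and its semi-infinite counterpart rests on the identity ⟨A, tt^⊤⟩ = t^⊤At under the trace inner product. The following small self-contained check is our own illustrative sketch, not part of the paper:

```python
def trace_inner(A, B):
    """Trace inner product <A, B> = trace(A B) for square matrices given as lists."""
    p = len(A)
    return sum(A[i][j] * B[j][i] for i in range(p) for j in range(p))

def outer(t):
    """Rank-one matrix t t^T built from a vector t."""
    return [[ti * tj for tj in t] for ti in t]

def quad(A, t):
    """Quadratic form t^T A t."""
    p = len(A)
    return sum(A[i][j] * t[i] * t[j] for i in range(p) for j in range(p))
```

For any symmetric A and any t, `trace_inner(A, outer(t))` and `quad(A, t)` agree, which is why membership of 𝒜(x) in the copositive cone is equivalent to the infinitely many scalar constraints t^⊤𝒜(x)t ≥ 0 over the simplex.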
Remark. According to the commonly used definition, the constraints of the copositive problem (2.1) satisfy the Slater condition if

there exists x̄ ∈ R^n such that 𝒜(x̄) ∈ int COP^p.

Here int ℬ stands for the interior of a set ℬ.
Following [11,14], let us define the set of normalized immobile indices T_im in problem (2.1):

T_im := {t ∈ T : t^⊤ 𝒜(x) t = 0 for all x ∈ X}.   (2.7)

In what follows, the elements of the set T_im are called immobile indices.
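To make the definition tangible, here is a toy finite-sample sketch of our own (the data, the map `A_of_x`, and the sampling are all made up for illustration; the true definition quantifies over every feasible x, which a finite sample can only approximate from above). With 𝒜(x) = [[x, 0], [0, 0]], feasible exactly when x ≥ 0, the only immobile index on the 2-dimensional simplex is t = (0, 1):

```python
def A_of_x(x):
    # Made-up one-variable constraint map: A(x) = A0 + x*A1 with
    # A0 = 0 and A1 = diag(1, 0), so A(x) = [[x, 0], [0, 0]] (feasible iff x >= 0).
    return [[x, 0.0], [0.0, 0.0]]

def quad(A, t):
    p = len(A)
    return sum(A[i][j] * t[i] * t[j] for i in range(p) for j in range(p))

def approx_immobile(feasible_samples, k, tol=1e-12):
    """Grid points t on the simplex in R^2 with t^T A(x) t <= tol for EVERY sampled
    feasible x -- a finite-sample over-approximation of the immobile index set."""
    result = []
    for n1 in range(k + 1):
        t = (n1 / k, (k - n1) / k)
        if all(quad(A_of_x(x), t) <= tol for x in feasible_samples):
            result.append(t)
    return result
```

With samples x ∈ {0, 0.5, 1, 2} and grid step 1/4, the procedure recovers exactly the index t = (0, 1), on which the quadratic form vanishes for every feasible x.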
The following lemma follows from Proposition 1 and a lemma in [12]. In [12], the following theorem is also proved (see Thm. 7.1 in [12]). In what follows, we will need the following proposition.
Proposition 2.4. For any t ∈ T_im, the following relations hold true:

𝒜(x) t ≥ 0 for all x ∈ X.   (2.12)

Proof. By definition, for any vector t ∈ T_im, we have t^⊤ 𝒜(x) t = 0 for all x ∈ X. Hence, for any x ∈ X, this vector t is an optimal solution to the following quadratic programming problem:

(QP): min_ℓ ℓ^⊤ 𝒜(x) ℓ  s.t. ℓ ∈ T.

Notice that relations (2.12) are nothing other than the first-order necessary optimality conditions for the vector t in problem (QP). The proposition is proved.

Regularization of copositive problems
In this section, we first recall a known regularization approach developed in [5,6] for conic optimization problems and based on the concept of the minimal face. We briefly describe how this approach can be applied to linear CoP problems. Afterwards, for the copositive problem (2.1), we present another regularization approach based on the concept of immobile indices and compare the regularized problems obtained using the two considered approaches.

Minimal face regularization
Let us first recall the necessary terms and notions. For a given cone ℱ ⊂ 𝒮(p), its dual cone is defined as follows:

ℱ* := {U ∈ 𝒮(p) : ⟨U, D⟩ ≥ 0 for all D ∈ ℱ}.

By definition, a convex subset ℱ of the cone COP^p is its face if for any A ∈ COP^p, B ∈ COP^p, the inclusion A + B ∈ ℱ implies A ∈ ℱ, B ∈ ℱ. It is evident that any face of the cone COP^p is also a cone.
Given the copositive problem (2.1) with the feasible set X presented in (2.5), let ℱ_min be the smallest (by inclusion) face of COP^p containing the set defined in terms of the constraints of this problem as follows:

𝒟 := {𝒜(x) : x ∈ X}.   (3.1)

In what follows, the face ℱ_min will be called the minimal face of the optimization problem (2.1). Generally speaking, for the copositive problem (2.1), the approach suggested in [5,6] is to replace the constraint 𝒜(x) ∈ COP^p with the equivalent constraint 𝒜(x) ∈ ℱ_min. The resulting regularized problem takes the form

min c^⊤ x  s.t. 𝒜(x) ∈ ℱ_min.   (3.2)

The dual problem to (3.2) can be written as a maximization problem (3.3) over matrices U ∈ 𝒮(p) with U ∈ ℱ*_min, where ℱ*_min is the dual cone to the cone ℱ_min. It is proved in [5,6] that the constraints of problem (3.2) satisfy the generalized Slater condition: there exists x̄ ∈ X such that 𝒜(x̄) ∈ relint ℱ_min, and hence the duality gap between the dual pair of problems (3.2) and (3.3) vanishes. Here relint ℬ stands for the relative interior of a set ℬ.
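For orientation, the regularized pair (3.2)-(3.3) has the shape of a standard conic primal-dual pair. The display below is our own sketch under standard sign conventions; the paper's exact signs and numbering may differ:

```latex
\begin{aligned}
\text{(P}_{\mathrm{reg}}\text{)}\quad
  &\min_{x\in\mathbb{R}^n}\; c^{\top}x
  \quad\text{s.t.}\quad
  \mathcal{A}(x)=A_0+\sum_{i=1}^{n}x_iA_i\in\mathcal{F}_{\min},\\
\text{(D}_{\mathrm{reg}}\text{)}\quad
  &\max_{U\in\mathcal{S}(p)}\; -\langle A_0,U\rangle
  \quad\text{s.t.}\quad
  \langle A_i,U\rangle=c_i,\ i=1,\dots,n,\qquad
  U\in\mathcal{F}^{*}_{\min}.
\end{aligned}
```

The only change relative to the unregularized pair is the replacement of the cone COP^p by its face ℱ_min (and, dually, of the completely positive cone by the larger cone ℱ*_min), which is what restores the generalized Slater condition.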
Unfortunately, there is no information available on how to explicitly construct the cones ℱ_min and ℱ*_min in the general case and, in particular, in the case of copositive problems.

One-step regularization based on the concept of immobile indices
In our paper [14], for the copositive problem (2.1), we obtained a regularized dual problem that is different from (3.3). The construction of this dual is based on the concept of immobile indices and can be thought of as a one-step regularization because it consists of a single step.
Consider the copositive problem (2.1). Let T_im be the normalized set of immobile indices of this problem defined in (2.7).
If T_im = ∅, then problem (2.1) satisfies the Slater condition, which means that it is already regular and no regularization is required. Now, suppose that T_im ̸= ∅. In this case, the Slater condition is not satisfied and the problem is not regular. Let us describe how one can convert problem (2.1) into a regularized one.
Consider the set conv T_im and the set V = {t(j), j ∈ J} of all vertices of conv T_im, see (3.4). Suppose that the elements t(j), j ∈ J, of the set V are known. Then we can regularize problem (2.1) in just one step.
In fact, it follows from Theorem 2.3 above and Theorem 3.2 and Corollary 3.3 in [12] that the set X of feasible solutions of the original problem (2.1) coincides with the set of feasible solutions of a certain finite-plus-infinite system, and the Slater-type condition (3.5) is satisfied. Here the set Ω(V) is defined by the rules (2.10) with the vertex set V. Consequently, the original copositive problem (2.1) is equivalent to the SIP problem (3.6)-(3.8). Let us stress that in problem (3.6)-(3.8), the infinite index set Ω(V) is obtained by removing the set T_im, together with a neighborhood of its convex hull, from the original index set T. Note here that the set Ω(V) (a) is explicitly constructed by the rules (2.9), (2.10), using the finite set V = {t(j), j ∈ J} of vertices of conv T_im, (b) does not contain the set conv T_im, and (c) may be sufficiently small. All these properties may be useful for numerically solving problem (3.6)-(3.8). It is evident that problem (3.6)-(3.8) can be written in an equivalent conic form (3.9) with a cone 𝒦_0. It can be shown that 𝒦_0 ⊂ COP^p.
The dual problem to (3.9) is problem (3.10). In this problem, 𝒦*_0 is the dual cone to 𝒦_0 and has the form (3.11). Here and in what follows, for given sets ℬ and 𝒞, cl ℬ denotes the closure of the set ℬ and ℬ ⊕ 𝒞 stands for the Minkowski sum of the corresponding two sets.
Notice that for the pair of dual conic problems (3.9) and (3.10), the duality gap is zero.
As was shown in [12], the cone (3.11) in problem (3.10) can be replaced by the cone K*_0, which has a more explicit form since it does not contain the closure operator and is defined in terms of the set CP^p of completely positive matrices:

CP^p := {Σ_{i∈I} t(i)(t(i))^⊤ : t(i) ∈ R^p_+, i ∈ I, |I| < ∞}.   (3.12)

There is no duality gap for problem (3.9) and its dual problem in the form (3.10) with the cone (3.11) replaced by K*_0. Note that the cones 𝒦_0 and K*_0 are explicitly described in terms of the indices (3.4), and this is an advantage of the approach presented here over the one from Section 3.1.
The only drawback of the regularization procedure described here is the following: to apply the one-step regularization, one needs to know the finite set of indices (3.4), the vertices of the set conv T_im.
Let us show that the regularized primal problem (3.9) can be modified to the form (3.13) with a cone 𝒫_0. In fact, due to Theorem 2.3, in problem (3.9) the cone 𝒦_0 can be replaced by the cone 𝒫_0. Note that the inclusions 𝒫_0 ⊂ 𝒦_0 and 𝒦_0 ⊂ COP^p imply 𝒫_0 ⊂ COP^p.
To show that the regularizations presented above are themselves deeply connected, let us give an explicit description of the minimal face ℱ_min in terms of the vertices of the set conv T_im and the index sets (3.14) associated with these vertices. The following theorem can be proved (see the results obtained in [12]).
Theorem 3.1. Given the copositive problem (2.1), let {t(j), j ∈ J} be the (finite) set of all vertices of the set conv T_im. Then the minimal face ℱ_min of this problem can be described in two equivalent forms.
Now, having described the minimal face ℱ_min via immobile indices, we can compare the regularized problems (3.2), (3.9), and (3.13) in more detail.
The regularized problem (3.2) is formulated using the facial reduction approach applied to the copositive problem (2.1), while the regularized problems (3.9) and (3.13) are obtained using the immobile indices of this problem. The difference between these three problems is that in problem (3.2) the constraint set is determined by the minimal face ℱ_min, while the constraints of problem (3.9) are formulated with the help of the cone 𝒦_0, and the constraints of problem (3.13) use the cone 𝒫_0.
It should be noticed that the minimal face ℱ_min and the cones 𝒦_0 and 𝒫_0 satisfy certain inclusions. At the same time, the cones ℱ_min and 𝒫_0 are faces of the cone of copositive matrices COP^p, while the cone 𝒦_0 is generally not. In addition, one can show that 𝒫_0 is an exposed face, while the face ℱ_min as a whole is not.
For each of the conic problems mentioned above, we face certain challenges associated with the explicit construction of the respective cones. For example, for the copositive problem (2.1), the following difficulties should be mentioned: - to define the cones 𝒦_0 and 𝒫_0, the elements t(j), j ∈ J, of the finite set of indices (3.4) should be known; - as far as we know, there are no explicit procedures for constructing the minimal face ℱ_min and its dual cone ℱ*_min.
Theorem 3.1 shows how the minimal face ℱ_min can be represented via immobile indices in the form of two equivalent cones. Notice that to construct these cones, one has to find not only the set of indices (3.4), but also the corresponding index sets defined in (3.14).
As mentioned above, regularity is an important property of optimization problems. As a rule, the regularity of copositive problems is characterized by the Slater condition. In this regard, it is important to note that the regularized problem (3.2) satisfies the generalized Slater condition, while the regularized problems (3.9) and (3.13) obtained here satisfy the Slater-type condition (3.5). This difference can be important for the further study of linear CoP problems, as well as for the development of stable numerical methods for them.

Iterative algorithms for regularization of linear copositive problems
In Section 3, we considered general schemes of two theoretical methods that make it possible to obtain regularizations of the linear copositive problem (2.1). In each of these schemes, we meet some difficulties associated with explicit representations of the respective "regularized" feasible cones and their duals. In this section, we consider and compare two different approaches to regularization aimed at overcoming these difficulties by using algorithmic procedures.

Waki and Muramatsu's facial reduction algorithm
In [24], a regularization algorithm for linear conic problems was proposed by Waki and Muramatsu. This algorithm can be thought of as the Facial Reduction Algorithm (FRA) from [5,6] applied to linear conic problems in finite-dimensional spaces.
As above, let ℱ* denote the dual cone of a given cone ℱ ⊂ 𝒮(p).
For a given feasible copositive problem (2.1), starting with COP^p, Waki and Muramatsu's algorithm repeatedly finds smaller faces of COP^p until it stops with the minimal face ℱ_min.
The description of the algorithm is very simple but, in practice, its implementation presents serious difficulties, which arise on step 2 and especially on step 3. As a matter of fact, in the case of the copositive problem (2.1), step 3 is already hard to carry out at the first two iterations.
Let us consider the initial iteration, when k = 0. On step 3, one has to find a matrix D_1 ∈ Ker 𝒜 ∩ ℱ*_0. Since ℱ_0 = COP^p, at the current iteration (k = 0) we know the explicit description of the dual cone of ℱ_0: ℱ*_0 = CP^p, where the cone CP^p is defined in (3.12). Therefore, the matrix D_1 should have the form

D_1 = Σ_{i∈I_1} t(i)(t(i))^⊤,  t(i) ∈ R^p_+, i ∈ I_1,

and the condition Σ_{i∈I_1} (t(i))^⊤ A_j t(i) = 0 for all j = 0, 1, ..., n has to be satisfied. At the next iteration (k = 1), one is looking for a matrix D_2 satisfying certain conditions C1-C3. The first difficulty arises when trying to satisfy condition C1, as there is no explicit description of the set ℱ*_1. Notice that this set is defined using the closure operator, and this operator is essential for the definition of ℱ*_1. Therefore, in general, for a matrix D_2 satisfying condition C1, it may happen that D_2 ̸∈ {D ∈ 𝒮(p) : D ∈ CP^p ⊕ λD_1, λ ∈ R}.
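The purely linear-algebraic part of this step, finding all symmetric D with ⟨A_j, D⟩ = 0 for j = 0, …, n, is straightforward; it is the additional cone-membership requirement that is hard. A hedged sketch of the subspace computation (our own illustrative data; an SVD nullspace over a vectorization of symmetric matrices) might look as follows:

```python
import numpy as np

def svec_row(A):
    """Coordinates of A such that <A, D> = svec_row(A) . svec(D) for symmetric D:
    diagonal entries with weight 1, off-diagonal entries with weight 2."""
    p = A.shape[0]
    row = []
    for i in range(p):
        for j in range(i, p):
            row.append(A[i, j] if i == j else 2.0 * A[i, j])
    return row

def kernel_of_adjoint(mats, tol=1e-10):
    """Orthonormal basis (in svec coordinates) of the subspace
    {D = D^T : <A_j, D> = 0 for every A_j in mats}, via an SVD nullspace."""
    M = np.array([svec_row(A) for A in mats])
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:]  # rows spanning the nullspace of M

# Illustrative 2x2 data (made up): A0 = diag(0, 1), A1 = diag(1, 0).
A0 = np.array([[0.0, 0.0], [0.0, 1.0]])
A1 = np.array([[1.0, 0.0], [0.0, 0.0]])
kernel_basis = kernel_of_adjoint([A0, A1])
```

Here the kernel is spanned by the purely off-diagonal direction D = [[0, 1], [1, 0]]. Note that this D is copositive but has zero diagonal, so it is not completely positive; checking whether some kernel element also lies in ℱ*_k is exactly the non-trivial part of step 3 discussed above.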
In [24], there is also no indication of how to find a matrix D_2 satisfying the conditions C2 and C3. Notice that the fulfillment of these conditions is a non-trivial task as well.
Thus, we can state that although the FRA reported in [24] is an easy-to-describe method, its practical implementation is not constructively described, which makes it difficult to apply. There is no information concerning what form the matrix D_k should have at the k-th iteration (k ≥ 1) of the algorithm and how to meet the conditions C1-C3 for it.

A regularization based on the immobile indices
Here we describe and justify a distinct algorithm for the regularization of the copositive problem (2.1). This algorithm has a structure similar to Waki and Muramatsu's FRA considered in Section 4.1, but it is based on the concept of immobile indices and is described in more detail, being, therefore, more constructive. Note from the outset that although our algorithm exploits the properties of the set of immobile indices, it does not require initial knowledge of either this set or the vertices of its convex hull. If there exists a feasible solution (x̄, μ̄) of the auxiliary problem (SIP_0) with μ̄ < 0, then set k* := 0 and go to the Final step.
Otherwise, the vector (x = 0, μ = 0) is an optimal solution of the problem (SIP_0). It should be noticed that in the problem (SIP_0), the index set T is compact and the constraints satisfy the Slater condition. Hence (see e.g. [4]), it follows from the optimality conditions for the vector (x = 0, μ = 0) that there exist certain indices and numbers; here T_k := {t(i), i ∈ I_k} and the set Ω(T_k) is constructed by the rules (2.9), (2.10) with T_k.
In the problem (SIP_k), the index set Ω(T_k) is compact and the constraints satisfy a Slater-type condition. Hence, this problem is regular.
If problem (SIP_k) admits a feasible solution (x̄, μ̄) with μ̄ < 0, then STOP and go to the Final step with k* := k.
Otherwise, the vector (x = 0, μ = 0) is an optimal solution of (SIP_k). Since this problem is regular, the optimality of the vector (x = 0, μ = 0) implies (see [16]) that there exist indices, numbers, and vectors which satisfy the conditions (4.3), (4.4). Here and in what follows, without loss of generality, we suppose that Δ_k ∩ I_k = ∅. Moreover, applying the procedure DAM described in [13] to the data (4.3), (4.4), it is possible to ensure that conditions satisfying relations (4.3) with k replaced by k + 1 are met. Hence the problem (REG) is equivalent to problem (2.1) and can be considered as its regularization. The algorithm is described.
Remark 4.1. In the algorithm REG-LCoP described above, it is assumed that the corresponding index set is nonempty. It is easy to modify the algorithm so that this assumption can be removed.
Proof. Consider equalities (4.5). To the equality corresponding to j = 0, add the remaining equalities corresponding to j = 1, ..., n, multiplied by x_j. As a result, we obtain the required relation. According to (4.3), we have t(i) ∈ T_im for i ∈ I_k. Then it follows from Proposition 2.4 that for i ∈ I_k, relations (4.11) hold true. These relations and the equalities in (4.3) imply further equalities. Hence, it follows from the relations above and (4.6), (2.13), (4.14) that the stated equalities hold, wherefrom we obtain (4.16). It follows from the inclusions t(i) ∈ T_im for all i ∈ Δ_k and Proposition 2.4 that for all i ∈ Δ_k, relations (4.11) hold true. Taking into account these relations and relations (4.16), we conclude that (4.12) and (4.13) hold true. The proposition is proved.
Hence the procedure DAM from [13] can be correctly applied to the data (4.3) and (4.5). This procedure consists of a finite number of operations and ensures the fulfillment of the conditions (4.7). It follows from these conditions that the algorithm REG-LCoP runs a finite number k* of iterations and comes to the final step with a vector (x̄, μ̄), μ̄ < 0, which is a feasible solution to the problem (SIP_{k*}).
At the final step, the problem (REG) is formed on the basis of the problem (SIP_{k*}). Let X_reg be the set of feasible solutions to the problem (REG). As before, let X be the set of feasible solutions of the original problem (2.1), and X(k*) the set defined by the rules (2.9)-(2.11) with k replaced by k*. For k = k*, relations (4.15) imply that X ⊂ X_reg. On the other hand, it is clear that X_reg ⊂ X(k*). It follows from Proposition 4.3 that T_{k*} ⊂ T_im and, consequently, due to Theorem 2.3, we have X = X(k*). Hence we conclude that the problems (REG) and (2.1) have the same sets of feasible solutions: X_reg = X. The property (A) is proved. By construction, the vector (x̄, μ̄), μ̄ < 0, is a feasible solution to the problem (SIP_{k*}). Hence x̄ ∈ X_reg = X and t^⊤𝒜(x̄)t ≥ -μ̄ > 0 for all t ∈ Ω(k*). The property (B) is proved.

Example
Let us illustrate the iterations of the algorithm REG-LCoP with a small example. Consider the CoP problem (2.1) with the data (4.17). Let us apply the algorithm REG-LCoP to this problem.
Final step. Consider the problem (REG) formed on the basis of the problem (SIP_2), constructed at the last iteration with k* = 2. In our example, the problem (REG) has the form (4.19), where, as before, Ω(T_2) = {t ∈ R^4_+ : e^⊤t = 1, t_1 + t_4 ≤ 1/2}. As was proved, this problem is equivalent to the original problem (2.1) with data (4.17) and possesses the properties (A) and (B). In particular, for this problem, there is a feasible solution x̄ = (4, 1.5, 0.5, 1)^⊤ such that t^⊤𝒜(x̄)t ≥ 0.4 for all t ∈ Ω(T_2). Another useful property is the fact that in problem (4.19), the set of indices Ω(T_2) is smaller than the index set T in the original problem (2.1).
To illustrate the advantages of the regularized problem (REG), we solved this problem and the problem (2.1) with data (4.17) by a simple discretization on a uniform grid superimposed on the sets T and Ω(T_2), respectively. The auxiliary discretized linear programming (LP) problems were solved by a program developed in Matlab, and all computations were performed on a personal computer. The accuracy of the computations was 10^{-16}.
- By discretizing the regularized semi-infinite problem (4.19) using a uniform grid with step h = 0.1 overlaid on the set Ω(T_2), we obtained an LP problem with 125 linear constraints. Having solved this problem (let us denote it by (LP1)), we obtained a solution x^0 = (2.0000, 1.0000, 4.5000, 4.5000)^⊤ and the optimal value c^⊤x^0 = 1. One can check that the vector x^0 satisfies all the constraints of the problem (2.1) with the data (4.17), and hence is a feasible solution of this problem. Having verified the optimality conditions for linear copositive problems obtained in [14], one can conclude that x^0 is optimal in this problem.
- By discretizing the original problem (2.1) with the data (4.17) using the same uniform grid with step h = 0.1 superimposed on the set T, we obtained an LP problem with 286 linear constraints. Denote this problem by (LP2). Having solved it, we obtained a solution x̄ = (0.7778, 0.5926, 2.0556, 2.4630)^⊤ and the corresponding value of the objective function c^⊤x̄ = 0.1852. Recall that both discretized problems, (LP1) and (LP2), were obtained using the same grid step h = 0.1. But in the case of problem (LP2), due to the inclusion Ω(T_2) ⊂ T, we got more than twice as many constraints as in the LP problem (LP1).
Since we already know that the optimal value of the original problem (2.1) is equal to 1, but c^⊤x̄ = 0.1852, we can easily conclude that the vector x̄ does not belong to the feasible set of the original problem.
In order to get a more accurate solution of the original problem, we gradually reduced the grid step h. For h = 0.01, we obtained an LP problem with 176 851 linear constraints whose optimal solution was x* = (1.8406, 0.9469, 4.1812, 4.2343)^⊤ and the optimal value c^⊤x* = 0.8937.
It is important to stress that all vectors x̄, x*, and x** obtained by discretization of the original (non-regularized) problem are infeasible in this problem.
Further reducing the grid step led to an increase in the number of constraints but not to an improvement in the quality of the solution.
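The discretization step described above can be sketched as follows (our own illustration; the matrices of the data (4.17) are not reproduced in this text, so placeholder matrices are used). On the simplex T ⊂ R^4, a uniform grid with step h = 0.1 has exactly 286 nodes, matching the 286 constraints of (LP2) reported above; restricting the grid to a smaller index set such as Ω(T_2) prunes nodes and hence constraints.

```python
def simplex_grid(p, k):
    """Uniform grid on the simplex {t in R^p_+ : sum(t) = 1} with step 1/k,
    encoded as integer compositions of k into p non-negative parts."""
    if p == 1:
        yield (k,)
        return
    for i in range(k + 1):
        for rest in simplex_grid(p - 1, k - i):
            yield (i,) + rest

def discretize(A_list, p, k, keep=lambda t: True):
    """LP constraint rows (a, b) meaning a . x >= b, one per retained grid node t:
    sum_i x_i * (t^T A_i t) >= -(t^T A_0 t), where A_list = [A_0, A_1, ..., A_n]
    (placeholder data; the paper's matrices (4.17) are not reproduced here)."""
    def quad(A, t):
        return sum(A[i][j] * t[i] * t[j] for i in range(p) for j in range(p))
    rows = []
    for comp in simplex_grid(p, k):
        t = [c / k for c in comp]
        if keep(t):
            rows.append(([quad(A, t) for A in A_list[1:]], -quad(A_list[0], t)))
    return rows
```

With p = 4 and k = 10, the full grid has 286 nodes; a filter such as `keep=lambda t: t[0] + t[3] <= 0.5` models the restriction to an index set like Ω(T_2) and strictly reduces the constraint count.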

On the comparison of the algorithms
To give another interpretation of the algorithm REG-LCoP and to better trace its correspondence to Waki and Muramatsu's FRA from [24] (presented here in Section 4.1), let us perform some additional constructions at the iterations of the algorithm REG-LCoP.
At the end of Iteration #0, having at hand the data computed there, let us set the corresponding matrices and cones. Notice here that, by construction, the stated relations hold, where O_p is the p × p null matrix.
Since in the algorithm REG-LCoP the fulfillment of condition (IV) is not guaranteed at each iteration, when comparing this algorithm with Waki and Muramatsu's FRA, at first glance it may seem that, in general, the number of iterations executed by the algorithm REG-LCoP is larger. Such an impression is caused by the fact that in Section 4.2.1 we described all the steps of the algorithm in detail and explicitly indicated all the computations carried out at each iteration. As for Waki and Muramatsu's FRA, its iterations are described only in general terms.
In what follows, we set out a modification of the algorithm REG-LCoP in which the number of iterations is reduced and it is guaranteed that all conditions (I)-(IV) are satisfied at each core iteration. This modification is formal, being essentially another way of numbering the iterations. The real number of calculations at the steps of this modified algorithm is the same as at the iterations of the original one.

A compressed modification of the algorithm REG-LCoP
Consider the algorithm REG-LCoP presented in Section 4.2.1. Evidently, one can reduce the number of iterations of the algorithm if one squeezes into just one iteration those iterations of the algorithm which change the description of the dual cone ℱ*_k but do not change the cone ℱ_k itself. In other words, we will only move to the next core iteration when all conditions (I)-(IV) are satisfied. Formally, such a procedure can be described as follows.
Suppose that the algorithm REG-LCoP has constructed certain matrices and cones. Here s* denotes the number of iterations for which the conditions above are met. Notice that a set of the form {s, s + 1, ..., r} is considered to be empty if r < s.
It is easy to check that the corresponding conditions hold true. Thus, after the squeezing described above, we get s* core iterations of the modified algorithm. It follows from the conditions above that s* ≤ dim(Ker 𝒜).
Notice that for any s = 0, 1, ..., s* - 1, the iterations of the algorithm REG-LCoP with intermediate numbers (the compressed iterations) are not useless. They can be considered as steps of a regularization procedure for the cone ℱ_{s+1} at the current core iteration #s. At each of these iterations, we reformulate the cone ℱ_{s+1} in a new equivalent form. This additional information allows us to improve (make more regular) the representation of the cone ℱ_{s+1} and to get a more explicit and useful description of its dual cone ℱ*_{s+1}.

A short discussion on the algorithms considered in this section
By analyzing and comparing the iterative algorithms presented above, we can draw the following conclusions.
(1) Waki and Muramatsu's facial reduction algorithm from [24], reformulated for copositive problems in Section 4.1, is very simple to describe and runs no more than dim(Ker 𝒜) iterations. But this algorithm is more conceptual than constructive, since it does not provide any information about the structure of the matrix D_k and the cone ℱ*_k at its k-th iteration. Moreover, it is not explained in [24] how to carry out steps 2 and 3 at each iteration.
(2) The algorithm REG-LCoP proposed in Section 4.2.1 also runs a finite number of iterations. This algorithm is described in full detail and justified. Quite constructive rules for calculating a matrix D_k satisfying the condition D_k ∈ ℱ*_k are presented, using the information available at Iteration #k of this algorithm. These rules are derived from the optimality conditions for the optimal solution (x = 0, μ = 0) of the regular problem (SIP_k). Notice that it is possible to develop a modification of the algorithm REG-LCoP which runs fewer iterations.
(3) Finally, to show that the algorithm REG-LCoP described in Section 4.2.1 is not worse (in terms of the number of iterations) than the FRA from Section 4.1, we presented a compressed modification of the algorithm REG-LCoP. This modification consists of no more than dim(Ker 𝒜) iterations, as does the algorithm from Section 4.1.

Conclusions
The main contribution of the paper is that, based on the concept of immobile indices previously introduced for semi-infinite optimization problems, we suggest new methods for the regularization of copositive problems. The algorithmic procedure of regularization of copositive problems is described in the form of the algorithm REG-LCoP and is compared with the facial reduction approach based on the minimal cone representation. We show that, when applied to the linear CoP problem (2.1), the algorithm REG-LCoP possesses the same properties as the FRA suggested by Waki and Muramatsu in [24], but its iterations are explicit and described in more detail, and hence more constructive.
The algorithms described in the paper are useful for the study of convex copositive problems. In particular, for the linear copositive problem, they allow one to
- formulate an equivalent (regular) semi-infinite problem which satisfies the Slater-type regularity condition and can be solved numerically;
- prove new optimality conditions without any CQs;
- develop a strong duality theory based on an explicit representation of the "regularized" feasible cone and the corresponding dual (such as, e.g., the Extended Lagrange Dual Problem suggested for SDP by Ramana et al. [21]).
The regularization approach described in the paper is novel and rather constructive. It is important to stress that no other constructive regularization procedures are known for linear copositive problems.
Theorem 2.3. Consider problem (2.1) with the feasible set X. For any subset (2.8) of the set of normalized immobile indices of this problem, the equality X = X(·) holds true, where the set X(·) is defined in (2.11). Let {e_i, i ∈ P} be the standard basis of R^p.