OPTIMALITY CONDITIONS AND DUALITY RESULTS FOR A ROBUST BI-LEVEL PROGRAMMING PROBLEM

Robust bi-level programming problems are a young branch of optimization theory. In this study, we consider a bi-level model with constraint-wise uncertainty at the upper level, while the lower-level problem is fully convex. We use the optimal value reformulation to transform the given bi-level problem into a single-level mathematical problem, and the concept of robust counterpart optimization to deal with the uncertainty in the upper-level problem. Necessary optimality conditions are beneficial because any local minimum must satisfy them; as a result, one can restrict the search for local (or global) minima to points that satisfy the necessary optimality conditions. Here we introduce an extended non-smooth robust constraint qualification (RCQ) and develop KKT-type necessary optimality conditions in terms of convexifactors and subdifferentials for the considered uncertain two-level problem. Further, as an application, we establish the robust bi-level Mond-Weir dual (MWD) for the considered problem and derive the corresponding duality results. Moreover, an example is given to show the applicability of the necessary optimality conditions.


Introduction
Bi-level programming problems (BLPPs) are sequence-based problems, also known as leader-follower problems, where the leader endeavors to optimize his/her decisions on the basis of the follower's reaction. The upper level represents the objective and constraints of the leader, while the lower level represents the objective and constraints of the follower. The concept of BLPPs can be traced back to the seminal publications of von Stackelberg [36,37]. In 1973, Bracken and McGill [5] developed the first mathematical bi-level model, which led many researchers to develop exciting theories and applications such as optimistic and pessimistic approaches, single-level reformulations, optimality conditions, duality results, algorithms, etc. For more literature, readers may refer to the book [13] and the references therein.
The standard way to develop optimality conditions and duality theorems for a BLPP is to transform it into an equivalent single-level mathematical programming problem, so that the existing literature on mathematical programming can be used. There are many ways to do so, such as the primal KKT reformulation, the classical KKT reformulation, the optimal value reformulation, the Ψ-function reformulation, etc. Optimality conditions via the KKT reformulation were developed in [11,12], and via the optimal value reformulation in [7,14,15]. For a comparison of the KKT reformulation and the optimal value reformulation, readers may refer to [40]. Optimality conditions using the Ψ reformulation are studied in [18,27]. Utilising an approximation of KKT-type conditions, [25] have recently worked on the Pareto and weak Pareto solutions of multiobjective optimization. In general, for a mathematical programming problem, if the functions involved are convex and differentiable, the optimality conditions are developed in terms of gradients. If the functions lose smoothness, the optimality conditions are established in terms of convex subgradients. Further, for the case where the functions are non-convex and non-smooth, generalized subdifferentials are used. Convexifactors are a generalized version of subdifferentials. Usually, they are subsets of many eminent subdifferentials, such as the Clarke, Mordukhovich, and Michel-Penot subdifferentials. Thus, the results obtained in terms of convexifactors are sharp. Jennane et al. [22] obtained necessary optimality conditions for a non-smooth multiobjective BLPP in terms of tangential subdifferentials. Optimality conditions for BLPPs in terms of convexifactors are studied in [16,18,26].
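For the reader's convenience, the notion of a convexifactor (due to Jeyakumar and Luc) can be sketched as follows; the Dini directional derivatives and the symbol ∂*f(x̄) below are standard notation, not reproduced from this paper's preliminaries:

```latex
% Lower and upper Dini directional derivatives of f at \bar{x} in direction d
f^-(\bar{x}; d) = \liminf_{t \downarrow 0} \frac{f(\bar{x} + t d) - f(\bar{x})}{t},
\qquad
f^+(\bar{x}; d) = \limsup_{t \downarrow 0} \frac{f(\bar{x} + t d) - f(\bar{x})}{t}.

% A closed set \partial^{*} f(\bar{x}) \subseteq \mathbb{R}^{n} is an
% upper convexifactor of f at \bar{x} if, for every d \in \mathbb{R}^{n},
f^-(\bar{x}; d) \;\le\; \sup_{\xi \in \partial^{*} f(\bar{x})} \langle \xi, d \rangle,

% and a lower convexifactor if, for every d \in \mathbb{R}^{n},
f^+(\bar{x}; d) \;\ge\; \inf_{\xi \in \partial^{*} f(\bar{x})} \langle \xi, d \rangle.
```

For a locally Lipschitz f, the Clarke subdifferential is one (typically large) convexifactor, which is why conditions stated via smaller convexifactors can be sharper.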
In mathematical optimization theory, the principle of duality posits that optimization problems can be viewed from one of two perspectives: primal or dual. With the help of bifunction, image space analysis, and polynomial ring methods, [24] investigated the strong duality of a standard convex optimization problem without relying on constraint qualifications. Despite the vast development of the theoretical aspects of BLPPs over the last 48 years, work on duality theory is scarce. Aboussoror and Adly [2] formulated a Fenchel-Lagrange dual, established the strong duality result, and developed necessary and sufficient optimality conditions for a BLPP. In 2011, Suneja and Kohli [33] formulated a Wolfe dual (WD) and a MWD corresponding to the BLPP and established the relationships between these duals and the bi-level problem via duality results. Gadhi et al. [18] developed a MWD and the corresponding duality theorems for a multiobjective BLPP. Recently, Van Su et al. [35] established strong and weak duality theorems corresponding to the WD and MWD problems for a non-smooth multiobjective BLPP with equilibrium constraints.
It has been observed in real-world problems that even relatively minor fluctuations of uncertain data can severely impair the feasibility, and hence the significance, of the nominal optimal solution (i.e., the optimal solution corresponding to the nominal data), as discussed in [4]. Therefore, an approach that generates solutions "immunized against uncertainty" is needed in applications. The classical approach of this kind is provided by stochastic programming, which substitutes the original constraints with their "chance versions" and assigns the data fluctuations a probability distribution. This requires a candidate solution to satisfy the constraints with probability ≥ 1 − ε, where ε ≪ 1 is a predetermined tolerance. However, there is no straightforward method to associate the data fluctuations with a probability distribution. To handle optimization problems with uncertain data, robust optimization can be seen as a complement to stochastic programming. Here, the "uncertain-but-bounded" model of data fluctuation allows the uncertain data to vary within a specified uncertainty set. It requires that a candidate solution be robustly feasible, i.e., satisfy the constraints regardless of how the data from this set are realized. One associates the original uncertain problem with its robust counterpart to obtain the robust optimal solution [8].
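As a minimal numerical sketch of the robust-counterpart idea, consider an uncertain linear constraint under a box uncertainty set; this is my own toy example, and none of the names below come from the paper:

```python
import numpy as np

# Uncertain constraint  a @ x <= b,  where each coefficient a_i is only
# known to lie in the interval [lo_i, hi_i] (a box uncertainty set).
# The robust counterpart replaces the constraint by its worst case:
#     max_{a in U} a @ x <= b.

def robust_feasible(x, lo, hi, b):
    # Over a box, the maximum of a @ x is separable: coordinate i of a
    # picks hi_i when x_i > 0 and lo_i when x_i <= 0.
    worst = np.sum(np.where(x > 0, hi * x, lo * x))
    return worst <= b

x = np.array([1.0, 2.0])
lo = np.array([0.5, -1.0])
hi = np.array([1.5, 0.0])
# worst case = 1.5 * 1.0 + 0.0 * 2.0 = 1.5
print(robust_feasible(x, lo, hi, b=2.0))   # True  (robustly feasible)
print(robust_feasible(x, lo, hi, b=1.0))   # False (some realization violates)
```

A robustly feasible x thus satisfies the constraint for every realization of a in the box, which is exactly the "immunized against uncertainty" requirement described above.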
In bi-level problems, uncertainty can occur in various ways, e.g., decision uncertainty (uncertainty in the decision of the leader or in the reaction of the follower) and data uncertainty (uncertainty in the lower-level problem or in the upper-level problem). Many real-world applications modeled as robust bi-level problems have recently been studied, such as renewable energy location [29], supply distribution [32], resource recovery planning [38], electric vehicle charging stations [41], etc. Recently, Goerigk et al. [20] examined bi-level combinatorial problems under convex uncertainty. Chuong and Jeyakumar [9] derived strong duality between an affinely adjustable bi-level robust linear program and its dual with the help of a generalized Farkas lemma. Buchheim et al. [6] studied the complexity of robust bi-level problems with uncertainty in the lower-level objective. Beck and Schmidt [3] investigated the effect of uncertainty in the upper-level decision on the lower level. Swain and Ojha [34] studied the robust counterparts of uncertain mean-variance problems under box and ellipsoidal uncertainties by converting the problem into a BLPP.
In [26], Kohli considered the following bi-level model and developed necessary optimality conditions using an upper estimate of the Clarke subdifferential and the idea of a convexifactor.

(BLPP)  min_{x, y} F(x, y)  subject to  y ∈ Ψ(x),

where, for each x ∈ R^{n_1}, Ψ(x) is the set of optimal solutions of the parametric convex optimization problem

min_y f(x, y).

Later, Gadhi [19] corrected the flaws in [26] and presented an alternative proof of the main result. Chen et al. [8] used the robust approach to develop optimality conditions for this model. Motivated by the work of [8,19,26], in this paper we consider a bi-level programming problem with uncertainty in the upper-level constraints. We consider the case where the probability distribution that this data uncertainty follows is not known, but it is known to belong to an uncertainty set. Thus, we adopt the robust counterpart approach to deal with the uncertainty. First, we transform the robust counterpart bi-level problem into a single-level problem with the help of the optimal value reformulation. We extend the Abadie constraint qualification to an extended non-smooth robust constraint qualification. We develop the optimality conditions in terms of subdifferentials and convexifactors by using the concepts of Kohli [26] and Gadhi [19] to deal with the two levels, and of Chen et al.
[8] to deal with the data uncertainty. Furthermore, a Mond-Weir type dual is introduced, and the relation between the two problems is established via weak and strong duality theorems. To the best of the authors' knowledge, optimality conditions for robust bi-level problems have not yet been developed; hence, the results obtained here are new and will help researchers develop new theories in this exciting field of robust BLPPs. The remainder of the article is organized as follows: Section 1 provides the basic concepts and definitions used. Section 2 presents the robust BLPP and its reformulation as a single-level robust counterpart problem. In Section 3, we develop the necessary optimality conditions for the considered problem under appropriate assumptions. Section 4 is devoted to constructing the MWD and establishing the weak and strong duality results. Finally, the conclusion is given in Section 5, and some future directions are discussed in Section 6.
The Fréchet subdifferential of a function h : R^{n_1} × R^{n_2} → R at a point (x̄, ȳ) ∈ R^{n_1} × R^{n_2} is given by

∂^F h(x̄, ȳ) = { ξ : liminf_{(x,y) → (x̄,ȳ)} [h(x, y) − h(x̄, ȳ) − ⟨ξ, (x, y) − (x̄, ȳ)⟩] / ‖(x, y) − (x̄, ȳ)‖ ≥ 0 },

where (x, y) ∈ R^{n_1} × R^{n_2}. If the function h is convex, then its Fréchet subdifferential reduces to the subdifferential of convex analysis.

Non-smooth Robust Bi-level Model
In this section, we consider an uncertain BLPP (P). With the help of the optimal value reformulation and the concept of the robust counterpart, we transform (P) into a single-level robust counterpart problem (RBPP). Later in this section, we introduce the extended non-smooth RCQ. The uncertain BLPP (P) is defined as

(P)  min_{x_1, x_2} F(x_1, x_2)
     subject to  G_i(x_1, x_2, v_i) ≤ 0,  i = 1, 2, . . ., m,
                 x_2 ∈ Υ(x_1),

where, for each i, v_i ∈ Ω_i is an uncertain parameter and Ω_i is a sequentially compact topological space. For each x_1 ∈ R^{n_1}, the parametric optimization problem (P_{x_1}) has the set of optimal solutions Υ(x_1), where

(P_{x_1})  min_{x_2} f(x_1, x_2)  subject to  g_i(x_1, x_2) ≤ 0,  i ∈ I.

For uncertain problems in which the decision-maker has no knowledge of the probability distribution of the uncertain parameters, a well-known robust approach, called the robust counterpart, is used.
The robust counterpart of (P) is:

(RP′)  min_{x_1, x_2} F(x_1, x_2)
       subject to  G_i(x_1, x_2, v_i) ≤ 0  for all  v_i ∈ Ω_i,  i = 1, 2, . . ., m,
                   x_2 ∈ Υ(x_1),

where the uncertain constraints are enforced for all possible values of the parameters within their specified uncertainty sets Ω_i, i = 1, 2, . . ., m.
The problem (RP′) can be viewed as the severest possible scenario of (P). The robust counterpart is a model that resolves the uncertain worst-case scenario without using uncertain variables. Therefore, optimizing (P) via (RP′) is the robust (worst-case) technique for (P). A feasible solution of the robust counterpart problem is a robust feasible solution of the uncertain problem (P), which, by definition, must satisfy all realizations of the constraints from the uncertainty sets Ω_i, i = 1, 2, . . ., m. An optimal solution of (RP′) is a robust feasible solution with the best possible objective value. Using the optimal value function reformulation, we get

(RBPP)  min_{x_1, x_2} F(x_1, x_2)
        subject to  G_i(x_1, x_2, v_i) ≤ 0  for all  v_i ∈ Ω_i,  i = 1, 2, . . ., m,
                    f(x_1, x_2) ≤ φ(x_1),  g_i(x_1, x_2) ≤ 0,  i ∈ I,

where

φ(x_1) = min { f(x_1, x_2) : g_i(x_1, x_2) ≤ 0, i ∈ I }

is the optimal value function. Since f(·, ·) and g_i(·, ·), i ∈ I, are convex functions, φ(·) is also a convex function.
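The two reformulation steps above can be illustrated on a one-dimensional toy instance; this is my own example (the functions F, f, g and the interval for the uncertain parameter are invented for illustration, not taken from the paper):

```python
import numpy as np

# Lower level:  phi(x) = min_y { f(x, y) = y : g(x, y) = x**2 - y <= 0 },
# so phi(x) = x**2.  The bi-level constraint y in Upsilon(x) is replaced
# by the single-level conditions  g(x, y) <= 0  and  f(x, y) <= phi(x),
# which together force y = phi(x) = x**2.
# Upper level:  min F(x, y) = x + y, subject to the uncertain constraint
# G(x, y, v) = x + v - 1 <= 0 for every v in [0, 0.5]; its robust
# counterpart keeps only the worst case  x + 0.5 - 1 <= 0.

def phi(x):
    return x ** 2                       # lower-level optimal value function

def robust_ok(x):
    return x + 0.5 - 1.0 <= 0.0         # worst case over v in [0, 0.5]

xs = np.linspace(-2.0, 2.0, 4001)       # grid search over x; set y = phi(x)
vals = [(x + phi(x), x) for x in xs if robust_ok(x)]
best_val, best_x = min(vals)
# x + x**2 is minimized at x = -1/2 with value -1/4, and -1/2 satisfies
# the robust constraint, so the robust optimum is (-1/2, 1/4).
print(round(best_val, 3), round(best_x, 3))   # -0.25 -0.5
```

The grid search is only for transparency; any convex solver applied to the single-level reformulation would find the same point.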
Let Λ = { (x_1, x_2) : G_i(x_1, x_2, v_i) ≤ 0 for all v_i ∈ Ω_i, i = 1, 2, . . ., m } represent the upper-level constraint set, let K(x_1) = { x_2 : g_i(x_1, x_2) ≤ 0, i ∈ I } be the feasible set of the lower-level problem for a fixed x_1, and let F be the feasible set of (RBPP).

Necessary condition
The core of optimization theory is the concept of optimality conditions. The existence of necessary and sufficient optimality conditions enables the development of efficient numerical approaches for the practical solution of a given optimization problem. Necessary optimality conditions are beneficial because any local minimum must satisfy these conditions. As a result, one can only look for local (or global) minima among points that satisfy the necessary optimality conditions.

Application of the necessary condition
Here, for each x_1 ∈ R, the parametric optimization problem (P_{x_1}) has the set of optimal solutions Υ(x_1), and the optimal value function φ can be computed explicitly. The feasible set of (RBPP) is F = {(x_1, 0) : x_1 ≤ 0}.

Robust bi-level Mond-Weir dual
The MWD and the WD are the two most widely used duals in the literature. Due to the weaker assumptions required, the MWD has an advantage over the WD. Here, we formulate a robust bi-level Mond-Weir dual (RBMWD) corresponding to the considered robust BLPP. Moreover, we develop the relationship between the solutions of these two problems in terms of weak and strong duality theorems.
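For orientation, the classical Mond-Weir dual of a smooth single-level problem min { f(x) : g_i(x) ≤ 0, i = 1, . . ., m } has the following textbook form (this is the standard template, not the RBMWD itself):

```latex
\begin{aligned}
\text{(MWD)}\qquad \max_{u,\,\lambda}\quad & f(u)\\
\text{subject to}\quad & \nabla f(u) + \sum_{i=1}^{m} \lambda_i \nabla g_i(u) = 0,\\
& \lambda_i\, g_i(u) \ge 0,\qquad \lambda_i \ge 0,\quad i = 1,\dots,m.
\end{aligned}
```

Weak duality for this pair typically holds when f is pseudoconvex and λᵀg is quasiconvex, whereas the Wolfe dual (whose objective is f(u) + λᵀg(u)) typically needs convexity; this is the sense in which the MWD requires weaker assumptions.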

Results and conclusion
This paper's contributions are as follows: we have considered a bi-level model whose upper-level constraints include some uncertainty and whose lower-level problem is fully convex. An equivalent single-level mathematical problem is established by using the optimal value reformulation. To deal with the uncertainty in the upper-level problem, we follow the concept of robust counterpart optimization. On this basis, we develop the KKT-type necessary optimality conditions. Constraint qualifications play an essential role in developing necessary optimality conditions and strong duality results. As the Abadie CQ is weaker than most existing CQs, one naturally tries to use it; but since our reformulated problem is non-smooth and uncertain, we have extended the ACQ and introduced a non-smooth RCQ. Moreover, an example is given to validate our necessary condition. Further, we have developed the robust bi-level MWD and established the relationship between the solutions of both problems with the help of weak and strong duality theorems.

Future directions
For future research, developing sufficient conditions for robust BLPPs will be challenging. Formulating sufficient conditions in a non-smooth setting alone is a difficult task, and here uncertainty is added on top of the non-smoothness, so one should be very careful when examining these types of problems. From a computational perspective, one could also develop efficient algorithms based on the duality results.