ROBUST DUALITY FOR GENERALIZED CONVEX NONSMOOTH VECTOR PROGRAMS WITH UNCERTAIN DATA IN CONSTRAINTS

Abstract. Robust optimization has emerged as a potent approach to the study of mathematical problems with data uncertainty. We use robust optimization to study a nonsmooth nonconvex mathematical program over cones with data uncertainty, in which the functions involved are generalized convex. We establish sufficient optimality conditions for the problem. Then we construct its robust dual problem and prove appropriate duality theorems, which relate uncertain problems to their corresponding robust dual problems.


Introduction
In mathematical programming, data uncertainty may arise from various factors, such as measurement or prediction errors and unknown future demands. The robust optimization (RO) technique is applied to handle such scenarios: it gathers all possible data perturbations into one big picture and produces a solution immunized against them. This deterministic approach works well even in the worst possible case of uncertainty. This paper deals with two main components of mathematical programs arising from real-world problems: one is the uncertainty in the data fed into the model, and the other is the presence of nonsmooth nonconvex functions in the mathematical model. While the former is quite a young research field, the latter has roots dating back to 1949.
Duality is one of the pillars in the construction of algorithms, since a dual problem gives a concrete lower bound on the value of the mathematical program and thereby helps in finding efficient solutions. Beck and Ben-Tal [2] first modeled a dual for a mathematical problem involving convex functions and data uncertainty. Further developments in this direction, with nonconvex programs under data uncertainty, were carried out by Jeyakumar et al. [14] for single-objective problems and by Chuong [5] for multiobjective programs.
Fakhar et al. [11] defined pseudo-quasi generalized convex functions to study optimality and duality for nonsmooth multiobjective programs and used their findings to solve portfolio optimization problems. For data-uncertain problems over cones, Chen et al. [4] defined type-I generalized convex functions to study optimality conditions and duality results. Suneja et al. [18] introduced various classes of nonsmooth generalized convex functions to study mathematical programs in which data uncertainty is not incorporated.
In this article, we introduce generalized pseudo-quasi type-I-convex functions and illustrate them with non-trivial numerical examples. We study Karush-Kuhn-Tucker type optimality conditions for a mathematical program over cones with data uncertainty. Moreover, we propose a Mond-Weir type dual model for the robust program and justify it by duality theorems. An example illustrating the weak duality theorem is also provided.

Preliminaries
Let $\mathbb{R}^k$ denote the $k$-dimensional Euclidean space and $\mathbb{R}^k_+$ its non-negative orthant. Let $K \subseteq \mathbb{R}^k$ be a non-empty, proper, closed and convex cone and let $K^*$ denote its positive dual cone (or simply, dual cone), given by
$$K^* = \{y \in \mathbb{R}^k : y^T x \ge 0, \ \forall x \in K\}.$$
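As a quick numerical sketch of the dual-cone condition (our own illustration, not part of the paper's development): for a finitely generated cone it suffices to test $y^T x \ge 0$ on the generating rays, since non-negativity on generators extends to all conic combinations. Below, $K = \mathbb{R}^2_+$, which is self-dual.

```python
def in_dual_cone(y, cone_rays, tol=1e-12):
    """Check y^T x >= 0 for every generating ray x of the cone.

    Non-negativity on the generators extends to every conic combination,
    so this decides membership of y in the dual cone K*."""
    return all(sum(yi * xi for yi, xi in zip(y, x)) >= -tol for x in cone_rays)

# K = R^2_+ is generated by its extreme rays (1,0) and (0,1);
# the non-negative orthant is self-dual, so K* = R^2_+ as well.
rays = [(1.0, 0.0), (0.0, 1.0)]

print(in_dual_cone((1.0, 2.0), rays))   # lies in K*
print(in_dual_cone((-0.5, 1.0), rays))  # does not lie in K*
```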
In the present mathematical model, we assume that the data uncertainty is confined to the constraints only and that the uncertain parameters of each constraint are independent of those of the other constraints. The problem involves $m$ constraints, so we consider the parameter $v := (v_1, v_2, \ldots, v_m)^T$ such that the uncertainty factor of the $i$-th constraint is $v_i \in V_i \subset \mathbb{R}^{n_i}$, $i = 1, 2, \ldots, m$, and $v \in \mathbb{R}^p$. For each $i = 1, 2, \ldots, m$, $V_i$ is assumed to be a non-empty, compact and convex subset of $\mathbb{R}^{n_i}$, with $\sum_{i=1}^m n_i = p$, and we write $V := V_1 \times V_2 \times \cdots \times V_m$. Consider a general nonsmooth nonconvex uncertain vector optimization problem over cones as follows:
$$\text{(UCVOP)} \qquad K\text{-minimize } f(x) \quad \text{subject to } -g(x, v) \in \mathbb{R}^m_+,$$
where the vector-valued functions $f$ and $g$ are such that $f : \mathbb{R}^n \to \mathbb{R}^k$, $g : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}^m$ and, for $i = 1, 2, \ldots, m$, $g_i : \mathbb{R}^n \times \mathbb{R}^{n_i} \to \mathbb{R}$. The robust counterpart of the above uncertain problem is as follows:
$$\text{(RVOP)} \qquad K\text{-minimize } f(x) \quad \text{subject to } -g(x, v) \in \mathbb{R}^m_+, \ \forall v \in V.$$
In the above problems, the relation $g(x, v) \le 0$ is equivalent to $-g(x, v) \in \mathbb{R}^m_+$. Let $F := \{x \in \mathbb{R}^n : -g(x, v) \in \mathbb{R}^m_+, \ \forall v \in V\}$ be the set of feasible solutions of (RVOP). This set $F$, which accounts for all possible scenarios of uncertainty, is called the robust feasible set or the set of robust feasible solutions of (UCVOP), and any $x \in F$ is called a robust feasible solution of (UCVOP) or a feasible solution of (RVOP). We assume that the function $f$ and the function $g$ with respect to its first argument are locally Lipschitz on $\mathbb{R}^n$, and that the components of $g$ are upper semi-continuous with respect to the second argument. These assumptions are required in the proof of the necessary optimality conditions.

Definition 2.1. Let $\bar{x}$ be a feasible solution of (RVOP). Then
(i) $\bar{x}$ is called a weakly robust efficient solution of (UCVOP), or a weakly efficient solution of (RVOP), if there exists no $x \in F$ such that $f(x) - f(\bar{x}) \in -\operatorname{int} K$;
(ii) $\bar{x}$ is called a robust efficient solution of (UCVOP), or an efficient solution of (RVOP), if there exists no $x \in F$ such that $f(x) - f(\bar{x}) \in -K \setminus \{0\}$.

The functions involved in the present program are nonsmooth (or nondifferentiable) functions.
For such functions the gradient need not exist, so differentiability is replaced by a generalized notion of differentiability, which is a weaker assumption but still guarantees directional derivatives. Several mathematical problems in error analysis, distance measurement, electric circuits and norms, along with many engineering problems, lead to nonsmooth optimization problems (convex or nonconvex). Hence the study of generalized differentiability has become a substantial subfield of mathematical programming. We now recall the generalized subdifferentials of locally Lipschitz (possibly nonconvex) functions as given in [6].
Definition 2.2. A real-valued function $r : \mathbb{R}^n \to \mathbb{R}$ is said to be locally Lipschitz at $\bar{x} \in \mathbb{R}^n$ if there exist a neighborhood $N$ of $\bar{x}$ and a number $l > 0$ such that
$$|r(x) - r(y)| \le l \, \|x - y\|, \quad \forall x, y \in N.$$
If $r$ is locally Lipschitz at each point of $\mathbb{R}^n$, then we say it is locally Lipschitz on $\mathbb{R}^n$. A vector-valued function $s = (s_1, s_2, \ldots, s_k)^T : \mathbb{R}^n \to \mathbb{R}^k$ is said to be locally Lipschitz on $\mathbb{R}^n$ if each of its components $s_1, s_2, \ldots, s_k$ is locally Lipschitz on $\mathbb{R}^n$.

For a locally Lipschitz function $r$, Clarke's generalized directional derivative at $\bar{x}$ in the direction $d \in \mathbb{R}^n$ is given by
$$r^{\circ}(\bar{x}; d) = \limsup_{y \to \bar{x},\ t \downarrow 0} \frac{r(y + td) - r(y)}{t}.$$
Clarke's generalized subgradient at $\bar{x}$ is denoted by $\partial r(\bar{x})$ and is defined as
$$\partial r(\bar{x}) = \{\xi \in \mathbb{R}^n : r^{\circ}(\bar{x}; d) \ge \xi^T d, \ \forall d \in \mathbb{R}^n\},$$
which is a non-empty compact set because $r$ is assumed to be locally Lipschitz. In this paper we deal with nonsmooth locally Lipschitz functions; in the differentiable case the structure is simpler, as the generalized gradient reduces to a singleton. In other words, if $r$ is a differentiable function, then $\partial r(\bar{x}) = \{\nabla r(\bar{x})\}$.
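Clarke's directional derivative can be approximated numerically by sampling $y$ near $\bar{x}$ and small step sizes $t$. The sketch below (a crude finite sample, not a rigorous computation) recovers the classical fact that for $r(x) = |x|$ one has $r^{\circ}(0; d) = |d|$ and hence $\partial r(0) = [-1, 1]$:

```python
def clarke_dd(r, xbar, d, eps=1e-4, n=200):
    """Crude numerical estimate of Clarke's generalized directional derivative
    r°(xbar; d) = limsup_{y -> xbar, t -> 0+} (r(y + t*d) - r(y)) / t,
    obtained by maximizing the difference quotient over sampled y and t."""
    best = float("-inf")
    for i in range(1, n + 1):
        y = xbar + eps * (2 * i / n - 1)      # y sweeps [xbar - eps, xbar + eps]
        for t in (1e-5, 1e-6, 1e-7):
            best = max(best, (r(y + t * d) - r(y)) / t)
    return best

r = abs
print(clarke_dd(r, 0.0, 1.0))   # ≈ 1 = |d|; consistent with ∂r(0) = [-1, 1]
print(clarke_dd(r, 0.0, -2.0))  # ≈ 2
```

The estimate agrees with the support-function characterization $r^{\circ}(\bar{x}; d) = \max\{\xi^T d : \xi \in \partial r(\bar{x})\}$.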
Also note that for a locally Lipschitz vector-valued function $s : \mathbb{R}^n \to \mathbb{R}^k$, the generalized gradient at $\bar{x} \in \mathbb{R}^n$ is given by
$$\partial s(\bar{x}) = \{(\xi_1(\bar{x}), \xi_2(\bar{x}), \ldots, \xi_k(\bar{x}))^T : \xi_i(\bar{x}) \in \partial s_i(\bar{x}),\ i = 1, 2, \ldots, k\},$$
where $\xi_i(\bar{x})$ is a Clarke's generalized subgradient of the scalar function $s_i$ ($i = 1, 2, \ldots, k$) at $\bar{x} \in \mathbb{R}^n$.

Convexity assumptions are used most frequently in optimization theory since they yield many global properties. But the problems arising from real-life scenarios do not always involve well-behaved convex functions, so the notion of convexity has to be weakened. One way to do this is to introduce generalized convex functions, and we contribute in that direction by introducing generalized $(K, \mathbb{R}^m_+)$ pseudo-quasi type-I-convex functions. Speaking of applications, some optimization problems arising in data correction technology, production planning and financial planning, as well as several problems in statistics, probability theory and artificial intelligence, involve pseudoconvex objective functions (see [20, 21]). Quasi-convex functions have several applications in signal processing and machine learning. Motivated by the definitions of generalized convexity introduced in [4, 11, 18], we introduce the following class of functions.
Definition 2.3. The function $(f, g)$ is said to be generalized $(K, \mathbb{R}^m_+)$ pseudo-quasi type-I-convex at $\bar{x} \in \mathbb{R}^n$ if for each $x \in F$, $\mu \in \mathbb{R}^m_+$ and $v \in V$ there exists $d \in \mathbb{R}^n$ such that
$$f(x) - f(\bar{x}) \in -\operatorname{int} K \ \Longrightarrow \ -Ad \in \operatorname{int} K, \quad \forall A \in \partial f(\bar{x}),$$
$$\mu^T g(\bar{x}, v) \ge 0 \ \Longrightarrow \ \mu^T B d \le 0, \quad \forall B \in \partial_x g(\bar{x}, v).$$
The following example demonstrates that the class of generalized pseudo-quasi type-I-convex functions is wider than the class of generalized type-I functions of [4].
Example 2.4. The robust feasible region of the data-uncertainty problem becomes $F = [3/4, 2]$. Let $u = 2$; then $\partial f(u) = [2] \times [1, 2]$ and $\partial_x g(u, v) = [0, 1]$. Since the implications of the above definition hold for $A = (2, 1) \in \partial f(u)$ and $d = -x/2$, the function $(f, g)$ is generalized $(K, \mathbb{R}^1_+)$ pseudo-quasi type-I-convex. We now show that this function does not belong to the earlier class of functions defined by Chen et al. [4]: at $x = 1.5$, the defining condition of [4] fails for $A = (2, 1) \in \partial f(u)$ and $B = 0$.

Definition 2.5. The function $(f, g)$ is said to be strictly generalized $(K, \mathbb{R}^m_+)$ pseudo-quasi type-I-convex at $\bar{x} \in \mathbb{R}^n$ if, for each $x \in F$ and all $A \in \partial f(\bar{x})$, $B \in \partial_x g(\bar{x}, v)$, $\mu \in \mathbb{R}^m_+$, $v \in V$, there exists $d \in \mathbb{R}^n$ such that
$$f(x) - f(\bar{x}) \in -K \setminus \{0\} \ \Longrightarrow \ -Ad \in \operatorname{int} K,$$
$$\mu^T g(\bar{x}, v) \ge 0 \ \Longrightarrow \ \mu^T B d \le 0.$$
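The two implications defining pseudo-quasi type-I-convexity can also be checked numerically on a toy smooth instance. The pair below ($k = m = 1$, $K = \mathbb{R}_+$, $f(x) = x + x^3$, $g(x, v) = v - x$, $V = [1, 2]$) is our own hypothetical choice, distinct from the example above; since both functions are differentiable, the subdifferentials are singletons.

```python
# Toy check (hypothetical data): f(x) = x + x^3, g(x, v) = v - x, V = [1, 2],
# so the robust feasible set is F = [2, inf).  We test the two implications
# at xbar = 2 with the direction choice d = x - xbar.
f = lambda x: x + x**3
df = lambda x: 1 + 3 * x**2        # ∂f(x) = {f'(x)} in the differentiable case
g = lambda x, v: v - x
dg = lambda x, v: -1.0             # ∂_x g(x, v) = {-1}

xbar = 2.0
ok = True
for x in [2.0 + 0.1 * i for i in range(30)]:         # sampled feasible points
    d = x - xbar                                      # candidate direction
    # pseudo part: f(x) - f(xbar) < 0  ==>  f'(xbar) * d < 0
    if f(x) - f(xbar) < 0 and not df(xbar) * d < 0:
        ok = False
    # quasi part: mu * g(xbar, v) >= 0  ==>  mu * dg(xbar, v) * d <= 0
    for mu in (0.0, 1.0, 2.0):
        for v in (1.0, 1.5, 2.0):
            if mu * g(xbar, v) >= 0 and not mu * dg(xbar, v) * d <= 0:
                ok = False
print(ok)  # True: both implications hold at xbar over the sampled grid
```

Here the pseudo part is vacuous ($f$ is increasing and $x \ge \bar{x}$ on $F$), while the quasi part holds because $d \ge 0$.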

Sufficient optimality conditions
For the problem (UCVOP), we obtain sufficient conditions for a robust feasible solution to be a (weakly) robust efficient solution, under the assumption that the functions involved are nonsmooth generalized pseudo-quasi type-I-convex.
Theorem 3.1. Let $\bar{x} \in F$ and suppose there exist $\bar{v} \in V$, $\bar{\lambda} \in K^* \setminus \{0\}$ and $\bar{\mu} \in \mathbb{R}^m_+$ such that
$$0 \in \partial f(\bar{x})^T \bar{\lambda} + \partial_x g(\bar{x}, \bar{v})^T \bar{\mu}, \quad (3.1)$$
$$\bar{\mu}^T g(\bar{x}, \bar{v}) = 0. \quad (3.2)$$
If the function $(f, g)$ is generalized $(K, \mathbb{R}^m_+)$ pseudo-quasi type-I-convex at $\bar{x}$, then $\bar{x}$ is a weakly robust efficient solution of (UCVOP).

Proof. We will prove this by contradiction. Suppose $\bar{x}$ is not a weakly robust efficient solution of (UCVOP), i.e., it is not a weakly efficient solution of (RVOP). Then there exists an $x \in F$ such that
$$f(x) - f(\bar{x}) \in -\operatorname{int} K. \quad (3.3)$$
Also, from (3.2),
$$\bar{\mu}^T g(\bar{x}, \bar{v}) \ge 0. \quad (3.4)$$
Since the function $(f, g)$ is generalized $(K, \mathbb{R}^m_+)$ pseudo-quasi type-I-convex at $\bar{x}$, the relations (3.3) and (3.4) give, for some $d \in \mathbb{R}^n$,
$$-Ad \in \operatorname{int} K, \quad \forall A \in \partial f(\bar{x}), \quad (3.5)$$
and
$$\bar{\mu}^T B d \le 0, \quad \forall B \in \partial_x g(\bar{x}, \bar{v}). \quad (3.6)$$
Now, for $\bar{\lambda} \in K^* \setminus \{0\}$, relation (3.5) yields
$$\bar{\lambda}^T A d < 0, \quad \forall A \in \partial f(\bar{x}), \quad (3.7)$$
which together with (3.6) gives
$$(\bar{\lambda}^T A + \bar{\mu}^T B)\, d < 0, \quad \forall A \in \partial f(\bar{x}), \ B \in \partial_x g(\bar{x}, \bar{v}). \quad (3.9)$$
This inequality is true for $\bar{\mu} = 0$ also. But equation (3.1) implies that there exist $\tilde{A} \in \partial f(\bar{x})$ and $\tilde{B} \in \partial_x g(\bar{x}, \bar{v})$ such that
$$\tilde{A}^T \bar{\lambda} + \tilde{B}^T \bar{\mu} = 0.$$
This contradicts (3.9). So the supposition is not true, and we get that $\bar{x}$ is a weakly robust efficient solution of (UCVOP). □
Theorem 3.2. Let $\bar{x} \in F$ and suppose there exist $\bar{v} \in V$, $\bar{\lambda} \in K^* \setminus \{0\}$ and $\bar{\mu} \in \mathbb{R}^m_+$ satisfying (3.1) and (3.2). If the function $(f, g)$ is strictly generalized $(K, \mathbb{R}^m_+)$ pseudo-quasi type-I-convex at $\bar{x}$, then $\bar{x}$ is a robust efficient solution of (UCVOP).

Proof. The proof runs parallel to that of Theorem 3.1, with (3.3) replaced by $f(x) - f(\bar{x}) \in -K \setminus \{0\}$. The resulting inequality still holds in the case of $\bar{\mu} = 0$, but it contradicts (3.1). Hence the supposition is not true and we conclude that $\bar{x}$ is a robust efficient solution of (UCVOP). □

Mond-Weir type dual
In this section, we study the following robust Mond-Weir type vector dual of (RVOP):
$$\text{(RMWD)} \qquad K\text{-maximize } f(y)$$
$$\text{subject to} \quad 0 \in \partial f(y)^T \lambda + \partial_x g(y, v)^T \mu, \quad (4.1)$$
$$\mu^T g(y, v) \ge 0, \quad (4.2)$$
$$y \in \mathbb{R}^n, \ v \in V, \ \lambda \in K^* \setminus \{0\}, \ \mu \in \mathbb{R}^m_+.$$

Theorem 4.1 (Weak Duality). Let $x$ be a feasible solution of (RVOP) and $(y, v, \lambda, \mu) \in \mathbb{R}^n \times V \times K^* \setminus \{0\} \times \mathbb{R}^m_+$ be a feasible solution of (RMWD). If the function $(f, g)$ is generalized $(K, \mathbb{R}^m_+)$ pseudo-quasi type-I-convex at $y$, then $f(x) - f(y) \notin -\operatorname{int} K$.

Proof. Suppose to the contrary that
$$f(x) - f(y) \in -\operatorname{int} K. \quad (4.5)$$
The dual feasibility condition (4.2), which holds for $\mu = 0$ as well as for every $\mu \in \mathbb{R}^m_+$, gives
$$\mu^T g(y, v) \ge 0. \quad (4.7)$$
Since the function $(f, g)$ is generalized $(K, \mathbb{R}^m_+)$ pseudo-quasi type-I-convex at $y$, the relations (4.5) and (4.7) imply, for some $d \in \mathbb{R}^n$,
$$-Ad \in \operatorname{int} K, \quad \forall A \in \partial f(y), \quad (4.8)$$
$$\mu^T B d \le 0, \quad \forall B \in \partial_x g(y, v). \quad (4.9)$$
Now for $\lambda \in K^* \setminus \{0\}$, relation (4.8) gives $\lambda^T A d < 0$, which together with (4.9) yields
$$(\lambda^T A + \mu^T B)\, d < 0, \quad \forall A \in \partial f(y), \ B \in \partial_x g(y, v). \quad (4.10)$$
This includes the case of $\mu = 0$ also. Since $(y, v, \lambda, \mu)$ is a feasible solution of (RMWD), relation (4.1) holds. This means that, for some $A_1 \in \partial f(y)$ and $B_1 \in \partial_x g(y, v)$,
$$(\lambda^T A_1 + \mu^T B_1)\, d = 0. \quad (4.11)$$
Equations (4.10) and (4.11) contradict each other. So our supposition is incorrect. Hence the weak duality theorem holds. □
The following example illustrates the above weak duality theorem.
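As a complementary numerical sketch (a toy scalar instance of our own, with hypothetical $f$, $g$ and $V$, $k = m = 1$, $K = \mathbb{R}_+$), we can verify the weak duality inequality on a grid: every dual-feasible objective value stays below every primal-feasible one.

```python
# Toy instance (hypothetical data):
#   primal (RVOP): min f(x) = x**2  s.t.  -g(x, v) = x - v >= 0 for all v in [1, 2],
#   so the robust feasible set is F = [2, inf).
# Mond-Weir dual (RMWD) feasibility in the differentiable case:
#   2*lam*y - mu = 0,   mu*(v - y) >= 0,   lam > 0, mu >= 0, v in [1, 2].
f = lambda x: x**2

primal_vals = [f(2.0 + 0.05 * i) for i in range(100)]         # x in F (sampled)

dual_vals = []
for y in [0.05 * i - 3 for i in range(160)]:                   # y sweeps [-3, 5)
    for lam in (0.5, 1.0, 2.0):
        for v in (1.0, 1.5, 2.0):
            mu = 2 * lam * y                # forced by the stationarity condition
            if mu >= 0 and mu * (v - y) >= 0:
                dual_vals.append(f(y))      # dual-feasible objective value

# Weak duality: no dual value exceeds any primal value.
print(max(dual_vals) <= min(primal_vals))  # True
```

The dual constraints force $0 \le y \le \max V = 2$, so every dual value is at most $4 = \min_{x \in F} f(x)$, as the printout confirms.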
Theorem 4.2 (Weak Duality). Let $x$ be a feasible solution of (RVOP) and $(y, v, \lambda, \mu) \in \mathbb{R}^n \times V \times K^* \setminus \{0\} \times \mathbb{R}^m_+$ be a feasible solution of (RMWD). If the function $(f, g)$ is strictly generalized $(K, \mathbb{R}^m_+)$ pseudo-quasi type-I-convex at $y$, then $f(x) - f(y) \notin -K \setminus \{0\}$.

Proof. The proof follows along the lines of Theorems 3.2 and 4.1.
To prove the strong duality theorem we need the Generalized Robust Slater Constraint Qualification (GRSCQ), which states that there exists $x_o \in \mathbb{R}^n$ such that $-g(x_o, v) \in \operatorname{int} \mathbb{R}^m_+$ for all $v \in V$. This constraint qualification is studied in [4, 16]. We further assume that the hypotheses of the necessary optimality conditions ([4], Thm. 3.2, p. 422) hold.
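The GRSCQ is easy to verify numerically once the constraint data are concrete. The sketch below uses our own hypothetical constraint $g(x, v) = v - x$ with $V = [1, 2]$; since $V$ is compact, strict feasibility for all $v$ reduces to checking $x_o > \max V$.

```python
def satisfies_grscq(x_o, V_samples):
    """Check the Slater-type condition -g(x_o, v) = x_o - v > 0 strictly,
    for every sampled uncertainty value v (hypothetical g(x, v) = v - x)."""
    return all(x_o - v > 0 for v in V_samples)

V_samples = [1 + 0.01 * i for i in range(101)]   # dense sample of V = [1, 2]
print(satisfies_grscq(3.0, V_samples))  # True: x_o = 3 is a robust Slater point
print(satisfies_grscq(2.0, V_samples))  # False: strict inequality fails at v = 2
```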
Theorem 4.3 (Strong Duality). Let $\bar{x}$ be a weakly robust efficient solution of (UCVOP) at which the GRSCQ and the assumptions of the necessary optimality conditions hold. Then there exist $\bar{v} \in V$, $\bar{\lambda} \in K^* \setminus \{0\}$ and $\bar{\mu} \in \mathbb{R}^m_+$ such that $(\bar{x}, \bar{v}, \bar{\lambda}, \bar{\mu})$ is a weakly efficient solution of (RMWD); if, moreover, the hypotheses of Theorem 4.2 hold, it is an efficient solution of (RMWD).

Proof. If $\bar{x}$ is a weakly robust efficient solution of (RVOP), then under the given assumptions there exist $\bar{v} \in V$, $\bar{\lambda} \in K^* \setminus \{0\}$ and $\bar{\mu} \in \mathbb{R}^m_+$ such that
$$0 \in \partial f(\bar{x})^T \bar{\lambda} + \partial_x g(\bar{x}, \bar{v})^T \bar{\mu}, \quad (4.13)$$
$$\bar{\mu}^T g(\bar{x}, \bar{v}) = 0. \quad (4.14)$$
This implies that $(\bar{x}, \bar{v}, \bar{\lambda}, \bar{\mu})$ is a feasible solution of (RMWD) and the objective values of the primal and dual problems are equal. Now we claim that $(\bar{x}, \bar{v}, \bar{\lambda}, \bar{\mu})$ is also a weakly efficient solution of (RMWD). If not, then there is a feasible solution of the dual, say $(\tilde{x}, \tilde{v}, \tilde{\lambda}, \tilde{\mu})$, such that
$$f(\tilde{x}) - f(\bar{x}) \in \operatorname{int} K.$$
But then, for the feasible solution $\bar{x}$ of (RVOP) and the feasible solution $(\tilde{x}, \tilde{v}, \tilde{\lambda}, \tilde{\mu})$ of (RMWD), the Weak Duality Theorem 4.1 is contradicted. So $(\bar{x}, \bar{v}, \bar{\lambda}, \bar{\mu})$ is a weakly efficient solution of the dual (RMWD). The second part follows from Theorem 4.2. □
Proof. The proof follows on the lines of Weak Duality Theorem 4.1.