SHARP LAGRANGE MULTIPLIERS FOR SET-VALUED OPTIMIZATION PROBLEMS

Abstract. In this paper, we give a comparison among some notions of weak sharp minima introduced in Amahroq et al. [Le Matematiche 73 (2018) 99–114], Durea and Strugariu [Nonlinear Anal. 73 (2010) 2148–2157] and Zhu et al. [Set-Valued Var. Anal. 20 (2012) 637–666] for set-valued optimization problems. Besides, we establish sharp Lagrange multiplier rules for general constrained set-valued optimization problems involving new scalarization functionals based on the oriented distance function. Moreover, we provide sufficient optimality conditions for the considered problems without any convexity assumptions.


Introduction
The concept of sharp minimizer has been investigated for different types of optimization problems: real-valued, vector-valued, as well as set-valued optimization problems. For real-valued optimization problems, Auslender [6] established necessary and sufficient optimality conditions for a local sharp minimizer of order γ ∈ {1, 2}, where the objective function is locally Lipschitzian and the feasible set is closed. For the same problem, Studniarski [34] extended the results of Auslender [6] to any extended real-valued objective function (not necessarily locally Lipschitzian), a feasible set that is not necessarily closed, and sharp minimizers of order γ ≥ 2. Ward [36] followed the line of Studniarski in a different way.
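For reference, the classical real-valued notion discussed above can be stated as follows (a standard formulation, where f denotes the objective function and S the feasible set):

```latex
\bar{x} \in S \text{ is a local sharp minimizer of order } \gamma > 0
\text{ if there exist } \delta > 0 \text{ and a neighborhood } U
\text{ of } \bar{x} \text{ such that}
\quad
f(x) \;\ge\; f(\bar{x}) + \delta\,\| x - \bar{x} \|^{\gamma}
\quad \text{for all } x \in S \cap U .
```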
For vector-valued optimization problems, Jiménez [19] introduced the notion of sharp minimizer of order γ; in addition, together with Novo [20, 21], he developed the theory of minimizers of order γ (γ ≥ 1 an integer) in different frameworks. Two years later, Bednarczuk [8] defined the notion of weak sharp minimizer of order γ, where the ordering cone is assumed to be closed, convex, and pointed. This concept was used to prove conditions for upper Hölder continuity and Hölder calmness of the solution mappings of parametric vector optimization problems. Later, Studniarski [35] introduced the notion of weak φ-sharp local minima in vector optimization problems and extended some necessary and sufficient optimality conditions obtained by Jiménez [19].
To shed light on the study of sharp minimality in set-valued optimization problems we may refer to the papers [5, 13, 14, 40]. In [14], Flores-Bazán and Jiménez introduced the concept of sharp minima for a set-valued optimization problem and provided some optimality conditions. In the paper of Durea and Strugariu [13], the sharp minimizer was introduced by means of the oriented distance function, and necessary optimality conditions were established with the use of Mordukhovich generalized differentiation. Later, Zhu et al. [40] proposed a concept of sharp minimizer by means of the distance function; they extended the Fermat rules for local minimizers of constrained set-valued optimization problems to sharp and weak sharp minimizers in Banach or Asplund spaces by means of Mordukhovich generalized differentiation and the normal cone. Very recently, Amahroq et al. [5] introduced this notion in set-valued optimization problems without recourse to the distances adopted in Durea and Strugariu [13] and Zhu et al. [40]. They established necessary and sufficient optimality conditions involving set-valued derivatives, and they provided optimality conditions in terms of Fritz-John multipliers under convexity assumptions on the objective set-valued mapping, using the classical separation theorem. A new concept of sharp minima in set-valued optimization problems by means of the pseudo-relative interior, namely the pseudo-relative φ-sharp minimizer, was proposed and studied in Amahroq and Oussarhan [1].
The importance of the study of weak sharp minima arises in stability analysis, sensitivity analysis, and the study of the convergence of iterative numerical procedures; for instance, see [6, 10, 12, 15, 27, 38]. It is also worth mentioning that the study of weak sharp minimizers is closely related to the study of error bounds in optimization; for more details we refer to Bednarczuk [8], Zheng et al. [39] and the references therein.
The tools used in the paper of Durea and Strugariu [13] to derive necessary optimality conditions in terms of multiplier rules require that the function φ given in Definition 2.2 be Fréchet differentiable at 0 with φ′(0) > 0, which is not the case for φ(t) = t^γ with γ ≠ 1. In this paper, we study three notions of weak sharp minima, namely those introduced in Amahroq et al. [5], Durea and Strugariu [13] and Zhu et al. [40], and we provide a comparison among them. Building on the concept of sharp minimizer given in Amahroq et al. [5], we generalize the results of Amahroq et al. [5] and those of Durea and Strugariu [13] for φ(t) = t^γ with γ an integer, by establishing Lagrange multiplier rules for the general constrained and the explicitly constrained set-valued optimization problems in terms of Fritz-John as well as Karush-Kuhn-Tucker multipliers, called sharp Fritz-John and sharp Karush-Kuhn-Tucker multipliers, respectively. To do this, we introduce some scalarization techniques, suitable for sharp minima, based on the oriented distance function. Moreover, we provide sufficient optimality conditions for global sharp minimizers of order γ > 0, which were not given in Durea and Strugariu [13].
The rest of the paper is organized as follows. In Section 2, we recall some definitions and prove some preliminary results needed in the sequel. In Sections 3 and 4, we establish sharp Fritz-John multipliers as well as sharp Karush-Kuhn-Tucker multipliers of order γ = 1 in the weak sense. In Section 5, we derive necessary optimality conditions in terms of multiplier rules for sharp minimizers of higher order γ ≥ 2 (γ an integer) in the weak sense. Necessary optimality conditions for sharp minima in the strong sense are established in Section 6. In Section 7, we provide sufficient optimality conditions for global sharp minima (γ > 0) in the weak sense without any convexity assumptions. In addition, we show that the necessary optimality conditions obtained in Sections 3-5 may become sufficient under suitable assumptions.

Preliminaries
Let F be a set-valued map between Banach spaces X and Y, let K_Y ⊂ Y be a pointed (i.e., K_Y ∩ (−K_Y) = {0}) closed solid (i.e., with nonempty interior, int(K_Y) ≠ ∅) convex cone, and let G be a set-valued map from X to a Banach space Z which is ordered by the pointed closed convex cone K_Z ⊂ Z. We write ‖(x, y)‖ = ‖x‖ + ‖y‖ for the norm on the product space X × Y. In the sequel, the domain and the graph of F are respectively given by Dom(F) := {x ∈ X : F(x) ≠ ∅} and gr(F) := {(x, y) ∈ X × Y : y ∈ F(x)}. If A is a nonempty subset of X and B is a nonempty subset of Y, then F(A) = ⋃_{x ∈ A} F(x) and F⁻¹(B) = {x ∈ X : F(x) ∩ B ≠ ∅}. Throughout this paper, X*, Y* and Z* denote the continuous duals of X, Y and Z respectively, and we write ⟨·, ·⟩ for the canonical bilinear forms with respect to the dualities ⟨X*, X⟩, ⟨Y*, Y⟩ and ⟨Z*, Z⟩.
For a nonempty subset S ⊂ Y, let us recall the oriented distance function Δ_S (see [37]), which is defined by Δ_S(y) = d(y, S) − d(y, Y ∖ S), where d(·, S) is the usual distance function d(y, S) = inf_{s ∈ S} ‖y − s‖. In the next proposition we collect some useful properties of Δ_S.
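To fix ideas, the oriented distance can be computed in closed form for simple sets. The following sketch (illustrative only, not part of the paper) takes Y = ℝ and S = (−∞, 0], for which Δ_S(y) = y:

```python
# Oriented distance Delta_S(y) = d(y, S) - d(y, Y \ S), illustrated for
# Y = R and S = (-inf, 0]. Here d(y, S) = max(y, 0) and
# d(y, (0, +inf)) = max(-y, 0), so Delta_S(y) = y for every real y.

def dist_to_S(y: float) -> float:
    """Distance from y to S = (-inf, 0]."""
    return max(y, 0.0)

def dist_to_complement(y: float) -> float:
    """Distance from y to R \\ S = (0, +inf)."""
    return max(-y, 0.0)

def oriented_distance(y: float) -> float:
    return dist_to_S(y) - dist_to_complement(y)
```

Note that Δ_S is negative on the interior of S and positive outside S, which is the property exploited by the scalarization functionals used in the paper.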

Definition 2.4 ([5]). Let γ > 0. It is said that (x̄, ȳ) ∈ gr(F) ∩ (Ω × Y) is a local sharp minimizer of order γ in the strong sense (resp. in the weak sense) for (SP_1) if there exist δ > 0 and a neighborhood U of x̄ such that inequality (2.4) (resp. (2.5)) holds for all x ∈ U ∩ Ω, where B_Y is the closed unit ball in Y. When (2.4) (resp. (2.5)) holds for all x ∈ Ω, then (x̄, ȳ) is said to be a global sharp minimizer of order γ in the strong sense (resp. in the weak sense) for (SP_1).
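As a scalar illustration of the role of the order γ (not the set-valued Definition 2.4 itself), f(x) = x² has x̄ = 0 as a sharp minimizer of order 2 with δ = 1, but not of order 1, since the quotient (f(x) − f(0))/|x| tends to 0:

```python
def sharp_ratio(f, xbar, x, gamma):
    """Ratio (f(x) - f(xbar)) / |x - xbar|**gamma used to test order-gamma sharpness."""
    return (f(x) - f(xbar)) / abs(x - xbar) ** gamma

f = lambda x: x * x
points = [10.0 ** (-k) for k in range(1, 8)]

# Order 2: the ratio is identically 1 near 0, so delta = 1 certifies sharpness.
order2 = [sharp_ratio(f, 0.0, x, 2) for x in points]

# Order 1: the ratio equals |x| and tends to 0, so no delta > 0 works.
order1 = [sharp_ratio(f, 0.0, x, 1) for x in points]
```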
Remark 2.5. It is easy to see that: (i) a sharp minimizer of order γ in the strong sense is a sharp minimizer of order γ in the weak sense; hence, each necessary condition for the existence of sharp minima in the weak sense is also a necessary condition for the existence of sharp minima in the strong sense. (ii) for φ(t) = t, a weak φ-sharp minimizer in the sense of Definition 2.2 is a weak sharp minimizer in the sense of Definition 2.3. (iii) for φ(t) = t^γ, a local minimizer in the sense of Definition 3.1 of [14] is a local sharp minimizer of order γ in the strong sense. (iv) weak sharp minimizers in the sense of Definitions 2.2 and 2.4 are weak Pareto minimizers for (SP_1).
Note that Definition 2.2 also works in the case when int(K_Y) = ∅, while the weak part of Definition 2.4 does not. In fact, the word "weak" in these definitions refers to different things: in Definition 2.2 it signifies the fact that the set S̃ can have more than one element, while in Definition 2.4 it indicates exactly the presence of int(K_Y). In the next proposition we give some links between these two definitions when S̃ = {x̄} and φ(t) = t^γ.
Proof. (i) Since int(K_Y) ≠ ∅, by applying Proposition 3 of [1] with S̃ = {x̄}, together with Theorem 2.12 of [9], we conclude the required equivalence. (ii) By assumption, there exist δ > 0 and a neighborhood U of x̄ such that the corresponding inequality holds for all x ∈ U ∩ Ω and y ∈ F(x). Since ȳ ∈ Min F(x̄), it follows that (x̄, ȳ) is a local sharp minimizer of order γ in the strong sense for (SP_1).
The following examples give a comparison among the above notions of sharp minimizers.
Here we observe that (x̄, ȳ) = (0, 0) is a local weak sharp minimizer for (SP_1) in the sense of Definition 2.3. However, (x̄, ȳ) is a local weak sharp minimizer for (SP_1) neither in the sense of Definition 2.2 nor in the sense of Definition 2.4. Also, (x̄, ȳ) is not a weak Pareto minimizer for (SP_1). Thus the inclusion in Remark 2.5(ii) is strict.
where [·, ·] denotes the line segment between two given points. Here we observe that (x̄, ȳ) = (0, (0, 0)) is a local weak Pareto minimizer for (SP_1). Therefore, these notions appear as natural extensions of the notion of weak sharp minimizer to set-valued maps. In the sequel we shall establish necessary optimality conditions for sharp minimizers in the weak sense for the problem (SP_1) and for the following explicitly constrained set-valued optimization problem (SP_2). We now start with our first preliminary results, which will be crucial steps in the sequel.
Hence, for all n ∈ ℕ, we obtain an inequality that contradicts the fact that (x̄, ȳ) is a local sharp minimizer of order γ in the weak sense for (SP_2). Since the scalarizing function is c-Lipschitz, the Clarke penalization principle ([11], Prop. 2.4.3) completes the proof.

Sharp Fritz-John multipliers (for 𝛾 = 1)
In the sequel, for a closed cone K of Y, K° will denote the polar cone of K, defined by K° = {y* ∈ Y* : ⟨y*, y⟩ ≤ 0 for all y ∈ K}. For a Lipschitzian function h on X, we will denote by ∂h(x̄) the Clarke subdifferential of h at x̄ ∈ X.
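For intuition, membership in the polar cone can be verified on generators when the cone is finitely generated. A small sketch in ℝ² (illustrative only, taking K = ℝ²₊, whose polar is the nonpositive orthant):

```python
def inner(u, v):
    """Euclidean pairing <u, v>."""
    return sum(a * b for a, b in zip(u, v))

def in_polar(ystar, generators):
    """ystar lies in K° iff <ystar, g> <= 0 for every generator g of K."""
    return all(inner(ystar, g) <= 0.0 for g in generators)

# K = R_+^2 is generated by e1 and e2; its polar is -R_+^2.
gens = [(1.0, 0.0), (0.0, 1.0)]
```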
We can now state necessary conditions for the problem (SP 1 ).
Proof. By Proposition 2.11 together with assumption (3.1), we obtain that (0, 0, 0) belongs to the corresponding convex hull, where co denotes the convex hull. By using similar arguments as in the proof of Theorem 3.1, the required multipliers exist. Assumption (3.1) in the above theorems is always true in the case of unconstrained set-valued optimization problems. Besides, the results obtained here are different from those of Amahroq and Taa [2]; indeed, y*_1 is not necessarily zero, as it is in Amahroq and Taa [2]. Comparing with the results of Durea and Strugariu [13], in the above theorems the graph of F (resp. the set-valued map G) is not necessarily locally closed (resp. locally Lipschitz-like), and X, Y are Banach spaces. The Lipschitz-like condition of Durea and Strugariu [13] is replaced here by (3.1). Obviously, the Asplund space condition in Durea and Strugariu [13] is more restrictive, but their conclusion uses the Mordukhovich normal cone, which is smaller than the Clarke normal cone considered above.

Sharp Karush-Kuhn-Tucker multipliers (for 𝛾 = 1)
In order to establish necessary optimality conditions in terms of Karush-Kuhn-Tucker multipliers for the constrained problem (SP_2), we shall use the following metric regularity condition. This regularity condition is well known in the literature when G is a single-valued mapping (see [23] and the references therein); in the above general form it was considered in Amahroq and Thibault [3] and studied in Amahroq et al. [4]. For verifiable conditions ensuring it in terms of the set-valued map G, see Amahroq et al. [4]. We now state sharp Karush-Kuhn-Tucker multiplier rules of order γ = 1 for the constrained problem (SP_2).

Necessary conditions for sharp minima of higher order in the weak sense
The following results provide necessary conditions for sharp minimizers of order γ ≥ 2 in the weak sense, where γ is an integer.

Theorem 5.1. Let γ ≥ 2 (γ an integer) and (x̄, ȳ) ∈ gr(F) with x̄ ∈ Ω. Assume that (3.1) holds for the sets Ω × Y and gr(F). If (x̄, ȳ) is a local sharp minimizer of order γ in the weak sense for (SP_1), then there exist positive constants, including c, for which the stated conclusion holds.

Proof. It is enough to apply Proposition 2.10 together with assumption (3.1).

Remark 5.4. The conclusions of Theorems 5.1-5.3 are presented in terms of Fermat rules because, if we applied the sum rule for subdifferentials, we would lose the dependence on γ.

Necessary conditions for sharp minima in the strong sense
It is worth noting that Definitions 2.2 and 2.3 still work when int(K_Y) = ∅. Therefore, some discussion of necessary optimality conditions for sharp minima in the strong sense is of interest, especially when the interior of the ordering cone K_Y is empty.
In this section, the interior of K_Y is not assumed to be nonempty. Let us start with the following scalarization results, which will be crucial steps in the sequel.

Proof. By assumption, there exist δ > 0 and a neighborhood U of x̄ such that the corresponding inequality holds for all x ∈ U ∩ Ω and y ∈ F(x); whence, for all b ∈ B_Y, from the definition of Δ_{−K_Y} together with Proposition 2.1(vi), we get the desired estimate. The rest of the proof is practically the same as that of Proposition 2.10.

Sufficient conditions
Let S be a nonempty subset of X and let x̄ ∈ S. The radial cone R(S, x̄) of S at x̄ is the subset of X defined by R(S, x̄) := cl{t(s − x̄) : t ≥ 0, s ∈ S}, where cl denotes the closure. In particular, the polar cone [R(gr(F), (x̄, ȳ))]° may be considered for all (x̄, ȳ) ∈ gr(F). Note that the Clarke normal cone N(S, x̄) reduces to the normal cone of convex analysis when S is convex, i.e., N(S, x̄) = {x* ∈ X* : ⟨x*, x − x̄⟩ ≤ 0 for all x ∈ S}. We begin with the following theorem, which provides sufficient optimality conditions for sharp minima in the weak sense for (SP_2) without any convexity assumption on the set-valued objective mapping.

Proof. Reasoning ad absurdum, suppose that (x̄, ȳ) is not a sharp minimizer of order γ in the weak sense for the problem (SP_2). Then there exist points violating the sharpness inequality, which leads to a contradiction.

Then (x̄, ȳ) is a sharp minimizer of order γ in the weak sense for the problem (SP_1).
Proof. In Theorem 7.1, let us take G : X ⇒ Z with gr(G) = Ω × Z; the hypotheses of the theorem then hold, so the conclusion follows.
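The normal cone of convex analysis recalled above admits a direct finite check for polytopes, since the defining inequality ⟨x*, x − x̄⟩ ≤ 0 need only be tested at the vertices. A small illustrative sketch (not from the paper) with S = [0, 1]² and x̄ = (0, 0), where N(S, x̄) = −ℝ²₊:

```python
def in_normal_cone(xstar, vertices, xbar):
    """Test <xstar, x - xbar> <= 0 at every vertex of a polytope S."""
    return all(
        sum(a * (v - w) for a, v, w in zip(xstar, x, xbar)) <= 1e-12
        for x in vertices
    )

# Vertices of the unit square S = [0, 1]^2.
square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
```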
In the next results, we show that the necessary conditions of Sections 3-5 may become sufficient conditions under some convexity assumptions. For that purpose, we recall the following notion of γ-strong convexity of a set-valued map, introduced in Definition 2.9 of [5].

Conclusions
This paper studies three notions of weak sharp minima in set-valued optimization problems and provides some links between them. A natural extension is then used to establish necessary optimality conditions for constrained set-valued optimization problems in terms of Lagrange multiplier rules, mainly expressed through Clarke differentiation objects (subdifferentials and normal cones). Under suitable assumptions, these necessary optimality conditions become sufficient. Besides, some sufficient optimality conditions are derived without any convexity assumption on the set-valued objective mapping.
(x̄, ȳ) ∈ gr(F) ∩ (Ω × Y) is a local weak sharp minimizer for (SP_1) if there exist a neighborhood U of x̄ and positive real numbers such that the corresponding inequality holds.

Remark 2.9. From the above examples we observe that: (i) the notions of weak Pareto minimizer and of weak sharp minimizer in the sense of Definition 2.3 are distinct, so a weak sharp minimizer in the sense of Definition 2.3 is not necessarily a weak Pareto minimizer; (ii) weak sharp minimizers in the sense of Definitions 2.2 and 2.4 are necessarily weak Pareto minimizers.
for (SP_1), but not a local weak sharp minimizer either in the sense of Definition 2.2 or in the sense of Definition 2.3. Whence the inclusions in Remark 2.5(iv) are strict.