A novel approach for solving stochastic problems with multiple objective functions

In this paper we propose an approach for solving a multiobjective stochastic linear programming problem with multivariate normal distributions. Our approach combines a multiobjective method with a nonconvex optimization technique. The problem is first transformed into a deterministic multiobjective problem by introducing the expected value criterion and a utility function that represents the decision maker's preferences. The resulting problem is reduced to a mono-objective quadratic problem using a weighting method. This last problem is solved by DC (Difference of Convex functions) programming and the DC Algorithm (DCA). A numerical example is included for illustration.


Introduction
Multiobjective stochastic linear programming (MOSLP) is an appropriate tool for modeling many concrete real-life problems in which complete data about the parameters are not available, so that a randomness framework must be introduced. This class of problems includes investment and energy resources planning [2,30,35], manufacturing systems in production planning [13,14], mineral blending [18], water use planning [7,10] and multi-product batch plant design [36]. Among the applications of MOSLP in portfolio selection, we can mention the recent works of Shing and Nagasawa [28], Ogryczak [25], Ballestero [5] and Aouni [4].
In order to obtain solutions for MOSLP problems, it is necessary to combine techniques from stochastic programming and multiobjective programming. Two approaches can be considered, both involving a double transformation: the multiobjective problem is converted into a mono-objective problem, and the stochastic problem into its deterministic equivalent. The difference between the two approaches is the order in which these transformations are carried out. Ben Abdelaziz [7] and Ben Abdelaziz et al. [8] call the perspective that first transforms the stochastic multiobjective problem into its equivalent deterministic multiobjective problem the multiobjective approach, and the technique that first transforms the stochastic multiobjective problem into a mono-objective stochastic problem the stochastic approach.
In most MOSLP problems, the coefficients are assumed to be random variables with known distributions. However, the specification of these distributions is very subjective. Many researchers work with discrete distributions. For instance, we can mention the STRANGE method proposed by Teghem et al. [31], the recourse method using a two-stage mathematical programming model by Klein et al. [17], the STRANGE-MOMIX method of Teghem [32], and the cutting plane methods of Abbas and Bellahcene [1], Amrouche and Moulai [3], and Chaabane and Mebrek [12]. Publications dealing with continuous distributions are few in number and, in general, use Gaussian (normal) distributions with different parameters. In this context, Stancu-Minasian [29] describes a sequential method for solving MOSLP problems in which several probabilities are maximized; Goicoechea et al. [16] present the Probabilistic Trade-off Development Method (PROTRADE), which treats problems with general distributions for the random coefficients of linear objectives; Munoz and Ruiz [24] developed the ISTMO method, which uses the Kataoka criterion to handle the randomness and combines the concept of probability efficiency for stochastic problems with the reference point philosophy for deterministic multiobjective problems; and Bellahcene and Marthon [6] suggest a bisection-based method that generates a compromise solution to MOSLP problems in which the objective function parameters are random variables with multivariate distributions. In this paper, a novel method for solving MOSLP problems with multivariate normal distributions is proposed. First, we assume that the decision maker's preferences can be represented by exponential utility functions (the same function can be used for all the objectives). This assumption is motivated by the fact that an exponential utility function leads to an equivalent quadratic problem which can be solved by a DC (Difference of Convex functions) method.
DC programming and the DC Algorithm (DCA) were introduced by Pham Dinh Tao in their preliminary form in 1985 and have been developed by Le Thi and Pham Dinh since [19][20][21][22]. This method has proved its efficiency on a large number of nonconvex problems [23,26,27].
The remainder of this paper is organized as follows. In Section 2, the problem formulation is given. In Section 3, we analyze our new formulation of the problem, considering the particular structure induced by the combined use of utility functions and the weighting method; the new formulation results in a quadratic problem that can be solved efficiently by a DC algorithm. Section 4 shows how to apply DC programming and DCA to the resulting problem. Our experimental results are presented in Section 5.

Problem statement
Let us consider the multiobjective stochastic linear programming problem formulated as follows:

min { c̃_1 x, c̃_2 x, ..., c̃_K x } subject to x ∈ S,   (1)

where x = (x_1, x_2, ..., x_n) denotes the n-dimensional vector of decision variables. The feasible set S is a subset of the n-dimensional real vector space R^n characterized by a set of linear inequality constraints of the form Ax ≤ b, where A is an m × n coefficient matrix and b an m-dimensional column vector. We assume that S is nonempty and compact in R^n. Each vector c̃_k follows a normal distribution with mean c̄_k and covariance matrix V_k. Therefore, every objective c̃_k x follows a normal distribution with mean µ_k = c̄_k x and variance σ²_k = x^t V_k x. In the following section, we will mainly be interested in the way to transform problem (1) into an equivalent deterministic multiobjective problem, which in turn will be reformulated as a DC programming problem.
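Since each objective c̃_k x is a linear form in a Gaussian vector, its mean and variance follow directly from c̄_k and V_k. The snippet below is a minimal sketch of this computation; the data (c̄, V, x) are illustrative, not taken from the paper.

```python
import numpy as np

def objective_stats(c_bar, V, x):
    """Mean and variance of the Gaussian objective c~x for a decision x.

    c_bar : mean vector of the random cost vector c~
    V     : covariance matrix of c~
    x     : decision vector
    """
    mu = float(c_bar @ x)       # mu_k = c_bar_k x
    sigma2 = float(x @ V @ x)   # sigma_k^2 = x^t V_k x
    return mu, sigma2

# Illustrative data (not from the paper).
c_bar = np.array([1.0, 2.0])
V = np.array([[2.0, 0.0],
              [0.0, 1.0]])
x = np.array([1.0, 1.0])
print(objective_stats(c_bar, V, x))  # (3.0, 3.0)
```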

Transformations and Reformulation
First, we take into consideration the notion of risk. Assuming that the decision maker's preferences can be represented by utility functions, under plausible assumptions about the decision maker's risk attitude, problem (1) is interpreted as the maximization over S of the expected utilities of the objectives (problem (2)). The utility function U is generally assumed to be continuous and convex. In this paper, we consider an exponential utility function of the form U(r) = 1 − e^{−ar}, where r is the value of the objective and a is the coefficient of incurred risk (a large a corresponds to a conservative attitude). Our choice is motivated by the fact that exponential utility functions lead to an equivalent quadratic problem, which encouraged us to design a DC method to solve it simply and accurately. Therefore, if r ∼ N(µ, σ²), we have

E[U(r)] = 1 − e^{−aµ + (a²/2)σ²}.

Our aim is to search for efficient (Pareto-optimal) solutions of the deterministic multiobjective problem (2), that is, solutions x* ∈ S such that no x ∈ S improves one objective without worsening another. Applying the widely used weighted sum method [8,11] for finding efficient solutions of multiobjective programming problems, we assign to each objective function in (2) a non-negative weight w_k and aggregate the objective functions into a single function. Thus, problem (2) is reduced to the weighted problem (4), and x* ∈ S is an efficient solution of (2) if and only if x* ∈ S is optimal for problem (4). The aggregated function F(x, c̃) in (4) is a linear combination of the random objectives c̃^t_k x; its variance depends on the variances of the c̃^t_k x and on their covariances. Since each c̃_k x follows a normal distribution with mean µ_k and variance σ²_k, the function F(x, c̃) follows a normal distribution with mean µ = Σ_k w_k µ_k and variance σ² = Σ_k Σ_s w_k w_s σ_ks, where σ_ks denotes the covariance of the random objectives c̃^t_k x and c̃^t_s x. Finally, we obtain the quadratic problem (8), where c̄_k = (c̄_k1, c̄_k2, ..., c̄_kn) is the k-th component of the expected value of the random multinormal vector c̃, and V_ks and V_k are elements of the positive definite covariance matrix V of c̃.
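The closed form E[U(r)] = 1 − e^{−aµ + (a²/2)σ²} follows from the moment generating function of the normal distribution. A quick sketch, checking the closed form against a Monte Carlo estimate (the numerical values of µ, σ² and a are illustrative, not from the paper):

```python
import numpy as np

def expected_exp_utility(mu, sigma2, a):
    """Closed form of E[1 - exp(-a r)] for r ~ N(mu, sigma2),
    using the normal MGF E[exp(t r)] = exp(t mu + t^2 sigma2 / 2) with t = -a."""
    return 1.0 - np.exp(-a * mu + 0.5 * a * a * sigma2)

rng = np.random.default_rng(0)
mu, sigma2, a = 2.0, 1.5, 0.1       # illustrative values
r = rng.normal(mu, np.sqrt(sigma2), size=1_000_000)
mc = np.mean(1.0 - np.exp(-a * r))  # Monte Carlo estimate of E[U(r)]
print(expected_exp_utility(mu, sigma2, a), mc)
```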

The solution method
In this section, we briefly present the DC programming approach developed for solving nonconvex problems (for more details, see [23,26,27]) and we use DCA for solving problem (8).

Review of DC programming and DCA
A general DC program has the form

inf { f(x) = g(x) − h(x) : x ∈ R^n },   (9)

where g and h are lower semicontinuous proper convex functions on R^n, called DC components of the DC function f, while g − h is a DC decomposition of f. DC duality associates to problem (9) the following dual program

inf { h*(y) − g*(y) : y ∈ R^n },   (10)

where g* and h* are respectively the conjugate functions of g and h.
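As a small numerical illustration of the conjugate (our own example, not from the paper): for g(x) = x²/2 one has g*(y) = sup_x { xy − x²/2 } = y²/2, which can be checked by a grid search.

```python
import numpy as np

def conjugate_on_grid(g, y, grid):
    """Numerical Legendre-Fenchel conjugate g*(y) = sup_x { x*y - g(x) },
    approximated by a maximum over a finite grid of x values."""
    return np.max(grid * y - g(grid))

g = lambda x: 0.5 * x ** 2
grid = np.linspace(-10.0, 10.0, 200_001)    # step 1e-4
for y in (-2.0, 0.5, 3.0):
    approx = conjugate_on_grid(g, y, grid)
    exact = 0.5 * y ** 2                    # known closed form g*(y) = y^2/2
    print(y, approx, exact)
```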
The conjugate function of g is defined by

g*(y) = sup { ⟨x, y⟩ − g(x) : x ∈ R^n }.   (11)

From [21], the most commonly used necessary local optimality conditions for problem (9) are conditions (12) and (13). DCA constructs two sequences {x^i} and {y^i} (candidates for being primal and dual solutions, respectively) such that their corresponding limit points satisfy the local optimality conditions (12) and (13). There are two forms of DCA: the simplified DCA and the complete DCA. In practice, the simplified DCA is used more often than the complete DCA because it is less time consuming [19]. The simplified DCA has the following scheme:
Simplified DCA Algorithm
Step 1: Let x^0 ∈ R^n be given. Set i = 0.
Step 2: Compute y^i ∈ ∂h(x^i).
Step 3: Compute x^{i+1} ∈ ∂g*(y^i).
Step 4: If a convergence criterion is satisfied, then stop; else set i = i + 1 and go to Step 2.
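To make the scheme concrete, here is a minimal sketch of the simplified DCA on a toy one-dimensional DC function f(x) = x²/2 − |x|, with g(x) = x²/2 and h(x) = |x| both convex (this example is ours, not from the paper). The subdifferential step takes y^i ∈ ∂h(x^i), and the convex subproblem min_x g(x) − y^i x has the closed-form solution x^{i+1} = y^i.

```python
def simplified_dca(x0, tol=1e-8, max_iter=100):
    """Simplified DCA for f(x) = x**2/2 - |x|  (g = x**2/2, h = |x|).

    Step 2: y_i in the subdifferential of h at x_i (sign of x_i).
    Step 3: x_{i+1} = argmin_x g(x) - y_i*x, which here is x_{i+1} = y_i.
    Step 4: stop when successive iterates are close enough.
    """
    x = x0
    for i in range(max_iter):
        y = 1.0 if x >= 0 else -1.0   # subgradient of |x| (choose +1 at 0)
        x_next = y                    # argmin of x**2/2 - y*x is x = y
        if abs(x_next - x) < tol:     # convergence criterion
            return x_next, i + 1
        x = x_next
    return x, max_iter

print(simplified_dca(0.5))   # converges to the local minimizer x = 1
```

Starting from x⁰ = 0.5, the iterates jump to x = 1 and stay there; starting from a negative point, they converge to x = −1, illustrating that DCA finds local solutions depending on the initial point.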
We can also note [19][20][21][22] that DCA is a descent method without line search. For problem (8), we use a DC decomposition f = g − h in which g gathers the indicator function χ_S(·) of the set S and the linear part of the objective, and h is the quadratic part. Since the matrix V is positive definite, h is a convex function.
For the function g, since c̄ is the vector of expected values of the random multinormal vector c̃, it is easy to establish the convexity of g and to state the corresponding conditions on the vectors c̄_k.
After that, we will compute the two sequences {x i } and {y i } such that y i ∈ ∂h(x i ) and x i+1 ∈ ∂g * (y i ).
Computation of y^i: since h is differentiable, we choose y^i ∈ ∂h(x^i) = {∇h(x^i)}, that is, y^i = ∇h(x^i). Computation of x^{i+1}: we can choose x^{i+1} ∈ ∂g*(y^i) as a solution of the convex problem

min { g(x) − ⟨x, y^i⟩ : x ∈ R^n }.

The iterate x^{i+1} is optimal for problem (14) if one of the following conditions (17) or (18) is verified. Finally, the DC Algorithm that we apply to problem (8) with the decomposition (14) can be described as follows:
Step 4: If one of the conditions (17) or (18) is verified, then stop: x^{i+1} is optimal for (14); else set i = i + 1 and go to Step 2.
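The DCA iteration above can be sketched on a small nonconvex quadratic instance. The decomposition below mirrors in spirit the one used for problem (8) (g = indicator of the feasible set plus the linear part, h = the convex quadratic part), but the instance itself (a box feasible set, the data c and Q) is our own illustration, not the paper's: y^i = ∇h(x^i) = Qx^i, and the convex subproblem min_{x∈S} g(x) − ⟨y^i, x⟩ is linear over a box, hence solvable componentwise.

```python
import numpy as np

def dca_box(c, Q, lo, hi, x0, tol=1e-9, max_iter=200):
    """DCA for min_{lo <= x <= hi} c@x - 0.5*x@Q@x with Q positive definite.

    DC decomposition: g(x) = chi_box(x) + c@x  (convex),
                      h(x) = 0.5*x@Q@x         (convex since Q is PD).
    y_i = grad h(x_i) = Q@x_i; the subproblem min_box (c - y_i)@x is
    linear, so each component of the minimizer is a box endpoint.
    """
    x = np.asarray(x0, dtype=float)
    for i in range(max_iter):
        y = Q @ x                             # y_i = grad h(x_i)
        d = c - y                             # cost of the linear subproblem
        x_next = np.where(d > 0, lo, hi)      # componentwise minimizer
        if np.linalg.norm(x_next - x) < tol:  # convergence test
            return x_next, i + 1
        x = x_next
    return x, max_iter

# Illustrative 2-D instance (not from the paper).
c = np.array([0.5, -0.5])
Q = np.eye(2)
x_star, iters = dca_box(c, Q, lo=0.0, hi=1.0, x0=[0.5, 0.5])
f = c @ x_star - 0.5 * x_star @ Q @ x_star
print(x_star, f)   # a local solution with objective value -1.0
```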

Experimental Results
In order to investigate the potential of DCA when applied to the considered problem, we implemented it and tested it on two small problems similar to the mathematical model (1). The first, taken from [11], shows the efficiency of the algorithm. The second example presents the performance of DCAMOSLP under variations of the weights and the risk parameter a. Our results are compared, in terms of running time and number of iterations, to those given by the solver LINGO [33,34].
Example 1: Let us consider the following stochastic bi-objective programming problem: In [11], the non-dominated solution obtained for w = (0.8, 0.2)^t is (3, 0.5). We now solve the same test problem with the DCAMOSLP algorithm for different values of the risk parameter a while keeping the same weight vector w = (0.8, 0.2)^t. For this, we choose an acceptable tolerance ε = 10^{-6} for the optimality test and set x^0 = (0, 0) as the initial point. The results are shown in Table 1, where nbr_it is the number of iterations. We observe that the non-dominated solution (3, 0.5) is obtained for values of the parameter a ≤ 10^{-2}, and that the number of iterations decreases as the parameter a decreases.
Example 2: We now test the performance of the DCAMOSLP algorithm on the problem below, which has three objective functions and a larger set of feasible solutions.
with c̄ = (5, −2, 3, 6, 8, 4) and positive definite covariance matrix: The results for different values of the parameter a and of the weight vector are given in Table 2, followed by the results given by the LINGO software in Table 3 for the same parameters and weights. From these results, we observe that the DCAMOSLP algorithm gives efficient solutions of the studied multiobjective stochastic problem for small values of the incurred risk (a ≤ 10^{-2}). The number of iterations decreases with the decrease of this parameter. We also note that the proposed DCAMOSLP algorithm finds the same solutions as LINGO and that it is more efficient than LINGO in terms of CPU time and number of iterations required to reach the optimum.

Conclusion
We have presented a DC programming based method for solving a multiobjective stochastic linear programming problem with multivariate normal distributions in which the objective functions are to be minimized. According to the computational experiments, our method outperforms the solver LINGO in terms of number of iterations and running time. A natural continuation of this work would be to consider real problems and to compare the results with those of other methods and solvers used in multiobjective stochastic optimization.