ON THE NUMERICAL APPROXIMATION OF SOME INVERSE PROBLEMS GOVERNED BY NONLINEAR DELAY DIFFERENTIAL EQUATION

Abstract. The paper deals with the approximate solution of an inverse problem for a nonlinear delay differential equation, which consists of finding the initial moment and the delay parameter from observed data. The inverse problem is recast as a nonlinear optimal control problem, for which the necessary conditions of optimality are formulated and proved. The resulting optimal control problem is solved by a method based on an improved parallel evolutionary algorithm. The efficiency of the proposed approach is demonstrated through various numerical experiments.


Introduction
The recent development of applied mathematics is characterized by increasing use of mathematical modeling tools in engineering, biological, and medical fields. Such modeling, generally based on ordinary and partial differential equations, attempts to improve our understanding of increasingly complicated phenomena. However, it is becoming clear that the simplest models cannot capture the rich variety of dynamics observed in natural phenomena. One of the most significant extensions is the inclusion of lag terms in differential equations. The delays or lags can represent gestation times, incubation periods, or transport delays, or can simply lump complicated biological processes together, accounting only for the time required for these processes to occur [11]. Such models have the advantage of combining a simple, intuitive derivation with a wide variety of possible behavior regimes for a single system. Thereby, the inclusion of delays in ordinary differential equations has been a topic of much interest in the mathematical research literature for more than 50 years. Contributions range from classical applications and theoretical and computational methodologies [9] to modern applications in biology [10,15]. Particular interest has been devoted to mathematical models describing the immune response during infectious diseases, formulated as systems of nonlinear delay differential equations (DDEs) characterized by multiple constant delays, moderate size, and stiffness [4,5]. This paper deals with a topic that has become increasingly relevant in current research: inverse problems [1-3,20,24].
Keywords. Inverse problem, delay differential equation, necessary optimality conditions, numerical approximation, parallel evolutionary algorithms.
In particular, we are interested in those involving nonlinear delay systems [29], which appear in many applications where parameter estimation is ubiquitous. Inverse problems arise when our objective is to recover information about a system from observations of the system. The system is usually closed, which refers to the property that the information we wish to recover cannot be directly observed.
In this work, we consider an inverse problem governed by a nonlinear delay differential system. It consists in finding the initial time and the delay that appear in the considered system of nonlinear DDE. We propose an optimal control approach to solve this inverse problem.
Inverse problems are typically ill-posed in Hadamard's sense [13], and there is no standard way to solve them; the ill-posedness means that the problem must be suitably reformulated before it can be solved. In this context, the optimal control formulation presents an excellent alternative for handling many inverse problems for differential equations. Indeed, optimal control brings a powerful theoretical framework and allows one to derive optimality conditions as well as convergence rates, which explains the extensive study of inverse problems through this formulation [7,18,19,26].
In this work, we study the existence of the optimal solution and derive the necessary optimality conditions. On the numerical side, we develop a method based on an evolutionary algorithm (EA) combined with a parallel technique running on a multiprocessor machine with distributed memory [22]. Indeed, it is well known that an EA readily lends itself to parallel processing because each individual in the population is independent of the others and can therefore be processed concurrently. Through several concrete problems, we present numerical experiments together with a performance study in terms of approximation quality and execution time.
We highlight that the proposed theoretical and numerical parts are complementary. To face the ill-posedness of the considered inverse problem, in the theoretical part we show the existence of an optimal solution and derive the necessary optimality conditions. The optimality conditions, however, only guarantee the existence of a local solution. Complementarily, in the numerical part we propose an approach based on an EA, a robust algorithm that converges to a global solution [6,22]. Consequently, the proposed approach in its entirety is an efficient tool for solving the considered ill-posed problem.
The rest of the paper is organized as follows. Section 2 is devoted to the statement of the inverse problem. In Section 3 we derive the necessary optimality conditions, and the proofs of the main theorems are presented in Section 4. The numerical approach, the developed parallel algorithm, the numerical experiments with discussions, and the performance analysis of the parallel algorithm are presented in Section 5. Concluding remarks are given in Section 6.
To each element θ = (t₀, τ) ∈ Θ = [t₁, t₂] × [τ₁, τ₂] we assign the delay differential equation (2.1) with the initial condition (2.2). By the step method and the Gronwall inequality, it can be proved that for every element θ ∈ Θ there exists a unique solution x(t; θ) of (2.1) and (2.2) defined on the interval [t̂, T], and that it is continuous with respect to θ.

Inverse problem
Let y be a given vector. Find an element θ ∈ Θ such that the condition x(T; θ) = y holds. As a rule, the vector y is known only up to a measurement error: instead of y we are given an observed vector ŷ which approximates y and, in general, does not belong to the reachable set. It is therefore natural to replace the posed inverse problem by the following approximate problem.

The approximate inverse problem
Find an element θ ∈ Θ such that the deviation J(θ) = |x(T; θ) − ŷ|² takes its minimal value, where x(t), t ∈ [t₀, T], is a solution of (2.1) and (2.2). It is clear that the approximate inverse problem is equivalent to the following optimal control problem (2.3), called the optimal control problem corresponding to the approximate inverse problem. For the equivalence: if θ* ∈ Θ is a solution of the approximate inverse problem, then there exists a solution x(t), t ∈ [t₀, T], of (2.1) and (2.2) such that J(θ*) ≤ J(θ) for all θ ∈ Θ, which means that θ* is a solution of (2.3).
Conversely, if θ̂ is a solution of the optimal control problem (2.3), then J(θ̂) ≤ J(θ) for all θ ∈ Θ, which means that θ̂ is also a solution of the approximate inverse problem. The continuity of the cost function J(θ), θ ∈ Θ, implies the existence of an optimal element θ⁰.
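The deviation minimized in the approximate inverse problem is simply the squared discrepancy between the terminal state of the model and the observed vector. A minimal sketch, in which the `simulate` routine is a hypothetical stand-in for integrating the state equations (2.1) and (2.2) up to the final time:

```python
import numpy as np

def misfit(simulate, theta, y_obs):
    """Cost J(theta) = |x(T; theta) - y_obs|^2.

    `simulate(theta)` is a hypothetical stand-in for solving the delay
    system (2.1)-(2.2) and returning the state at the observation time.
    """
    residual = simulate(theta) - np.asarray(y_obs, dtype=float)
    return float(residual @ residual)

# Toy check with a linear "model": the cost vanishes at the true parameters.
simulate = lambda th: np.array([th[0] + 2.0 * th[1], th[0] - th[1]])
print(misfit(simulate, (1.0, 0.5), [2.0, 0.5]))  # -> 0.0
```

The equivalence argument above is visible here: minimizing this nonnegative cost to zero is the same as matching the observation exactly.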

Necessary optimality conditions
In this section we state the results giving the necessary optimality conditions for the control problem (2.3).
Theorem 3.1. Let θ⁰ = (t₀⁰, τ⁰) be an optimal element. Then the following conditions hold:
(1) the condition for the optimal initial moment t₀⁰:
ψ(t₀⁰)[ẋ(t₀⁰) − f(t₀⁰, x(t₀⁰), x(t₀⁰ − τ⁰))] = 0;
(2) the condition for the optimal delay parameter τ⁰.
It is clear that condition (2) is equivalent to the corresponding relation.
Theorem 3.2. Let θ⁰ be an optimal element. Then the following conditions hold:
(3) the condition for the optimal initial moment t₀⁰ = t₁;
(4) the condition for the delay parameter τ⁰.
Here ψ(t) satisfies the equation (3.1) and the condition (3.2), where t₀⁰ = t₁ and τ⁰ = τ₁.
Theorem 3.3. Let θ⁰ be an optimal element. Then the following conditions hold:
(5) the condition for the optimal initial moment t₀⁰ = t₂;
(6) the condition for the optimal delay parameter τ⁰.
Here ψ(t) satisfies the equation (3.1) and the condition (3.2), where t₀⁰ = t₂ and τ⁰ = τ₂.

Proof of main theorems
In this section we give the proofs of the theorems stated in the previous section. We present the proofs of Theorems 3.1 and 3.3; the remaining theorems are proved by an analogous scheme.
This implies that the multiplier is negative; therefore we can take it equal to −1, i.e., ψ satisfies the condition (3.2). Let δτ = 0; then, taking into account that δt₀ ∈ R is arbitrary, from (4.7) we obtain the condition (1). Let δt₀ = 0; then, taking into account that δτ ∈ R is arbitrary, from (4.7) we obtain the condition (2). Finally, we note that in order to prove Theorem 3.3 it suffices to replace the set Θ₀ by the corresponding set; see the proof of Theorem 3.1.

Numerical approach
In this section we present the numerical approach to solve the problem (2.3). First of all, we replace the constrained problem by an appropriate discrete optimal control problem. We subdivide the interval [t₀, T] into N sub-intervals with knots t₀ < t₁ < . . . < t_N = T, where ∆t_i = t_{i+1} − t_i is the mesh size of the i-th sub-interval, i ∈ {0, 1, . . . , N − 1}.
The discrete problem associated with the continuous problem (2.3) is obtained by replacing the state equations with a discretization Ψ of the delay differential equations (2.1) and (2.2). In this paper, the state problem is solved by the solver dde23, which is based on the popular Runge-Kutta triple BS(2,3) [27].
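The principle behind such solvers — the method of steps, where the delayed argument is read from the prescribed history before t₀ and from an interpolant of the already computed solution afterwards — can be sketched with a simple fixed-step integrator. The code below is a simplified stand-in, not the BS(2,3) triple with defect control used by dde23; it assumes a single constant delay tau ≥ h:

```python
import numpy as np

def solve_dde(f, history, tau, t0, tf, h=1e-3):
    """Integrate y'(t) = f(t, y(t), y(t - tau)) by classical RK4.

    The delayed value y(t - tau) comes from `history` for t <= t0 and from
    linear interpolation of the computed grid afterwards (method of steps).
    Requires tau >= h so that delayed stage values are already available.
    """
    n = int(round((tf - t0) / h))
    ts = t0 + h * np.arange(n + 1)
    ys = np.full(n + 1, history(t0), dtype=float)

    def lag(t):
        return history(t) if t <= t0 else np.interp(t, ts, ys)

    for i in range(n):
        t, y = ts[i], ys[i]
        k1 = f(t, y, lag(t - tau))
        k2 = f(t + h / 2, y + h / 2 * k1, lag(t + h / 2 - tau))
        k3 = f(t + h / 2, y + h / 2 * k2, lag(t + h / 2 - tau))
        k4 = f(t + h, y + h * k3, lag(t + h - tau))
        ys[i + 1] = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return ts, ys

# Sanity check: y'(t) = -y(t - 1) with history y = 1 gives y(t) = 1 - t
# on [0, 1], hence y(1) = 0 and y(2) = -0.5.
ts, ys = solve_dde(lambda t, y, ylag: -ylag, lambda t: 1.0, 1.0, 0.0, 2.0)
```

A production solver additionally tracks the propagated derivative discontinuities (breaking points) and adapts the step size, which is exactly what dde23 automates.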

Numerical algorithm
Evolutionary algorithms are based on a model of natural, biological evolution, primarily developed by Holland [14]. An EA is essentially a search method based on the Darwinian principles of biological evolution, which explain the adaptive change of species by the principle of natural selection, favoring for survival and further evolution those species that are best adapted to their environmental conditions. EAs have been successfully applied to various optimization problems [6-8,19].
In the evolutionary framework, a new generation of individuals is produced using the simulated genetic operations of selection, crossover, and mutation. The evolution is driven only by fitness; the fitness of an individual is measured indirectly, by its growth rate in comparison to others. Selection is the ability of the individuals that have outlasted the struggle for existence to pass their genetic information to the next generation. Crossover is the mechanism of reproduction based on the recombination of the selected individuals, while mutation modifies one or more genes of an individual with some probability. The purpose of the mutation operator is to restore lost or unexplored genetic material into the population and thereby prevent premature convergence of the evolutionary algorithm to sub-optimal solutions.
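The three operators can be sketched for a real-coded EA minimizing a generic cost function. The operator choices below (tournament selection, arithmetic crossover, Gaussian mutation, one elite individual) are common defaults used for illustration, not necessarily those of Algorithm 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(cost, dim, pop_size=40, generations=120,
           p_cross=0.9, p_mut=0.1, sigma=0.3):
    """One possible real-coded EA; returns the best individual found."""
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    fit = np.array([cost(ind) for ind in pop])
    for _ in range(generations):
        children = [pop[np.argmin(fit)].copy()]  # elitism: keep the best
        for _ in range(pop_size - 1):
            # Tournament selection: the fitter of two random individuals wins.
            i, j = rng.integers(pop_size, size=2)
            a = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if fit[i] < fit[j] else pop[j]
            # Arithmetic crossover: random convex combination of the parents.
            child = a.copy()
            if rng.random() < p_cross:
                w = rng.random()
                child = w * a + (1.0 - w) * b
            # Gaussian mutation: perturb each gene with probability p_mut.
            mask = rng.random(dim) < p_mut
            child = child + mask * rng.normal(0.0, sigma, size=dim)
            children.append(child)
        pop = np.array(children)
        fit = np.array([cost(ind) for ind in pop])
    return pop[np.argmin(fit)], float(fit.min())

# Minimize the sphere function as a stand-in for the discrete cost (2.3).
best, best_cost = evolve(lambda x: float(np.sum(x ** 2)), dim=2)
```

In the inverse-problem setting, the cost evaluation would call the DDE solver, which is what makes each evaluation expensive and motivates the parallelization discussed below.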
Evolutionary algorithms differ from other search and optimization methods in several respects. (1) They search from a population of candidate solutions instead of a single one. (2) The fitness or cost function need not have continuous derivatives, so practically any fitness function can be chosen for optimization. (3) They use random operators throughout the process, including reproduction, crossover, and mutation. (4) They are blind, in the sense that no problem-specific information is required to obtain the final solution.
A disadvantage of EAs is the high cost of function evaluations, notably for large structures and large population sizes, where many iterations are necessary to reach the global optimum.
To circumvent this drawback, an improved evolutionary algorithm based on parallel computing is considered to increase performance on the optimal control problem. Indeed, as is well known, the EA readily lends itself to parallel processing because each individual in the population is independent of the others and can therefore be processed concurrently.
Furthermore, many parallel evolutionary algorithm implementations for solving optimization problems have demonstrated their success in the literature [21-24]. For the implementation, we use a parallel simulation executed on a multiprocessor machine with distributed memory. There are various ways to exploit the parallelism of the EA; we opt for an implementation based on the master-slave approach. The interprocessor communication is realized via the standard MPI (Message Passing Interface) [16]: each processor can directly access only its own memory and must explicitly communicate with other processors to access the data in their memories [16]. MPI's objective is to give a standard for writing message-passing programs. The proposed algorithm is summarized in Algorithm 1; we denote by n the size of the unknown parameter vector.
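The master-slave scheme can be illustrated in a few lines: the master holds the population and farms out fitness evaluations to workers. To keep the sketch self-contained we use a Python thread pool in place of MPI ranks; in the actual implementation each evaluation would be shipped to a slave process via message passing (e.g. MPI_Send/MPI_Recv, or mpi4py in Python):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def fitness(individual):
    # Stand-in for the expensive step: solving the DDE and computing the cost.
    return float(np.sum(np.asarray(individual) ** 2))

def evaluate_population(population, n_workers=4):
    """Master-slave evaluation: the master scatters the individuals to the
    workers and gathers their fitness values, preserving the order."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(fitness, population))

population = [np.array([1.0, 2.0]), np.array([0.0, 0.0]), np.array([-1.0, 3.0])]
print(evaluate_population(population))  # -> [5.0, 0.0, 10.0]
```

Because each fitness evaluation is independent, this step parallelizes with essentially no algorithmic change; the only overhead is the communication of individuals and fitness values, which is the cost analyzed in the performance section.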

Numerical experiments
In order to establish the efficiency of the proposed approach, we consider some concrete problems governed by nonlinear DDEs of the form (2.1). A good test problem must have an analytical solution, yet finding attractive and nontrivial DDEs whose solution is known is sometimes a hard task. Each of the following examples emphasizes one or more of the peculiar aspects of DDEs that make their numerical solution challenging. In all of the following examples, we run Algorithm 1 using the parameters listed in Table 1. All tests are executed under the same conditions.

Example 1
We consider the following problem with constant and multiple delays.
The evolution of the cost function is presented in Figure 2.
In Table 2 we present the comparison between the obtained values of the three components of the parameter vector and the exact ones.
As we can see from these results, the obtained approximation is in good agreement with the exact solution.
Table 2. Comparison between the exact and the approximate parameter vector.

Example 2
In this example, we consider a model of the four-year life cycle of a population of lemmings [30]. The problem is defined as follows, with parameters r = 3.5 and m = 19. Note that with these values the problem has a constant (steady-state) solution, y(t) = 19. The original model is posed with this solution as history, and the initial value is perturbed to y(0) = 19.00001 so that y(t) moves away from the steady state. Here we use y(0) = 19.001 so as to reach the cyclic behavior sooner. The exact control parameters in this case are θ = (τ, t₀) = (0.74, 0). In Figure 3 we present the obtained solution compared to the exact one. In Figure 4 we present the evolution of the cost function over the iterations, and in Table 3 the comparison between the obtained values and the exact parameters. We can see from Figures 3 and 4 and Table 3 that the obtained results show the robustness of the proposed method despite the difficulty of the problem, which presents a very high sensitivity to perturbations.
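The text fixes only the parameter values 3.5 and 19 and the delay 0.74. Assuming the underlying model is the delayed logistic (Hutchinson-type) equation y'(t) = r·y(t)·(1 − y(t − τ)/m) commonly used for this lemming cycle — the model form and symbol names are our assumption, and for simplicity we also take the constant 19.001 as history — its departure from the steady state can be reproduced with a plain Euler scheme:

```python
import numpy as np

# Assumed delayed logistic model: y'(t) = r*y(t)*(1 - y(t - tau)/m).
r, m, tau = 3.5, 19.0, 0.74
h, t_final = 1e-3, 40.0
lag_steps = int(round(tau / h))     # delay measured in Euler steps

n = int(round(t_final / h))
y = np.empty(n + 1)
y[0] = 19.001                       # perturbed initial value from the text

for i in range(n):
    # Delayed value: constant history before t = 0 (a simplification).
    ylag = y[i - lag_steps] if i >= lag_steps else y[0]
    y[i + 1] = y[i] + h * r * y[i] * (1.0 - ylag / m)
```

With these values the steady state is unstable (r·τ > π/2), so the tiny initial perturbation grows into the large sustained oscillations described above, which is what makes the identification problem so sensitive.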

Example 3
In this example, we consider a predator-prey model derived by introducing a resource limitation on the prey and assuming that the birth rate of predators responds to changes in the sizes of the population y₁ of prey and the population y₂ of predators only after a time delay τ. Suppose that the parameters take the values 0.25, −0.01, −1.00, 0.01, and 200. In Figure 5 we present the behavior of the predator population y₂(t) versus the prey population y₁(t) for 0 ≤ t ≤ 100. In Figure 6 we present the evolution of the cost function over the iterations. Table 4 presents the obtained values of the parameters together with the exact ones.
This model is more complicated than the previous ones, which explains the slight lack of precision in the obtained results. However, the results are still in good agreement with the exact solution.

Example 4
In this last example, we consider a cardiovascular model due to Ottesen [25] concerning the arterial pressure Pa(t) = y₁(t), the venous pressure Pv(t) = y₂(t), and the heart rate H(t) = y₃(t). Ottesen studied conditions under which the delay causes qualitative differences in the solution and, in particular, oscillations in Pa(t). The history is prescribed accordingly, and the used parameters are given in Table 5.
For the sake of visibility, we limit ourselves to presenting the behavior of the arterial pressure in Figure 7; the other obtained results have a similar quality. The evolution of the cost function for this example is presented in Figure 8. The approximation of the free parameters τ and t₀ is presented in Table 6.
We have shown the efficiency of the proposed approach on different problems featuring different kinds of difficulties. According to the presented figures and tables, the obtained results confirm the robustness of the developed approach in terms of precision. In the following section, we study the performance of the proposed algorithm in terms of execution time.

Performance of the parallel Algorithm 1 in terms of execution time
In order to examine the performance of the parallel algorithm, we start from the time complexity of the sequential evolutionary algorithm, which grows with the number of iterations and the population size; in the parallel version this work is divided among the processors at the price of a communication cost. Comparing the consumed time with the number of processors is not enough to analyze the performance of the parallel algorithm: the communication cost may offset any gains in computation time. Therefore, to ensure that the parallel implementation performs better than the sequential one, we consider the parallel speedup S_p = T_s/T_p and the efficiency E_p = S_p/p, where T_s denotes the time of the sequential algorithm and T_p that of the parallel algorithm on p processors. In Figure 9 we present the behavior of the consumed time with respect to the number of processors. The parallel speedup and the efficiency with respect to the number of processors are presented in Figure 10.
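The two indicators are computed directly from measured wall-clock times. A minimal sketch — the timing values below are purely illustrative, not the measurements of Figure 9:

```python
def speedup(t_seq, t_par):
    """Parallel speedup S_p = T_s / T_p."""
    return t_seq / t_par

def efficiency(t_seq, t_par, p):
    """Parallel efficiency E_p = S_p / p (1.0 means ideal scaling)."""
    return speedup(t_seq, t_par) / p

# Illustrative (hypothetical) timings in seconds for p = 1, 2, 4, 8 processors.
times = {1: 120.0, 2: 62.0, 4: 34.0, 8: 21.0}
for p, t in times.items():
    print(p, round(speedup(times[1], t), 2), round(efficiency(times[1], t, p), 2))
```

Note how, even in this illustration, the speedup keeps increasing while the efficiency decays as p grows — the same qualitative behavior as in Figure 10, caused by the growing share of communication time.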
According to the results presented in Figures 9 and 10, the parallel version considerably reduces the execution time in comparison to the sequential version; merely passing from one processor to two reduces the computing time by half. Ideally, the time observed using p processors would be exactly the time observed using one processor divided by p. In practice, however, the gap widens as the number of processors grows, due to the communication time, which increases with the number of processors. This is confirmed by the passage from 8 to 16 processors, which did not save time; on the contrary, the time increased, because the communication time became more important than the gain from the distribution of tasks. Thus, from Figure 10 we see that the parallel speedup increases up to 8 processors and then stagnates, while the efficiency decreases as the number of processors increases. The critical number of slaves that maintains a predetermined efficiency and maximizes the parallel speedup is 8.

Conclusion
In this work, we have proposed an optimal control approach for solving an inverse problem governed by a nonlinear delay differential equation: based on observed data, we determine the initial moment and the delay parameter. The necessary conditions of optimality related to the associated control problem have been formulated and proved. To approximate the resulting optimal control problem, we have presented an approach based on an improved parallel evolutionary algorithm. Through numerical experiments, we have studied the performance of the proposed algorithm; in particular, a useful analysis of the parallel algorithm has been presented. We claim that the proposed approach can easily be adapted to solve other inverse problems of parameter identification.