ANALYSIS OF A DISCRETE-TIME MARKOV PROCESS WITH A BOUNDED CONTINUOUS STATE SPACE BY THE FREDHOLM INTEGRAL EQUATION OF THE SECOND KIND

Abstract. A discrete-time Markov process with a bounded continuous state space is considered. We show that the equilibrium equations for the steady-state probabilities and densities form a Fredholm integral equation of the second kind. Then, under a sufficient condition that the transition densities between states in the interior of the state space can be expressed in a common separable form, the steady-state probability and density functions can be obtained explicitly. We apply the result to an economic production quantity model with stochastic production time, derive expressions for the steady-state probabilities and densities, and find the optimal maximum stock level. A sensitivity analysis of the optimal stock level is performed with respect to the production time and cost parameters. The optimal stock level decreases with respect to the holding cost and the production cost, whereas it increases with respect to the lost sale cost and the arrival rate.


Introduction
A Markov process can be applied to model a random system whose state changes according to a transition rule that depends only on the present state. The most significant property of the Markov process is that the conditional probability distribution of the process's future states depends only on the current state [12,18]. A discrete-time Markov process model with a continuous state space is applied to many systems, such as liquid storage models, electricity models, autoregressive (AR) models, and so on. It is also useful as an approximation of a Markov process with a discrete state space. The steady-state distributions of a Markov model satisfy the equilibrium equations, but in most cases they cannot be represented in an explicit form. In this paper, Fredholm integral equations are applied to the analysis of a general discrete-time Markov process with a bounded continuous state space, and we give a sufficient condition for obtaining the steady-state probability and density functions.

Background
In the late nineteenth century, Fredholm and Volterra largely established the theory of integral equations. Their work had a significant impact on the study of integral equations in the twentieth century. Numerous mathematical models in engineering and science, such as the anomalous diffusion problem, biological population ecology models, and population prediction models, can be described by integral equations [9]. The Fredholm integral equation is found in the theory of signal processing, linear forward modelling, and inverse problems. Fluid mechanics problems involving hydrodynamic interactions near finite-sized elastic interfaces also use Fredholm integral equations [2,3]. A special use of the Fredholm equation is the creation of photorealistic images in computer graphics, in which the Fredholm equation represents light transport from virtual light sources to the image plane. Because a vast class of initial and boundary value problems can be transformed into Volterra or Fredholm integral equations, many scientific domains use Fredholm integral equations, including engineering, applied mathematics, and mathematical physics [10]. Other literature on Fredholm integral equations focuses on efficient numerical solution techniques [5,14,17,19].
Recently, Fredholm integral equations have been applied to specific stochastic processes. In particular, Lindemann and Thümmler [13] provide a general state-space Markov chain (GSSMC) approach for the transient analysis of deterministic and stochastic Petri nets with concurrently enabled deterministic transitions, as an application of the Fredholm equation to the study of stochastic processes. The general state-space Markov chain approach is built on a numerical iterative solution of a system of Fredholm integral equations. Fuh et al. [8] use the Fredholm integral equation to compute the Rényi divergence of two-state Markov switching models. Ramsden and Papaioannou [16] derive a Fredholm integral equation of the second kind for the ultimate ruin probability and achieve a clear expression in terms of ruin quantities for the Cramér-Lundberg risk model. Later, they extend the model to a capital injection delayed risk model such that the delay of the capital injections depends explicitly on the amount of the deficit. Dibu et al. [4] introduce a Markov Arrival Process risk model that permits capital injections to be received promptly or with an arbitrary delay, depending on the amount of shortage experienced by the firm. For this model, they derive a system of Fredholm integral equations of the second kind for the Gerber-Shiu function and obtain a straightforward formulation in matrix form in terms of the Gerber-Shiu function of the Markov Arrival Process risk model.

Motivation, research gap and objective
Generally, the steady-state distributions of a Markov model cannot be represented in an explicit form. In the literature, for specific models, the analysis has been carried out theoretically by using Fredholm integral equations. To the best of our knowledge, there is currently no mathematical approach that uses the Fredholm integral equation of the second kind to study the steady state of a general discrete-time Markov process with a bounded continuous state space. Recently, Karim and Nakade [11] showed such an application in a special case of an economic production quantity (EPQ) model. They derive the expression of the steady-state distribution by using Fredholm integral equations. Although the model they analyse is a restricted one, their analysis suggests that the general discrete-time Markov process with continuous states can also be analysed by the Fredholm integral equation.
Thus motivated, in this paper we study the application of the Fredholm integral equation of the second kind to the steady-state analysis of a general discrete-time Markov process with a bounded continuous state space. Then, as an example application, we analyse a fundamental EPQ model. We show that the equilibrium equations on the steady-state probability densities can be expressed as a Fredholm integral equation of the second kind. The resulting integral equation can be solved using the degenerate kernel method when the kernel satisfies a separability property [15].
The contribution of this paper is as follows:
- For a general discrete-time Markov process with a bounded continuous state space, we derive sufficient conditions on separability of the transition density functions, under which expressions of the steady-state probabilities and densities can be derived explicitly.
- Then, as an application, we discuss a basic EPQ model, apply the derived analytical method, and derive the optimal size of the maximum inventory level.
The organization of this paper is as follows: In Section 2, the Fredholm integral equation of the second kind is described. In Section 3, it is shown that this equation can be applied to the analysis of a Markov process with a bounded state space under separability conditions on the transition densities and probabilities. In Section 4, the analysis of Section 3 is applied to the basic EPQ model, and the optimal maximum inventory level is derived. In Section 5, we give the conclusion.

Fredholm integral equation of the second kind
The Fredholm integral equation of the second kind is found in the theory of signal processing, linear forward modelling, and inverse problems. The equation for an unknown function f(x) is given as follows:

f(x) = g(x) + λ ∫_a^b K(x, x′) f(x′) dx′.    (2.1)

In the following, we set a = 0 and b = B. The degenerate kernel method [15] can be applied when K(x, x′) satisfies the following separable form:

K(x, x′) = Σ_{i=0}^{n−1} α_i(x) β_i(x′),    (2.2)

where α_i(x) and β_i(x′) are functions of only x and x′, respectively. Then, by inserting (2.2) into the function inside the integral in (2.1), we have

f(x) = g(x) + λ Σ_{i=0}^{n−1} c_i α_i(x),    (2.3)

where c_i = ∫_0^B β_i(x′) f(x′) dx′ (i = 0, 1, ..., n − 1). Set

g_j = ∫_0^B β_j(x) g(x) dx    (2.4)

and

a_{ji} = ∫_0^B β_j(x) α_i(x) dx.    (2.5)

By multiplying both sides of (2.3) by β_j(x) and integrating from 0 to B, we have the following equations:

c_j = g_j + λ Σ_{i=0}^{n−1} a_{ji} c_i,  j = 0, 1, ..., n − 1.    (2.6)

Thus, if a_{ji} and g_j can be computed for all i and j, we can derive c_j by solving (2.6).
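The degenerate kernel method can be sketched numerically. The following minimal illustration (not from the paper, parameters are hypothetical) computes the inner products a_{ji} and g_j by trapezoidal quadrature, solves the linear system (2.6), and assembles f via (2.3). The example kernel K(x, x′) = x·x′ with g(x) = x and λ = 0.5 on [0, 1] has the exact solution f(x) = 1.2x.

```python
import numpy as np

def solve_degenerate_fredholm(g, alphas, betas, lam, B, n_quad=400):
    """Solve f(x) = g(x) + lam * int_0^B K(x, x') f(x') dx'
    for a degenerate kernel K(x, x') = sum_i alphas[i](x) * betas[i](x').
    Inner products use the trapezoid rule on a uniform grid."""
    x = np.linspace(0.0, B, n_quad)
    w = np.full(n_quad, B / (n_quad - 1))
    w[0] = w[-1] = B / (2 * (n_quad - 1))
    n = len(alphas)
    # a[j, i] = int_0^B beta_j(x) alpha_i(x) dx,  gvec[j] = int_0^B beta_j(x) g(x) dx
    A = np.array([[np.sum(w * betas[j](x) * alphas[i](x)) for i in range(n)]
                  for j in range(n)])
    gvec = np.array([np.sum(w * betas[j](x) * g(x)) for j in range(n)])
    # Linear system (2.6): c = gvec + lam * A c  ->  (I - lam*A) c = gvec
    c = np.linalg.solve(np.eye(n) - lam * A, gvec)
    # Assemble f via (2.3)
    return lambda t: g(t) + lam * sum(c[i] * alphas[i](t) for i in range(n))

# One separable term: K(x, x') = x * x', g(x) = x, lam = 0.5 on [0, 1].
# The exact solution is f(x) = 1.2 * x.
f = solve_degenerate_fredholm(lambda t: t, [lambda t: t], [lambda t: t], 0.5, 1.0)
```

The same routine handles any number of separable terms, since only the size of the linear system (2.6) changes.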
Here, we consider the case n = 2. Then

f(x) = g(x) + λ (c_0 α_0(x) + c_1 α_1(x)),    (2.7)

where c_0 and c_1 satisfy the linear system

(1 − λ a_00) c_0 − λ a_01 c_1 = g_0,
−λ a_10 c_0 + (1 − λ a_11) c_1 = g_1.    (2.8)

When D = (1 − λ a_00)(1 − λ a_11) − λ² a_01 a_10 ≠ 0, a unique solution of the system of equations (2.8) exists and is given by

c_0 = ((1 − λ a_11) g_0 + λ a_01 g_1) / D    (2.9)

and

c_1 = ((1 − λ a_00) g_1 + λ a_10 g_0) / D.    (2.10)

Thus, we obtain f(x) by (2.7).
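As a sanity check of the closed-form expressions for the n = 2 case, the following sketch compares them with a direct linear solve; the coefficient values are arbitrary illustrations.

```python
import numpy as np

def solve_n2(g0, g1, a00, a01, a10, a11, lam):
    """Closed-form solution of the 2x2 system (2.8):
       (1 - lam*a00) c0 - lam*a01 c1 = g0
       -lam*a10 c0 + (1 - lam*a11) c1 = g1
    assuming the determinant D is non-zero."""
    D = (1 - lam * a00) * (1 - lam * a11) - lam**2 * a01 * a10
    c0 = ((1 - lam * a11) * g0 + lam * a01 * g1) / D   # (2.9)
    c1 = ((1 - lam * a00) * g1 + lam * a10 * g0) / D   # (2.10)
    return c0, c1

# Cross-check against a direct linear solve for arbitrary coefficients.
lam = 0.3
a = np.array([[0.2, 0.5], [0.7, 0.1]])
c0, c1 = solve_n2(1.0, 2.0, a[0, 0], a[0, 1], a[1, 0], a[1, 1], lam)
ref = np.linalg.solve(np.eye(2) - lam * a, np.array([1.0, 2.0]))
```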

A discrete time Markov process with a bounded continuous state space
We consider a discrete-time Markov process with a bounded continuous state space. Without loss of generality, the state space is set as S = [0, B]; the transition intensity (stochastic kernel) from x′ ∈ S to x ∈ (0, B) is denoted by k_{x′,x}, and the transition probabilities from x ∈ S to the boundary states 0 and B are q_{x,0} and q_{x,B}, respectively. Let the steady-state mass probabilities at states 0 and B be π_0 and π_B, respectively, and let the steady-state probability density at state x ∈ (0, B) be denoted by p(x).
The Markov process is assumed to be such that the state space forms one recurrent class. This implies that q_{0,0} < 1 and q_{B,B} < 1. Then, we have the following equilibrium equations (see, e.g., [7]):

π_0 = π_0 q_{0,0} + π_B q_{B,0} + ∫_0^B p(x) q_{x,0} dx,    (3.1)

p(x) = π_0 k_{0,x} + π_B k_{B,x} + ∫_0^B p(x′) k_{x′,x} dx′,  x ∈ (0, B),    (3.2)

π_B = π_0 q_{0,B} + π_B q_{B,B} + ∫_0^B p(x) q_{x,B} dx.    (3.3)
We also have the total probability equal to 1:

π_0 + π_B + ∫_0^B p(x) dx = 1,

and thus, solving this equation together with (3.1) and (3.3) for π_B and π_0,

π_B = [ q_{0,B} (1 − ∫_0^B p(x) dx) + ∫_0^B p(x) q_{x,B} dx ] / (1 − q_{B,B} + q_{0,B}),    (3.4)

π_0 = [ q_{B,0} (1 − ∫_0^B p(x) dx) + ∫_0^B p(x) q_{x,0} dx ] / (1 − q_{0,0} + q_{B,0}).    (3.5)

Inserting equations (3.4) and (3.5) into (3.2), we have

p(x) = { π_0 k_{0,x} + π_B k_{B,x} } + ∫_0^B p(x′) k_{x′,x} dx′,    (3.6)

where π_0 and π_B in the brace stand for the right-hand sides of (3.5) and (3.4). We now give a condition under which the steady-state density can be derived. Collecting the terms of (3.6) in the form

p(x) = g(x) + ∫_0^B K(x, x′) p(x′) dx′,

we have an equation which is the same as the Fredholm equation of the second kind with λ = 1. Thus, if K(x, x′) is represented as K(x, x′) = Σ_{i=0}^{n−1} α_i(x) β_i(x′), we can apply the degenerate kernel method shown in Section 2 and derive the probability density p(x). If k_{x′,x} is represented in the separable form

k_{x′,x} = Σ_{i=0}^{n−1} α_i(x) β_i(x′)    (3.7)

for all x, x′ ∈ (0, B), the term in the brace of equation (3.6) is formed as

{ π_0 k_{0,x} + π_B k_{B,x} } = g(x) + ∫_0^B p(x′) ( k_{0,x} h_0(x′) + k_{B,x} h_B(x′) ) dx′,    (3.8)

where

g(x) = q_{B,0} k_{0,x} / (1 − q_{0,0} + q_{B,0}) + q_{0,B} k_{B,x} / (1 − q_{B,B} + q_{0,B}),
h_0(x′) = (q_{x′,0} − q_{B,0}) / (1 − q_{0,0} + q_{B,0}),
h_B(x′) = (q_{x′,B} − q_{0,B}) / (1 − q_{B,B} + q_{0,B}).

If we can derive expressions of the integrals ∫_0^B p(x) dx, ∫_0^B p(x) q_{x,B} dx and ∫_0^B p(x) q_{x,0} dx explicitly, the steady-state probabilities π_B and π_0 can also be represented explicitly by (3.4) and (3.5).
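The structure of the equilibrium equations, two boundary atoms plus an interior density, can be illustrated by a direct discretization. The kernel below is hypothetical (uniform jumps into (0, B) with fixed boundary probabilities), chosen so that the stationary solution π_0 = π_B = 0.1 and p(x) = 0.8/B is known in advance.

```python
import numpy as np

# Hypothetical kernel for illustration: from every state the chain jumps to 0
# with prob. 0.1, to B with prob. 0.1, and otherwise lands uniformly in (0, B).
B, m = 2.0, 200                      # m interior grid cells of width h
h = B / m
q0 = lambda x: 0.1                   # q_{x,0}
qB = lambda x: 0.1                   # q_{x,B}
k = lambda xp, x: 0.8 / B            # k_{x',x}

# States: index 0 -> atom {0}, 1..m -> interior cell midpoints, m+1 -> atom {B}.
mid = (np.arange(m) + 0.5) * h
P = np.zeros((m + 2, m + 2))
for i, xp in enumerate([0.0] + list(mid) + [B]):
    P[i, 0] = q0(xp)
    P[i, m + 1] = qB(xp)
    P[i, 1:m + 1] = k(xp, mid) * h   # cell mass approximated by k(x',x)*h

# Stationary distribution by power iteration.
pi = np.full(m + 2, 1.0 / (m + 2))
for _ in range(500):
    pi = pi @ P
pi0, piB = pi[0], pi[m + 1]
p_density = pi[1:m + 1] / h          # approximates p(x) on the grid
```

For this particular kernel every row of P is identical, so the chain is stationary after one step; the discretization recovers the atoms and the flat density exactly up to rounding.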

Analysis of an EPQ model
The modelling of production-inventory systems is one of the most important applications of the Markov process. Revenue is earned when product supply meets customer demand, and inventory control is primarily concerned with the matching of supply and demand. Some of the earliest research papers on inventory system modelling with Markovian models are from the 1950s (see, e.g., [1,6]). Markovian models have since gained a good deal of popularity in inventory control.
In this section, as an application of the result established in the previous section, we consider the following EPQ model and derive the optimal upper limit on the number of items in inventory.
One unit period corresponds to one day. In each period, the demand is a fixed amount d. The manufacturer produces items, but the production process may fail; let Y denote the number of items produced by the epoch when the failure occurs. The upper bound on the number of items in inventory is M. Here, we assume that M ≤ d. At the beginning of each day, production restarts even if the system failed in the previous day, because the process is repaired at the end of that day (see Fig. 1).
When the inventory level at the beginning of a period is x′, production stops once the number of items produced in the period reaches d + M − x′; after the demand of the period is met, the inventory level becomes M.
When the number of items produced is Y, the inventory level at the end of the period, denoted by X, is given by X = max(0, x′ + min(Y, d + M − x′) − d). Thus, (i) when Y ≥ d + M − x′, X = M; (ii) when d − x′ ≤ Y < d + M − x′, X = x′ + Y − d; and (iii) when Y < d − x′, X = 0.
In the last case, the excess demand is lost. Note that x′ ≤ M ≤ d implies that the last case is possible. Then, we have, for each x′ ∈ S,

k_{x′,x} = f_Y(x − x′ + d),  x ∈ (0, M),
q_{x′,0} = F_Y(d − x′),
q_{x′,M} = 1 − F_Y(d + M − x′),

where f_Y and F_Y denote the density and distribution functions of Y, respectively. Thus, when Y follows an Erlang-type distribution, the sufficient condition of Corollary 3.3 in Section 3 is satisfied, and we can derive the steady-state probability density p(x) and the steady-state mass probabilities π_0 and π_M.
In the following, we study the case where Y follows an exponential distribution with parameter μ. This is satisfied when the failure rate is constant, so that failures occur unpredictably. Then f_Y(y) = μ e^{−μy} is the density function of Y and F_Y(y) = 1 − e^{−μy} is the distribution function of Y. Thus, we have, for each x′ ∈ S,

k_{x′,x} = μ e^{−μ(x+d)} e^{μx′},  x ∈ (0, M),
q_{x′,0} = 1 − e^{−μ(d−x′)},
q_{x′,M} = e^{−μ(d+M−x′)}.

Since k_{x′,x} factors into a function of x and a function of x′, the separability condition of Section 3 holds, and by (3.7) and (3.8) the equilibrium equation for p(x) reduces to a Fredholm integral equation with a degenerate kernel. Since e^{−μM} < 1 for μ > 0, the determinant condition required for (2.9) is satisfied because it has been assumed that M ≤ d. Thus, by (2.3), we obtain the steady-state density p(x) in closed form; every term is proportional to e^{−μx}, so p(x) = C e^{−μx} on (0, M) for a constant C determined by the normalization.
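The exponential case can be cross-checked by simulating the inventory recursion of cases (i)-(iii) directly. The sketch below is illustrative only; the parameter values M, d, and μ are hypothetical, chosen so that M ≤ d as assumed in the model, and the function estimates the two boundary atoms and the mean inventory level.

```python
import random

def simulate_epq(M, d, mu, n_steps=200_000, seed=1):
    """Monte Carlo sketch of the inventory recursion. Y ~ exponential(mu) is
    the amount producible before a failure; the fixed demand d is withdrawn
    each period. Returns empirical P(X = 0), P(X = M), and mean inventory."""
    rng = random.Random(seed)
    x = 0.0
    at0 = atM = 0
    total = 0.0
    for _ in range(n_steps):
        y = rng.expovariate(mu)
        if y >= d + M - x:        # case (i): production stops, inventory hits M
            x = M
        elif y >= d - x:          # case (ii): interior state
            x = x + y - d
        else:                     # case (iii): stock-out, excess demand is lost
            x = 0.0
        at0 += (x == 0.0)
        atM += (x == M)
        total += x
    return at0 / n_steps, atM / n_steps, total / n_steps

p0, pM, mean_x = simulate_epq(M=1.5, d=2.0, mu=0.5)
```

The empirical distribution exhibits exactly the structure derived above: positive mass at 0 and M, with a density on the interior interval.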

An optimization problem
For a given M, we derive the average cost by using the derived limiting distributions of the number of items in inventory. Let the production cost per unit item, the holding cost rate per unit time, and the lost sale cost for each lost demand be c, h, and s, respectively. Using the steady-state distribution (π_0, p(x), π_M), the average production cost, the average lost sale cost, and the average holding cost per period can each be written as expectations with respect to this distribution.
Summing the average production, lost sale, and holding costs yields the average cost for a given M, given in (4.1). We now derive the optimal M. By differentiating both sides of (4.1), we obtain the first-order condition for M. We assume that s − (h + c) > 0. This is not restrictive, because the lost sale cost is higher than the production cost plus the unit-time holding cost; otherwise, the system would not produce the product.

Sensitivity analysis
Here, we perform a sensitivity analysis on M*. We note that M* satisfies the first-order condition φ(M*) = 0, which can be rearranged so that M* solves a M* + b = e^{−μM*}, where the coefficients a and b are composed of the cost parameters h, c, and s. Figure 2 shows that M* is the intersection point of the line y = a x + b and the curve y = e^{−μx}. Since a and b increase in h and c while decreasing in s, from Figure 2 we find that M* decreases in h and c while increasing in s. In addition, it is observed that M* increases in the arrival rate.
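The intersection argument can be reproduced numerically. The sketch below is illustrative only: it assumes the first-order condition takes the line-versus-exponential form described above, with hypothetical coefficients a and b standing in for the cost ratios, and locates the intersection by bisection. Increasing the slope a (as a higher holding or production cost would) lowers the intersection point, matching the comparative statics.

```python
import math

def intersection(a, b, mu, lo=0.0, hi=100.0, tol=1e-10):
    """Bisection for the unique root of phi(x) = a*x + b - exp(-mu*x),
    assuming a > 0 (increasing line) and phi(lo) < 0 < phi(hi).
    Coefficients are hypothetical stand-ins for the cost ratios."""
    phi = lambda x: a * x + b - math.exp(-mu * x)
    assert phi(lo) < 0 < phi(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A steeper line (larger a) gives a smaller intersection point.
m_low_slope = intersection(a=0.2, b=0.0, mu=1.0)
m_high_slope = intersection(a=0.5, b=0.0, mu=1.0)
```

Bisection applies here because the line is strictly increasing and e^{−μx} strictly decreasing, so the crossing point is unique.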
It should be highlighted that when M > d is allowed, a lower average cost can be reached. This occurs when the probability of a short production time is high, and it is preferable to store more items to prepare for subsequent short production times. In this case, however, the analytical method of this paper cannot be applied, because the expression of the transition intensity k_{x′,x} then depends on the combination of x′ and x.

Conclusion
This paper considers the application of a Fredholm integral equation of the second kind to the analysis of a discrete-time Markov process with a continuous state space on a finite interval. We first show that the equilibrium equations on the steady-state mass probabilities and density function form a Fredholm integral equation. Then, under a separability condition, in which the transition density from x′ to x in the interior of the state space is a sum of products of functions of only x′ and of only x, we obtain explicit expressions of these probability functions. As an example, a basic EPQ model is analysed, the optimal upper bound on the inventory level is derived, and we analytically study the sensitivity of the optimal bound. The bound decreases with respect to the holding cost and the production cost, whereas it increases with respect to the lost sale cost and the arrival rate.
The application of the Fredholm integral equation to such a discrete-time Markov process may be limited, because several conditions on the transition probability densities must be satisfied to obtain expressions of the probability and density functions. As Dibu et al. [4] have applied Fredholm integral equations to Markov Arrival Process models, a general method based on Fredholm integral equations may exist for other stochastic processes. Further extensions of the equations to general stochastic processes are left for future research.

Figure 2. M* is the intersection point of the two curves.