
Generation of the Markovian Arrival Process of Order 2 Based on Joint Distribution
© 2024 KIIE
Abstract
In queueing network analysis, the Markovian arrival process, MAP, can be used to approximate arrival or departure processes. While renewal processes such as the Poisson process can be generated based on marginal moments alone, the Markovian arrival process also requires the lag-1 joint moments or the joint distribution function. In fact, the Markovian arrival process can be simulated from two transition rate matrices, (D0, D1), which is called the Markovian representation. However, finding a Markovian representation of higher order involves a numerically iterative procedure due to redundancy in the Markovian representation. Since there is a one-to-one correspondence between the joint moments and the coefficients of the joint Laplace transform, generating a MAP based on the joint distribution can be much less complicated. In this paper, we propose an approach to generate a MAP based on the joint distribution function, which can be quickly obtained from the joint moments and the joint Laplace transform. Closed-form formulas and streamlined procedures are given for the simulation of the MAP of order 2.
Keywords:
Markovian Arrival Process, Moment Matching, Laplace Transform, Lag-1 Joint Distribution of Intervals

1. Introduction
The Markovian arrival process, MAP, can be generated by two transition rate matrices (D0, D1), which is called the Markovian representation. However, finding a Markovian representation of a higher order MAP by moment matching is quite complicated due to redundancy in the transition rate matrices (D0, D1). On the other hand, moment matching is straightforward for the Laplace transform (LT), of which a minimal representation is known; see Kim (2016) for details. It is well known that the minimal number of parameters for a MAP(n) is n^2; see Bodrog et al. (2008), Casale et al. (2010) and Telek et al. (2007) for details. Since the minimal LT representation is available for the MAP of any order, the parameters of the LT can be obtained in closed form by moment matching, whereas a closed-form transformation from moments to the Markovian representation (D0, D1) is not available except for the MAP of order 2, MAP(2); see Bodrog et al. (2008), Kim (2017), and Ramirez-Cobo et al. (2010, 2012) for details.
As far as generating or simulating a MAP is concerned, the lag-1 joint distribution can be used instead of the Markovian representation (D0, D1). A minimal joint LT can be easily converted into a joint distribution by LT inversion. If the inverse LT is not in the explicit form of a known distribution function, a simple algebraic manipulation can be used to determine the exact distribution needed for generating a MAP. We are interested in the problem of obtaining a simulation-ready joint distribution from the set of minimal moments and the minimal joint LT. We focus only on MAP(2) in this paper even though our approach can be generalized to MAPs of higher order.
The paper is organized as follows. In Section 2, we review known results on the representation of MAP(2)s. We then present our main result on the joint distribution of stationary intervals of a MAP(2) by inversion of the joint LT in Section 3, followed by numerical examples in Section 4. We conclude in Section 5 with a discussion of future research directions.
2. Preliminaries
2.1 Markovian representation (D0, D1)
A MAP(2) is fully described by the two transition rate matrices (D0, D1) given in terms of six rate parameters (λ11, λ12, λ21, λ22, σ1, σ2), i.e.

D0 = [ -(λ1+σ1)    σ1      ]      D1 = [ λ11  λ12 ]
     [   σ2      -(λ2+σ2)  ],          [ λ21  λ22 ],

where (λ1, λ2) = (λ11+λ12, λ21+λ22). Each off-diagonal rate parameter of D0 represents the transition rate out of a given phase into the other phase without invoking an arrival, whereas each rate parameter of D1 represents the transition rate invoking an arrival with or without a phase change. Let Q = D0 + D1 and P = (-D0)^(-1) D1. Then, Q is the infinitesimal generator of the continuous-time Markov chain whereas P is the transition probability matrix of the embedded discrete-time Markov chain. Let p be the stationary probability vector of P, i.e. pP = p and pe = 1, where e is a vector of ones.
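As a minimal sketch, the construction of Q, P, and the stationary vector p can be written as follows; the rate values are illustrative, not taken from the paper, and the closed-form 2-state stationary vector p1 = P21/(P12+P21) is used in place of a general solver.

```python
# Illustrative MAP(2) rates (not from the paper): lam_ij are the arrival
# rates in D1; sig1, sig2 are phase-change rates without an arrival.
lam11, lam12, lam21, lam22 = 2.0, 0.5, 0.3, 1.0
sig1, sig2 = 0.4, 0.7
lam1, lam2 = lam11 + lam12, lam21 + lam22

D0 = [[-(lam1 + sig1), sig1], [sig2, -(lam2 + sig2)]]
D1 = [[lam11, lam12], [lam21, lam22]]

# Q = D0 + D1 is the phase-process generator; each row sums to zero
Q = [[D0[i][j] + D1[i][j] for j in range(2)] for i in range(2)]

# P = (-D0)^(-1) D1 via the explicit 2x2 inverse of -D0
a, b = lam1 + sig1, -sig1
c, d = -sig2, lam2 + sig2
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]
P = [[sum(inv[i][k] * D1[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

# stationary vector p of the embedded 2-state chain: p1 = P21 / (P12 + P21)
p1 = P[1][0] / (P[0][1] + P[1][0])
p = [p1, 1 - p1]
```

Each row of P sums to one because (-D0)e = D1 e under this parameterization, so P is indeed stochastic.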
2.2 Moments and the Laplace transform
Let Ti be the i-th stationary interval of a MAP. Then, the marginal moments and the lag-1 joint moments are obtained as

E[Ti^k] = k! p (-D0)^(-k) e,  k = 1, 2, …,    E[Ti Ti+1] = p (-D0)^(-1) P (-D0)^(-1) e,

respectively. In order to simplify the notation, we use the reduced moments defined as

rk = E[Ti^k] / k!,  k = 1, 2, 3,    r11 = E[Ti Ti+1].
A MAP(2) can be completely described by three marginal moments (r1, r2, r3) and one joint moment r11. In fact, a MAP(n) is completely described by (2n-1) marginal moments and (n-1)2 lag-1 joint moments; see Casale et al. (2010) and Telek et al. (2007) for a minimal set of moments for MAP(n)s.
It was shown in Kim (2016) that the lag-1 joint Laplace transform (LT) of the stationary intervals of a MAP(n) can be written in terms of n^2 parameters. Especially for MAP(2)s, the marginal and lag-1 joint LTs can be written in terms of (a0, a1, b1, c11) as follows

f*(s) = E[e^(-sT)] = (b1 s + a0) / (s^2 + a1 s + a0),    (2.1)

f*(s1, s2) = E[e^(-s1 T1 - s2 T2)] = (c11 s1 s2 + a0 b1 (s1 + s2) + a0^2) / ((s1^2 + a1 s1 + a0)(s2^2 + a1 s2 + a0)),    (2.2)

A MAP(2) can be completely described by Eq. (2.2), which is given in terms of the four coefficients (a0, a1, b1, c11); these coefficients are determined from the minimal moment set (r1, r2, r3, r11) in Section 3.
2.3 Jordan Representation
Another minimal representation used in our study is the Jordan representation given by the two matrices (E, R); see Telek et al. (2007) and Kim (2020) for details. A MAP(n) can be represented by two n × n matrices (E, R) given in terms of n^2 parameters. The diagonal entries of the matrix E are the eigenvalues of (-D0)^(-1). The matrix R accounts for the lag-1 correlation and satisfies Re = e; see Telek et al. (2007) for details.
For a MAP(2), let (ν1, ν2) be the eigenvalues of (-D0)^(-1) with ν1 ≥ ν2. For the case of distinct eigenvalues, (E, R) can be written in terms of (ν1, ν2, ν3, ν4), i.e.

E = [ ν1  0  ]      R = [ 1-ν3   ν3  ]
    [ 0   ν2 ],         [  ν4   1-ν4 ].
If ν1 = ν2, E is instead written as the 2 × 2 Jordan block with the repeated eigenvalue ν1 on its diagonal.
The marginal moments and the lag-1 joint moments are obtained as

rk = v E^k e,  k = 1, 2, 3,    r11 = v E R E e,    (2.3)

where v = (ν4/(ν3+ν4), ν3/(ν3+ν4)). Again, by matching moments, (ν1, ν2, ν3, ν4) can be determined from (r1, r2, r3, r11).
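As a check, taking E = diag(ν1, ν2) and R with rows (1-ν3, ν3) and (ν4, 1-ν4) — our reading of the Jordan representation for the distinct-eigenvalue case — the moment formulas reproduce the first Section 4.1 example exactly; exact rationals are used to avoid rounding.

```python
from fractions import Fraction as F

# Parameters of the first Section 4.1 example (distinct eigenvalues)
nu1, nu2, nu3, nu4 = F(1, 4), F(1, 7), F(1, 12), F(2, 3)
v1, v2 = nu4 / (nu3 + nu4), nu3 / (nu3 + nu4)   # v = (8/9, 1/9)

# reduced marginal moments r_k = v E^k e with E = diag(nu1, nu2)
r = [v1 * nu1**k + v2 * nu2**k for k in (1, 2, 3)]

# lag-1 joint moment r11 = v E R E e with R = [[1-nu3, nu3], [nu4, 1-nu4]]
r11 = (v1 * nu1 * ((1 - nu3) * nu1 + nu3 * nu2)
       + v2 * nu2 * (nu4 * nu1 + (1 - nu4) * nu2))
```

The computed values agree with the moment set (5/21, 17/294, 39/2744, 67/1176) stated in Section 4.1.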
3. Joint Distribution and Lag-1 Conditional Distribution
In this section, we propose a new approach to generating a MAP without (D0, D1). We present closed-form formulas for transforming the joint LT into an explicit joint distribution by which stationary intervals are generated.
Since the moments can be obtained from the marginal and joint LTs in (2.1) and (2.2), the LT coefficients (a0, a1, b1, c11) can be uniquely determined from the moments as follows

a0 = (r2 - r1^2) / (r1 r3 - r2^2),    a1 = (r3 - r1 r2) / (r1 r3 - r2^2),
b1 = a1 - a0 r1,    c11 = a0^2 r11 - a1^2 + 2 a1 b1,

of which details can be found in Kim (2016, 2017).
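A direct transcription of this moments-to-coefficients mapping (as reconstructed here from Eqs. (2.1) and (2.2); see Kim (2016, 2017) for the derivation) can be checked against the first example of Section 4.1:

```python
from fractions import Fraction as F

def lt_coefficients(r1, r2, r3, r11):
    # closed-form mapping from the minimal moment set (r1, r2, r3, r11)
    # to the LT coefficients (a0, a1, b1, c11); sketch after Kim (2016, 2017)
    d = r1 * r3 - r2**2
    a0 = (r2 - r1**2) / d
    a1 = (r3 - r1 * r2) / d
    b1 = a1 - a0 * r1
    c11 = a0**2 * r11 - a1**2 + 2 * a1 * b1
    return a0, a1, b1, c11

# First Section 4.1 example: expect (28, 11, 13/3, 19)
coef = lt_coefficients(F(5, 21), F(17, 294), F(39, 2744), F(67, 1176))
```

The same function reproduces (6, 5, 7/5, 9/5) and (9, 6, 15/7, 31/7) for the other two examples in Section 4.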
For the inversion of the marginal and joint LTs, we consider the case of distinct eigenvalues and the case of identical eigenvalues separately. First, let X and Y be the random variables representing two consecutive stationary intervals of a MAP(2).
3.1 Distinct eigenvalues
If (-D0)^(-1) has distinct eigenvalues, then (-1/ν1, -1/ν2) can be determined as the roots of the characteristic polynomial equation s^2 + a1 s + a0 = 0. Assuming, WLOG, that ν1 > ν2, we have

1/ν1 = (a1 - √(a1^2 - 4a0)) / 2,    1/ν2 = (a1 + √(a1^2 - 4a0)) / 2,

and

v1 = (r1 - ν2) / (ν1 - ν2),    ν3 = (r2 - r11) / (v1 (ν1 - ν2)^2),    ν4 = (r2 - r11) / (v2 (ν1 - ν2)^2),

where v2 = 1 - v1, by moment matching based on Eq. (2.3). Again, by moment matching, the following equations can be obtained to write the marginal and joint LTs in (2.1) and (2.2) in terms of (ν1, ν2, ν3, ν4)

f*(s) = ϕ1/(1 + ν1 s) + ϕ2/(1 + ν2 s),    f*(s1, s2) = Σi,j ϕij /((1 + νi s1)(1 + νj s2)),

with the coefficients ϕi and ϕij given below.
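The parameter recovery can be sketched in a few lines; the expressions for ν3 and ν4 follow our reading of the moment equations, and the first Section 4.1 example is used as a check (its discriminant happens to be a perfect square, so exact rationals suffice).

```python
import math
from fractions import Fraction as F

# First Section 4.1 example: coefficients and moments
a0, a1 = 28, 11
r1, r2, r11 = F(5, 21), F(17, 294), F(67, 1176)

# (1/nu1, 1/nu2) are the negated roots of s^2 + a1 s + a0 = 0
disc = F(math.isqrt(a1 * a1 - 4 * a0))   # = 3 for this example
nu1, nu2 = 2 / (a1 - disc), 2 / (a1 + disc)

v1 = (r1 - nu2) / (nu1 - nu2)            # stationary weight of phase 1
v2 = (nu1 - r1) / (nu1 - nu2)
w = (r2 - r11) / (nu1 - nu2)**2          # common value v1*nu3 = v2*nu4
nu3, nu4 = w / v1, w / v2
```

The recovered parameters match (ν1, ν2, ν3, ν4) = (1/4, 1/7, 1/12, 2/3) as stated in Section 4.1.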
By inversion of the LTs in (2.1) and (2.2) given in terms of (ν1, ν2, ν3, ν4), we get the following marginal and lag-1 joint density functions

f(x) = (ϕ1/ν1) e^(-x/ν1) + (ϕ2/ν2) e^(-x/ν2),    (3.1)

f(x, y) = Σi,j (ϕij/(νi νj)) e^(-x/νi) e^(-y/νj),  i, j ∈ {1, 2},    (3.2)

where

(ϕ11, ϕ12, ϕ21, ϕ22) = (v1(1-ν3), v1 ν3, v2 ν4, v2(1-ν4)).

Note that (ϕ1, ϕ2) = (ϕ11+ϕ12, ϕ21+ϕ22) = (v1, v2). Depending on the sign of ν3, the marginal distribution of a stationary interval is either hyper-exponential or mixed generalized Erlang (MGE). In order to simplify the notation, let u1(x) and u2(x) be the exponential density functions with mean ν1 and ν2, respectively, i.e.

ui(x) = (1/νi) e^(-x/νi),  i = 1, 2.
Also, let u12(x) be the density function of the hypo-exponential distribution, that is, of the sum of two independent exponential random variables with means ν1 and ν2, i.e.

u12(x) = (e^(-x/ν1) - e^(-x/ν2)) / (ν1 - ν2).
Below, we denote this hypo-exponential distribution by Hypoexp(1/ν1, 1/ν2).
If ν3 > 0, then we have ϕ1 > 0 and ϕ2 > 0. Since ϕ1 + ϕ2 = 1, the marginal distribution of a stationary interval in (3.1) can be rewritten as

f(x) = ϕ1 u1(x) + ϕ2 u2(x),

and therefore X is hyper-exponential, i.e.

X ~ Exp(1/ν1) with probability ϕ1,    X ~ Exp(1/ν2) with probability ϕ2.    (3.3)
The joint density function (3.2) can be written as

f(x, y) = ϕ11 u1(x)u1(y) + ϕ12 u1(x)u2(y) + ϕ21 u2(x)u1(y) + ϕ22 u2(x)u2(y).

By the definition of conditional probability, the conditional density function is given as

f(y | X ~ Exp(1/νi)) = (ϕi1/ϕi) u1(y) + (ϕi2/ϕi) u2(y),  i = 1, 2,

by which the conditional distribution of Y given X is again hyper-exponential. That is, if X ~ Exp(1/ν1), then the conditional distribution of Y is

Y ~ Exp(1/ν1) with probability ϕ11/ϕ1,    Y ~ Exp(1/ν2) with probability ϕ12/ϕ1.    (3.4)
Otherwise, if X ~ Exp(1/ν2), then the conditional distribution of Y is

Y ~ Exp(1/ν1) with probability ϕ21/ϕ2,    Y ~ Exp(1/ν2) with probability ϕ22/ϕ2.    (3.5)
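The generation scheme of Eqs. (3.3)-(3.5) can be sketched as follows; the branch probabilities are taken from the first example of Section 4.1 (ϕ = (8/9, 1/9) and ϕij = (22/27, 2/27, 2/27, 1/27)), and the helper name next_interval is ours.

```python
import random

NU = {1: 1/4, 2: 1/7}            # means of the two exponential branches
PHI = {1: 8/9, 2: 1/9}           # stationary branch probabilities (phi1, phi2)
# P(next branch = 1 | current branch i) = phi_i1 / phi_i
COND1 = {1: (22/27) / (8/9),     # = 11/12
         2: (2/27) / (1/9)}      # = 2/3

def next_interval(rng, prev_branch=None):
    # first interval from the marginal (3.3), later ones from the
    # lag-1 conditionals (3.4)-(3.5)
    p1 = PHI[1] if prev_branch is None else COND1[prev_branch]
    b = 1 if rng.random() < p1 else 2
    return rng.expovariate(1 / NU[b]), b

rng = random.Random(7)
xs, b = [], None
for _ in range(200_000):
    x, b = next_interval(rng, b)
    xs.append(x)
mean = sum(xs) / len(xs)         # should be close to r1 = 5/21
```

The sample mean and the lag-1 product mean approach r1 = 5/21 and r11 = 67/1176 of the Section 4.1 example.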
If ν3 < 0, then we have ϕ1 > 1 > 0 > ϕ2, and the marginal distribution in (3.1) is not hyper-exponential. By a simple manipulation, however, it can be shown that the marginal distribution is mixed generalized Erlang, i.e.

f(x) = ψ1 u1(x) + ψ12 u12(x),

where

(ψ1, ψ12) = (1 + ϕ2(ν1-ν2)/ν2, -ϕ2(ν1-ν2)/ν2).

That is,

X ~ Exp(1/ν1) with probability ψ1,    X ~ Hypoexp(1/ν1, 1/ν2) with probability ψ12.    (3.6)
The joint density function is written as

f(x, y) = ψ1,1 u1(x)u1(y) + ψ1,12 u1(x)u12(y) + ψ12,1 u12(x)u1(y) + ψ12,12 u12(x)u12(y),

where, with α = ν1/ν2 and β = (ν1-ν2)/ν2,

(ψ1,1, ψ1,12, ψ12,1, ψ12,12) = (ϕ11 + α(ϕ12+ϕ21) + α^2 ϕ22, -β(ϕ12 + α ϕ22), -β(ϕ21 + α ϕ22), β^2 ϕ22).
By the definition of conditional probability, the conditional density function is given as

f(y | X ~ Exp(1/ν1)) = (ψ1,1/ψ1) u1(y) + (ψ1,12/ψ1) u12(y),
f(y | X ~ Hypoexp(1/ν1, 1/ν2)) = (ψ12,1/ψ12) u1(y) + (ψ12,12/ψ12) u12(y),

by which the conditional distribution of Y given X is a mixture of an exponential and a hypo-exponential. That is, if X ~ Exp(1/ν1), then the conditional distribution of Y is

Y ~ Exp(1/ν1) with probability ψ1,1/ψ1,    Y ~ Hypoexp(1/ν1, 1/ν2) with probability ψ1,12/ψ1,    (3.7)

or if X ~ Hypoexp(1/ν1, 1/ν2), then the conditional distribution of Y is

Y ~ Exp(1/ν1) with probability ψ12,1/ψ12,    Y ~ Hypoexp(1/ν1, 1/ν2) with probability ψ12,12/ψ12.    (3.8)
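The MGE-case scheme of Eqs. (3.6)-(3.8) can be sketched the same way, using the second example of Section 4.1 (ν1 = 1/2, ν2 = 1/3); the marginal exponential-branch weight ψ1 = ψ1,1 + ψ1,12 = 7/10 is derived from that example's stated weights, and only complementary probabilities are hardcoded so that each branch's conditional law sums to one.

```python
import random

NU1, NU2 = 1/2, 1/3     # means of the component exponentials (rates 2 and 3)
PSI1 = 9/20 + 1/4       # = 7/10, marginal weight of the Exp branch
# P(next branch = Exp | current branch) = psi_{i,1} / psi_i
COND_EXP = {"exp": (9/20) / PSI1,          # = 9/14
            "hypo": (1/4) / (1 - PSI1)}    # = 5/6

def next_interval(rng, prev=None):
    p = PSI1 if prev is None else COND_EXP[prev]
    if rng.random() < p:
        return rng.expovariate(1 / NU1), "exp"
    # Hypoexp(2, 3): sum of two independent exponentials, means NU1 and NU2
    return rng.expovariate(1 / NU1) + rng.expovariate(1 / NU2), "hypo"

rng = random.Random(3)
xs, b = [], None
for _ in range(200_000):
    x, b = next_interval(rng, b)
    xs.append(x)
mean = sum(xs) / len(xs)    # should approach r1 = 3/5
```

The sample mean and lag-1 product mean approach r1 = 3/5 and r11 = 16/45 of that example.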
3.2 Identical Eigenvalues
If (-D0)^(-1) has identical eigenvalues, i.e. ν1 = ν2, then we have a1^2 = 4a0 in Eq. (2.1) and 1/ν1 = a1/2, and the remaining coefficients are determined by moment matching as in the distinct case.
Let u11(x) be the density function of the Erlang(2, 1/ν1) distribution, that is, of the sum of two independent and identically distributed exponential random variables with mean ν1, i.e.

u11(x) = (x/ν1^2) e^(-x/ν1).
Then, by inverse LT, the marginal LT in Eq. (2.1) is converted into the following marginal density function

f(x) = ψ1 u1(x) + ψ11 u11(x),

where (ψ1, ψ11) = (1-ϕ1/ν1, ϕ1/ν1). That is,

X ~ Exp(1/ν1) with probability ψ1,    X ~ Erlang(2, 1/ν1) with probability ψ11.    (3.9)
The joint LT given in Eq. (2.2) is, by inverse LT, converted into the following joint density function

f(x, y) = ψ1,1 u1(x)u1(y) + ψ1,11 u1(x)u11(y) + ψ11,1 u11(x)u1(y) + ψ11,11 u11(x)u11(y),

where the weights (ψ1,1, ψ1,11, ψ11,1, ψ11,11) are obtained by matching the coefficients of the inverse LT. By the definition of conditional probability, the conditional density function is given as

f(y | X ~ Exp(1/ν1)) = (ψ1,1/ψ1) u1(y) + (ψ1,11/ψ1) u11(y),
f(y | X ~ Erlang(2, 1/ν1)) = (ψ11,1/ψ11) u1(y) + (ψ11,11/ψ11) u11(y),
by which the conditional distribution of Y given X is a mixture of an exponential and an Erlang. That is, if X ~ Exp(1/ν1), then the conditional distribution of Y is

Y ~ Exp(1/ν1) with probability ψ1,1/ψ1,    Y ~ Erlang(2, 1/ν1) with probability ψ1,11/ψ1,    (3.10)

or if X ~ Erlang(2, 1/ν1), then the conditional distribution of Y is

Y ~ Exp(1/ν1) with probability ψ11,1/ψ11,    Y ~ Erlang(2, 1/ν1) with probability ψ11,11/ψ11.    (3.11)
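The identical-eigenvalue scheme of Eqs. (3.9)-(3.11) can be sketched as follows, using the conditional weights implied by the Section 4.2 example (ν1 = 1/3, joint weights (31/63, 14/63, 14/63, 4/63), so the marginal mix is (ψ1, ψ11) = (5/7, 2/7)); the function name and loop are ours.

```python
import random

NU1 = 1/3
PSI1 = (31 + 14) / 63            # = 5/7, marginal weight of the Exp branch
# P(next branch = Exp | current branch) = psi_{i,1} / psi_i
COND_EXP = {"exp": (31/63) / PSI1,        # = 31/45
            "erl": (14/63) / (1 - PSI1)}  # = 7/9

def next_interval(rng, prev=None):
    p = PSI1 if prev is None else COND_EXP[prev]
    if rng.random() < p:
        return rng.expovariate(1 / NU1), "exp"
    # Erlang(2, 1/nu1): sum of two iid exponentials with mean nu1
    return rng.expovariate(1 / NU1) + rng.expovariate(1 / NU1), "erl"

rng = random.Random(5)
xs, b = [], None
for _ in range(200_000):
    x, b = next_interval(rng, b)
    xs.append(x)
mean = sum(xs) / len(xs)     # should approach r1 = 3/7
```

The sample mean and lag-1 product mean approach r1 = 3/7 and r11 = 103/567 of the Section 4.2 example.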
4. Numerical Examples
4.1 Distinct Eigenvalues: ν1 ≠ ν2
Consider a MAP(2) with the following set of moments (r1, r2, r3, r11) = (5/21, 17/294, 39/2744, 67/1176), for which we have (a0, a1, b1, c11) = (28, 11, 13/3, 19), (ν1, ν2) = (1/4, 1/7), and (ν3, ν4) = (1/12, 2/3). Since (ϕ1, ϕ2) = (8/9, 1/9), the marginal distribution is hyper-exponential, which can be generated as follows

X ~ Exp(4) with probability 8/9,    X ~ Exp(7) with probability 1/9,

by Eq. (3.3). Since (ϕ11, ϕ12, ϕ21, ϕ22) = (22/27, 2/27, 2/27, 1/27), the next interval Y can be generated conditional to the distribution of X, i.e.

Y | X ~ Exp(4):  Y ~ Exp(4) with probability 11/12,  Y ~ Exp(7) with probability 1/12,

or

Y | X ~ Exp(7):  Y ~ Exp(4) with probability 2/3,  Y ~ Exp(7) with probability 1/3,

by Eqs. (3.4) and (3.5). The next interval is generated conditional to the distribution of Y.
Consider a MAP(2) with the following set of moments (r1, r2, r3, r11) = (3/5, 1/3, 8/45, 16/45), for which we have (a0, a1, b1, c11) = (6, 5, 7/5, 9/5), (ν1, ν2) = (1/2, 1/3), and (ν3, ν4) = (-1/2, 4/3). Since (ϕ1, ϕ2) = (8/5, -3/5), the marginal distribution is MGE(2), which can be generated as follows

X ~ Exp(2) with probability 7/10,    X ~ Hypoexp(2, 3) with probability 3/10,

by Eq. (3.6). Since (ψ1,1, ψ1,12, ψ12,1, ψ12,12) = (9/20, 1/4, 1/4, 1/20), the next interval Y can be generated conditional to the distribution of X, i.e.

Y | X ~ Exp(2):  Y ~ Exp(2) with probability 9/14,  Y ~ Hypoexp(2, 3) with probability 5/14,

or

Y | X ~ Hypoexp(2, 3):  Y ~ Exp(2) with probability 5/6,  Y ~ Hypoexp(2, 3) with probability 1/6,

by Eqs. (3.7) and (3.8). The next interval is generated conditional to the distribution of Y.
4.2 Identical Eigenvalues: ν1 = ν2
Consider a MAP(2) with the following set of moments (r1, r2, r3, r11) = (3/7, 11/63, 13/189, 103/567), for which we have (a0, a1, b1, c11) = (9, 6, 15/7, 31/7), (ν1, ν2) = (1/3, 1/3), and (ν3, ν4) = (-1/2, 4/3). We have (ϕ1, ϕ2) = (19/27, 2/27) and the marginal distribution is MGE(2), which can be generated as follows

X ~ Exp(3) with probability 5/7,    X ~ Erlang(2, 3) with probability 2/7,

by Eq. (3.9). Since (ψ1,1, ψ1,11, ψ11,1, ψ11,11) = (31/63, 14/63, 14/63, 4/63), the next interval Y can be generated conditional to the distribution of X, i.e.

Y | X ~ Exp(3):  Y ~ Exp(3) with probability 31/45,  Y ~ Erlang(2, 3) with probability 14/45,

or

Y | X ~ Erlang(2, 3):  Y ~ Exp(3) with probability 7/9,  Y ~ Erlang(2, 3) with probability 2/9,

by Eqs. (3.10) and (3.11). The next interval is generated conditional to the distribution of Y.
5. Conclusions
The one-to-one correspondence between the minimal set of moments and the minimal LT representation enables us to find the joint distribution function of the stationary intervals of a MAP(n) by a linear transformation from moments to the minimal LT and then by inversion of the LT. By simple algebraic manipulation of the inverse LT, a joint distribution can be obtained by which a MAP can be generated without the transition rate matrices. The presented analytic procedure covers only MAP(2)s, for which a canonical form is known for the transformation from moments to the Markovian representation. However, no canonical transformation is available for MAPs of order 3 or higher, for which a generalization of our approach can be useful in modeling and simulating a queueing system with a MAP. Unlike the MAP(2), whose eigenvalues are always real valued, higher order MAPs may have complex-valued eigenvalues, in which case the MAP can be represented as a repetition of a real-valued MAP. A natural direction of future research is the study of MAPs with a cyclic Markov chain and trigonometric distribution functions due to complex eigenvalues.
References
-
Bodrog, L., Heindl, A., Horváth, G., and Telek, M. (2008), A Markovian canonical form of second-order matrix-exponential processes, European Journal of Operational Research, 190(2), 459-477.
[https://doi.org/10.1016/j.ejor.2007.06.020]
-
Bodrog, L., Horváth, G., and Telek, M. (2008), Moment characterization of matrix exponential and Markovian arrival processes, Annals of Operations Research, 160(1), 51-68.
[https://doi.org/10.1007/s10479-007-0296-8]
-
Casale, G., Zhang, E., and Smirni, E. (2010), Trace data characterization and fitting for Markov modeling, Performance Evaluation, 67, 61-79.
[https://doi.org/10.1016/j.peva.2009.09.003]
-
Kim, S. (2011), Modeling cross correlation in three-moment four-parameter decomposition approximation of queueing networks, Operations Research, 59(2), 480-497.
[https://doi.org/10.1287/opre.1100.0893]
-
Kim, S. (2016), Minimal LST representations of MAP(n)s: Moment fittings and queueing approximations, Naval Research Logistics, 63(7), 549-561.
[https://doi.org/10.1002/nav.21719]
-
Kim, S. (2017), The characteristic polynomial and the Laplace representations of MAP(2)s, Stochastic Models, 33(1), 30-47.
[https://doi.org/10.1080/15326349.2016.1199282]
-
Kim, S. (2020), Jordan representation of the Markovian arrival process of order 2 and moments fitting, Journal of the Korean Operations Research and Management Science Society, 45(1), 1-10.
[https://doi.org/10.7737/JKORMS.2020.45.1.001]
-
Ramirez-Cobo, P., Lillo, R., and Wiper, M. (2010), Nonidentifiability of the two-state Markovian arrival process, Journal of Applied Probability, 47, 630-649.
[https://doi.org/10.1239/jap/1285335400]
-
Ramirez-Cobo, P. and Carrizosa, E. (2012), A note on the dependence structure of the two-state Markovian arrival process, Journal of Applied Probability, 49, 295-302.
[https://doi.org/10.1239/jap/1331216848]
-
Telek, M. and Horváth, G. (2007), A minimal representation of Markov arrival processes and a moments matching method, Performance Evaluation, 64(9-12), 1153-1168.
[https://doi.org/10.1016/j.peva.2007.06.001]
Sunkyo Kim: He received a B.A. in Business Administration from Seoul National University in 1987, an M.S. from the University of California at Berkeley in 1995, and a Ph.D. in Industrial Engineering from Purdue University in 2000. He has been a professor in the Department of Business Administration at Ajou University since 2002. His research interests include queueing network analysis and Markovian arrival processes.