
[ Article ]
Journal of the Korean Institute of Industrial Engineers - Vol. 50, No. 2, pp. 83-88
Abbreviation: JKIIE
ISSN: 1225-0988 (Print) 2234-6457 (Online)
Print publication date 15 Apr 2024
Received 15 Jan 2024 Accepted 27 Jan 2024
DOI: https://doi.org/10.7232/JKIIE.2024.50.2.083

Generation of the Markovian Arrival Process of Order 2 Based on Joint Distribution
Sunkyo Kim
School of Business, Ajou University

A Study on the Generation of the Markovian Arrival Process of Order 2 Based on Joint Distribution (in Korean)
Sunkyo Kim
School of Business, Ajou University
Correspondence to: Professor Sunkyo Kim, School of Business, Ajou University, 206 World cup-ro, Yeongtong-gu, Suwon 16499, Gyeonggi-do, Korea, Tel: 031-219-2841, Fax: 031-219-1616, E-mail: sunkyo@ajou.ac.kr


© 2024 KIIE

Abstract

In queueing network analysis, the Markovian arrival process, MAP, can be used to approximate arrival or departure processes. While renewal processes such as the Poisson process can be generated from marginal moments alone, the Markovian arrival process also requires lag-1 joint moments or the joint distribution function. In fact, a Markovian arrival process can be simulated from two transition rate matrices, (D0, D1), called the Markovian representation. However, finding a Markovian representation of higher order involves a numerically iterative procedure due to redundancy in the Markovian representation. Since there is a one-to-one correspondence between the joint moments and the coefficients of the joint Laplace transform, generating a MAP based on the joint distribution can be much less complicated. In this paper, we propose an approach to generating a MAP based on the joint distribution function, which can be quickly obtained from the joint moments and the joint Laplace transform. Closed-form formulas and streamlined procedures are given for the simulation of the MAP of order 2.


Keywords: Markovian Arrival Process, Moment Matching, Laplace Transform, Lag-1 Joint Distribution of Intervals

1. Introduction

The Markovian arrival process, MAP, can be generated by two transition rate matrices (D0, D1), which is called the Markovian representation. However, finding a Markovian representation of a higher order MAP by moment matching is quite complicated due to the redundancy of the transition rate matrices (D0, D1). On the other hand, moment matching is straightforward for the Laplace transform (LT), for which a minimal representation is known; see Kim (2016) for details. It is well known that the minimal number of parameters for a MAP(n) is n^2; see Bodrog et al. (2008), Casale et al. (2010), and Telek et al. (2007) for details. Since the minimal LT representation is available for the MAP of any order, the parameters of the LT can be obtained in closed form by moment matching, whereas a closed-form transformation from moments to the Markovian representation (D0, D1) is not available except for the MAP of order 2, MAP(2); see Bodrog et al. (2008), Kim (2017), and Ramirez-Cobo et al. (2010, 2012) for details.

As far as generating or simulating a MAP is concerned, the lag-1 joint distribution can be used instead of the Markovian representation (D0, D1). A minimal joint LT can easily be converted into a joint distribution by LT inversion. If the inverse LT is not in the explicit form of a known distribution function, a simple algebraic manipulation can be used to determine the exact distribution needed for generating a MAP. We are interested in the problem of obtaining a simulation-ready joint distribution from the set of minimal moments and the minimal joint LT. We focus only on the MAP(2) in this paper even though our approach can be generalized to MAPs of higher order.

The paper is organized as follows. In Section 2, we review known results on the representation of MAP(2)s. We then present the main result on the joint distribution of stationary intervals of a MAP(2) by inversion of the joint LT in Section 3, followed by numerical examples in Section 4. We conclude in Section 5 with a discussion of future directions of research.


2. Preliminaries
2.1 Markovian representation (D0, D1)

A MAP(2) is fully described by the two transition rate matrices (D0, D1) given in terms of six rate parameters (λ11, λ12, λ21, λ22, σ1, σ2), i.e.

D_0 = \begin{bmatrix} -\lambda_1-\sigma_1 & \sigma_1 \\ \sigma_2 & -\lambda_2-\sigma_2 \end{bmatrix}, \qquad D_1 = \begin{bmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \end{bmatrix}

where (λ1, λ2) = (λ11+λ12, λ21+λ22). Each off-diagonal rate parameter of D0 represents the transition rate out of a given phase into the other phase without invoking an arrival, whereas each rate parameter of D1 represents a transition rate that invokes an arrival, with or without a change of phase. Let Q = D0 + D1 and P = (-D0)^{-1}D1. Then Q is the infinitesimal generator of the continuous-time Markov chain, whereas P is the transition probability matrix of the embedded discrete-time Markov chain. Let p be the stationary probability vector of P, i.e. pP = p and pe = 1, where e is a vector of ones.
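As a concrete sketch, the representation above can be set up numerically; the rate values below are illustrative (not taken from the paper), and the 2 × 2 matrix helpers are plain Python:

```python
# Illustrative construction of a MAP(2) Markovian representation (D0, D1)
# and its embedded chain; the rate values are made up for this sketch.

def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    """Inverse of a 2x2 matrix."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

# Rate parameters (lam11, lam12, lam21, lam22, sig1, sig2); illustrative values.
lam11, lam12, lam21, lam22 = 3.0, 0.5, 0.2, 1.0
sig1, sig2 = 0.4, 0.7
lam1, lam2 = lam11 + lam12, lam21 + lam22

D0 = [[-lam1 - sig1, sig1], [sig2, -lam2 - sig2]]
D1 = [[lam11, lam12], [lam21, lam22]]

# Embedded transition matrix P = (-D0)^{-1} D1 and its stationary vector p,
# which for a 2-state chain is p = (P21, P12)/(P12 + P21).
P = mat_mul(mat_inv([[-x for x in row] for row in D0]), D1)
p = [P[1][0] / (P[0][1] + P[1][0]), P[0][1] / (P[0][1] + P[1][0])]
```

Since (-D0)e = D1e here, P is stochastic and p solves pP = p, pe = 1.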

2.2 Moments and the Laplace transform

Let Ti be the i-th stationary interval of a MAP. Then, the marginal moments and the lag-1 joint moments are obtained as follows

E[T_1^k] = k!\, p(-D_0)^{-k}e,
E[T_1^k T_2^l] = k!\,l!\, p(-D_0)^{-k}P(-D_0)^{-l}e,

respectively. In order to simplify the notation, we use reduced moments defined as

r_k = E[T_1^k]/k!,
r_{k,l} = E[T_1^k T_2^l]/(k!\,l!).

A MAP(2) can be completely described by three marginal moments (r1, r2, r3) and one joint moment r11. In fact, a MAP(n) is completely described by 2n-1 marginal moments and (n-1)^2 lag-1 joint moments; see Casale et al. (2010) and Telek et al. (2007) for a minimal set of moments for MAP(n)s.
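The moment formulas above can be checked numerically. The sketch below computes (r1, r2, r3, r11) from a given (D0, D1); the symmetric example used as a check is in fact a Poisson process of rate 2, for which r_k = 1/2^k and the lag-1 joint moment factorizes, r11 = r1^2:

```python
def reduced_moments(D0, D1):
    """Reduced moments (r1, r2, r3, r11) of a MAP(2) via
    r_k = p (-D0)^{-k} e and r_{1,1} = p (-D0)^{-1} P (-D0)^{-1} e."""
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    def inv(A):
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        return [[A[1][1] / det, -A[0][1] / det],
                [-A[1][0] / det, A[0][0] / det]]
    M = inv([[-x for x in row] for row in D0])      # (-D0)^{-1}
    P = mul(M, D1)                                  # embedded chain
    s = P[0][1] + P[1][0]
    p = [P[1][0] / s, P[0][1] / s]                  # stationary vector of P
    def pAe(A):                                     # p A e for a 2x2 matrix A
        return sum(p[i] * A[i][j] for i in range(2) for j in range(2))
    return pAe(M), pAe(mul(M, M)), pAe(mul(M, mul(M, M))), pAe(mul(M, mul(P, M)))

# Check: this symmetric MAP is a Poisson process of rate 2.
print(reduced_moments([[-3.0, 1.0], [1.0, -3.0]], [[2.0, 0.0], [0.0, 2.0]]))
# (0.5, 0.25, 0.125, 0.25)
```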

It was shown in Kim (2016) that the lag-1 joint Laplace transform (LT) of the stationary intervals of a MAP(n) can be written in terms of n^2 parameters. In particular, for MAP(2)s, the marginal and lag-1 joint LTs can be written in terms of (a0, a1, b1, c11) as follows

\tilde{f}(s) = p(sI-D_0)^{-1}D_1e = \frac{b_1 s + a_0}{s^2 + a_1 s + a_0} \quad (2.1)
\tilde{f}(s,t) = p(sI-D_0)^{-1}D_1(tI-D_0)^{-1}D_1e = \frac{c_{11}st + a_0b_1(s+t) + a_0^2}{(s^2+a_1s+a_0)(t^2+a_1t+a_0)} \quad (2.2)

where (a0, a1, b1, c11) = (det(-D0), -Trace(D0), pD1e, pD1^2e). A MAP(2) can be completely described by Eq. (2.2), which is given in terms of the four coefficients (a0, a1, b1, c11).

2.3 Jordan Representation

Another minimal representation used in our study is the Jordan representation given by the two matrices (E, R); see Telek et al. (2007) and Kim (2020) for details. A MAP(n) can be represented by two n × n matrices (E, R) given in terms of n^2 parameters. The diagonal entries of the matrix E are the eigenvalues of (-D0)^{-1}. The matrix R accounts for the lag-1 correlation and satisfies Re = e; see Telek et al. (2007) for details.

For a MAP(2), let (ν1, ν2) be the eigenvalues of (-D0)^{-1} with ν1 ≥ ν2. For the case of distinct eigenvalues, (E, R) can be written in terms of (ν1, ν2, ν3, ν4), i.e.

E = \begin{bmatrix} \nu_1 & 0 \\ 0 & \nu_2 \end{bmatrix}, \qquad R = \begin{bmatrix} 1-\nu_3 & \nu_3 \\ \nu_4 & 1-\nu_4 \end{bmatrix}

If ν1 = ν2, then E is written as the Jordan block

E = \begin{bmatrix} \nu_1 & 1 \\ 0 & \nu_1 \end{bmatrix}.

The marginal moments and the lag-1 joint moments are obtained as

(r_1, r_2, r_3, r_{11}) = (vEe,\; vE^2e,\; vE^3e,\; vEREe) \quad (2.3)

where v = (ν4/(ν3+ν4), ν3/(ν3+ν4)). Again, by moment matching, (ν1, ν2, ν3, ν4) can be determined from (r1, r2, r3, r11).


3. Joint Distribution and Lag-1 Conditional Distribution

In this section, we propose a new approach to generating a MAP without (D0, D1). We present closed-form formulas for transforming the joint LT into an explicit joint distribution from which stationary intervals are generated.

Since the moments can be obtained from the marginal and joint LTs in (2.1) and (2.2), the LT coefficients (a0, a1, b1, c11) can be uniquely determined from the moments as follows

(a_0, a_1) = \left(\frac{r_1^2 - r_2}{r_2^2 - r_1 r_3},\; \frac{r_1 r_2 - r_3}{r_2^2 - r_1 r_3}\right), \qquad b_1 = \frac{2r_1r_2 - r_1^3 - r_3}{r_2^2 - r_1 r_3}, \qquad c_{11} = a_1^2 - 2a_0a_1r_1 + a_0^2 r_{11},

of which details can be found in Kim (2016, 2017).
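The closed-form map from moments to LT coefficients can be written down directly; the sketch below uses exact rational arithmetic and reproduces the values of the first example in Section 4.1:

```python
from fractions import Fraction as F

def lt_coefficients(r1, r2, r3, r11):
    """Map the minimal moments (r1, r2, r3, r11) of a MAP(2) to the
    Laplace-transform coefficients (a0, a1, b1, c11)."""
    d = r2 * r2 - r1 * r3            # common denominator r2^2 - r1*r3
    a0 = (r1 * r1 - r2) / d
    a1 = (r1 * r2 - r3) / d
    b1 = (2 * r1 * r2 - r1 ** 3 - r3) / d
    c11 = a1 * a1 - 2 * a0 * a1 * r1 + a0 * a0 * r11
    return a0, a1, b1, c11

# Moments of the first numerical example in Section 4.1.
a0, a1, b1, c11 = lt_coefficients(F(5, 21), F(17, 294), F(39, 2744), F(67, 1176))
print(a0, a1, b1, c11)   # 28 11 13/3 19
```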

For the inversion of the marginal and joint LTs, we consider the case of distinct eigenvalues and the case of identical eigenvalues separately. First, let X and Y be the random variables representing two consecutive stationary intervals of a MAP(2).

3.1 Distinct eigenvalues

If (-D0)^{-1} has distinct eigenvalues, then (-1/ν1, -1/ν2) can be determined as the roots of the characteristic polynomial equation s^2 + a1s + a0 = 0. Assuming, WLOG, that ν1 > ν2, we have

(\nu_1, \nu_2) = \left(\frac{a_1 + \sqrt{a_1^2 - 4a_0}}{2a_0},\; \frac{a_1 - \sqrt{a_1^2 - 4a_0}}{2a_0}\right),

and

(\nu_3, \nu_4) = \left(\frac{r_1(\nu_1+\nu_2) - r_{11} - \nu_1\nu_2}{(r_1-\nu_2)(\nu_1-\nu_2)},\; \frac{r_1(\nu_1+\nu_2) - r_{11} - \nu_1\nu_2}{(r_1-\nu_1)(\nu_2-\nu_1)}\right)

by moment matching based on Eq. (2.3). Again, by moment matching, the following equations can be obtained to write the marginal and joint LTs in (2.1) and (2.2) in terms of (ν1, ν2, ν3, ν4)

(a_0, a_1, b_1) = \left(\frac{1}{\nu_1\nu_2},\; \frac{\nu_1+\nu_2}{\nu_1\nu_2},\; \frac{\nu_1\nu_3+\nu_2\nu_4}{\nu_1\nu_2(\nu_3+\nu_4)}\right),
c_{11} = \frac{\nu_1^2\nu_3(1-\nu_4) + \nu_2^2\nu_4(1-\nu_3) + 2\nu_1\nu_2\nu_3\nu_4}{\nu_1^2\nu_2^2(\nu_3+\nu_4)}.
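The two steps above (eigenvalues from the characteristic polynomial, then (ν3, ν4) by moment matching) can be sketched as follows; the function name is ours, and the example values are those of Section 4.1 (1):

```python
import math

def jordan_parameters(a0, a1, r1, r11):
    """Recover (nu1, nu2, nu3, nu4) of a MAP(2) with distinct eigenvalues
    from the LT coefficients (a0, a1) and the moments (r1, r11)."""
    disc = math.sqrt(a1 * a1 - 4 * a0)
    nu1 = (a1 + disc) / (2 * a0)          # larger eigenvalue of (-D0)^{-1}
    nu2 = (a1 - disc) / (2 * a0)
    num = r1 * (nu1 + nu2) - r11 - nu1 * nu2
    nu3 = num / ((r1 - nu2) * (nu1 - nu2))
    nu4 = num / ((r1 - nu1) * (nu2 - nu1))
    return nu1, nu2, nu3, nu4

# Example of Section 4.1 (1): (a0, a1) = (28, 11), r1 = 5/21, r11 = 67/1176.
nu1, nu2, nu3, nu4 = jordan_parameters(28.0, 11.0, 5 / 21, 67 / 1176)
# nu approximately (1/4, 1/7, 1/12, 2/3)
```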

By inversion of the LT in (2.1) and (2.2) given in terms of (ν1, ν2, ν3, ν4), we get the following marginal and lag-1 joint density functions

f(x) = \frac{\phi_1}{\nu_1}e^{-x/\nu_1} + \frac{1-\phi_1}{\nu_2}e^{-x/\nu_2} \quad (3.1)
f(x,y) = \frac{\phi_{1,1}}{\nu_1^2}e^{-(x+y)/\nu_1} + \frac{\phi_{1,2}}{\nu_1\nu_2}e^{-x/\nu_1 - y/\nu_2} + \frac{\phi_{2,1}}{\nu_2\nu_1}e^{-x/\nu_2 - y/\nu_1} + \frac{\phi_{2,2}}{\nu_2^2}e^{-(x+y)/\nu_2} \quad (3.2)

where

(\phi_1, \phi_2) = \left(\frac{\nu_4}{\nu_3+\nu_4},\; \frac{\nu_3}{\nu_3+\nu_4}\right), \qquad \phi_{1,1} = \frac{\nu_4(1-\nu_3)}{\nu_3+\nu_4}, \qquad \phi_{1,2} = \phi_{2,1} = \frac{\nu_3\nu_4}{\nu_3+\nu_4}, \qquad \phi_{2,2} = \frac{\nu_3(1-\nu_4)}{\nu_3+\nu_4}.

Note that (ϕ1, ϕ2) = (ϕ1,1+ϕ1,2, ϕ2,1+ϕ2,2). Depending on the sign of ν3, the marginal distribution of a stationary interval is either hyper-exponential or mixed generalized Erlang (MGE). In order to simplify the notation, let u1(x) and u2(x) be the exponential density functions with mean ν1 and ν2, respectively, i.e.

u_1(x) = \frac{1}{\nu_1}e^{-x/\nu_1}, \qquad u_2(x) = \frac{1}{\nu_2}e^{-x/\nu_2}

Also, let u12(x) be the density function of the hypo-exponential distribution, i.e. the sum of two independent exponential random variables with means ν1 and ν2:

u_{12}(x) = \frac{e^{-x/\nu_1}}{\nu_1-\nu_2} + \frac{e^{-x/\nu_2}}{\nu_2-\nu_1}

Below, we denote this hypo-exponential distribution by Hypoexp(1/ν1, 1/ν2).

(1) Hyper-exponential distribution: ν3 > 0

If ν3 > 0, then we have ϕ1 > 0 and ϕ2 > 0. Since ϕ1 + ϕ2 = 1, the marginal distribution of a stationary interval in (3.1) can be rewritten as

f(x) = \phi_1 u_1(x) + \phi_2 u_2(x)

and therefore X is hyper-exponential, i.e.

X \sim \begin{cases} \mathrm{Exp}(1/\nu_1) & \text{w.p. } \phi_1 \\ \mathrm{Exp}(1/\nu_2) & \text{w.p. } \phi_2 \end{cases} \quad (3.3)

The joint density function (3.2) can be written as

f(x,y) = \phi_{1,1}u_1(x)u_1(y) + \phi_{1,2}u_1(x)u_2(y) + \phi_{2,1}u_2(x)u_1(y) + \phi_{2,2}u_2(x)u_2(y).

By the definition of conditional probability, the conditional density function is given as

f(y \mid x) = \frac{\phi_{1,1}u_1(x) + \phi_{2,1}u_2(x)}{\phi_1 u_1(x) + \phi_2 u_2(x)}\,u_1(y) + \frac{\phi_{1,2}u_1(x) + \phi_{2,2}u_2(x)}{\phi_1 u_1(x) + \phi_2 u_2(x)}\,u_2(y)

by which the conditional distribution of Y given X is again hyper-exponential. That is, if X ~ Exp(1/ν1), then the conditional distribution of Y is

Y \mid X \sim \mathrm{Exp}(1/\nu_1) \;\sim\; \begin{cases} \mathrm{Exp}(1/\nu_1) & \text{w.p. } \phi_{1,1}/\phi_1 \\ \mathrm{Exp}(1/\nu_2) & \text{w.p. } \phi_{1,2}/\phi_1 \end{cases} \quad (3.4)

Otherwise, if X ~ Exp(1/ν2), then the conditional distribution of Y is

Y \mid X \sim \mathrm{Exp}(1/\nu_2) \;\sim\; \begin{cases} \mathrm{Exp}(1/\nu_1) & \text{w.p. } \phi_{2,1}/\phi_2 \\ \mathrm{Exp}(1/\nu_2) & \text{w.p. } \phi_{2,2}/\phi_2 \end{cases} \quad (3.5)
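The generation scheme of Eqs. (3.3)-(3.5) can be sketched as a two-phase sampler; the function below is our illustration (name and structure ours, not code from the paper): the first interval is drawn from the marginal mixture, and each subsequent interval from the lag-1 conditional mixture, so no (D0, D1) is needed.

```python
import random

def simulate_map2_hyperexp(nu1, nu2, nu3, nu4, n, rng=None):
    """Generate n stationary intervals of a MAP(2) in the hyper-exponential
    case (nu3 > 0): first interval from (3.3), subsequent intervals from
    the conditional mixtures (3.4)-(3.5)."""
    rng = rng or random.Random(0)
    s = nu3 + nu4
    phi1, phi2 = nu4 / s, nu3 / s            # marginal mixing weights
    phi11 = nu4 * (1 - nu3) / s              # joint mixing weights
    phi12 = phi21 = nu3 * nu4 / s
    intervals = []
    phase = 1 if rng.random() < phi1 else 2  # component of the first interval
    for _ in range(n):
        intervals.append(rng.expovariate(1 / (nu1 if phase == 1 else nu2)))
        if phase == 1:                       # conditional mixture (3.4)
            phase = 1 if rng.random() < phi11 / phi1 else 2
        else:                                # conditional mixture (3.5)
            phase = 1 if rng.random() < phi21 / phi2 else 2
    return intervals

# Example of Section 4.1 (1): nu = (1/4, 1/7, 1/12, 2/3); mean interval r1 = 5/21.
xs = simulate_map2_hyperexp(1 / 4, 1 / 7, 1 / 12, 2 / 3, 100000)
```

The sample mean of the generated intervals should be close to r1 = 5/21.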
(2) MGE(2): ν3 < 0

If ν3 < 0, then we have ϕ2 < 0 and the marginal distribution in (3.1) is not hyper-exponential. By a simple manipulation, however, it can be shown that the marginal distribution is a mixed generalized Erlang, i.e.

f(x) = \psi_1 u_1(x) + \psi_{12} u_{12}(x)

where

(\psi_1, \psi_{12}) = \left(\phi_1 + \phi_2\frac{\nu_1}{\nu_2},\; \phi_2 - \phi_2\frac{\nu_1}{\nu_2}\right) = \left(\frac{\nu_1\nu_3 + \nu_2\nu_4}{\nu_2(\nu_3+\nu_4)},\; \frac{(\nu_2-\nu_1)\nu_3}{\nu_2(\nu_3+\nu_4)}\right).

That is,

X \sim \begin{cases} \mathrm{Exp}(1/\nu_1) & \text{w.p. } \psi_1 \\ \mathrm{Hypoexp}(1/\nu_1, 1/\nu_2) & \text{w.p. } \psi_{12} \end{cases} \quad (3.6)

The joint density function is written as

f(x,y) = \psi_{1,1}u_1(x)u_1(y) + \psi_{1,12}u_1(x)u_{12}(y) + \psi_{12,1}u_{12}(x)u_1(y) + \psi_{12,12}u_{12}(x)u_{12}(y)

where

\psi_{1,1} = \phi_{1,1} + (\phi_{1,2}+\phi_{2,1})\frac{\nu_1}{\nu_2} + \phi_{2,2}\frac{\nu_1^2}{\nu_2^2}, \qquad \psi_{1,12} = \frac{\nu_2-\nu_1}{\nu_2^2}\left(\nu_2\phi_{1,2} + \nu_1\phi_{2,2}\right), \qquad \psi_{12,1} = \frac{\nu_2-\nu_1}{\nu_2^2}\left(\nu_2\phi_{2,1} + \nu_1\phi_{2,2}\right), \qquad \psi_{12,12} = \frac{(\nu_1-\nu_2)^2}{\nu_2^2}\phi_{2,2}.

By the definition of conditional probability, the conditional density function is given as

f(y \mid x) = \frac{\psi_{1,1}u_1(x) + \psi_{12,1}u_{12}(x)}{\psi_1 u_1(x) + \psi_{12}u_{12}(x)}\,u_1(y) + \frac{\psi_{1,12}u_1(x) + \psi_{12,12}u_{12}(x)}{\psi_1 u_1(x) + \psi_{12}u_{12}(x)}\,u_{12}(y)

by which the conditional distribution of Y given X is either exponential or hypo-exponential. That is, if X ~ Exp(1/ν1), then the conditional distribution of Y is

Y \mid X \sim \mathrm{Exp}(1/\nu_1) \;\sim\; \begin{cases} \mathrm{Exp}(1/\nu_1) & \text{w.p. } \psi_{1,1}/\psi_1 \\ \mathrm{Hypoexp}(1/\nu_1, 1/\nu_2) & \text{w.p. } \psi_{1,12}/\psi_1 \end{cases} \quad (3.7)

or if X ~ Hypoexp(1/ν1, 1/ν2), then the conditional distribution of Y is

Y \mid X \sim \mathrm{Hypoexp}(1/\nu_1, 1/\nu_2) \;\sim\; \begin{cases} \mathrm{Exp}(1/\nu_1) & \text{w.p. } \psi_{12,1}/\psi_{12} \\ \mathrm{Hypoexp}(1/\nu_1, 1/\nu_2) & \text{w.p. } \psi_{12,12}/\psi_{12} \end{cases} \quad (3.8)
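The mixing weights used in (3.6)-(3.8) can be computed from (ν1, ν2, ν3, ν4) as above; a sketch in exact rational arithmetic, checked against the example of Section 4.1 (2):

```python
from fractions import Fraction as F

def mge_weights(nu1, nu2, nu3, nu4):
    """Marginal weights (psi1, psi12) and joint weights
    (psi_{1,1}, psi_{1,12}, psi_{12,1}, psi_{12,12}) for the
    MGE(2) case nu3 < 0 of Section 3.1 (2)."""
    s = nu3 + nu4
    phi11 = nu4 * (1 - nu3) / s
    phi12 = phi21 = nu3 * nu4 / s
    phi22 = nu3 * (1 - nu4) / s
    psi1 = (nu1 * nu3 + nu2 * nu4) / (nu2 * s)
    psi12 = (nu2 - nu1) * nu3 / (nu2 * s)
    r = nu1 / nu2
    psi_1_1 = phi11 + (phi12 + phi21) * r + phi22 * r * r
    psi_1_12 = (nu2 - nu1) / nu2 ** 2 * (nu2 * phi12 + nu1 * phi22)
    psi_12_1 = (nu2 - nu1) / nu2 ** 2 * (nu2 * phi21 + nu1 * phi22)
    psi_12_12 = (nu1 - nu2) ** 2 / nu2 ** 2 * phi22
    return psi1, psi12, psi_1_1, psi_1_12, psi_12_1, psi_12_12

# Example of Section 4.1 (2): nu = (1/2, 1/3, -1/2, 4/3).
w = mge_weights(F(1, 2), F(1, 3), F(-1, 2), F(4, 3))
print([str(x) for x in w])  # ['7/10', '3/10', '9/20', '1/4', '1/4', '1/20']
```

Note that the weights sum to one across each row, e.g. psi_1_1 + psi_1_12 = psi1.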
3.2 Identical Eigenvalues

If (-D0)^{-1} has identical eigenvalues, i.e. ν1 = ν2, then we have a1^2 = 4a0 in Eq. (2.1), so that ν1 = 2/a1, and

(\nu_3, \nu_4) = \left(\frac{(r_{11}-r_2)(1-r_1+\nu_1)}{(r_1-\nu_1)^2},\; \frac{r_{11}-r_2}{r_1-\nu_1}\right)

along with

(a_0, a_1, b_1) = \left(\frac{1}{\nu_1^2},\; \frac{2}{\nu_1},\; \frac{1}{\nu_1}\left(1 - \frac{\nu_4}{\nu_1(\nu_3+\nu_4)}\right)\right), \qquad c_{11} = \frac{1}{\nu_1^2}\left(1 - \frac{(2\nu_1-\nu_4)\nu_4}{\nu_1^2(\nu_3+\nu_4)}\right).

Let u11(x) be the density function of the Erlang(2, 1/ν1) distribution, i.e. the sum of two independent and identically distributed exponential random variables with mean ν1:

u_{11}(x) = \frac{x e^{-x/\nu_1}}{\nu_1^2}.

Then, by inverse LT, the marginal LT in Eq. (2.1) is converted into the following marginal density function

f(x) = \psi_1 u_1(x) + \psi_{11}u_{11}(x)

where (ψ1, ψ11) = (1 - ϕ1/ν1, ϕ1/ν1) with ϕ1 = ν4/(ν3+ν4). That is,

X \sim \begin{cases} \mathrm{Exp}(1/\nu_1) & \text{w.p. } \psi_1 \\ \mathrm{Erlang}(2, 1/\nu_1) & \text{w.p. } \psi_{11} \end{cases} \quad (3.9)

The joint LT given in Eq. (2.2), by inverse LT, is converted into the following joint density function.

f(x,y) = \psi_{1,1}u_1(x)u_1(y) + \psi_{1,11}u_1(x)u_{11}(y) + \psi_{11,1}u_{11}(x)u_1(y) + \psi_{11,11}u_{11}(x)u_{11}(y)

where

\psi_{1,1} = c_{11}\nu_1^2, \qquad \psi_{1,11} = \psi_{11,1} = b_1\nu_1 - c_{11}\nu_1^2, \qquad \psi_{11,11} = 1 - 2b_1\nu_1 + c_{11}\nu_1^2.

By the definition of conditional probability, the conditional density function is given as

f(y \mid x) = \frac{\psi_{1,1}u_1(x) + \psi_{11,1}u_{11}(x)}{\psi_1 u_1(x) + \psi_{11}u_{11}(x)}\,u_1(y) + \frac{\psi_{1,11}u_1(x) + \psi_{11,11}u_{11}(x)}{\psi_1 u_1(x) + \psi_{11}u_{11}(x)}\,u_{11}(y)

by which the conditional distribution of Y given X is either exponential or Erlang. That is, if X ~ Exp(1/ν1), then the conditional distribution of Y is

Y \mid X \sim \mathrm{Exp}(1/\nu_1) \;\sim\; \begin{cases} \mathrm{Exp}(1/\nu_1) & \text{w.p. } \psi_{1,1}/\psi_1 \\ \mathrm{Erlang}(2, 1/\nu_1) & \text{w.p. } \psi_{1,11}/\psi_1 \end{cases} \quad (3.10)

or if X ~ Erlang(2,1/ν1), then the conditional distribution of Y is

Y \mid X \sim \mathrm{Erlang}(2, 1/\nu_1) \;\sim\; \begin{cases} \mathrm{Exp}(1/\nu_1) & \text{w.p. } \psi_{11,1}/\psi_{11} \\ \mathrm{Erlang}(2, 1/\nu_1) & \text{w.p. } \psi_{11,11}/\psi_{11} \end{cases} \quad (3.11)
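For the identical-eigenvalue case, the mixing weights can also be obtained directly from (b1, c11, ν1); this reparametrization (ψ1 = b1ν1, ψ1,1 = c11ν1^2) is our derivation, checked against the example of Section 4.2:

```python
from fractions import Fraction as F

def erlang_weights(b1, c11, nu1):
    """Marginal and joint mixing weights for the identical-eigenvalue
    case of Section 3.2, written in terms of (b1, c11, nu1); this
    reparametrization is ours, consistent with the Section 4.2 example."""
    psi1 = b1 * nu1                 # weight of Exp(1/nu1) in (3.9)
    psi11 = 1 - psi1                # weight of Erlang(2, 1/nu1) in (3.9)
    psi_1_1 = c11 * nu1 ** 2        # joint weight of u1(x)u1(y)
    psi_1_11 = psi1 - psi_1_1       # equals psi_11_1 by symmetry
    psi_11_11 = psi11 - psi_1_11
    return psi1, psi11, psi_1_1, psi_1_11, psi_11_11

# Example of Section 4.2: (b1, c11, nu1) = (15/7, 31/7, 1/3).
w = erlang_weights(F(15, 7), F(31, 7), F(1, 3))
print([str(x) for x in w])  # ['5/7', '2/7', '31/63', '14/63', '4/63']
```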

4. Numerical Examples
4.1 Distinct Eigenvalues: ν1 ≠ ν2
(1) Hyper-exponential Distribution: ν3 > 0

Consider a MAP(2) with the following set of moments (r1, r2, r3, r11) = (5/21, 17/294, 39/2744, 67/1176), for which we have (a0, a1, b1, c11) = (28, 11, 13/3, 19), (ν1, ν2) = (1/4, 1/7), and (ν3, ν4) = (1/12, 2/3). Since (ϕ1, ϕ2) = (8/9, 1/9), the marginal distribution is hyper-exponential and can be generated as follows

X \sim \begin{cases} \mathrm{Exp}(4) & \text{w.p. } 8/9 \\ \mathrm{Exp}(7) & \text{w.p. } 1/9 \end{cases}

by Eq. (3.3). Since (ϕ1,1, ϕ1,2, ϕ2,1, ϕ2,2) = (22/27, 2/27, 2/27, 1/27), the next interval Y can be generated conditionally on the distribution of X, i.e.

Y \mid X \sim \mathrm{Exp}(4) \;\sim\; \begin{cases} \mathrm{Exp}(4) & \text{w.p. } 11/12 \\ \mathrm{Exp}(7) & \text{w.p. } 1/12 \end{cases}

or

Y \mid X \sim \mathrm{Exp}(7) \;\sim\; \begin{cases} \mathrm{Exp}(4) & \text{w.p. } 2/3 \\ \mathrm{Exp}(7) & \text{w.p. } 1/3 \end{cases}

by Eqs. (3.4) and (3.5). The next interval is then generated conditionally on the distribution of Y.
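The conditional mixing probabilities quoted above follow directly from the ϕ weights; a quick check in exact arithmetic:

```python
from fractions import Fraction as F

# Mixing weights of this example, from nu = (1/4, 1/7, 1/12, 2/3).
nu3, nu4 = F(1, 12), F(2, 3)
s = nu3 + nu4
phi1, phi2 = nu4 / s, nu3 / s
phi11 = nu4 * (1 - nu3) / s
phi12 = phi21 = nu3 * nu4 / s
phi22 = nu3 * (1 - nu4) / s

# Mixture given X ~ Exp(4), Eq. (3.4), and given X ~ Exp(7), Eq. (3.5).
print(phi11 / phi1, phi12 / phi1)   # 11/12 1/12
print(phi21 / phi2, phi22 / phi2)   # 2/3 1/3
```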

(2) MGE(2): ν3 < 0

Consider a MAP(2) with the following set of moments (r1, r2, r3, r11) = (3/5, 1/3, 8/45, 16/45) for which we have (a0, a1, b1, c11) = (6, 5, 7/5, 9/5), (ν1, ν2) = (1/2, 1/3), and (ν3, ν4) = (-1/2, 4/3). Since (ϕ1, ϕ2) = (8/5, -3/5), the marginal distribution is MGE(2) which can be generated as follows

X \sim \begin{cases} \mathrm{Exp}(2) & \text{w.p. } 7/10 \\ \mathrm{Hypoexp}(2,3) & \text{w.p. } 3/10 \end{cases}

by Eq. (3.6). Since (ψ1,1, ψ1,12, ψ12,1, ψ12,12) = (9/20, 1/4, 1/4, 1/20), the next interval Y can be generated conditionally on the distribution of X, i.e.

Y \mid X \sim \mathrm{Exp}(2) \;\sim\; \begin{cases} \mathrm{Exp}(2) & \text{w.p. } 9/14 \\ \mathrm{Hypoexp}(2,3) & \text{w.p. } 5/14 \end{cases}

or

Y \mid X \sim \mathrm{Hypoexp}(2,3) \;\sim\; \begin{cases} \mathrm{Exp}(2) & \text{w.p. } 5/6 \\ \mathrm{Hypoexp}(2,3) & \text{w.p. } 1/6 \end{cases}

by Eqs. (3.7) and (3.8). The next interval is then generated conditionally on the distribution of Y.

4.2 Identical Eigenvalues: ν1 = ν2

Consider a MAP(2) with the following set of moments (r1, r2, r3, r11) = (3/7, 11/63, 13/189, 103/567), for which we have (a0, a1, b1, c11) = (9, 6, 15/7, 31/7), ν1 = ν2 = 1/3, and (ν3, ν4) = (19/27, 2/27). We then have (ϕ1, ϕ2) = (2/21, 19/21), and the marginal distribution is a mixture of exponential and Erlang distributions which can be generated as follows

X \sim \begin{cases} \mathrm{Exp}(3) & \text{w.p. } 5/7 \\ \mathrm{Erlang}(2,3) & \text{w.p. } 2/7 \end{cases}

by Eq. (3.9). Since (ψ1,1, ψ1,11, ψ11,1, ψ11,11) = (31/63, 14/63, 14/63, 4/63), the next interval Y can be generated conditionally on the distribution of X, i.e.

Y \mid X \sim \mathrm{Exp}(3) \;\sim\; \begin{cases} \mathrm{Exp}(3) & \text{w.p. } 31/45 \\ \mathrm{Erlang}(2,3) & \text{w.p. } 14/45 \end{cases}

or

Y \mid X \sim \mathrm{Erlang}(2,3) \;\sim\; \begin{cases} \mathrm{Exp}(3) & \text{w.p. } 7/9 \\ \mathrm{Erlang}(2,3) & \text{w.p. } 2/9 \end{cases}

by Eqs. (3.10) and (3.11). The next interval is then generated conditionally on the distribution of Y.


5. Conclusions

The one-to-one correspondence between the minimal set of moments and the minimal LT representation enables us to find the joint distribution function of the stationary intervals of a MAP(n) by a linear transformation from moments to the minimal LT and then by inversion of the LT. By simple algebraic manipulation of the inverse LT, a joint distribution can be obtained by which a MAP can be generated without the transition rate matrices. The presented analytic procedure is only for MAP(2)s, for which a canonical form is known for the transformation from moments to the Markovian representation. However, no canonical transformation is available for MAPs of order 3 or higher, for which a generalization of our approach can be useful in modeling and simulating a queueing system with a MAP. Unlike the MAP(2), whose eigenvalues are always real-valued, higher order MAPs may have complex-valued eigenvalues, in which case the MAP can be represented as a repetition of a real-valued MAP. A natural direction of future research is the study of MAPs with cyclic Markov chains and trigonometric distribution functions due to complex eigenvalues.


References
1. Bodrog, L., Heindl, A., Horváth, G., and Telek, M. (2008), A Markovian canonical form of second-order matrix-exponential processes, European Journal of Operational Research, 190(2), 459-477.
2. Bodrog, L., Horváth, G., and Telek, M. (2008), Moment characterization of matrix exponential and Markovian arrival processes, Annals of Operations Research, 160(1), 51-68.
3. Casale, G., Zhang, E., and Smirni, E. (2010), Trace data characterization and fitting for Markov modeling, Performance Evaluation, 67, 61-79.
4. Kim, S. (2011), Modeling cross correlation in three-moment four-parameter decomposition approximation of queueing networks, Operations Research, 59(2), 480-497.
5. Kim, S. (2016), Minimal LST representations of MAP(n)s: Moment fittings and queueing approximations, Naval Research Logistics, 63(7), 549-561.
6. Kim, S. (2017), The characteristic polynomial and the Laplace representations of MAP(2)s, Stochastic Models, 33(1), 30-47.
7. Kim, S. (2020), Jordan representation of the Markovian arrival process of order 2 and moments fitting, Journal of the Korean Operations Research and Management Science Society, 45(1), 1-10.
8. Ramirez-Cobo, P., Lillo, R., and Wiper, M. (2010), Nonidentifiability of the two-state Markovian arrival process, Journal of Applied Probability, 47, 630-649.
9. Ramirez-Cobo, P. and Carrizosa, E. (2012), A note on the dependence structure of the two-state Markovian arrival process, Journal of Applied Probability, 49, 295-302.
10. Telek, M. and Horváth, G. (2007), A minimal representation of Markov arrival processes and a moments matching method, Performance Evaluation, 64(9-12), 1153-1168.

About the Author

Sunkyo Kim received his bachelor's degree from the Department of Business Administration at Seoul National University in 1987, his master's degree from the University of California at Berkeley in 1995, and his Ph.D. in Industrial Engineering from Purdue University in 2000. He has been a professor in the School of Business at Ajou University since 2002. His research interests include queueing network analysis and the Markovian arrival process.