Transient analysis of one-sided Lévy-driven queues

In this paper we analyze the transient behavior of the workload process in a Lévy input queue. We are interested in the value of the workload process at a random epoch; this epoch is distributed as the sum of independent exponential random variables. We consider both cases of spectrally one-sided Lévy input processes, for which we succeed in deriving explicit results. As an application we approximate the mean and the Laplace transform of the workload process after a deterministic time.


Introduction
This article studies the transient workload in a queue fed by a Lévy input process X = {X_t}_{t≥0}; here the workload process, in the sequel denoted by {Q_t}_{t≥0}, is defined as the reflection of X at zero. This workload process can be constructed from the input process X as the (unique) solution of the so-called Skorokhod problem [6,13,14]. It turns out that the process Q follows from X through Q_t = Q_0 + X_t + L_t, where L_t := −min{0, inf_{0≤s≤t}(Q_0 + X_s)}. The process {L_t}_{t≥0} is often referred to as the local time (at zero) or regulator process [9].
As mentioned above, we are interested in the transient behavior of the workload process. In queueing theory, transient analysis is a classical topic that is treated in various standard textbooks [2,5,12]. Typically, transient analysis is important in situations where the time horizon considered is relatively short, so that it cannot be guaranteed that the system is 'close to stationarity'. In addition, transient results are useful in cases where the net-input process changes over time; they facilitate, for instance, the analysis of systems with time-varying demand, as well as the assessment of the impact of specific workload control mechanisms. In general, transient analysis allows us to assess the impact of the initial state Q_0.
The main contribution of this article is the generalization of the existing results on the transient behavior of the workload process {Q_t}_{t≥0}. We consider n independent exponentially distributed random variables T_1, ..., T_n with parameters q_1, ..., q_n, and we analyze the joint behavior of the vector (Q_{T_1}, Q_{T_1+T_2}, ..., Q_{T_1+···+T_n}), with a specific focus on Q_{T_1+···+T_n}. It is noted that this also directly yields Q_T when T follows a Coxian distribution; see Section 6 for some additional background on this claim. This observation is particularly useful owing to the fact that any distribution on the positive half line can be approximated arbitrarily closely by a sequence of Coxian distributions (in the sense of weak convergence; see Section 3.4 of Ref. [2]). For the case of a spectrally positive input process, the results are given in terms of the Laplace-Stieltjes transform (LST), whereas we find an expression for the associated density for the spectrally negative case.
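The approximation of a deterministic epoch by sums of exponential phases is easy to illustrate numerically. The sketch below (our own illustration, not the paper's procedure) samples Erlang-distributed epochs, i.e., sums of n i.i.d. exponentials with parameter n/t, and shows that they concentrate around the deterministic time t as n grows, since the variance t²/n vanishes:

```python
import random

def erlang_sample(n, rate, rng):
    """Sample a sum of n i.i.d. exponential(rate) random variables."""
    return sum(rng.expovariate(rate) for _ in range(n))

t = 1.0
rng = random.Random(42)
variances = []
for n in (1, 4, 16, 64):
    # An Erlang(n, n/t) epoch has mean t and variance t^2/n, so it
    # concentrates around the deterministic time t as n grows.
    samples = [erlang_sample(n, n / t, rng) for _ in range(20000)]
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    variances.append(var)
    print(f"n={n:3d}  mean={mean:.3f}  var={var:.4f}")
```

The shrinking variance is the mechanism behind the deterministic-time approximation revisited in Section 5.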
Apart from the general results obtained, a second contribution lies in the reasoning behind our proofs. More specifically, our proofs reveal that the above formulae obey an elegant and simple tree structure. The transient workload behavior consists of 2^n terms that can be recursively evaluated. We prove our results by induction; given that we know the expression for the quantity under consideration at n − 1 exponential epochs, we derive the expression at n exponential epochs. In this induction step (from n − 1 to n, that is), it can be seen how each term produces two offspring terms, thus giving insight into the underlying structure. The idea behind the proofs yields a mechanism to address questions related to transient analysis at random epochs, which may help in obtaining a deeper understanding of the behavior of the underlying continuous-time queueing system. Transient analysis of queueing systems started with the analysis of the waiting times in the M/M/1 queue [12]. In Refs. [3,15], the authors analyzed the LST of the waiting time process in the M/G/1 queue. The argument used there was also applied in Ref. [6], so as to derive Theorem 2.1.1 below for the case of a compound Poisson input process. The transient analysis of Lévy-driven queues is of a much more recent date; see Refs. [2,6,8] for results on the workload process in a Lévy-driven queue at an exponential epoch (which are briefly summarized in Section 2). As a direct application, the authors of Ref. [4] studied clearing models, where special attention is paid to clearings at exponential epochs (relying on results on the workload at an exponential epoch in an M/G/1 setting).
Concerning the structure of the article, in Section 2 we present our notation, as well as the preliminaries that are needed in order to prove our results. In Sections 3 and 4, we present and prove the main results of the article, which are Theorems 3.1.1 and 4.1.1. Section 3 concerns the spectrally positive case and Section 4 the spectrally negative case. We support the final results with intuitive arguments based on a tree structure. In Section 5, we illustrate how our findings can be applied to numerically study the transient behavior of Lévy-driven queues. Finally, Section 6 contains conclusions and a brief discussion.

Model, notation, and preliminaries
In this section, we present the workload at an exponential epoch for queues with spectrally positive (Section 2.1) and spectrally negative (Section 2.2) Lévy input processes. These results are heavily relied upon throughout the article, and in addition serve as a benchmark. In passing, we also introduce our notation.
Our interest is in the transient behavior of the workload process {Q_t}_{t≥0}. We consider an exponentially distributed random variable T with parameter q (sampled independently of the Lévy input process) and focus on the transform E_x e^{−αQ_T}, where α ≥ 0 and x denotes the initial workload. In this case, the transform E_x e^{−αQ_T} is explicitly known, and is given in the following theorem [6,8,15]. Theorem 2.1.1. Let X ∈ S^+ and let T be exponentially distributed with parameter q, independently of X. Then, for α ≥ 0 and x ≥ 0, the transform E_x e^{−αQ_T} admits an explicit expression. Using Laplace inversion techniques [1], information about the process can then be inferred from the LST, as it uniquely determines the distribution of Q_t, for each t and any initial workload x.
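For ease of reference, we record the form that the transform in Theorem 2.1.1 takes in the literature (see, e.g., Refs. [6,8]); the display below is our reconstruction of the standard formula, with φ(α) := log E e^{−αX_1} denoting the Laplace exponent of X and ψ(q) its right inverse:

```latex
\mathbb{E}_x\, e^{-\alpha Q_T}
  = \frac{q}{q-\varphi(\alpha)}
    \left( e^{-\alpha x} - \frac{\alpha}{\psi(q)}\, e^{-\psi(q)\,x} \right),
  \qquad \alpha \ge 0,\; x \ge 0,
```

with the right-hand side interpreted by continuity at α = ψ(q).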
When working with spectrally negative Lévy processes, the so-called q-scale functions W^{(q)}(·) and Z^{(q)}(·) play a crucial role, particularly when studying the fluctuation properties of the reflected process [6,11]. For q ≥ 0 and x ≥ 0, let W^{(q)}(x) be a strictly increasing and continuous function characterized through its Laplace transform; we let W^{(q)}(x) equal 0 for x < 0. From Theorem 8.1(i) of Ref. [9], it follows that such a function exists. Having defined the function W^{(q)}(·), we define the function Z^{(q)}(·) in terms of it. We immediately see the importance of the q-scale function in the density of the workload process at an exponential epoch, given in the following theorem (Section 4.2 in Ref. [6]), which is originally due to Pistorius [10]. Theorem 2.2.1. Let X ∈ S^− and let T be exponentially distributed with parameter q, independently of X. For α ≥ 0, x ≥ 0 and β > 0, the density of Q_T and the associated LST are available in closed form. The result on the LST of Q_T in the above theorem follows for β > Φ(q) by a direct computation from the density of Q_T; by a standard analytic continuation argument, the resulting expression then holds for any β > 0.
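In the standard notation of the fluctuation-theory literature (see, e.g., Chapter 8 of Ref. [9]), the two defining relations read as follows; here ψ denotes the Laplace exponent of the spectrally negative process X and Φ(q) its right inverse (our reconstruction, modulo notational conventions):

```latex
\int_0^\infty e^{-\beta x}\, W^{(q)}(x)\, dx = \frac{1}{\psi(\beta)-q},
  \qquad \beta > \Phi(q),
\qquad\text{and}\qquad
Z^{(q)}(x) = 1 + q \int_0^x W^{(q)}(y)\, dy .
```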

Spectrally positive case
In this section, we present our main result for the case in which the input process is spectrally positive, viz. Theorem 3.1.1. We first derive the workload behavior at two exponential epochs, as this clearly demonstrates how the various terms appear. We then present and prove our result; we mainly rely on mathematical induction and a recursion formula relating the desired coefficients. Finally, we elaborate on the mechanism for obtaining the workload at n exponential epochs, yielding an intuitively appealing tree structure.

Main result
Suppose we have a spectrally positive Lévy process X. We want to describe the behavior of the workload process {Q_t}_{t≥0} at consecutive exponential epochs. We do this by considering exponentially distributed random variables T_1, ..., T_n with distinct parameters q_1, ..., q_n and calculating, for α_i ≥ 0 and some initial workload x ≥ 0, the joint Laplace transform E_x exp(−α_1 Q_{T_1} − ··· − α_n Q_{T_1+···+T_n}). It is instructive to first illustrate how to derive an expression for the joint transform at two exponential epochs, i.e., for E_x e^{−α_1 Q_{T_1} − α_2 Q_{T_1+T_2}}. From Theorem 2.1.1, we have an expression for the transform E_x e^{−αQ_T}. Consider now two exponentially distributed random variables with parameters q_1, q_2. Conditioning on Q_{T_1}, in combination with applying Theorem 2.1.1 twice, then yields the expression in Equation (3.2). We see that by conditioning on the value of the workload at the first exponential epoch we can derive the transform at two exponential epochs. The above reasoning rests on the property that the process {Q_t}_{t≥0} is a Markov process.
Some special attention is needed for the case α_1 = 0 and q_1 = q_2, i.e., when T has an Erlang-2 distribution. From the last term in Equation (3.2), we see that an additional limiting argument is required. A straightforward application of l'Hôpital's rule then yields the expression for E_x e^{−αQ_T}, as given in Section 4.1 of Ref. [6].
The main idea for the case of n exponentially distributed random variables T_i is very similar: we condition on the workload at the first exponential epoch, thus obtaining Equation (3.3), which is used in combination with Theorem 2.1.1 to determine the transform at n exponential epochs given the joint transform at n − 1 epochs. In what follows, we use the following notation: let L^{(n)}_{(2^l j − 2^{l−1} + 1, l)} denote the coefficients of exp[−(α_1 + ··· + α_{l−1} + ψ(q_l))x] (where l = 2, 3, ..., n and j = 1, 2, ..., 2^{n−l}). We note here that the superscript (n) in these factors corresponds to the number of exponential random variables considered. We now proceed to the main result for the case of a spectrally positive input process, Theorem 3.1.1. The vectors q̄ = (q_1, ..., q_n) and ᾱ = (α_1, ..., α_n) are here explicitly included, so as to show the dependence of the coefficients on the q's and α's. Later on, these vectors are omitted to keep the notation concise.
Proof of Lemma 3.1.1. From Theorem 2.1.1, we see that for n = 1 the signs of the coefficients are +, −. For n = 2, from Equation (3.2), we see that the signs are +, −, −, +. Since we know how the terms are produced when we go from the step with n exponential epochs to the step with n + 1 exponential epochs (see Equation (3.3)), we see that the signs at every step can be represented by a tree graph. In this tree, row n consists of 2^n nodes and, starting from the left, the nodes represent the sign of every factor when the expression is written as in Equation (3.4).
We see that row n + 1 can be derived from row n by substituting every + in row n by the pair +, −, and every − by the pair −, +. We can understand why this holds by looking at the expression in Theorem 2.1.1. Denote by c_(j,n) the sign of the jth element in the nth row of the above tree. Then c_(j,n), for j = 1, 2, ..., 2^n, corresponds to the sign of the jth coefficient when considering n exponentially distributed random variables in Equation (3.4).
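The substitution rule just described can be checked mechanically. The following sketch (our own illustration; the characterization of the sign in terms of the binary expansion of j − 1 is our reading of Lemma 3.1.1) generates the sign rows and verifies both the binary-expansion property and the symmetry between positions j and j + 2^{n−1}:

```python
def sign_rows(n):
    """Rows of the sign tree for the spectrally positive case."""
    row = [+1, -1]          # row 1, from Theorem 2.1.1: signs +, -
    rows = [row]
    for _ in range(n - 1):
        # Each '+' produces the pair '+','-' and each '-' produces '-','+',
        # i.e. every sign s produces the pair (s, -s).
        row = [t for s in row for t in (s, -s)]
        rows.append(row)
    return rows

rows = sign_rows(6)
for n, row in enumerate(rows, start=1):
    assert len(row) == 2 ** n
    for j, s in enumerate(row, start=1):
        # Sign of the j-th coefficient: parity of the number of ones
        # in the binary expansion of j - 1 (the Thue-Morse pattern).
        assert s == (-1) ** bin(j - 1).count("1")
    # Symmetry used in the proof: signs j and j + 2^(n-1) are opposite.
    half = 2 ** (n - 1)
    assert all(row[j] == -row[j + half] for j in range(half))
print("sign tree verified for rows 1..6")
```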
We observe that, because of symmetry, for the signs of the kth row we have c_(j,k) = −c_(j+2^{k−1},k) for j = 1, ..., 2^{k−1}. Hence the signs at positions j and j + 2^{k−1} in the kth row will always be opposite. We prove the lemma by induction on the number of exponentially distributed random variables.
(2) We assume that the lemma holds for n = k; hence the claimed sign expression holds for j = 1, ..., 2^k. Here, we make the following observation. In the tree presented above, consider an arbitrary row n. The 2^n signs of that row and the first 2^n signs of the (n + 1)th row are the same. Now consider the (k + 1)th row. Using the observation above and the induction hypothesis, the lemma holds for the first 2^k signs of this (k + 1)th row. Hence, we need to prove the statement only for j = 2^k + 1, ..., 2^{k+1}. We also know that the binary expansion of j + 2^k has one more 1 than the binary expansion of j, since we add 2^k. Before proceeding with the proof of Theorem 3.1.1, we present some general remarks which will be used later as well. (a) 2^l j − 2^{l−1} + 1 is an odd number.
The analogous indices appearing below are likewise always odd, and related parity identities hold. Proof of Theorem 3.1.1. We use induction on the number of exponential random variables T_1, ..., T_n. For the proof, it is sufficient to start with n = 1 (where it can readily be checked that Theorem 3.1.1 holds), but the case n = 2 is more instructive. The joint transform for n = 2 can be found in Equation (3.2).
Using Definition 3.1.1, we find the corresponding expressions for the coefficients; in particular, d_(1,2) = α_2 and d_(2,2) = 0, and the remaining factors follow from the table for the coefficients, with d_(2,4) = 0. For the last term, the coefficient of e^{−(α_1+ψ(q_2))x}, we note that d_(1,3) = ψ(q_2) and d_(2,3) = 0, so that the expression agrees with Equation (3.4); thus the result holds for n = 2. We now assume that our formula holds for n = k − 1. Hence, we have the representation in Equation (3.8), where the coefficients L^{(k−1)}_(2^l j − 2^{l−1} + 1, l) are given by Definition 3.1.1 for n = k − 1 and the signs of all the factors are given by Lemma 3.1.1. In the induction step, we prove the theorem for n = k given that it holds for n = k − 1. The expression for n = k is derived by calculating an integral in which the expectation is known by the induction hypothesis. Here, we see that we must raise all indices in Equation (3.8) by one when we do the calculations, because we start from time T_2 with parameter q_2 instead of from T_1. Combining the above with Equation (3.8), we obtain Equation (3.9). The two integrals in Equation (3.9) can be computed using Theorem 2.1.1. Each integral gives two new terms; the exponents are easily observed after an application of Theorem 2.1.1. Therefore, below we primarily focus on the coefficients.
When considering the integrals in Equation (3.9), the two terms obtained are referred to as the first and second term, and are denoted by adding a 1 or 2 as an index to I and II. We now successively consider (the coefficients of) I_1, II_1, I_2, and II_2.
• Coefficient of I_1. The resulting coefficient is given in Equation (3.10); this corresponds to the coefficient of the first term in Theorem 3.1.1.
• Coefficient of II_1. For l = 2, 3, ..., k, it is seen that the terms L^{(k)}_(2^l j − 2^{l−1} + 1, l) for j = 1, 2, ..., 2^{k−l} can be derived from the terms L^{(k−1)}_(2^{l−1} j − 2^{l−2} + 1, l−1) by taking the first term of the integrals. This follows from Definition 3.1.1 and the observation that the factor d̄_(i, 2^{l−1} j − 2^{l−2} + 1) initially was the factor added to the term α_{i−1} (this is why we use the notation d̄ for these terms); this is due to the fact that in Equation (3.11) all indices are raised by one. In order to bring this into the form of Definition 3.1.1, we observe the following. (1) Concerning the signs, we have the relation c_(2^{l−1} j − 2^{l−2} + 1, k−1) = c_(2^l j − 2^{l−1} + 1, k) for all l = 2, ..., k and j = 1, ..., 2^{k−l}. We see this as follows. From Lemma 3.1.1, it is sufficient to show that the numbers 2^{l−1} j − 2^{l−2} and 2^l j − 2^{l−1} have the same parity of ones in their binary expansions, which holds as 2^l j − 2^{l−1} = 2(2^{l−1} j − 2^{l−2}). Intuitively, we can see this from the tree graph in Figure 1; every time we move down and left, the sign stays the same. (2) Concerning the labeling of the terms, the indices are relabeled accordingly. (3) From the properties in Remark 3.1.3, we see that d_(1, 2^l j − 2^{l−1} + 1) = α_2 + ··· + α_{l−1} + ψ(q_l). The arguments in (1)-(3) show that the claimed recursion holds for l = 2, 3, ..., k and j = 1, 2, ..., 2^{k−l}.
• Coefficient of I_2. For the terms L^{(k)}_(2j,1) (the coefficients of exp[−ψ(q_1)x] for k exponentially distributed random variables), we observe that these are obtained from all terms in the previous step, one from each. The first term, L^{(k)}_(2,1), results from the corresponding integration; combining these facts yields the desired expression.
• Coefficient of II_2. Consider the terms L^{(k)}_(2^{l+1} j − 2^l + 2, 1) for l = 1, ..., k − 1 and j = 1, ..., 2^{k−1−l}. From the integral in Equation (3.15) we obtain, for l = 1, ..., k − 2 and j = 1, 2, ..., 2^{k−2−l}, the corresponding factors d̄_(i, 2^l j − 2^{l−1} + 1), where the associated index 2^l j − 2^{l−1} + 2 is even.
Using the same observation as in Equation (3.12), it is found that, for j = 1, ..., 2^{k−l−1} and i = l + 1, ..., k, the corresponding factors can be rewritten; from Remark 3.1.3 we obtain the analogous identities for i = 2, 3, ..., l. These observations allow us to write L^{(k)}_(2^{l+1} j − 2^l + 2, 1) as in Equation (3.16). Concerning the signs, we obtain the relation c_(2^{l+1} j − 2^l + 2, k) = −c_(2^l j − 2^{l−1} + 1, k−1), since the numbers 2^{l+1} j − 2^l + 1 = 2(2^l j − 2^{l−1}) + 1 and 2^l j − 2^{l−1} have opposite parities. This final expression agrees with those presented in Theorem 3.1.1. The expression in Equation (3.16) also yields the second recursion relation given in Equation (3.6). Now, we combine the above results to complete the proof. Using the coefficients of I_1 and I_2, i.e., Equations (3.10) and (3.14), we can rewrite Equation (3.9). Using the definition of L^{(k)}_(2,1), in conjunction with the coefficients of II_1 and II_2 and Definition 3.1.1, the resulting expression can be brought into the desired form. It remains to write the last sum in that form. This double sum has in total 2^{k−1} − 1 terms, and we observe that, for l = 2, ..., k and j = 1, ..., 2^{k−l}, the numbers 2^l j − 2^{l−1} + 2 define a partition of the even numbers 4, 6, ..., 2^k into k − 1 classes, each one containing 2^{k−l} numbers. Relabeling the terms with only one subscript, we can write this double sum as a single sum over i = 2^{l−1} j − 2^{l−2} + 1, for l = 2, ..., k and j = 1, ..., 2^{k−l}. From the above, it follows that L can be written as the expression in Theorem 3.1.1. This completes the proof.
Remark 3.1.4. Similarly to the Erlang-2 situation (i.e., n = 2 and q_1 = q_2), the case in which some of the q_i's are the same has to be treated separately. For instance, n successive applications of l'Hôpital's rule lead to an expression for E_x e^{−αQ_T} when T has an Erlang-n distribution.

Intuition
After the formal proof of Theorem 3.1.1, we present an intuitive argument to understand how the coefficients in the exponential terms of the transform appear; this is illustrated in Figure 2 below. Specifically, due to the integration in Equation (3.3) and Theorem 2.1.1, each term produces two new terms when an exponential epoch is added (i.e., when moving from n − 1 to n exponential epochs), so that the transform at n exponential epochs consists of 2^n exponential terms. We observe that in the expression for n random variables the first term is, for every n, exp[−(α_1 + ··· + α_n)x] (multiplied by some coefficient). Each exponent exp[−(α_1 + ··· + α_{l−1} + ψ(q_l))x], where l = 1, ..., n, produces one exponential term of higher order, exp[−(α_1 + ··· + α_l + ψ(q_{l+1}))x], as well as one term corresponding to exp[−ψ(q_1)x]; it is seen that the latter terms always appear at the 'even positions'. This mechanism is depicted in the tree diagram in Figure 2, where row n shows the 2^n factors when we have n exponentially distributed random variables T_1, ..., T_n. For ease, we only write the exponent at every node; hence the node α_1 + ψ(q_2) represents the term corresponding to exp[−(α_1 + ψ(q_2))x] (multiplied by some coefficient). In every row, the factors are counted from the left. We observe that the entire tree consists of subtrees starting from a node ψ(q_1) (apart from the first element of every row). Suppose we have the element exp[−(α_1 + ··· + α_{l−1} + ψ(q_l))x] in the nth row. This originates from a subtree generated by an initial node ψ(q_1) that is l − 1 rows higher in the tree. Indeed, if we start from the node ψ(q_1), we have to move l − 1 times down and left in order to reach the node α_1 + ··· + α_{l−1} + ψ(q_l). So the node α_1 + ··· + α_{l−1} + ψ(q_l) in the nth row belongs to a subtree spanned from the node ψ(q_1) in the (n − l + 1)th row.
For the ordering of terms, we assume that this initial node is at position 2j for some j = 1, 2, ..., 2^{n−l}; we recall here that the nodes ψ(q_1) are located at the even positions of each row. Since the node is at position 2j, there are 2j − 1 nodes in front of it. At every step downwards in the tree, the number of terms doubles, since every term gives two new terms after an application of Theorem 2.1.1. Since we go down l − 1 rows, those 2j − 1 nodes produce in total (2j − 1)2^{l−1} = 2^l j − 2^{l−1} nodes. Hence, we see that the element exp[−(α_1 + ··· + α_{l−1} + ψ(q_l))x] in the nth row is at position 2^l j − 2^{l−1} + 1. The numbering of the coefficients is based on this ordering.
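The counting argument above can be verified programmatically. The sketch below (our own illustration, not part of the paper's algorithm) regenerates the rows of the tree, encoding each node by the 'order' l of its exponent, and checks that the order-l terms of row n sit exactly at the positions 2^l j − 2^{l−1} + 1, j = 1, ..., 2^{n−l}:

```python
def order_rows(n):
    """Rows of exponent 'orders' in the tree (our reading of Figure 2).

    0 stands for the pure-alpha term exp[-(a_1+...+a_n)x]; l >= 1 stands
    for a term exp[-(a_1+...+a_{l-1}+psi(q_l))x].
    """
    row = [0, 1]  # row 1: exp[-a_1 x] and exp[-psi(q_1) x]
    rows = [row]
    for _ in range(n - 1):
        # Each node spawns a left child of one order higher (0 stays 0)
        # and a right child psi(q_1), i.e. order 1, at the even position.
        row = [t for l in row for t in ((0 if l == 0 else l + 1), 1)]
        rows.append(row)
    return rows

rows = order_rows(6)
for n, row in enumerate(rows, start=1):
    assert len(row) == 2 ** n
    for l in range(1, n + 1):
        # Claimed positions of the order-l terms: 2^l j - 2^(l-1) + 1.
        claimed = {2 ** l * j - 2 ** (l - 1) + 1
                   for j in range(1, 2 ** (n - l) + 1)}
        actual = {p for p, order in enumerate(row, start=1) if order == l}
        assert claimed == actual
print("position formula verified for rows 1..6")
```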
From the tree presented in Figure 3, we can also see why the recursion relations given in Equations (3.5) and (3.6) hold. Equation (3.5) corresponds to the case in which we start from the coefficient L^{(n−1)}_(2^l j − 2^{l−1} + 1, l), i.e., row n − 1 and node 2^l j − 2^{l−1} + 1 of the tree, and move down and left. Similarly, Equation (3.6) corresponds to the case in which we start from the coefficient L^{(n−1)}_(2^l j − 2^{l−1} + 1, l) and move down and right.

Spectrally negative case
In this section, we concentrate on the case of a spectrally negative input process X. The joint workload density has a structure that is very similar to that observed for the LST in the spectrally positive case. Due to the strong Markov property, the joint density can be decomposed as P_x(Q_{T_1} ∈ dy_1; ···; Q_{T_1+···+T_n} ∈ dy_n) = P_x(Q_{T_1} ∈ dy_1) ··· P_{y_{n−1}}(Q_{T_n} ∈ dy_n).
That is, the joint density is simply the product of densities at single exponential epochs, as given in Theorem 2.2.1. Henceforth, we focus on the density of the workload process at consecutive exponential epochs, i.e., the density appearing in Equation (4.1).

Main results
First, we illustrate how to obtain an expression for the density P_x(Q_{T_1+T_2} ∈ dy) for some initial workload x ≥ 0 and y > 0. From Theorem 2.2.1, we have an expression for the density P_x(Q_T ∈ dy). Consider now two exponentially distributed random variables T_1, T_2 with distinct parameters q_1, q_2. Conditioning on Q_{T_1} and applying Theorem 2.2.1 twice yields the desired density. After some standard calculus, and using the definition of the q-scale functions, we find the expression in Equation (4.3). Again, the case q_1 = q_2 has to be treated separately, by using l'Hôpital's rule; the detailed computations corresponding to this case can be found in Section 4.2 of Ref. [6].
We see that by conditioning on the value of the workload at the first exponential epoch we can derive the density at two exponential epochs. As a next step, our aim is to find an expression for Equation (4.1) for an arbitrary n > 0 and for exponentially distributed random variables T_i with parameter q_i (i = 1, ..., n). To this end, we condition on the workload at the first n − 1 exponential epochs. For the case of a spectrally positive input process (which was the topic of the previous section), one conditions on the value at the first exponential epoch, which allows the use of the induction hypothesis, but one needs to adjust the indices appropriately, as the first exponential random variable is then actually T_2. For the spectrally negative case, however, conditioning on the value at time T_1 + ··· + T_{n−1} (and not only at T_1) allows us to circumvent this technicality. We now proceed with the main result for the spectrally negative case, Theorem 4.1.1: the joint density, given that Q_0 = x, is expressed in terms of the coefficients L^{(n)}_(2^l j − 2^{l−1} + 1, l)(y) given in Definition 4.1.1.
Proof of Lemma 4.1.1. Let us first consider the sign of the jth term when we have n exponentially distributed random variables. From Theorem 2.2.1, we see that, for n = 1, the signs of the coefficients are −, +. For n = 2, from the expression in Equation (4.3), it turns out that the signs are +, −, −, +. Following how the terms are produced when we go from the step with n exponential epochs to the step with n + 1 exponential epochs (i.e., according to Equation (4.4)), the signs at every step can be represented by the tree graph in Figure 4. In every row, starting from left to right, the nodes represent the sign of each factor.
We see that row n + 1 can be obtained from row n if we substitute every + by the pair −, +, and every − by the pair +, −. This holds due to the order the corresponding integrations are done (see Theorem 2.2.1 and Equation (4.4)). Denote by c ( j,n) the sign of the jth element in the nth row in the tree. Then j = 1, 2, . . . , 2 n and c ( j,n) corresponds to the sign of the jth coefficient when we have n exponentially distributed random variables in the expression considered in Theorem 4.1.1.
We prove this lemma by induction.
(1) For n = 1, we have to find the values of c_(1,1) and c_(2,1). For j = 1, we need the binary expansion of 2^1 − 1 = 1, which has one 1, while for j = 2 we need the binary expansion of 2^1 − 2 = 0, which has zero 1's. Thus, we get that c_(1,1) = −1 and c_(2,1) = +1. (2) We assume the lemma holds for n = k, i.e., for row k of the tree graph in Figure 4; hence the claimed sign expression holds for j = 1, 2, ..., 2^k. From the tree presented above, we observe that the 2^n signs of an arbitrary row n are the same as the last 2^n signs of row n + 1.
Consider now the (k + 1)th row of the tree. Using the induction hypothesis and the observation above, it follows that the lemma holds for the last 2^k signs of the (k + 1)th row. We can also see this by observing that for the last 2^k signs of the (k + 1)th row we are interested in the binary expansions of 2^{k+1} − j for j = 2^k + 1, ..., 2^{k+1}, which is essentially equivalent to considering the binary expansions of 2^k − j for j = 1, ..., 2^k. What remains is to prove the lemma for the first 2^k elements of the (k + 1)th row. At this point, we observe that at an arbitrary row n, because of symmetry, the signs of terms j and j + 2^{n−1} will always be opposite. This yields that in the (k + 1)th row we have, for j = 1, 2, ..., 2^k, c_(j,k+1) = −c_(j+2^k,k+1). We also know that, for j = 1, 2, ..., 2^k, the binary representation of 2^{k+1} − j has one more 1 than the binary representation of 2^{k+1} − 2^k − j = 2^k − j. This leads to the claimed expression. Before proceeding to the proof of Theorem 4.1.1, we make some remarks concerning the result established in Lemma 4.1.1; these remarks are used in the proof of Theorem 4.1.1.

Remark 4.1.2.
For an arbitrary row n in the tree presented in Figure 4, we have that c_(1,n) = −c_(1,n+1).
We know that in order to find the first sign of the nth row, we must find the binary expansion of the element 2^n − 1, which has exactly n ones. Thus, for an arbitrary n ≥ 1, we get c_(1,n) = (−1)^n, which shows the relation mentioned in the remark. Furthermore, as in the proof of Lemma 4.1.1, the 2^n signs of the nth row are the same as the last 2^n signs of the (n + 1)th row; this gives a corresponding relation for l = 1, ..., n and j = 1, ..., 2^{n−l}. Using the symmetry of the signs in each row, i.e., c_(i,n) = −c_(i+2^{n−1},n) for i = 1, ..., 2^{n−1}, we obtain the equality above.

Remark 4.1.4.
For the jth sign of the nth row and the (2^n + j)th sign of the (n + 1)th row, we have c_(j,n) = c_(2^n+j,n+1). Indeed, for the value of c_(j,n) we need the binary representation of 2^n − j, while for the value of c_(2^n+j,n+1) we need the binary representation of 2^{n+1} − (2^n + j) = 2^n − j, yielding Equation (4.7).
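The sign relations of Lemma 4.1.1 and Remarks 4.1.2-4.1.4 can be checked mechanically. The sketch below (our own illustration) builds the rows of the tree in Figure 4 from the substitution rule and verifies the binary-expansion characterization, the sign c_(1,n) = (−1)^n, and the coincidence of row n with the last 2^n signs of row n + 1:

```python
def sign_rows_neg(n):
    """Rows of the sign tree for the spectrally negative case (Figure 4)."""
    row = [-1, +1]  # row 1, from Theorem 2.2.1: signs -, +
    rows = [row]
    for _ in range(n - 1):
        # Rule from the proof: '+' -> '-','+' and '-' -> '+','-',
        # i.e. every sign s produces the pair (-s, s).
        row = [t for s in row for t in (-s, s)]
        rows.append(row)
    return rows

rows = sign_rows_neg(6)
for n, row in enumerate(rows, start=1):
    for j, s in enumerate(row, start=1):
        # Lemma 4.1.1: sign = parity of ones in the binary expansion of 2^n - j.
        assert s == (-1) ** bin(2 ** n - j).count("1")
    assert row[0] == (-1) ** n          # Remark 4.1.2/4.1.3: c_(1,n) = (-1)^n
for n in range(1, 6):
    # The last 2^n signs of row n + 1 coincide with row n (cf. Remark 4.1.4).
    assert rows[n][2 ** n:] == rows[n - 1]
print("spectrally negative sign tree verified for rows 1..6")
```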
From Lemma 4.1.1, we have that c_(1,1) = −1 and c_(2,1) = +1. Due to Definition 4.1.1, L^{(1)}_(2,1) = Φ(q_1)e^{−Φ(q_1)y}. Hence, we recover the expression found in Equation (2.2), and we conclude that our result holds for n = 1. We now assume that Theorem 4.1.1 holds for n = k − 1. Consider now the case of n = k exponentially distributed random variables; conditioning on the value of the workload at the first k − 1 exponential epochs yields Equation (4.8). We see that we have to evaluate the four integrals given in Equations (4.9)-(4.12).
• Integral J_1. By a change-of-variable argument, and using the fact that W^{(q)}(x) = 0 for x < 0, we find Equation (4.13). From Equation (4.8), we see that this yields the first term of the density. Using Remark 4.1.2, we have −c_(1,k−1) = c_(1,k), and this shows that we have identified the first term of the expression in Theorem 4.1.1.
• Integral J_2. It is straightforward to arrive at Equation (4.14). Hence, the second term of Equation (4.8) can be evaluated; this term corresponds to l = k in the summation of the second term in Theorem 4.1.1, and we need to show that it matches the claimed coefficient. First, we observe that, for l = k and j = 1, we have m(1, k) = k, and due to Remark 4.1.4 we have c_(1,k−1) = c_(2^{k−1}+1,k). This shows that the expression for the second term, for l = k, agrees with Theorem 4.1.1.
Returning to Equation (4.8), and using the expressions found in Equations (4.13), (4.14), (4.17), and (4.19), we arrive at the desired representation; the last expression can be written more compactly. This concludes the proof of Theorem 4.1.1.
Using the result obtained in Theorem 4.1.1, we can also find an expression for the transform with respect to the initial workload; again, analytic continuation is used to obtain the result for any β > 0. Corollary 4.1.1. For α > 0, β > 0 and for n independent exponentially distributed random variables T_1, ..., T_n with distinct parameters q_1, ..., q_n, the transform is given by Equation (4.20), where m(j, l) and the signs c_(·,·) are as before, and the coefficients L^{(n)}_(2^l j − 2^{l−1} + 1, l) are given in Definition 4.1.1. We see that we have to work with three integrals, K_1, K_2, and K_3. Relying on the properties of the q-scale functions, and after some straightforward calculus, we conclude the proof by substituting the expressions for K_1, K_2, and K_3 into Equation (4.20).

Intuition
Similarly to the case of a spectrally positive input process, the transition from step n − 1 to n can again be represented by using an elegant tree structure. The expression for the density at n − 1 exponential epochs has 2 n−1 terms and each term produces two new terms when integrated with the density P z (Q T n ∈ dy) (with respect to z). We also notice that in the expression for n exponentially distributed random variables, the first term is always of the form (W (q n ) · · · W (q 1 ) )(x − y) while the other terms are of the form (Z (q l ) W (q l−1 ) · · · W (q 1 ) )(x), for l = 1, 2, . . . , n, multiplied by some coefficients that in general are functions of y. The underlying mechanism is illustrated in Figure 5. In this tree, the node W (q n ) (x − y) denotes the term (W (q n ) · · · W (q 1 ) )(x − y) while the nodes Z (q l ) (x), for l = 1, . . . , n, denote the terms (Z (q l ) W (q l−1 ) · · · W (q 1 ) )(x). We see that at every row, say row k for ease, a new subtree with root (Z (q k ) W (q k−1 ) · · · W (q 1 ) )(x) is created. These terms do not change as we move downwards in the tree since they only depend on the initial workload x and do not take part in the integrations, similar to those carried out in Equation (4.2). Their coefficients change though, by a mechanism that was analyzed in the proof of Theorem 4.1.1.
For the spectrally negative case, we use a tree graph to illustrate, similarly to Figure 3, how the coefficients of the convolution terms change from step n to n + 1. Based on the reasoning that led us to the tree diagram in Figure 5, we adopt the numbering of the coefficients L^{(n)}_(2^l j − 2^{l−1} + 1, l) as in Figure 6. Comparing this tree graph with the one corresponding to the spectrally positive case (i.e., Figure 3), we observe that now the numbering of the coefficients is 'mixed'. In the spectrally positive case, every time we went right a new subtree with root ψ(q_1) was generated, whereas here only the first time we turn right a subtree with root (Z^{(q_l)} W^{(q_{l−1})} ··· W^{(q_1)})(x) is generated. So, in the spectrally negative case, convolution terms of the same order gather in one subtree, and when moving downwards in the subtree, the coefficients change according to a recursive pattern. This mixed enumeration of the coefficients is due to these two characteristics; in the spectrally positive case, by contrast, terms of all possible orders are generated in all subtrees generated by some ψ(q_1). From Figure 6, we see that the coefficient L^{(n)}_(2^l j − 2^{l−1} + 1, l) generates the coefficients L^{(n+1)}_(2^l j − 2^{l−1} + 1, l) and L^{(n+1)}_(2^l j − 2^{l−1} + 1 + 2^n, l). This also explains why the recursions presented in Equation (4.5) hold. At step n, we observe that, for l = 1, ..., n, we have a total of 2^{n−l} terms of type (Z^{(q_l)} W^{(q_{l−1})} ··· W^{(q_1)})(x), which is why we choose to label the coefficients of these terms by the numbers 2^l j − 2^{l−1} + 1. This labeling leads to an expression which is relatively easy to work with in the proof, and at the same time has a structure similar to the one featuring in Theorem 3.1.1 for the spectrally positive case.

Numerical calculations
In this section, we present numerical illustrations of the transient workload behavior. We consider examples corresponding to the spectrally positive case (the spectrally negative case can be dealt with similarly). The expression found in Theorem 3.1.1 is, from an algorithmic standpoint, highly attractive; the only drawback is that for every n we have to compute 2^n terms, which increases the computation time significantly at every step. We also comment on ways to determine the workload distribution at a fixed (i.e., deterministic) time; the main idea there is to approximate a deterministic epoch t by a sum of exponentially distributed random variables with appropriately chosen parameters.
In our illustrations, we consider the impact of n, i.e., the number of exponential variables, with n ∈ {1, . . . , N}. The algorithm makes use of the expression established in Theorem 3.1.1 and the recursive Equations (3.5) and (3.6) to compute transient performance measures, i.e., the joint transform of the workload at consecutive exponential epochs and the mean workload. We focus on two specific Lévy processes: Brownian motion and the Gamma process.

For the case of Brownian motion, the input process X is a Brownian motion with drift, henceforth denoted by X ∈ Bm(d, σ²). Then, for α ≥ 0, φ(α) := log E e^{−αX_1} = −αd + α²σ²/2, and the right inverse function is, for q ≥ 0, ψ(q) = (d + √(d² + 2σ²q))/σ². For reflected Brownian motion (i.e., the workload of a queue with Brownian motion as input), there is an explicit expression for the conditional distribution P(Q_t ≤ y | Q_0 = x), for y > 0 (Section 1.6 in Ref. [7]). It is a matter of straightforward calculus to use this formula to find an expression for the transform E_x e^{−αQ_t} := E(e^{−αQ_t} | Q_0 = x). This result is used to evaluate the performance of our procedure in the case where the fixed time t is approximated by a sum of exponentials.
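The pair (φ, ψ) for Brownian input can be coded directly; the sketch below (function names are ours) simply evaluates the two formulas above, with ψ(q) obtained as the nonnegative root of the quadratic σ²a²/2 − da − q = 0:

```python
import math

def phi(alpha, d, sigma2):
    # Laplace exponent of X in Bm(d, sigma^2): phi(alpha) = log E exp(-alpha X_1)
    return -alpha * d + 0.5 * alpha**2 * sigma2

def psi(q, d, sigma2):
    # Right inverse of phi: the largest root of sigma^2 a^2 / 2 - d a - q = 0,
    # i.e. psi(q) = (d + sqrt(d^2 + 2 sigma^2 q)) / sigma^2
    return (d + math.sqrt(d * d + 2.0 * sigma2 * q)) / sigma2
```

For X ∈ Bm(−1, 1), for example, ψ(1) = √3 − 1, and one checks numerically that φ(ψ(q)) = q; since d < 0 there, ψ(0) = 0, consistent with a stable queue.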
The Gamma process is characterized by the Lévy–Khintchine triplet (d, σ², Π), where σ² = 0; the Lévy measure is given, for some β, γ > 0, by Π(dx) = (β/x) e^{−γx} dx, for x > 0, and the drift is d = ∫_0^1 x Π(dx). From the definition of the Lévy measure, we see that the Gamma process is a spectrally positive process with a.s. non-decreasing sample paths. We also add a negative drift such that the Laplace exponent equals φ(α) := log E e^{−αX_1} = αρ − β log(1 + α/γ), where ρ > β/γ in the case of a negative drift d = (β/γ) − ρ. For the Gamma process with parameters γ, β > 0 and drift (β/γ) − ρ, we use the notation G(γ, β, ρ).
If the input is a Gamma process, there is no explicit expression for the transform E_x e^{−αQ_t}, in contrast with the case of Brownian input. Suppose now that we wish to characterize the distribution of Q_t for a deterministic t. The idea is that we can approximate t by a sum of, say n, independent exponential random variables. An optimal choice of the parameters q_i then follows from solving the constrained optimization problem of minimizing the variance Var(T_1 + · · · + T_n) = Σ_{i=1}^n q_i^{−2} subject to the constraint E(T_1 + · · · + T_n) = Σ_{i=1}^n q_i^{−1} = t. This constrained optimization problem has solution q_1 = · · · = q_n = n/t. A complicating factor is that in Theorem 3.1.1 the parameters q_i should be chosen distinct. To remedy this, we propose to impose a small perturbation on the optimal q_i's such that they become distinct: q_i = n/t + ε_i, for i = 1, . . . , n, (5.1) where the ε_i are suitably chosen small numbers, taken symmetrically around 0 so that they sum up to 0. (If n is odd, we choose ε_{(n+1)/2} = 0, with the remaining ε_i symmetric around it.) In the two tables that follow, we present the numerical results obtained from evaluating the expression in Theorem 3.1.1 for the case X ∈ Bm(−1, 1) (Table 1) and for the case X ∈ G(1, 1, 2) (Table 2); here, we consider the situation x = 0 and t = 1. In the first table, corresponding to X ∈ Bm(−1, 1), we take n = 1, 4, 6, 7, 8 and compare our approximations with the exact values obtained from E_x e^{−αQ_t} for different values of α. In the last column, we present the relative errors between the exact value and the approximation for n = 8. It should be realized that the numerical procedure has its limitations. First, from the expression in Theorem 3.1.1, we see that at every step we have to compute 2^n terms, which complicates the computation for large n. We also see that when the parameters q_i are 'almost equal' (i.e., the ε_i in Equation (5.1) are small), we add and subtract terms that are large in absolute value (as the denominators featuring in the result of Theorem 3.1.1 are close to zero), which potentially causes instability.
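The perturbation step in Equation (5.1) can be sketched in a few lines; the specific symmetric choice ε_i = (i − (n+1)/2)·δ is our illustrative assumption (any small, distinct ε_i summing to 0 would do), and it automatically makes the middle perturbation 0 for odd n:

```python
def perturbed_rates(n, t, delta=1e-3):
    """Distinct rates q_i = n/t + eps_i approximating a deterministic time t.

    Illustrative choice: eps_i = (i - (n+1)/2) * delta, symmetric around 0,
    so the eps_i sum to 0 and the q_i are pairwise distinct.
    """
    return [n / t + (i - (n + 1) / 2.0) * delta for i in range(1, n + 1)]
```

For n = 4 and t = 1 this gives four distinct rates clustered around n/t = 4; shrinking δ brings the rates closer to the optimum at the cost of the numerical instability discussed above.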
Our numerical tests show that the choice of the parameters q_i influences the numerical stability; for the parameters indicated in Equation (5.1), the results begin to deviate for n > 9 due to numerical issues. From Table 1, corresponding to the case of a Brownian input process (for which we can compare with exact results), we see that for n = 8 our relative error is below 1%. For the case of a Gamma input process, we verified that the transform converges to that of the steady-state workload as given by the generalized Pollaczek–Khintchine formula (Theorem 3.2 in Ref. [6]). As a second application of Theorem 3.1.1, we use the results obtained in Tables 1 and 2 to approximate the value of E_x Q_t for the two cases X ∈ Bm(−1, 1) and X ∈ G(1, 1, 2), essentially relying on numerical differentiation. By considering an α sufficiently small and an n sufficiently large, we use the approximation E_x Q_t ≈ α^{−1}(1 − E_x e^{−αQ_{T_1+···+T_n}}). We present our findings for the cases X ∈ Bm(−1, 1) and X ∈ G(1, 1, 2), respectively, displaying the qualitative behavior of E_x Q_t as a function of time for various values of x. For the mean value of the stationary workload, we know that EQ = φ''(0)/(2φ'(0)), as follows directly from the generalized Pollaczek–Khintchine formula. In Figures 7 and 8, we observe three different scenarios corresponding to different values of the initial workload. When the initial workload is 0, the mean workload increases and converges to the mean value of the steady-state workload. This follows directly, as, for any Lévy input process, Q_t is distributed as sup_{0≤s≤t} X_s when the initial workload is 0, implying that E_0 Q_t is increasing over time. When the initial workload is slightly above the steady-state mean, it is interesting to notice that E_x Q_t first decreases below the steady-state value, and then converges to it from below. For higher initial workloads, E_x Q_t is always decreasing and converges to the steady-state value from above.
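The numerical-differentiation step is elementary and can be sketched as follows. As a self-contained check (our own example), we apply it to the stationary workload of Bm(−1, 1): reflected Brownian motion with d < 0 has a stationary distribution that is exponential with rate 2|d|/σ² = 2, so E e^{−αQ} = 2/(2 + α) and EQ = 1/2.

```python
def mean_from_transform(lst, alpha=1e-4):
    # First-moment recovery by numerical differentiation of the LST at 0:
    # E Q ≈ (1 - E exp(-alpha Q)) / alpha for small alpha > 0.
    return (1.0 - lst(alpha)) / alpha

# Check: stationary workload of Bm(-1, 1) is Exp(2), so E exp(-alpha Q) = 2/(2+alpha)
approx_mean = mean_from_transform(lambda a: 2.0 / (2.0 + a))
```

The approximation error is of order α here, which is why α must be taken small; in the transient setting, the transform plugged in would be the one from Theorem 3.1.1.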

Conclusion and discussion
In this article, we have analyzed the transient behavior of spectrally one-sided Lévy-driven queues. We considered the joint behavior of Q_{T_1}, Q_{T_1+T_2}, . . . , Q_{T_1+···+T_n}, where T_i is exponentially distributed with parameter q_i, and we specifically focused on Q_{T_1+···+T_n}. From the main results, it follows that this transient behavior obeys an elegant and appealing tree structure. Interestingly, some numerical illustrations showed that E_x Q_t first decreases in t and then converges to the steady-state mean workload from below in case x is chosen 'slightly' above it.
We have restricted ourselves to analyzing Q_T with T distributed as the sum of n independent exponential random variables, but our result is readily extended to Q_T with T obeying a Coxian distribution. This is a particularly useful fact, as any distribution on the positive half line can be approximated arbitrarily closely by a sequence of Coxian distributions; see, e.g., Section 3.4 in Ref. [2]. In more detail, the analysis looks as follows. Consider the situation that T follows a Coxian distribution with n phases; we let the length of phase i be drawn from an exponential distribution with parameter q_i, and we let the probability of moving from phase i to phase i + 1 be p_i (with the convention that p_n = 0). Then, for the spectrally positive case, E_x e^{−αQ_T} = Σ_{k=1}^n (∏_{i=1}^{k−1} p_i)(1 − p_k) E_x e^{−αQ_{T_1+···+T_k}}, where E_x e^{−αQ_{T_1+···+T_k}} is as obtained in Theorem 3.1.1. The density in the spectrally negative case follows by a similar argument.
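The mixing over phases can be made explicit in a short sketch (the function names are ours; the per-phase transforms would in practice come from Theorem 3.1.1, here they are just placeholder numbers). The weight of phase k is (∏_{i<k} p_i)(1 − p_k), i.e., the probability that the Coxian clock terminates after exactly k phases:

```python
def coxian_weights(p):
    # p[k] = probability of moving from phase k+1 to phase k+2; the last entry
    # must be 0, so that the weights (prod_{i<k} p_i) * (1 - p_k) sum to 1.
    weights, stay = [], 1.0
    for pk in p:
        weights.append(stay * (1.0 - pk))
        stay *= pk
    return weights

def coxian_transform(per_phase_transforms, p):
    # E_x exp(-alpha Q_T) as the weighted sum of E_x exp(-alpha Q_{T_1+...+T_k})
    return sum(w * tr for w, tr in zip(coxian_weights(p), per_phase_transforms))
```

For example, with p = (0.3, 0.6, 0) the weights are 0.7, 0.12 and 0.18, which indeed sum to 1, so the resulting mixture is again a bona fide Laplace–Stieltjes transform.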
To conclude, we would like to mention some topics of interest for future investigation. Although the class of Coxian distributions for the epoch T is sufficiently rich, it might be of interest to study the behavior of Q_T if T has a general phase-type distribution. Specifically, we did not explicitly derive the results for the case in which some of the parameters q_i coincide; these follow by a direct application of l'Hôpital's rule, but the expressions tend to become cumbersome. Another open question concerns the transient behavior for spectrally two-sided Lévy processes. Finally, we expect that the transient analysis presented here may be applicable in inference procedures, to estimate the queue's Lévy input process from a finite number of successive workload observations.