A general inversion theorem for cointegration

Abstract A generalization of the Granger and the Johansen Representation Theorems valid for any (possibly fractional) order of integration is presented. This Representation Theorem is based on inversion results that characterize the order of the pole and the coefficients of the Laurent series representation of the inverse of a matrix function around a singular point. Explicit expressions of the matrix coefficients of the (polynomial) cointegrating relations, of the Common Trends and of the Triangular representations are provided, either starting from the Moving Average or the Auto Regressive form. This contribution unifies different approaches in the literature and extends them to an arbitrary order of integration. The role of deterministic terms is discussed in detail.


Introduction
The inversion of Moving Average (MA) forms into Auto Regressive (AR) forms (and vice versa) plays a central role in the representation theory of linear processes; see for instance Brockwell and Davis (1991, Chapter 3) for the case of stationary ARMA processes. This is also true for nonstationary integrated processes of order d, I(d), i.e. processes X_t possessing a MA representation in dth differences, Δ^d X_t = F(L)ε_t, with F(z) analytic for all |z| < 1 + δ, δ > 0, F(1) ≠ 0, and ε_t a white noise process; see Johansen (1996, Chapter 4) for the cases d = 1, 2.
The first result of this kind for I(1) processes is the celebrated Granger Representation Theorem, see Granger (1981) and Engle and Granger (1987). Starting from ΔX_t = F(L)ε_t, with F(1) ≠ 0 singular, Engle and Granger (1987) considered the inversion of F(z) in order to derive the (infinite order) Error-Correction form; their proof was completed by Johansen (1996, Theorem 4.5), using inversion results from Johansen (1991).
The Granger Representation Theorem also linked the Common Trends representation, derived by summation of the MA form, to the Error-Correction form, containing the cointegrating relations (associated with equilibrium in the system) and the adjustment toward it. This proved that error-correction, common trends, and cointegration were different characteristics of the same system and not competing concepts, see Granger (2004) and Hendry (2004).
The Granger Representation Theorem also established that there is complementarity between the (number of) common trends and the (number of) cointegrating relations, and paved the way to the interpretation of cointegrating relations as (deviations from) equilibrium and of common trends as drivers of the system. The Granger Representation Theorem initiated a literature on representations for I(d) systems, to which many authors have contributed. Starting from the MA form of an I(1) system, Phillips (1991) introduced the Triangular representation, which was subsequently generalized by Stock and Watson (1993) to the general I(d) case.
The Triangular representation summarizes the cointegration properties of the system; it does so by providing the MA representation for a set of (polynomial) linear combinations of the variables, whose number equals the dimension of the system. This set of (polynomial) linear combinations contains the cointegrating relations in the system plus some complementary linear combination of the differences of order d.
The Triangular representation formed the basis of a semi-parametric approach to inference on cointegration, in which the cointegrating relations are estimated parametrically, while the MA form (representing a stationary colored process) is estimated nonparametrically; see Phillips and Hansen (1990), Sims et al. (1990), and Stock and Watson (1993).
An alternative derivation of the Granger Representation Theorem was presented in Yoo (1986), which made use of the Smith form of the matrix function F(z) in the MA representation ΔX_t = F(L)ε_t. The approach based on the Smith form was further extended to the case of I(2) systems in Engle and Yoo (1991) and Haldrup and Salmon (1998).¹ In a parallel strand of literature, the cointegrated VAR literature, Johansen (1988a,b, 1991) considered the dual problem of inverting the AR representation F(L)X_t = ε_t, with F(z) a matrix polynomial and F(1) ≠ 0 singular; he derived conditions under which the Granger Representation Theorem holds for VAR processes. These conditions consist of a reduced rank restriction on F(1) and a full rank condition that involves the first derivative of F(z) at z = 1, see also Schumacher (1991).² The reduced rank condition corresponds to the existence of a pole of some order m ≥ 1 in F(z)^{−1} at z = 1, while the full rank condition establishes that the order of the pole m is exactly equal to one. This pair of conditions is here called the POLE(1) condition.
Under the POLE(1) condition, X_t is I(1), and Johansen (1988a,b) derived the Common Trends representation of a VAR. He obtained in particular the explicit expression of the matrix that loads the random walk component in the Common Trends representation, C_0 say (the MA impact matrix), as a function of the AR coefficients. Johansen (1994) used it to derive hypotheses on the constant and on deterministic terms; this led to cointegrated VAR models with restricted deterministic components, see Johansen (1996, Chapter 5.7) and Hansen (2005). Moreover, the explicit form of C_0 was crucial in proving the mixed normality of the asymptotic distribution of the estimator of the cointegrating vectors in Johansen (1991).
The explicit form of the MA impact matrix C_0 was also exploited to derive maximum likelihood estimation and inference on it, see Paruolo (1997a) and Phillips (1998). Counterfactual thought experiments on the long-run behavior of cointegrated systems also lead to long-run impact multipliers that are functions of the MA impact matrix C_0, see Johansen (2005). Omtzigt and Paruolo (2005) derived maximum likelihood estimation and inference on related long-run impact multipliers in cointegrated systems. The MA impact matrix also plays a central role in the estimation of the long-run variance matrix, see Paruolo (1997b) and Phillips (1998).
Still starting from the AR form, another derivation of the Granger Representation Theorem was given by Archontakis (1998), employing the Jordan decomposition of the AR companion matrix and using the results of D'Autume (1992), who showed that the POLE(1) condition can be stated as the absence of a Jordan block of size greater than 1 in the Jordan representation of the AR companion matrix; see also Neusser (2000) for an approach based on the Drazin inverse.
A generalization of the Granger Representation Theorem to I(2) AR processes was given in Johansen (1992), who stated the POLE(2) condition, under which X_t is I(2), and derived the corresponding Common Trends representation. The POLE(2) condition consists of two reduced rank restrictions and one full rank condition. The two reduced rank conditions correspond to the existence of a pole in F(z)^{−1} at z = 1 of some order m ≥ 2, while the full rank condition establishes that the order of the pole m is exactly equal to two, see Franchi (2007) and Faliva and Zoia (2009). Johansen (1992, 2008b) derived the explicit form of the first two matrices in the Laurent expansion of the inverse, C_0 and C_1 say, which load the cumulated random walk and the random walk components in the I(2) Common Trends representation. The form of C_0 and C_1 shows in which directions the process X_t is I(d), for d = 0, 1, 2. The explicit expression of C_0 was instrumental in Paruolo (2002) to derive inference on it via likelihood methods.

¹ The Smith form is also a standard tool in the treatment of vector ARMA processes, see e.g. Hannan and Deistler (1988, Section 1.2).
² The same condition can be found in the engineering literature, see Howlett (1982), Lancaster (1966, eq. (4.4.7)), Schumacher (1986).
In the AR framework, the case of generic I(d) processes was considered by several authors. D'Autume (1992) showed that the maximal dimension of a Jordan block of the AR companion matrix identifies the order of integration for generic d. la Cour (1998) recursively extended the algebraic necessary and sufficient conditions of Johansen (1992) to the case of AR processes integrated of any order d, and she described the associated cointegration properties of the system.
In the state space framework, Bauer and Wagner (2012) provided a canonical representation of processes with unit roots at arbitrary frequencies and arbitrary integer integration orders. In this approach, the order of integration is characterized as the maximal size of the Jordan blocks of the state matrix corresponding to the eigenvalue of unit modulus, in line with the results by D'Autume (1992) cited above on the companion matrix.
The main contribution of the present paper is to show that all these derivations can be unified via local spectral theory, see Gohberg et al. (1993), making use of the results in Franchi and Paruolo (2011b, 2016). In particular, this paper employs a general inversion theorem which (i) provides explicit expressions for (polynomial) cointegrating relations and (ii) for common trend loading matrices; (iii) applies to processes integrated of any order, (iv) starting from either the MA or the AR form, (v) possibly in the presence of polynomial deterministic trends. This general inversion theorem offers a unified treatment of the different representations of cointegrated systems, irrespective of the chosen starting point, extending them (when appropriate) to any order of integration.
These tools provide a constructive approach to compute the relevant matrices of each representation in terms of alternative ones. This is useful for the interpretation of cointegrated systems in terms of adjustment, equilibrium relations, common trends identification and loadings. Moreover, these results provide a way to specify the deterministic polynomial trends so as to bound the overall trend degree in the data. All these developments are key for the derivation of properties of cointegrated processes, which are useful, e.g., in deriving asymptotics for estimators and tests.
For a given matrix function F(z), the order m of the pole of F(z)^{−1} at z = 1 is shown to play a central role in the representation theory. When starting from the MA form Δ^d X_t = F(L)ε_t, the order m, which is generally different from d, characterizes the cointegration properties of X_t. A generalization of the Triangular representation in Stock and Watson (1993), which assumes m = d, is given; it is shown that the cointegrating relations involve cumulations (and possibly differences) of X_t when m > d, while they involve only differences of X_t when m < d. On the other hand, when starting from the AR form F(L)X_t = ε_t, the order m of the pole of the inverse gives the order of integration of the process and characterizes its cointegration structure. For m = 1, 2 the representation results in Johansen (1996) are obtained.
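To fix ideas on how m can differ from d, consider the following two-dimensional toy example (ours, not taken from the paper):

```latex
% MA form \Delta^d X_t = F(L)\varepsilon_t with p = 2, d = 1, and
F(z) = \begin{pmatrix} 1 & 0 \\ 0 & (1-z)^2 \end{pmatrix}, \qquad
F(1) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \neq 0 \ \text{singular.}
% The inverse has a pole of order m = 2 > d = 1 at z = 1:
F(z)^{-1} = \frac{1}{(1-z)^{2}}
            \begin{pmatrix} (1-z)^{2} & 0 \\ 0 & 1 \end{pmatrix}.
% First coordinate: \Delta X_{1,t} = \varepsilon_{1,t}, so X_{1,t} \sim I(1);
% second coordinate: \Delta X_{2,t} = \Delta^{2}\varepsilon_{2,t}, so X_{2,t} \sim I(-1),
% and the combination (0,1)X_t must be cumulated once to become I(0).
```

Replacing (1 − z)² by 1 − z in the second diagonal entry gives m = 1 = d and the standard case in which (0, 1)X_t itself is the stationary cointegrating relation.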
The present results also apply to fractionally integrated processes, both in the case of ARFIMA and for the class introduced by Johansen (2008a,b), and further studied in Franchi (2010) and Nielsen (2010, 2012). Furthermore, they can be applied to any stationary, unit or explosive root with minor modifications, thus covering also the case of seasonal cointegration, see Hylleberg et al. (1990) and Johansen and Schaumburg (1999), and of common cyclical features, see Engle and Kozicki (1993), Vahid and Engle (1993), and Franchi and Paruolo (2011a).
Finally, the Granger-Johansen Representation Theorems have recently been shown to hold also for infinite dimensional AR processes in Hilbert spaces, see Chang et al. (2016), Hu and Park (2016), and Beare et al. (2017) for the I(1) case and Beare and Seo (2018) for the I(2) case. Franchi and Paruolo (2017) provide an extension of the present results to the generic I(d) case for infinite dimensional AR processes in Hilbert spaces.
The rest of the paper is organized as follows: the remaining part of this introduction reports notational conventions and preliminaries; Section 2 introduces basic definitions; Section 3 contains the general inversion theorem; Section 4 presents a characterization of common trends, cointegration, and the Triangular representation of MA and AR processes based on the inversion results in Section 3, including a discussion of deterministic terms. Section 5 reports conclusions and Appendix A contains proofs.

Notation and preliminaries
In the following, a := b or b =: a indicates that a is defined equal to b; for any square matrix A, |A| indicates its determinant, while for z ∈ ℂ, |z| denotes the modulus of z. For any sequence (v_t)_{t∈ℤ}, where ℤ := {…, −2, −1, 0, 1, 2, …} is the set of integers, Δ := 1 − L indicates the difference operator and L the lag operator, defined as Lv_t := v_{t−1}.
The paper considers the inversion of the p × p matrix function F(z), with F(1) singular, in the MA form Δ^d X_t = F(L)ε_t or in the AR form F(L)X_t = ε_t. The matrix function F(z) is assumed to be analytic for all z ∈ ℂ satisfying |z| < 1 + δ for δ > 0, so that the coefficients of its expansion around 0 are geometrically decreasing, and hence absolutely summable. This implies that F(z) is infinitely differentiable and its derivatives are analytic in the same disc, see e.g. Lemma 3.2.10 in Greene and Krantz (1997). This includes finite order ARs or MAs, in which case F(z) is a matrix polynomial, which is analytic for all z ∈ ℂ.
The process ε_t represents a p × 1 white noise process with finite second moments; this is usually taken either as an i.i.d. process, see e.g. Johansen (1996), or as a martingale difference sequence, see e.g. Stock and Watson (1993). The choice of the type of white noise is irrelevant for the representation results discussed in this paper, in the sense that each representation result holds for the specific chosen type of white noise.
In the invertible MA or causal AR cases, the point of interest for the expansion of F(z) is 0, F(z) = Σ_{n=0}^∞ F_n z^n, and F(0) = F_0 = I is nonsingular; the coefficients of the inverse F(z)^{−1} =: C(z) = Σ_{n=0}^∞ C_n z^n, which solves the system of equations F(z)C(z) = C(z)F(z) = I, are found using the following recursions, see e.g. Johansen (1996, Theorem 2.1):

C_0 = F_0^{−1},  C_n = Σ_{k=1}^n K_k C_{n−k},  K_k := −F_0^{−1} F_k,  n = 1, 2, ….   (1.1)

In the integrated case, the point of interest for the expansion of F(z) is 1; at this point F(z) = Σ_{n=0}^∞ F_n (1 − z)^n is singular, i.e. |F(1)| = 0. This yields an inverse of F(z) with a pole of some order m = 1, 2, … at z = 1.
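As an illustration, recursion (1.1) can be implemented in a few lines; the function name and the example are ours, and the sketch applies only to the nonsingular case F_0 invertible:

```python
import numpy as np

def inverse_coefficients(F, N):
    """Coefficients C_0, ..., C_N of C(z) = F(z)^{-1} via recursion (1.1):
    C_0 = F_0^{-1},  C_n = sum_{k=1}^{n} K_k C_{n-k},  K_k = -F_0^{-1} F_k.
    F is a list [F_0, F_1, ...] of p x p coefficient matrices of F(z)."""
    p = F[0].shape[0]
    F0inv = np.linalg.inv(F[0])             # nonsingular case only
    K = [None] + [-F0inv @ Fk for Fk in F[1:]]
    C = [F0inv]
    for n in range(1, N + 1):
        Cn = np.zeros((p, p))
        # F is a finite polynomial here, so K_k = 0 for k beyond its degree
        for k in range(1, min(n, len(F) - 1) + 1):
            Cn += K[k] @ C[n - k]
        C.append(Cn)
    return C
```

For F(z) = I − Az this reproduces C_n = Aⁿ, the familiar causal AR(1) expansion.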
In the engineering literature, the inversion of a matrix function around a point of singularity is a well-studied problem, see among others Avrachenkov et al. (2001) and Howlett et al. (2009), who used the approach in Howlett (1982) recursively to characterize the order of the pole. In the mathematical literature, a classical approach to characterize the relation between a matrix function and its inverse is the local spectral theory, see Gohberg et al. (1993), which is based on the concepts of root functions and partial multiplicities.
Within this literature, Franchi and Paruolo (2011b, 2016) introduced a procedure called "extended local rank factorization" (ELRF), which provides an explicit way to construct all the relevant quantities of the local spectral theory in Gohberg et al. (1993). Moreover, the ELRF was shown to provide an efficient way to compute the recursions in Avrachenkov et al. (2001) and Howlett et al. (2009), thus unifying these two different approaches. The results in Franchi and Paruolo (2016) are reviewed in Section 3 below and act as building blocks for the representation results in Section 4, which are the novel contributions of this paper.
The paper makes repeated use of rank factorizations: given a p × p matrix u of rank 0 < r < p, its rank factorization is written as u = −ab′, where a and b are p × r full column rank matrices, which respectively span the column space and the row space of u; the negative sign is chosen for convenience in later calculations. The matrix u⊥ indicates a p × (p − r) full column rank matrix that spans the orthogonal complement of the column space of u = −ab′, i.e. the orthogonal complement of the column space of a.
The orthogonal projection matrix on the column space of u = −ab′ is indicated by P_a := aā′ = āa′, where ā := a(a′a)^{−1}, with rank r; the orthogonal projection matrix on the orthogonal complement of the column space of u = −ab′ is P_{a⊥} := I − P_a = a⊥ā′⊥ = ā⊥a′⊥, of rank p − r. Similarly, one defines P_b and P_{b⊥} replacing a with b. When u is of full rank, one can set either (a, b) equal to (I, u) or to (u, I). The rank factorization is not unique, because all previous assignments of a, b can be replaced by aQ, bQ′^{−1}, with Q a generic nonsingular square matrix; similarly, a⊥ and b⊥ can be replaced by a⊥H, b⊥K with H, K generic nonsingular square matrices. As a last piece of notation, P_n(t) indicates the set of scalar polynomials p_n(t) = Σ_{i=0}^n c_i t^i in t of order n, with c_i ∈ ℝ; when c_i ∈ ℝ^p, p > 1, the class of vector polynomials p_n(t) = Σ_{i=0}^n c_i t^i in t of order n is indicated P_{n,p}(t). The truncation of order q of a generic function a(z) = Σ_{n=0}^∞ a_n (1 − z)^n is denoted a^{(q)}(z) := Σ_{n=0}^q a_n (1 − z)^n, i.e. a(z) = a^{(q)}(z) + (1 − z)^{q+1} a⋆(z), where a⋆(z) := Σ_{n=0}^∞ a_{n+q+1} (1 − z)^n is the remainder.
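The rank factorization and the associated projections can be computed numerically, e.g. via the SVD; the helper names below are ours, and the SVD-based choice of (a, b) is just one admissible normalization among the many noted above:

```python
import numpy as np

def rank_factorization(u, r):
    """Rank factorization u = -a b' of a p x p matrix u of rank r, via the
    SVD u = U S V': take a = -U_r S_r and b = V_r (one admissible choice,
    since (aQ, b Q'^{-1}) works equally well for any nonsingular Q)."""
    U, s, Vt = np.linalg.svd(u)
    a = -U[:, :r] * s[:r]              # p x r, full column rank
    b = Vt[:r, :].T                    # p x r, full column rank
    return a, b

def perp(a):
    """a_perp: orthonormal p x (p - r) basis of the orthogonal complement
    of the column space of a."""
    p, r = a.shape
    return np.linalg.svd(a, full_matrices=True)[0][:, r:]

def proj(a):
    """Orthogonal projection P_a = a (a'a)^{-1} a' on the column space of a."""
    return a @ np.linalg.solve(a.T @ a, a.T)
```

Note that P_a + P_{a⊥} = I by construction, which the projections above satisfy up to floating-point error.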

Integrated processes
This section introduces the definitions of difference and integral operators, following Gregoir (1999) and Gregoir and Laroque (1994), and of integrated and cointegrated processes of any integer order (including negative ones), following Johansen (1996, Chapter 3).
Definition 2.1 (Difference operator Δ and integral operator S). For a generic process v_t, t ∈ ℤ, the difference operator Δ is defined as Δv_t := v_t − v_{t−1} and the integral operator S is defined as³

Sv_t := 1(t > 0) Σ_{i=1}^t v_i − 1(t < 0) Σ_{i=t+1}^0 v_i,   (2.1)

where 1(·) is the indicator function. Remark that by definition S assigns value 0 to the cumulated process at time 0. In fact, applying the definition, see Properties 2.1, 2.2 in Gregoir (1999) and Lemma A.2 in Appendix A, one can verify that, for t ∈ ℤ, one has

ΔSv_t = v_t,  SΔv_t = v_t − v_0.   (2.2)

Equation (2.2) shows that S applied to Δv_t regenerates the level of the process v_t, up to a constant; this parallels the constant of integration in indefinite integrals. The integral operator S is hence the inverse of the difference operator Δ up to a constant; Definition 2.1 chooses this constant so as to make any cumulated process equal 0 at time t = 0.

³ In Gregoir (1999) S is denoted S_ω, for ω = 0, where ω is the frequency.
When v_t = ε_t is white noise, Eq. (2.1) shows that Sε_t is a bilateral random walk for t ∈ ℤ. In fact, for t > 0 one has Sε_t = Σ_{i=1}^t ε_i, while for t < 0 one finds Sε_t = −Σ_{i=t+1}^0 ε_i, i.e. on both sides of t = 0 a random walk is generated, with increment ε_t for positive time t and −ε_{t+1} for negative t.
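Definition 2.1 and property (2.2) are easy to check numerically on a finite stretch of data; the implementation below is ours, storing a sample observed at times t0, …, t0 + n − 1 with t0 ≤ 0 ≤ t0 + n − 1:

```python
import numpy as np

def S(v, t0):
    """Integral operator S of Definition 2.1 applied to the sample
    v[0], ..., v[n-1] observed at times t0, ..., t0 + n - 1 (t0 <= 0):
    (S v)_t = sum_{i=1}^t v_i for t > 0, 0 for t = 0, and
    -sum_{i=t+1}^0 v_i for t < 0."""
    t = np.arange(t0, t0 + len(v))
    cum = np.cumsum(v)
    # subtracting the cumulation up to time 0 enforces (S v)_0 = 0
    return cum - cum[t == 0][0]
```

Applying S to Δv_t on the same window reproduces v_t − v_0, i.e. property (2.2).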
The notion of integration of order 0 is presented next.
Definition 2.2 (I(0) and I_nc(0) processes). Let V(z) be a (rectangular) matrix function, analytic for all |z| < 1 + δ, δ > 0, and let ε_t be a white noise process; then v_t is said to be 'integrated of order zero', indicated v_t ∼ I(0), if

v_t = V(L)ε_t,  V(1) ≠ 0,   (2.3)

and 'integrated of order zero and non-cointegrated', indicated v_t ∼ I_nc(0), if in addition V(1) has full row rank; in symbols:

v_t = V(L)ε_t,  V(1) of full row rank.   (2.4)

The notation I_nc(0) is introduced here to indicate explicitly the case in which v_t does not cointegrate (at frequency 0), see Remark 2.8. The next definition presents positive and negative orders of integration.

Definition 2.3 (Order of integration).
Let v_t ∼ I(0) as in (2.3) and let a, b be finite non-negative integers; if

Δ^a z_t = Δ^b v_t,   (2.5)

then z_t is said to be integrated of order a − b, indicated z_t ∼ I(a − b). Similarly, if v_t ∼ I_nc(0) as in (2.4), then z_t satisfying (2.5) is said to be integrated of order a − b and non-cointegrated, indicated z_t ∼ I_nc(a − b).
Definition 3.3 in Johansen (1996) of an I(d) process is found by setting b = 0 in (2.5). Note that b > 0 allows one to define also negative orders of integration. The order of integration is given by the difference between a and b, and can be thought of as "dividing both sides of (2.5) by Δ^b". In the following, expressions of the type Δ^{−h} X_t ∼ I(0) for positive h are understood to mean X_t = Δ^h v_t for some v_t ∼ I(0). Some implications of Definition 2.3 on the simplification of Δ are discussed in Remark 2.4. The remarks in the rest of this section consider for simplicity the case of constant expectations θ_s := E(s_t), s_t = z_t, v_t, but can be modified for general E(s_t) in a straightforward way.
Remark 2.4 (Cancellation of Δ). Take a = b = 1 in (2.5), which in this case reads Δz_t = Δv_t with v_t ∼ I(0). Applying the S operator on both sides one obtains z_t − z_0 = v_t − v_0, see (2.2).⁴ If one assigns the initial value z_0 equal to v_0, one obtains z_t = v_t, which corresponds to the cancellation of Δ from both sides of (2.5). The same reasoning applies for generic a, b > 0 to the cancellation of Δ^{min(a,b)} from both sides of (2.5).
Remark 2.4 shows that one can simplify powers of Δ from both sides of (2.5) by properly assigning initial values; this observation is implicitly incorporated in Definition 2.3 of I(d) processes, which is next specialized to I(1) and I(−1) processes in Remarks 2.5 and 2.6.
Remark 2.5 (I(1) process). Take a = 1 and b = 0 in Definition 2.3; Eq. (2.5) takes the form Δz_t = v_t = V(L)ε_t, and applying the S operator to both sides of the equation one finds, thanks to (2.2), that

z_t = V(1)Sε_t + y_t + (z_0 − y_0),

where y_t := V⋆(L)ε_t is a stationary component, z_0 − y_0 depends on initial values⁵ of z and y, and Sε_t is a bilateral random walk. Note that V(1) ≠ 0 guarantees that the random walk component does not vanish.

⁴ This result is usually stated as z_t = v_t − a_0, where a_0 := z_0 − v_0 is a generic constant, see e.g. Hannan and Deistler (1988), eq. (1.2.15).
Remark 2.6 (I(−1) process). Take a = 0 and b = 1 in Definition 2.3; Eq. (2.5) takes the form z_t − θ_z = Δv_t, and applying the S operator one obtains Z_t := Sz_t = v_t − v_0 + θ_z t. Hence the cumulated process Z_t is the sum of an I(0) process, a constant, and a linear trend.
Remarks 2.5 and 2.6 show that the S operator generates deterministic components (constants and trends in the cases above) whose coefficients depend on the initial values of the processes.
Definition 2.7 (Cointegrating relations). Let z_t ∼ I(d) and let b(L) be a p × s matrix polynomial of order n ≥ 0 in Δ, with b_0 of full column rank; then b(L) is called a cointegrating matrix polynomial (of order n) if b(L)′z_t ∼ I_nc(d − j) for some j > 0.
Observe that Definition 2.7 applies to any order of integration d, including negative orders.
If V(1) were of reduced row rank, some nonzero linear combination a′v_t = a′V(L)ε_t would be integrated of order lower than zero, i.e. v_t would cointegrate; this justifies the full row rank condition in (2.4). A similar situation applies to the case of negative d.
Remark 2.9 (Normalization of cointegrating relations). Definition 2.7 requires that b_0 ≠ 0, which can be shown not to be a restriction. In fact, assume by contradiction that b(L) = Δ^q b⋆(L) with b⋆(1) ≠ 0 and q > 0; in this case b(L)′Δ^d z_t = b(L)′V(L)ε_t would read b⋆(L)′Δ^{q+d} z_t = Δ^q b⋆(L)′V(L)ε_t, which can be simplified by Remark 2.4 to b⋆(L)′Δ^d z_t = b⋆(L)′V(L)ε_t. This shows that b_0 ≠ 0 is not a restriction, but a (convenient) normalization of a cointegrating relation.
Definition 2.7 also requires b 0 to be of full column rank. Again, this is not restrictive; in fact, in case b 0 is not of full column rank s but of rank r < s say, one can rotate b 0 so that its first r columns are nonzero and of full rank, and all the remaining columns are equal to 0; then one can redefine b 0 as the set of these first r columns. This shows that requiring b 0 to be of full column rank is a (convenient) normalization of a cointegrating relation.

The inversion theorem
This section reports the main technical results on inversion, presented in Theorems 3.3 and 3.5; the former provides explicit expressions for the coefficients of the inverse function, while the latter provides a construction of the local Smith factorization. These theorems are restatements of results in Franchi and Paruolo (2011b, 2016) and are reported here because they are instrumental in obtaining the representation results in Section 4, which are the novel contributions of this paper.

⁵ If one could choose the initial value z_0 of the process z_t equal to y_0, this would set the last term z_0 − y_0 to 0. Johansen (1996, Chapter 4) chooses z_0, z_{−1} so as to make b′z_t and b′⊥Δz_t stationary, where b are the cointegrating linear combinations. This amounts to requiring b′(z_0 − y_0) = 0 and b′⊥(Δz_0 − V(L)ε_0) = 0. Any of these approaches to initial values can be applied to the more general case studied in Section 4.

Consider the problem of inversion of a matrix function

F(z) = Σ_{n=0}^∞ F_n (1 − z)^n   (3.1)

around the singular point z = 1. This includes the case of matrix polynomials F(z), in which the degree of F(z) is finite, k say, with F_n = 0 for n > k.
The inversion of F(z) around the singular point z = 1 yields an inverse with a pole of some order m = 1, 2, … at z = 1; an explicit condition on the coefficients {F_n}_{n=0}^∞ in (3.1) for F(z)^{−1} to have a pole of given order m is described in Theorem 3.3; this is indicated as the POLE(m) condition in the following. Under the POLE(m) condition, F(z)^{−1} has Laurent expansion around z = 1 given by

F(z)^{−1} = (1 − z)^{−m} C(z),  C(z) = Σ_{n=0}^∞ C_n (1 − z)^n.   (3.2)

Note that C(1) = C_0 ≠ 0 is finite by construction and C(z) is expanded around z = 1. In the following, the coefficients {C_n}_{n=0}^∞ are called the Laurent coefficients. The first m of them, {C_n}_{n=0}^{m−1}, make up the principal part and characterize the singularity of F(z)^{−1} at z = 1. The next definition introduces the quantities that are subsequently employed in the statements of the results.

Definition 3.1 (Extended local rank factorization). Set r_0 := rank F_0, with rank factorization F_0 = −a_0 b_0′ and 0 < r_0 < p;⁶ for j = 1, 2, …, set r_j := rank P_{a_{[j]}⊥} F_{j,1} P_{b_{[j]}⊥}, with rank factorization P_{a_{[j]}⊥} F_{j,1} P_{b_{[j]}⊥} = −a_j b_j′, where

a_{[j]} := (a_0, …, a_{j−1}),  b_{[j]} := (b_0, …, b_{j−1}),   (3.3)

P_x denotes the orthogonal projection on the space spanned by the columns of x, and the matrices F_{j,1}, together with the companion quantities F_{h+1,n} and H_{j+1,n}, h = 1, 2, …, n = 0, 1, …, are defined from the coefficients F_n in (3.1) through the recursions in Franchi and Paruolo (2016). The sequence (r_j, a_j, b_j)_{j≥0} is called the extended local rank factorization (ELRF) of F(z) at z = 1.

⁶ The case r_0 = 0 is excluded because otherwise one could re-define F(z) by factorizing (1 − z)^s out of (3.1) for some positive s. The case r_0 = p is also excluded because it would imply F(1) nonsingular, in which case the inversion formula (1.1) would apply.
Remark 3.2 (Bases with orthogonal blocks). Observe that a_h and a_j, h ≠ j, are orthogonal matrices, a_h′a_j = 0; similarly this holds for b_h and b_j, h ≠ j. Moreover, for some j = 1, 2, …, it is possible that P_{a_{[j]}⊥} F_{j,1} P_{b_{[j]}⊥} = 0, i.e. r_j = 0 and a_j = b_j = 0. In this case, one needs to exclude a_j, b_j from a_{[j+1]}, b_{[j+1]} in (3.3). Note also that, as j increases, the spaces spanned by a_{[j]} and by b_{[j]} are nondecreasing, and eventually coincide with ℝ^p for some j = s; for all subsequent values of j, j > s, P_{a_{[j]}⊥} F_{j,1} P_{b_{[j]}⊥} is equal to 0, because the orthogonal complements a_{[j]⊥} and b_{[j]⊥} have dimension 0, and hence all subsequent a_j, b_j are equal to 0. Thus there exists an integer s such that r_s > 0 and r_j = 0 for j > s, and a_{[s+1]} = (a_0, …, a_s) and b_{[s+1]} = (b_0, …, b_s) are p × p nonsingular matrices with (nonzero) orthogonal blocks.
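The mechanics just described (project on the orthogonal complements of the previously found blocks, rank-factorize, stop once the spans fill ℝ^p) can be sketched as follows; the inputs M_j stand in for the derived coefficients F_{j,1}, whose exact recursion from the F_n is given in Franchi and Paruolo (2016), so this is a structural illustration rather than a full ELRF implementation:

```python
import numpy as np

def perp(X):
    """Orthonormal basis of the orthogonal complement of col(X)."""
    p, r = X.shape
    if r == 0:
        return np.eye(p)
    return np.linalg.svd(X, full_matrices=True)[0][:, r:]

def orthogonal_blocks(M_seq, tol=1e-10):
    """Sketch of the iteration behind Remark 3.2: at step j, rank-factorize
    P_{A_perp} M_j P_{B_perp}, where A and B stack the previously found
    (mutually orthogonal) blocks; stop once the spans of A and B fill R^p."""
    p = M_seq[0].shape[0]
    A, B, ranks = np.zeros((p, 0)), np.zeros((p, 0)), []
    for M in M_seq:
        Ap, Bp = perp(A), perp(B)
        if Ap.shape[1] == 0:
            break                          # spans coincide with R^p: done
        N = Ap @ (Ap.T @ M @ Bp) @ Bp.T    # P_{A_perp} M P_{B_perp}
        U, s, Vt = np.linalg.svd(N)
        r_j = int((s > tol).sum())
        ranks.append(r_j)
        A = np.hstack([A, U[:, :r_j]])     # new block, orthogonal to A
        B = np.hstack([B, Vt[:r_j, :].T])  # new block, orthogonal to B
    return ranks, A, B
```

By construction each new block lies in the current orthogonal complement, which is exactly the "bases with orthogonal blocks" property of the remark.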
The next theorem states that the integer s such that r_s > 0 and r_j = 0 for all j > s in Remark 3.2 is precisely the order m of the pole of F(z)^{−1} at z = 1; moreover, it provides a recursion for the Laurent coefficients, see (3.8), which generalizes formula (1.1) to the singular case.
Theorem 3.3 (POLE(m) condition and Laurent coefficients). A necessary and sufficient condition for F(z) to have an inverse with a pole of order m = 1, 2, … at z = 1 is that r_j < r_j^max for j = 0, …, m − 1 and r_m = r_m^max, where r_j^max := p − Σ_{i=0}^{j−1} r_i. Moreover, the Laurent coefficients {C_n}_{n=0}^∞ satisfy C_0 = H_0 and

C_n = H_n + Σ_{k=1}^n K_k C_{n−k} for n = 1, …, m,  C_n = Σ_{k=1}^n K_k C_{n−k} for n = m + 1, m + 2, ….   (3.8)

Observe that, because rank P_{a_{[j]}⊥} F_{j,1} P_{b_{[j]}⊥} = rank a_{[j]⊥}′ F_{j,1} b_{[j]⊥}, one has r_j = rank a_{[j]⊥}′ F_{j,1} b_{[j]⊥}; hence m = 1 if and only if r_0 < p and r_1 = r_1^max. This corresponds to the condition in Howlett (1982, Theorem 3) and to the I(1) condition in Johansen (1991, Theorem 4.1). Similarly, one has m = 2 if and only if r_1 < r_1^max and r_2 = r_2^max, which corresponds to the I(2) condition in Johansen (1992, Theorem 3). Theorem 3.3 is thus a generalization of Johansen's I(1) and I(2) conditions and shows that, in order to have a pole of order m in the inverse, one needs m + 1 rank conditions on F(z): the first ones, for j = 0, …, m − 1, are reduced rank conditions, r_j < r_j^max, which establish that the order of the pole is greater than j; the last one is a full rank condition, r_m = r_m^max, which establishes that the order of the pole is exactly equal to m. These requirements make up the POLE(m) condition.
Theorem 3.3 provides in (3.8) a generalization of formula (1.1) to the singular case, giving a recursive expression of C_n in (3.2) in terms of the output of the ELRF. Equation (A.18) in the proof, see Appendix A, shows that H_n can be simplified as H_n = −b̄_{m−n} ā′_{m−n} + Σ_{j=m−n+1}^m b̄_j ā′_j H_{j+1,n} for n = 0, 1, …, m. The additive term H_n in (3.8), which is absent in the nonsingular case, see (1.1), is present only for the steps n = 1, …, m in (3.8) and then disappears. After m + 1 steps, the two formulae are identical, except for the definition of K_k, which involves the inverse of F_0 in the nonsingular case and the Moore-Penrose inverse of a_j b′_j, b_j a′_j in the singular case; see e.g. Theorem 5, p. 48, in Ben-Israel and Greville (2003) on Moore-Penrose inverses.
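For the leading case m = 1, the pair of rank conditions just described can be checked directly; the function below is ours, testing reduced rank of F_0 = F(1) together with full rank of a_{0⊥}′F_1 b_{0⊥}, which for a VAR in error-correction form reduces to Johansen's I(1) condition α⊥′Γβ⊥ of full rank:

```python
import numpy as np

def is_pole_order_one(F0, F1, tol=1e-10):
    """POLE(1) check for F(z) = F0 + F1 (1 - z) + ... :
    (i) reduced rank r0 = rank F0 < p, and
    (ii) a0_perp' F1 b0_perp of full rank p - r0.
    Only the column/row spaces of F0 matter, so orthonormal bases
    from the SVD are used in place of an explicit factorization."""
    p = F0.shape[0]
    U, s, Vt = np.linalg.svd(F0)
    r0 = int((s > tol).sum())
    if r0 == p:
        return False            # F(1) nonsingular: no pole at z = 1
    a_perp = U[:, r0:]          # basis of col(F0)^perp = col(a0)^perp
    b_perp = Vt[r0:, :].T       # basis of row(F0)^perp = col(b0)^perp
    M = a_perp.T @ F1 @ b_perp
    return np.linalg.matrix_rank(M, tol) == p - r0
```

In the test case a VAR(1) X_t = (I + Π)X_{t−1} + ε_t is used, for which F(z) = I − (I + Π)z gives F_0 = −Π and F_1 = I + Π, and the condition collapses to α⊥′β⊥ ≠ 0.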
Finally, consider the local Smith factorization of F(z) at z = 1, see Gohberg et al. (1993), i.e. the factorization F(z) = E(z)D(z)H(z), where D(z) = diag((1 − z)^{κ_h})_{h=1,…,p} is uniquely defined and contains the partial multiplicities κ_1 ≤ ⋯ ≤ κ_p of F(z) at 1, and E(z), H(z) are analytic and invertible in a neighborhood of z = 1 and are nonunique. D(z) and E(z), H(z) are respectively called the local Smith form and an extended canonical system of root functions of F(z) at 1. Theorem 3.5 provides two constructions of the local Smith factorization in terms of the output of the ELRF.
Theorem 3.5 (Local Smith factorization). The ELRF delivers a local Smith factorization F(z) = E(z)D(z)H(z) at z = 1 in two alternative forms: one in terms of an extended canonical system of left root functions, see (3.14), and one in terms of right root functions, see (3.15); the local Smith form is D(z) = diag((1 − z)^{κ_h})_{h=1,…,p}, see (3.13), where the distinct partial multiplicities are the values of j with r_j > 0, each repeated r_j times.

In what follows, every statement concerning a_j or b_j implicitly assumes that they are nonzero, i.e. that r_j > 0. The modifications required in the case r_j = 0 are straightforward.
Theorem 3.5 shows that the ELRF fully characterizes the elements of the local Smith factorization of F(z) at 1. In fact, the values of j with r_j > 0 in the ELRF provide the distinct partial multiplicities of F(z) at 1, and r_j gives the number of partial multiplicities that are equal to a given j; this characterizes the local Smith form D(z). Moreover, the ELRF also provides two constructions of an extended canonical system of root functions.
Remark that the jth block of rows in (3.14) can be written as

φ_j(z)′ F(z) = (1 − z)^j γ_j(z)′,

where γ_j(1)′ = b′_j and φ_j(1)′ = −ā′_j have full row rank. This shows that the φ_j(z)′ are r_j left root functions of order j of F(z) and that the γ_j(z)′ are r_j left root functions of order m − j of C(z). As shown in Theorems 4.1 and 4.3, the concept of cointegrating relation coincides with that of left root function, and its order of integration is equal to the corresponding entry in the local Smith form D(z), i.e. to the corresponding partial multiplicity.
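The pairing between left root functions of F(z) and of C(z) noted above follows in one line from C(z) := (1 − z)^m F(z)^{−1} in (3.2); this is a sketch, assuming the block-row relation takes the form φ_j(z)′F(z) = (1 − z)^j γ_j(z)′:

```latex
\gamma_j(z)'\, C(z)
  = (1-z)^{-j}\,\varphi_j(z)'\,F(z)\,C(z)   % substitute \gamma_j(z)' from the block-row relation
  = (1-z)^{-j}\,\varphi_j(z)'\,(1-z)^{m} I  % since F(z)\,C(z) = (1-z)^{m} I
  = (1-z)^{m-j}\,\varphi_j(z)'.
```

Hence r_j left root functions of order j of F(z) correspond to r_j left root functions of order m − j of C(z), as stated.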
Similarly, observe that the jth block of columns in (3.15) can be written as

F(z) ψ_j(z) = (1 − z)^j π_j(z),

where ψ_j(1) = −b̄_j and π_j(1) = a_j have full column rank. That is, the ψ_j(z) are r_j right root functions of order j of F(z) and the π_j(z) are r_j right root functions of order m − j of C(z). This fact will be used when discussing deterministic terms in Theorem 4.8.

Common Trends, cointegration, and Triangular representations
This section contains the novel representation results; these include the explicit expressions of the matrix coefficients of the (polynomial) cointegrating relations, of the Common Trends and Triangular representations, either starting from the MA or the AR form of an I(d) process. In particular, Section 4.1 (respectively Section 4.2) considers a generic MA (respectively AR) form and describes its cointegration properties in Theorem 4.1 (respectively Theorem 4.3) and its Triangular representation in Corollary 4.2 (respectively Corollary 4.6). This includes the Triangular representation in Stock and Watson (1993) as a special case.
Moreover, Corollaries 4.4 and 4.5 in Section 4.2 present the Granger Representation Theorem and the Johansen Representation Theorem for AR forms as special cases of Theorem 4.3. Section 4.3 considers the case with deterministic terms, Section 4.4 describes the explicit connection between the local Smith form and the Jordan structure and Section 4.5 discusses the case of noninteger d.
All the results in this section follow from Theorems 3.3 and 3.5, which thus prove to be unifying and useful tools in the representation theory of cointegrated processes.

MA forms
Consider a generic I(d) process $\Delta^d X_t = F(L)\varepsilon_t$, (4.1), with $F(z)$ analytic for all $|z| < 1+\delta$, $\delta > 0$, having roots at $z = 1$ and at $|z| > 1$. This includes finite order MAs, in which case $F(z)$ is a matrix polynomial. Applying Theorems 3.3 and 3.5 to $F(z)$ in (4.1), one obtains the following result.
Theorem 4.1 (Cointegration properties of MA processes). Write $F(z)$ in (3.1) as $F(z) = \sum_{n=0}^{d-1} F_n (1-z)^n + (1-z)^d F_d(z)$, let $Y_t := F_d(L)\varepsilon_t$ and, for $h \in \mathbb{N}$, define the $h$-fold cumulated bilateral random walk $S_{h,t} := S^h \varepsilon_t \sim I_{nc}(h)$; then the I(d) process $X_t$ in (4.1) admits the following Common Trends representation for $t \in \mathbb{Z}$: $X_t = \sum_{j=0}^{d-1} F_j S_{d-j,t} + Y_t + v(t)$, where $Y_t$ is stationary and $v(t) := \sum_{n=0}^{d-1} v_n t^n \in P_{d-1,p}(t)$, where $v_0, \dots, v_{d-1}$ depend on the initial values of $X_t, Y_t, \varepsilon_t$ for $t = -d, \dots, 0$.
Next assume that $F(z)$ in (4.1) satisfies the POLE(m) condition; then the cointegration properties of $X_t$ are fully described by the cointegrating relations $\phi_j^{(j-1)}(L)' X_t \sim I_{nc}(d-j)$, $j = 1, \dots, m$, where $\phi_j^{(j-1)}(z) = \sum_{k=0}^{j-1} \phi_{j,k} (1-z)^k$ is the truncation of order $j-1$ of the left root functions $\phi_j(z)$ in (3.9). Additionally, defining $\bar{\Phi}(z) := (\bar{a}_0, \phi_1^{(0)}(z), \dots, \phi_m^{(m-1)}(z))'$, one has $\Lambda(L)\bar{\Phi}(L) X_t \sim I_{nc}(0)$, $|\bar{\Phi}(1)| \neq 0$, (4.4), where $\Lambda(z)$ is the local Smith form of $F(z)$, see (3.13), and $\bar{\Phi}(z)$ is a truncation of the extended canonical system of left root functions $\Phi(z)$ in (3.10). Moreover, the initial values can be chosen so that $v(t)$ does not appear in (4.4).
Note that the cointegrating relations coincide with the truncated left root functions of $F(z)$, in this case chosen as $\phi_j(z)$, that the order of integration of a cointegrating relation is equal to the corresponding partial multiplicity, and that the cointegration structure of $X_t$ coincides with the truncation of an extended canonical system of left root functions of $F(z)$, in this case chosen as $\Phi(z)$.
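As a small numerical illustration of the inversion behind Theorem 4.1, the following sketch (in Python with sympy) checks the POLE(1) condition on a toy bivariate MA polynomial and exhibits a left root function, i.e. a cointegrating vector; the matrix $F(z)$ below is a made-up example, not taken from the paper.

```python
import sympy as sp

z = sp.symbols('z')

# Toy 2x2 MA polynomial (made up for illustration): Delta^d X_t = F(L) eps_t
# with F(1) singular of rank 1, so that the POLE(1) condition holds (m = 1).
F = sp.Matrix([[1, 0],
               [1, 1 - z]])

# det F(z) has a simple zero at z = 1: F(z)^{-1} has a pole of order m = 1.
detF = sp.factor(F.det())            # (1 - z)

# C(z) := (1 - z) F(z)^{-1} is analytic at z = 1 with C(1) != 0.
C = ((1 - z) * F.inv()).applyfunc(sp.simplify)
C1 = C.subs(z, 1)

# phi' = (1, -1) is a left root function of order 1 of F(z):
# phi' F(z) = (1 - z) * (0, -1), so phi' is a cointegrating vector.
phi = sp.Matrix([[1, -1]])
quotient = ((phi * F) / (1 - z)).applyfunc(sp.simplify)
assert quotient == sp.Matrix([[0, -1]])
```

The same computation with a higher-order zero of $\det F(z)$ at $z = 1$ would produce a pole of higher order $m$ and truncated polynomial cointegrating relations.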
The previous theorem leads to a Generalized Triangular representation, as shown in the following corollary.
Corollary 4.2 (Triangular representation of MA processes). Let $X_t$ in (4.1) satisfy the POLE(m) condition on $F(z)$; then $X_t$ admits the Generalized Triangular representation $\bar{a}_0' X_t \sim I_{nc}(d)$, $\phi_j^{(j-1)}(L)' X_t \sim I_{nc}(d-j)$, $j = 1, \dots, m$, which reduces to the Triangular representation in Eq. (3.2) of Stock and Watson (1993) in the special case $m = d$.
Observe that the order of integration $d$ of $X_t$ is not affected by the structure of $F(z)$, and hence by the order $m$ of the pole of $F(z)^{-1}$. On the other hand, the cointegration properties of $X_t$ do not depend on the order of integration $d$ but on the order $m$ of the pole, which is associated with the structure of $F(z)$.
For example, an I(d) process with $m = 1$ admits the Generalized Triangular representation $\bar{a}_0' X_t \sim I_{nc}(d)$, $\bar{a}_1' X_t \sim I_{nc}(d-1)$, where $(\bar{a}_0, \bar{a}_1)$ is a block orthogonal basis of $\mathbb{R}^p$; this fully describes the cointegration properties of $X_t \sim I(d)$ and shows that no polynomial cointegration arises even though the order of integration is greater than one.
On the other hand, an I(1) process with generic $m$ admits a Generalized Triangular representation in which cointegrating relations occur in the directions $\bar{a}_j$, $j \neq 0$, and, if $j > 1$, they require cumulation of $X_t$ in order to obtain an $I_{nc}(0)$ process on the r.h.s. In fact, $(\bar{a}_0, \dots, \bar{a}_m)$ is a block orthogonal basis of $\mathbb{R}^p$ and one has $\bar{a}_0' X_t \sim I_{nc}(1)$, $\bar{a}_1' X_t \sim I_{nc}(0)$, $\bar{a}_2' X_t - \phi_{2,1}' \Delta X_t \sim I_{nc}(-1)$, and so on until $\bar{a}_m' X_t - \sum_{k=1}^{m-1} \phi_{m,k}' \Delta^k X_t \sim I_{nc}(1-m)$. In general, Corollary 4.2 shows that the cointegrating relations involve $\Delta^j X_t$ for $j = d-m, \dots, d-1$, and some of these powers may be negative because $m$ can be greater than $d$; in this case $\Delta^j X_t$ corresponds to cumulations of $X_t$, see Definition 2.3 and Remark 2.6. While $m$ does not influence the order of integration of $X_t$, it does affect the number of differences or cumulations that enter the cointegrating relations of $X_t$ and thus determines the Generalized Triangular representation of the process.

AR forms
Consider a generic AR process $F(L) X_t = \varepsilon_t$, $F_0 \neq 0$, $|F_0| = 0$, (4.5), with $F(z)$ analytic for all $|z| < 1+\delta$, $\delta > 0$, having roots at $z = 1$ and at $|z| > 1$. This includes finite order ARs, in which case $F(z)$ is a matrix polynomial and hence analytic for all $z \in \mathbb{C}$. One can then apply Theorems 3.3 and 3.5 to $F(z)$ in (4.5), obtaining the following result.
Theorem 4.3 (Cointegration properties of AR processes). The AR process $X_t$ in (4.5) is I(d), $d = m$, if and only if the POLE(m) condition applies to $F(z)$. Write $C(z)$ in (3.2) as $C(z) = \sum_{n=0}^{d-1} C_n (1-z)^n + (1-z)^d C_d(z)$ and let $Y_t := C_d(L)\varepsilon_t$; then $X_t$ admits the following Common Trends representation: $X_t = \sum_{j=0}^{d-1} C_j S_{d-j,t} + Y_t + v(t)$, where $S_{h,t} := S^h \varepsilon_t$, $Y_t$ is I(0), and $v(t) := \sum_{n=0}^{d-1} v_n t^n \in P_{d-1,p}(t)$, where $v_0, \dots, v_{d-1}$ depend on the initial values of $X_t, Y_t, \varepsilon_t$ for $t = -d, \dots, 0$. The cointegration properties of $X_t$ are fully described by the cointegrating relations $\gamma_j^{(m-j-1)}(L)' X_t \sim I_{nc}(j)$, $j = 0, \dots, m-1$, where $\gamma_j^{(m-j-1)}(z) = \sum_{k=0}^{m-j-1} \gamma_{j,k} (1-z)^k$ is the truncation of order $m-j-1$ of the left root functions $\gamma_j(z)$ in (3.9). Additionally, defining $\bar{\Gamma}(z) := (\gamma_0^{(m-1)}(z), \dots, \gamma_{m-1}^{(0)}(z), \bar{b}_m)'$, one has $\Lambda(L)\bar{\Gamma}(L) X_t \sim I_{nc}(0)$, $|\bar{\Gamma}(1)| \neq 0$, (4.8), where $\Lambda(z)$ is the local Smith form of $F(z)$, see (3.13), and $\bar{\Gamma}(z)$ is a truncation of the extended canonical system of left root functions $\Gamma(z)$ in (3.10). Moreover, the initial values can be chosen so that $v(t)$ does not appear in (4.8).
Note that (i) the cointegrating relations coincide with the truncated left root functions of $C(z)$, in this case chosen as $\gamma_j(z)$, (ii) the order of integration of a cointegrating relation is equal to the corresponding partial multiplicity, and (iii) the cointegration structure of $X_t$ coincides with the truncation of an extended canonical system of left root functions of $C(z)$, in this case chosen as $\Gamma(z)$.
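The AR side of the theorem can be illustrated on a toy error-correction model; the following sketch (Python/sympy) uses a hypothetical bivariate VECM, not taken from the paper, and verifies the POLE(1) condition and the reduced rank of $C(1)$ from Corollary 4.4.

```python
import sympy as sp

z = sp.symbols('z')

# Hypothetical bivariate VECM (made-up coefficients):
# Delta X_t = alpha beta' X_{t-1} + eps_t, i.e. F(z) = (1 - z) I - z alpha beta'.
alpha = sp.Matrix([sp.Rational(-1, 2), 0])
beta = sp.Matrix([1, 0])
F = (1 - z) * sp.eye(2) - z * alpha * beta.T

# det F(z) vanishes at z = 1 (simple zero) and z = 2 (outside the unit circle),
# so the POLE(1) condition holds and X_t is I(1), as in Corollary 4.4.
roots = sp.solve(F.det(), z)
assert set(roots) == {1, 2}

# C(z) = (1 - z) F(z)^{-1} is analytic at z = 1 with C(1) of reduced rank;
# beta' C(1) = 0, so beta' X_t is the cointegrating relation.
C = ((1 - z) * F.inv()).applyfunc(sp.simplify)
C1 = C.subs(z, 1)
assert beta.T * C1 == sp.zeros(1, 2)
```

In the Granger Representation Theorem language, $C(1)$ spans the common-trend directions while its left null space contains the cointegrating vectors.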
Setting $m = 1$ in Theorem 4.3 one finds Theorem 4.2 in Johansen (1996), as reported in the following corollary.
Corollary 4.4 (Cointegration properties of I(1) AR processes). The AR process $X_t$ in (4.5) is I(1) if and only if the POLE(1) condition applies to $F(z)$. Write $C(z)$ in (3.2) as $C(z) = C_0 + (1-z) C_1(z)$ and let $Y_t := C_1(L)\varepsilon_t$; then $X_t$ admits the following Common Trends representation: $X_t = C_0 S_{1,t} + Y_t + v_0$, where $Y_t$ is I(0) and $v_0$ depends on the initial values of $X_t, Y_t, \varepsilon_t$ for $t = -1, 0$. The cointegration properties of $X_t$ are fully described by the cointegrating relations $b_0' X_t \sim I_{nc}(0)$, (4.9), and the initial values can be chosen so that $v_0$ does not appear in (4.9).
Similarly, setting $m = 2$ in Theorem 4.3 one finds Theorem 4.6 in Johansen (1996), as reported in the following corollary.
Corollary 4.5 (Cointegration properties of I(2) AR processes). The AR process $X_t$ in (4.5) is I(2) if and only if the POLE(2) condition applies to $F(z)$. Write $C(z)$ in (3.2) as $C(z) = C_0 + C_1 (1-z) + (1-z)^2 C_2(z)$ and let $Y_t := C_2(L)\varepsilon_t$; then $X_t$ admits the following Common Trends representation: $X_t = C_0 S_{2,t} + C_1 S_{1,t} + Y_t + v_0 + v_1 t$, where $Y_t$ is I(0) and $v_0, v_1$ depend on the initial values of $X_t, Y_t, \varepsilon_t$ for $t = -2, -1, 0$. The cointegration properties of $X_t$ are fully described by the cointegrating relations $\gamma_0^{(1)}(L)' X_t \sim I_{nc}(0)$ and $b_1' X_t \sim I_{nc}(1)$, (4.10), and the initial values can be chosen so that $v_0, v_1$ do not appear in (4.10).

Theorem 4.3 leads to a Triangular representation, as shown in the following corollary.
Corollary 4.6 (Triangular representation of AR processes). Let $X_t$ in (4.5) satisfy the POLE(m) condition on $F(z)$; then $X_t$ is I(d) with $d = m$ and it admits the Triangular representation, which coincides with the one in Eq. (3.2) of Stock and Watson (1993).
Note that in the AR case, differently from the MA case, the cointegrating relations do not involve cumulations of X t but exclusively differences.
Comparing the cointegration properties of MA and AR processes in Theorems 4.1 and 4.3, one sees that the two extended canonical systems of left root functions $\Phi(z)$ and $\Gamma(z)$ in Definition 3.4 play a symmetric role; the first one is used when starting from an MA form, and the second one when starting from an AR form. Moreover, the order of integration of a cointegrating relation is equal to the corresponding entry in the local Smith form $\Lambda(z)$ in Definition 3.4.
Remark 4.7 (Left root functions and cointegrating relations). These results show that (i) the concept of cointegrating relation coincides with that of (truncated) left root function, (ii) the order of integration of a cointegrating relation is equal to the corresponding partial multiplicity, and (iii) the cointegration structure is fully described by an extended canonical system of left root functions, see panels (a-c) in Table 1.

Deterministic terms
This section extends Theorems 4.1 and 4.3 to the case in which deterministic terms $\mu_t$ are added to (4.1) or (4.5), as in (4.11) and (4.12), where $\mu_t$ is in the class $P_{u,p}(t)$ of $p$-vector polynomials of order $u$ in $t$. The generic polynomial $\mu_t := \sum_{n=0}^{u} c_n t^n \in P_{u,p}(t)$ can be represented as $\mu_t = a(L) t^u$, where $a(L) := \sum_{n=0}^{u} a_n (1-L)^n$ is a $p \times 1$ vector polynomial; this is because $\Delta t^u = t^u - (t-1)^u \in P_{u-1}(t)$, $\Delta^j t^u \in P_{u-j}(t)$ for $j \le u$ and $\Delta^{u+1} t^u = 0$, see Lemma A.2. Hence one has $\mu_t := \sum_{n=0}^{u} c_n t^n = a(L) t^u$, $a(L) := \sum_{n=0}^{u} a_n (1-L)^n$, $a_0 = c_u \neq 0$. (4.13) In the MA case (4.11), applying the $S^d$ operator to both sides of (4.11) as in Theorem 4.1, one finds that $X_t$ includes the term $F(L) S^d \mu_t$, which in general is an element of $P_{u+d,p}(t)$, i.e. a deterministic $p \times 1$ vector polynomial of order $u+d$. In the AR case (4.12), the inverse of $F(z)$ is $(1-z)^{-m} C(z)$ and, setting $d = m$, one obtains the equation $\Delta^d X_t = C(L)(\varepsilon_t + \mu_t)$. By the same reasoning as in the MA case, $X_t$ hence includes the term $C(L) S^d \mu_t$, which in general is also an element of $P_{u+d,p}(t)$.
This general rule applies unless there are cancelations in the leading terms of $F(L) S^d \mu_t$ or $C(L) S^d \mu_t$, i.e. in the coefficients of the highest powers of $t$. The highest order trend $t^{d+u}$ is loaded into $X_t$ by $F_0 a_0$ in the MA case and by $C_0 a_0$ in the AR case. Given that both $F_0$ and $C_0$ have reduced rank, see Theorems 4.1 and 4.3, one can have cancelations of these coefficients for appropriate choices of $a_0$. Similarly, one can have cancelations of more coefficients, and hence a deterministic polynomial of lower order, for appropriate choices of the $a_n$ coefficients in (4.13).
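The cancelation mechanism can be made concrete in a few lines; the following sketch (Python/sympy; the matrix $C(z)$ is a made-up example, not from the paper) checks which constant terms $a_0$ load the highest order trend through $C_0 a_0$.

```python
import sympy as sp

z = sp.symbols('z')

# Made-up C(z) of an I(1) AR model (d = m = 1): C_0 = C(1) = diag(0, 1)
# has reduced rank, as in Theorem 4.3.
C = sp.diag(2 * (1 - z) / (2 - z), 1)
C0 = C.subs(z, 1)

# With a constant mu_t = a_0 (so u = 0), the deterministic part of X_t is
# C(L) S mu_t, whose leading term t^{d+u} = t is loaded by C_0 a_0.
a0_keep = sp.Matrix([0, 1])      # C_0 a_0 != 0: the linear trend survives
a0_drop = sp.Matrix([1, 0])      # C_0 a_0 = 0: the trend order is reduced

assert C0 * a0_keep != sp.zeros(2, 1)
assert C0 * a0_drop == sp.zeros(2, 1)
```

The direction `a0_drop` lies in the right null space of $C_0$, which is exactly the situation characterized in general by the right root functions below.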
The contribution of this section is to describe the conditions on $\mu_t$ that give rise to reductions in the order $u+d$ of the polynomial trend. In particular, it is shown that the reduction in the order of the trend is at most equal to $m$, the order of the pole of $F(z)^{-1}$ at 1. In the analysis, the extended canonical systems of right root functions, chosen here as $\Psi(z)$ and $\Pi(z)$ in Definition 3.4, play a central role; the former is used when starting from an MA form and the latter when starting from an AR form. This is the dual of the role played by the extended canonical systems of left root functions, chosen above as $\Phi(z)$ and $\Gamma(z)$, which were used in Sections 4.1 and 4.2 to characterize the cointegrating relations.
Theorem 4.8 (Cointegration properties with deterministic terms). Let $X_t \sim I(d)$ satisfy (4.11) or (4.12), where $\mu_t$ is defined in (4.13), let $\psi_j(z), \pi_j(z)$ be as in (3.11) and define $\psi_{j:m}(z) := (\psi_j(z), \dots, \psi_m(z))$, $\pi_{0:j}(z) := (\pi_0(z), \dots, \pi_j(z))$; (4.14) finally, let $j$ be a fixed integer in the range $0 \le j \le m$. Then: (MA) A necessary and sufficient condition for (i.1) $X_t$ in (4.11) to contain trends in the class $P_{q,p}(t)$ of order $q := d+u-j$, and (i.2) $\phi_h(L)' X_t \sim I_{nc}(d-h)$ to contain trends in the class $P_{s_h,r_h}(t)$ of order $s_h \le d+u-h$ for $j < h \le m$ and $s_j = d+u-j$ for $h = j$, is that $a(z) = \psi_{j:m}^{(u)}(z) \upsilon$, (4.15), where $\upsilon := (\upsilon_j', \dots, \upsilon_m')'$, $\upsilon_j \neq 0$, is partitioned conformably with the block of right root functions $\psi_{j:m}(z)$ in (4.14) and $\psi_{j:m}^{(u)}(z)$ is the truncation of order $u$ of $\psi_{j:m}(z)$. (AR) Similarly, a necessary and sufficient condition for (ii.1) $X_t$ in (4.12) to contain trends in the class $P_{q,p}(t)$ of order $q := u+j$, and (ii.2) $\gamma_h(L)' X_t \sim I_{nc}(h)$ to contain trends in the class $P_{s_h,r_h}(t)$ of order $s_h \le u+h$ for $0 \le h < j$ and $s_j = u+j$ for $h = j$, is that $a(z) = \pi_{0:j}^{(u)}(z) \upsilon$, (4.16), where $\upsilon := (\upsilon_0', \dots, \upsilon_j')'$, $\upsilon_j \neq 0$, is partitioned conformably with the block of right root functions $\pi_{0:j}(z)$ in (4.14) and $\pi_{0:j}^{(u)}(z)$ is the truncation of order $u$ of $\pi_{0:j}(z)$.
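Condition (4.16) can be illustrated on a toy $C(z)$ (Python/sympy; the matrix is hypothetical, not from the paper): a constant direction annihilated by $C(1)$ is a right root function of order 1, so, with $m = 1$ and $u = 0$, it realizes the case $j = 0$ and the trend order drops from $u + d = 1$ to $q = u + j = 0$.

```python
import sympy as sp

z = sp.symbols('z')

# Hypothetical C(z) of an I(1) AR model (m = 1), with C(1) of reduced rank.
C = sp.Matrix([[1 - z, 0],
               [-1, 1]])

# upsilon spans the right null space of C(1): C(z) upsilon = (1 - z) b(z)
# with b(1) != 0, i.e. a(z) = upsilon is a right root function of order
# m - j = 1 of C(z), the case j = 0, u = 0 of condition (4.16).
upsilon = sp.Matrix([1, 1])
prod = (C * upsilon).applyfunc(sp.expand)     # equals (1 - z, 0)'
b = (prod / (1 - z)).applyfunc(sp.simplify)
assert b.subs(z, 1) == sp.Matrix([1, 0])
```

A constant $\mu_t$ proportional to `upsilon` therefore generates no linear trend in $X_t$: the extra factor $(1-z)$ absorbs one cumulation.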
Remark 4.9 (Limited-cumulation deterministic terms and right root functions). Equations (4.15) and (4.16) characterize deterministic components that have a given controlled degree of cumulation; in Table 1, they are indicated as "limited-cumulation" deterministic terms. These results show that the structure of the deterministic terms is fully described by an extended canonical system of right root functions, see panels (a,b) in Table 1.

Jordan forms
This subsection deals with the connection with the Jordan form approach, in which the order of integration is given by the maximal size of the Jordan blocks corresponding to the eigenvalue at 1.
The following additional notation is needed here. Let $\mathcal{J} := (j : r_j > 0)$ be the ordered set that contains the $w+1 := \#\mathcal{J}$ indexes $j$ that correspond to nonzero ranks $r_j$. Indicate the elements of $\mathcal{J}$ by $(j_1, j_2, \dots, j_{w+1})$ and fix the reverse ordering $m = j_1 > j_2 > \cdots > j_w > j_{w+1} = 0$. Next let $\mathcal{J}_+$ be the ordered set that contains only the positive elements of $\mathcal{J}$, i.e. $\mathcal{J}_+ := \mathcal{J} \setminus \{0\} = (j_1, j_2, \dots, j_w)$. Note that the index set $\mathcal{J}_+$ contains at least one element (equal to $m$) and at most $m$ elements, $\mathcal{J}_+ = (m, m-1, \dots, 1)$, and hence $1 \le w \le m$. Finally, let $\mathcal{K}$ be the ordered set that contains each $j \in \mathcal{J}_+$ repeated $r_j$ times, and indicate its elements by $(k_1, k_2, \dots, k_{p-r_0}) := \mathcal{K}$, i.e. $k_1 = \cdots = k_{r_{j_1}} = j_1$, $k_{r_{j_1}+1} = \cdots = k_{r_{j_1}+r_{j_2}} = j_2$, and so on, until $k_{\sum_{i=1}^{w-1} r_{j_i}+1} = \cdots = k_{p-r_0} = j_w$. Note that the index set $\mathcal{K}$ contains $\sum_{j \in \mathcal{J}_+} r_j = p - r_0$ elements. In the following, $\operatorname{diag}(a_j)_{j \in \mathcal{J}_+}$ indicates a block diagonal matrix with $a_{j_1}, \dots, a_{j_w}$ on the main diagonal.
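The construction of $\mathcal{J}_+$ and $\mathcal{K}$ can be sketched in a few lines of Python (the ranks below are a made-up example):

```python
def index_sets(r):
    """Given r: dict j -> r_j, return J_plus (positive j with r_j > 0,
    in decreasing order) and K (each j in J_plus repeated r_j times)."""
    J_plus = sorted((j for j, rj in r.items() if j > 0 and rj > 0), reverse=True)
    K = [j for j in J_plus for _ in range(r[j])]
    return J_plus, K

# Example with p = 5, r_0 = 1, r_1 = 2, r_2 = 0, r_3 = 2 (so m = 3, w = 2):
r = {0: 1, 1: 2, 2: 0, 3: 2}
J_plus, K = index_sets(r)
assert J_plus == [3, 1]
assert K == [3, 3, 1, 1]          # contains p - r_0 = 4 elements
```

Each entry of $\mathcal{K}$ will become the length of one Jordan chain in Theorem 4.10.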
Given the extended canonical system of left root functions $\Phi(z)$ in (3.10) and the index set $\mathcal{K}$, one can construct a Jordan pair of $F(z)$ at $z = 1$ as follows. (Footnote 8: similar results apply to $\Gamma(z)$, $\Psi(z)$ and $\Pi(z)$.)

Theorem 4.10 (Jordan pair at $z = 1$). Let $u_{i,n}$ be the $i$th column of $\Phi_n$ in the extended canonical system of left root functions $\Phi(z) = \sum_{n=0}^{\infty} \Phi_n (1-z)^n$ in (3.10), and let $k_i$ be the $i$th element in the index set $\mathcal{K}$; for $i = 1, \dots, p-r_0$, define $U_i := (u_{i,0}, \dots, u_{i,k_i-1})$ and the Jordan block $J_{k_i}$ with eigenvalue 1, respectively of dimension $p \times k_i$ and $k_i \times k_i$. Then the columns of $U_i$ form a Jordan chain of maximal length $k_i$ and $J_{k_i}$ is the corresponding Jordan block. Collecting the Jordan chains and the Jordan blocks respectively in $U := (U_1, \dots, U_{p-r_0})$ and $J := \operatorname{diag}(J_{k_1}, \dots, J_{k_{p-r_0}})$,
one has that $(U, J)$ is a Jordan pair of $F(z)$ at $z = 1$. This theorem contains the results in d'Autume (1992), Archontakis (1998), and Bauer and Wagner (2012) as special cases. In fact, take for example the companion matrix of an AR process; the Jordan blocks of this companion matrix corresponding to the eigenvalue at 1 are collected in the matrix $J$ in Theorem 4.10; this follows, e.g., from Corollary 1.21 in Gohberg et al. (1982). Hence the characterization of the order of integration as the maximal size of the Jordan blocks of the companion matrix corresponding to the eigenvalue at 1 is easily obtained from the ELRF.
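The Jordan characterization of the order of integration is easy to verify numerically on toy companion matrices (Python/sympy; the matrices below are made-up examples, not from the paper):

```python
import sympy as sp

# Companion matrix of a hypothetical VAR(1), X_t = A X_{t-1} + eps_t.
# A size-2 Jordan block at eigenvalue 1 corresponds to an I(2) process.
A = sp.Matrix([[1, 1],
               [0, 1]])
_, J = A.jordan_form()
blocks = [b.shape[0] for b in J.get_diag_blocks() if b[0, 0] == 1]
assert max(blocks) == 2           # maximal block size at 1, i.e. d = 2

# By contrast, B = diag(1, 1/2) has a size-1 block at 1: an I(1) process.
B = sp.diag(1, sp.Rational(1, 2))
_, JB = B.jordan_form()
blocks_B = [b.shape[0] for b in JB.get_diag_blocks() if b[0, 0] == 1]
assert max(blocks_B) == 1
```

The ELRF delivers the same block sizes directly from the ranks $r_j$, without computing a Jordan decomposition.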

Fractional integration orders
The present results also apply to the case of noninteger $d$ of the ARFIMA type. This can be seen by choosing $d \in \mathbb{R}$ in the MA form (4.1), or by replacing $X_t$ with $(1-L)^s X_t$, $s \in \mathbb{R}$, in the AR form (4.5), i.e. $F(L)(1-L)^s X_t = \varepsilon_t$, $s \in \mathbb{R}$. The present analysis also applies to the class of fractionally integrated processes defined in Johansen (2008a,b), see Eq. (3.1) in Franchi (2010). In fact, one can replace $L$ with $L_b := 1 - (1-L)^b$, $b \in \mathbb{R}$, and consider the fractional version of (4.5).
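The fractional difference $(1-L)^b$ is a concrete object: its lag weights follow a simple binomial recursion, sketched below in Python (a standard construction, not specific to the paper).

```python
# Coefficients of the fractional difference (1 - L)^b = sum_k w_k L^k,
# via the standard recursion w_0 = 1, w_k = w_{k-1} (k - 1 - b) / k.
def frac_diff_weights(b, n):
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - b) / k)
    return w

# Integer b recovers the binomial weights of the usual difference: 1, -1, 0, ...
w_int = frac_diff_weights(1.0, 5)
assert w_int[:2] == [1.0, -1.0] and all(x == 0 for x in w_int[2:])

# Noninteger b gives hyperbolically decaying weights, the ARFIMA case.
w_frac = frac_diff_weights(0.4, 5)
assert abs(w_frac[1] + 0.4) < 1e-12 and abs(w_frac[2] + 0.12) < 1e-12
```

For noninteger $b$ the weights never vanish, which is why fractional processes have infinite-order difference representations.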

Conclusions
The present results show that the concepts of root functions and partial multiplicities in the local spectral theory are central for the representation theory of cointegrated systems. In particular, the concept of cointegrating relation coincides with that of left root function and the order of integration of a cointegrating relation is equal to the corresponding partial multiplicity. Moreover, the impact of deterministic terms on the process is shown to be determined by the characteristics of right root functions and the corresponding partial multiplicities.
The general inversion results deliver both the left and right extended canonical systems of root functions and the partial multiplicities as recursive expressions in the coefficients of the matrix function to be inverted. The inversion theorem is based on the ELRF, which consists in performing a finite sequence of rank factorizations of matrices that involve the derivatives of the matrix function evaluated at the point around which the inversion is performed. The present results unify and clarify existing representation results in the literature and extend them to any integer order. The present derivations carry over to fractionally integrated processes, and they can be applied to any (stationary, unit, explosive) root, which is relevant for seasonal cointegration and common cyclical features.
These three expressions show that $\xi_n(T)$ is a polynomial in $T$ of order $n+1$, $\xi_n(T) \in P_{n+1}(T)$. Next, use (A.1), or (A.2) and (A.3), as the definition of $\xi_n(T)$ as a polynomial in $T$ of order $n+1$ for $T \in \mathbb{Z}$; then, for all $s = 1, 2, 3, \dots$, one has $\xi_n(-s) = (-1)^{n+1} \xi_n(s-1)$. (A.4)
Proof. Equations (A.1) and (A.2) follow from Anderson (1971), Exercises 5 and 6, solving for $\xi_n(T)$ and $\xi_{2q}(T)$ using the fact that $\xi_0(T) := \sum_{t=1}^{T} 1 = T$. In order to prove (A.3), note that $C$ equals $\sum_{t=2}^{T+1} t^{2q} - \sum_{t=1}^{T-1} t^{2q} = (T+1)^{2q} + T^{2q} - 1$; this implies (A.3). Formula (A.1) can be used to show that $\xi_n(T)$ is a polynomial in $T$ of order $n+1$, by induction on $n$. Start from $n = 1$, for which (A.1) gives $\xi_1(T) = ((T+1)^2 - (T+1))/2 = T(T+1)/2$. Next assume that $\xi_k(T)$ is a polynomial of order $k+1$ for $k = 1, 2, \dots, n-1$, and observe that the leading term of (A.1) for $\xi_n(T)$ is a polynomial of order $n+1$. This (or alternatively Eqs. (A.2) and (A.3) along similar lines) shows that $\xi_n(T)$ is a polynomial in $T$ of order $n+1$.
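Both claims of Lemma A.1 are easy to verify symbolically; the following sketch (Python/sympy) checks the degree of $\xi_n(T)$ and the reflection property (A.4) for small $n$.

```python
import sympy as sp

t, T, s = sp.symbols('t T s', integer=True)

# xi_n(T) := sum_{t=1}^T t^n is a polynomial in T of order n + 1 (Lemma A.1).
def xi(n):
    return sp.expand(sp.summation(t**n, (t, 1, T)))

for n in range(4):
    assert sp.degree(xi(n), T) == n + 1

# Reflection property (A.4): xi_n(-s) = (-1)^{n+1} xi_n(s - 1), here for n = 2.
p = xi(2)
assert sp.expand(p.subs(T, -s) - (-1)**3 * p.subs(T, s - 1)) == 0
```

The closed forms produced by `summation` are Faulhaber's polynomials, e.g. $\xi_2(T) = T(T+1)(2T+1)/6$.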
Further properties of $\Delta$ and $S$ are stated in the following lemma. Let here $1_{n,t} := S^n 1$ for $n = 0, 1, \dots$, where $1$ is the constant process. Note that this implies $S^n u = 1_{n,t} u$ when $u$ is a constant vector. One has $1_{1,t} = t$, and $1_{n,t}$ for $n = 1, 2, \dots$ is a polynomial in $t$ of order $n$, $1_{n,t} \in P_n(t)$. More generally, for $p_u(t) \in P_u(t)$,
$S^n p_u(t) \in P_{u+n}(t)$, (A.9)
$\Delta^s p_u(t) \in P_{u-s}(t)$, $0 < s \le u$, (A.10)
$\Delta^{u+j} p_u(t) = 0$, $j > 0$. (A.11)
Moreover, for $t \in \mathbb{Z}$ one has
$S^s \Delta^h v_t = S^{s-h} v_t - \sum_{n=s-h}^{s-1} 1_{n,t} \Delta^{h-s+n} v_0$, $0 < h \le s$. (A.12)
Taking $h = s$ in (A.12), one finds as a special case $S^s \Delta^s v_t = v_t - \sum_{n=0}^{s-1} 1_{n,t} \Delta^n v_0$. (A.13)
Proof. Linearity of $\Delta$, $S$ follows by definition. Next consider $1_{n,t}$. For $n = 1$, using definition (2.1) one finds that $1_{1,t} := S1 = t$, a polynomial of order 1 in $t$, because $S1 := 1(t \ge 1) \sum_{i=1}^{t} 1 - 1(t \le -1) \sum_{i=t+1}^{0} 1 = 1(t \ge 1)|t| - 1(t \le -1)|t| = \operatorname{sign}(t)|t| = t$. Next proceed by induction on $n$, assuming that $1_{n-1,t} \in P_{n-1}(t)$, with form $1_{n-1,t} = \sum_{i=0}^{n-1} a_i t^i$, and showing that $1_{n,t} \in P_n(t)$, where $1_{n,t} = S 1_{n-1,t} = \sum_{i=0}^{n-1} a_i S t^i$. The proof follows by showing that $S t^i \in P_{i+1}(t)$, where the order of $1_{n,t}$ comes from $S t^{n-1} \in P_n(t)$. For $t \ge 1$, $S t^i = \sum_{k=1}^{t} k^i = \xi_i(t)$, which is in $P_{i+1}(t)$ by Lemma A.1. For $t = 0$, one has $S 0 = 0$. Finally, for $t \le -1$ one has $S t^i := -\sum_{k=t+1}^{0} k^i = (-1)^{i+1} \sum_{h=1}^{|t|-1} h^i = (-1)^{i+1} \xi_i(|t|-1) = \xi_i(-|t|) = \xi_i(t)$ by (A.4). Hence $S t^i = \xi_i(t) \in P_{i+1}(t)$ for all values of $t \in \mathbb{Z}$. This completes the proof that $1_{n,t} := S^n 1 \in P_n(t)$. This derivation also shows (A.9). Direct application of the definitions implies (A.10) and (A.11). Next consider Eq. (A.12); for $0 < h \le s$ it follows by direct computation, and (A.13) is the special case $h = s$.
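The operators above can be checked numerically; the following Python sketch implements the bilateral cumulation $S$ of definition (2.1) on a finite window of $\mathbb{Z}$ and verifies that $1_{1,t} = t$ and $1_{2,t} = t(t+1)/2 \in P_2(t)$.

```python
# Bilateral cumulation operator S of (2.1) on sequences indexed by t in Z:
# (S x)_t = sum_{i=1}^t x_i for t >= 1, 0 for t = 0, -sum_{i=t+1}^0 x_i for t <= -1.
def S(x, lo, hi):
    out = {}
    for t in range(lo, hi + 1):
        if t >= 1:
            out[t] = sum(x[i] for i in range(1, t + 1))
        elif t == 0:
            out[t] = 0
        else:
            out[t] = -sum(x[i] for i in range(t + 1, 1))
    return out

lo, hi = -4, 4
one = {t: 1 for t in range(lo, hi + 1)}
s1 = S(one, lo, hi)                   # 1_{1,t} = (S 1)_t = t
assert all(s1[t] == t for t in range(lo, hi + 1))

s2 = S(s1, lo, hi)                    # 1_{2,t} = t (t + 1) / 2, in P_2(t)
assert all(s2[t] == t * (t + 1) // 2 for t in range(lo, hi + 1))
```

Note that $1_{2,t}$ agrees with the polynomial $\xi_1$ extended to negative $t$ via the reflection property (A.4).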
Proof of Theorem 3.3. This is a restatement of Lemma 3.1, Theorems 3.4 and 3.5, and Corollary 3.6 in Franchi and Paruolo (2016); hence the proof is omitted.
Proof of Theorem 3.5. Proof of (3.14). Write the identity $F(z) F(z)^{-1} = I$ as a linear system (A.14) in the $F_n$, $C_n$ matrices. In the following, equations in system (A.14) are indexed according to the highest value of the subscript of $C_n$; for instance $F_0 C_0 = 0$ is referred to as equation 0. Remark that the identity appears in equation $m$, which is the order of the pole. Lemma 3.1 in Franchi and Paruolo (2016) shows that equation $n \ge j = 0, \dots, m$ in system (A.14) implies (A.15), where $a_j$, $b_j$, $\bar{a}_j$, and $F_{j+1,k}$ are as in Definition 3.1, and (A.16) follows by applying definition (3.6). Pre-multiplying (A.15) by $\bar{a}_j'$ and rearranging, one thus finds
$b_j' C_{h-j} - \bar{a}_j' \sum_{k=1}^{h-j} F_{j+1,k} C_{h-j-k} = \bar{a}_j' H_{j+1,h-j}$, $h \ge j = 0, \dots, m$. (A.17)
Next define $\gamma_j(z)' := \sum_{n=0}^{\infty} \gamma_{j,n}' (1-z)^n$, where $\gamma_{j,0}' := b_j'$ and $\gamma_{j,n}' := -\bar{a}_j' F_{j+1,n}$ for $n \ge 1$, and consider $C(z) = \sum_{n=0}^{\infty} C_n (1-z)^n$ in (3.2). Writing $\gamma_j(z)' C(z) = \sum_{n=0}^{\infty} f_{j,n}' (1-z)^n$, where $f_{j,n}' := \sum_{k=0}^{n} \gamma_{j,k}' C_{n-k}$ is found by convolution, one has
$f_{j,n}' = b_j' C_n - \bar{a}_j' \sum_{k=1}^{n} F_{j+1,k} C_{n-k} = \bar{a}_j' H_{j+1,n}$, $n \ge 0$, $j = 0, \dots, m$,
where the last equality follows by setting $n = h-j$ in (A.17). Moreover, setting $n = h-j$ in (A.16), one finds that $f_{j,n}' = 0$ for $n < m-j$, while $f_{j,n}' = \bar{a}_j' H_{j+1,n}$ for $n > m-j$.
Proof of Theorem 4.1. Write (3.1) as $F(z) = \sum_{n=0}^{d-1} F_n (1-z)^n + (1-z)^d F_d(z)$, where $F_d(z) := \sum_{n=d}^{\infty} F_n (1-z)^{n-d}$ is analytic for all $|z| < 1+\delta$, $\delta > 0$. Hence the coefficients of the expansion $F_d(z) = \sum_{n=0}^{\infty} F_n^* z^n$ are geometrically decreasing and the process $Y_t := F_d(L)\varepsilon_t$ is stationary. Substituting in (4.1) one has
$\Delta^d X_t = \sum_{j=0}^{d-1} F_j \Delta^j \varepsilon_t + \Delta^d Y_t$. (A.20)
Pre-multiply both sides of (A.20) by $S^d$; by (A.13) one has $S^d \Delta^d X_t = X_t - v_{x,t}$, $v_{x,t} := \sum_{n=0}^{d-1} 1_{n,t} \Delta^n X_0$, and $S^d \Delta^d Y_t = Y_t - v_{y,t}$, $v_{y,t} := \sum_{n=0}^{d-1} 1_{n,t} \Delta^n Y_0$. Moreover, by (A.12) one has
$S^d \Delta^j \varepsilon_t = S^{d-j} \varepsilon_t - \sum_{n=d-j}^{d-1} 1_{n,t} \Delta^{j-d+n} \varepsilon_0$, $0 < j \le d$,
and hence $\sum_{j=0}^{d-1} F_j S^d \Delta^j \varepsilon_t = \sum_{j=0}^{d-1} F_j S_{d-j,t} - v_{e,t}$, $v_{e,t} := \sum_{j=0}^{d-1} F_j \sum_{n=d-j}^{d-1} 1_{n,t} \Delta^{j-d+n} \varepsilon_0$. Hence the solution of (A.20) is
$X_t = \sum_{j=0}^{d-1} F_j S_{d-j,t} + Y_t + v_{d-1,t}$, $v_{d-1,t} := v_{x,t} - v_{y,t} - v_{e,t}$,
where $v_{d-1,t} =: \sum_{n=0}^{d-1} v_n t^n$ is a polynomial of order $d-1$ in $t$ whose coefficients depend on initial values, see the definitions of $v_{x,t}$, $v_{y,t}$ and $v_{e,t}$. This completes the proof of the first part of the statement.
Pre-multiplying (4.1) by $\phi_j(L)'$, see (3.9), and using $\phi_j(L)' F(L) = \Delta^j \gamma_j(L)'$, see (3.16), one finds
$\Delta^d \phi_j(L)' X_t = \Delta^j \gamma_j(L)' \varepsilon_t$, $j = 0, \dots, m$. (A.21)
Because $\gamma_j(1)' = b_j'$ has full row rank, this shows that for $j = 0, \dots, m$ one has $\Delta^d \phi_j(L)' X_t \sim I_{nc}(-j)$, i.e. $\phi_j(L)' X_t \sim I_{nc}(d-j)$, see Definition 2.2. Next it is shown that the same holds for the truncated version, $\phi_j^{(j-1)}(L)' X_t \sim I_{nc}(d-j)$, $j = 1, \dots, m$. Substituting $\phi_j(z)' = \phi_j^{(j-1)}(z)' + (1-z)^j \phi_j^{\perp}(z)'$ in (A.21) and rearranging, one finds
$\Delta^d \phi_j^{(j-1)}(L)' X_t = \Delta^j \left( \gamma_j(L)' \varepsilon_t - \phi_j^{\perp}(L)' \Delta^d X_t \right)$, $j = 1, \dots, m$,
and thus, substituting $\Delta^d X_t = F(L)\varepsilon_t$,
$\Delta^d \phi_j^{(j-1)}(L)' X_t = \Delta^j \left( \gamma_j(L)' - \phi_j^{\perp}(L)' F(L) \right) \varepsilon_t$, $j = 1, \dots, m$.
Using $\gamma_j(1)' = b_j'$, $\phi_j^{\perp}(1)' = \phi_{j,j}' = \bar{a}_j' H_{j+1,m}$ and $F(1) = a_0 b_0'$, one finds $\gamma_j(1)' - \phi_j^{\perp}(1)' F(1) = b_j' - \bar{a}_j' H_{j+1,m} a_0 b_0'$. Because $(\gamma_j(1)' - \phi_j^{\perp}(1)' F(1)) \bar{b}_j = I_{r_j}$, $\gamma_j(1)' - \phi_j^{\perp}(1)' F(1)$ has full row rank. This shows that for $j = 1, \dots, m$ one has $\Delta^d \phi_j^{(j-1)}(L)' X_t \sim I_{nc}(-j)$, i.e. $\phi_j^{(j-1)}(L)' X_t \sim I_{nc}(d-j)$, and completes the proof of the second part of the statement. Grouping the relations $\Delta^{d-j} \phi_j^{(j-1)}(L)' X_t \sim I_{nc}(0)$ together and using $\Lambda(z)$ defined in (3.13), one finds (4.4), where $\bar{\Phi}(1) = -(\bar{a}_0, \dots, \bar{a}_m)'$ is square and nonsingular. This completes the proof of the statement.
Proof of Theorem 4.3. By Theorem 3.3, $F(z)^{-1} = (1-z)^{-m} C(z)$ with $C(1) \neq 0$ if and only if the POLE(m) condition on $F(z)$ holds; i.e. one has $\Delta^m X_t = C(L)\varepsilon_t$ with $C(1) \neq 0$, which shows that $X_t \sim I(d)$, $d = m$. Proceeding along the lines of the proof of Theorem 4.1, one finds the statement.
Proof of Corollary 4.4. Setting $m = 1$ in Theorem 3.3 one has $C_0 = -\bar{b}_1 \bar{a}_1'$, and setting $m = 1$ in Theorem 4.3 one obtains the statement.
Proof of Corollary 4.6. Use (4.8) in Theorem 4.3.
Proof of Theorem 4.8, AR, Necessity. Assume now that (ii.1) holds. This implies that in $\Delta^m X_t = C(L)\varepsilon_t + C(L) a(L) t^u$ one has $C(L) a(L) = \Delta^{m-j} b(L)$ for some $b(L)$ with $b(1) \neq 0$, i.e. that $a(L)$ is a right root function of $C(L)$ of order $m-j$. This implies that $a(L)$ can be expressed as a linear combination of the right root functions $\pi_s(L)$ in (3.17) of order equal to $j$ or lower, i.e. that $a(z) = \pi_{0:j}^{(u)}(z) \upsilon$, where, for the order to be $j$, the coefficient of $\pi_j^{(u)}(z)$ needs to be nonzero. This implies (4.16). A similar derivation shows necessity assuming that (ii.2) holds.
Proof of Theorem 4.10. Direct consequence of Theorem 3.5 and the definition of Jordan pairs in Gohberg et al. (1993).