Computing a compact local Smith-McMillan form

We define a compact local Smith-McMillan form of a rational matrix $R(\lambda)$ as the diagonal matrix whose diagonal elements are the nonzero entries of a local Smith-McMillan form of $R(\lambda)$. We show that a recursive rank search procedure, applied to a block-Toeplitz matrix built on the Laurent expansion of $R(\lambda)$ around an arbitrary complex point $\lambda_0$, allows us to compute a compact local Smith-McMillan form of that rational matrix $R(\lambda)$ at the point $\lambda_0$, provided we keep track of the transformation matrices used in the rank search. It also allows us to recover the root polynomials of a polynomial matrix and root vectors of a rational matrix, at an expansion point $\lambda_0$. Numerical tests illustrate the promising performance of the resulting algorithm.


Introduction
Finding poles and zeros of a rational matrix R(λ) ∈ C(λ)^{m×n} with coefficients in the field of complex numbers C is one of the basic problems in linear system theory. Such a rational matrix describes the input/output behaviour of a general system of differential or difference equations [7,15]. Its poles correspond to the natural frequencies of the dynamical system, while its zeros correspond to the frequencies that are blocked by the system [9]. When R(λ) does not have full row or column rank over the field C(λ) of rational functions, the rational matrix R(λ) also has a nontrivial left (respectively, right) null space [3], which yields additional information on the initial conditions and degrees of freedom related to the response of the system [3,7,9,15]. A rational matrix can have multiple poles and zeros at a point λ_0 ∈ C, and even coalescent poles and zeros. The finer structure of the response of the dynamical system at a frequency λ_0 that is such a pole/zero is then described by the local Smith-McMillan form of the rational matrix. The latter is a diagonal matrix that associates with each pole/zero a number of structural indices reflecting the structure of the response of the system at that frequency.
Finding poles and zeros and their structural indices is therefore an important problem in the analysis of a dynamical system. However, the response of the system also depends on particular directions: input vectors that either excite one of the system's poles or are blocked by one of the system's zeros, with a multiplicity given by the corresponding structural index. In the latter case, these root vectors [2,6,11,12,13,14] arise naturally when one wants to describe the solution set of particular matrix equations involving rational matrices [7,8], or appear as expansion vectors in tangential interpolation problems of high order [5]. Root vectors can be viewed as a generalization of an eigenvector for a first-order system of differential or difference equations modeled by the eigenvalue problem λx − Ax = 0. The structural indices at that zero are then linked to its Jordan structure. In this paper we show how to compute such a local Smith-McMillan form at a pole/zero λ_0 of R(λ) by applying the Toeplitz rank search algorithm [16] to a block-Toeplitz matrix built on the Laurent expansion of R(λ) around λ_0 ∈ C. We also link this method to the calculation of root polynomials of a polynomial matrix, or root vectors of a rational matrix, at an expansion point λ_0 [2,11,13,14].
The paper is organized as follows. In Section 2, we recall the basic definitions and background material for the rest of the paper and introduce the compact local Smith-McMillan form. Section 3 recalls the rank properties of triangular Toeplitz matrices defined from the Laurent expansion at a point λ_0 ∈ C and shows their relation to the computation of the structural indices of a rational matrix; it also contains the main new results of the paper, showing that the Toeplitz rank search also constructs a local Smith form of a polynomial matrix and a local Smith-McMillan form of a rational matrix. In Section 4, we show some numerical experiments indicating that the accuracy of the algorithm is very satisfactory. Finally, we give some concluding remarks in Section 5.

The local Smith-McMillan form
We denote the field of rational functions with complex coefficients by C(λ) and the ring of polynomials with complex coefficients by C[λ]. The structure at a finite point λ_0 ∈ C which is a pole or a zero of an m × n rational matrix R(λ) ∈ C(λ)^{m×n} is defined via its local Smith-McMillan form at the point λ_0 ∈ C [10]:

    M(λ) R(λ) N(λ) = [ Λ^(r)(λ)  0 ; 0  0 ],  Λ^(r)(λ) := diag( (λ − λ_0)^{σ_1}, …, (λ − λ_0)^{σ_r} ),    (1)

where M(λ) and N(λ) are rational, well defined and such that M(λ_0), N(λ_0) are defined and invertible (that is, λ_0 is neither a zero nor a pole of M(λ), N(λ)), r is the normal rank of R(λ), i.e. the rank of R(λ) over the field C(λ), and the integers σ_i are called the structural indices of R(λ) at the point λ_0, ordered non-decreasingly, i.e., σ_1 ≤ ⋯ ≤ σ_r (see e.g. [10]). We point out that the local form can be derived from the standard Smith-McMillan form via a one-sided extraction of the factors (λ − λ_0)^{σ_i} from the invariant factors e_i(λ)/f_i(λ), which implies that we can choose one of the two matrices M(λ) and N(λ) in the local form (1) to be polynomial and unimodular. The classical computation of the standard Smith-McMillan decomposition is based on the Euclidean algorithm and Gaussian elimination over the ring of polynomials, which precludes numerical pivoting techniques and is therefore numerically unreliable [16]. For this reason it has been suggested as a better alternative to compute it via linearizations [13,14]. In this paper we show that the local decomposition can also be obtained from the Laurent expansion around the point λ_0.
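For illustration, consider the following example (ours, not from the paper) of a point λ_0 that is simultaneously a pole and a zero:

```latex
R(\lambda)=\begin{bmatrix}(\lambda-\lambda_0)^{-2} & 0 & 0\\ 0 & \lambda-\lambda_0 & 0\\ 0 & 0 & \lambda-\lambda_0\end{bmatrix}
```

This matrix is already in local Smith-McMillan form with M(λ) = N(λ) = I and structural indices σ_1 = −2, σ_2 = σ_3 = 1 at λ_0: a pole of order 2 coalescing with a zero.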

Null spaces and their minimal indices
When defining the structure of a general m × n rational matrix, one typically includes the structure of its right null space ker R(λ) and left null space ker R(λ)^T, which are rational vector spaces over the field C(λ). Their characterization is based on particular polynomial bases, for which we need the following definition.
Definition 2.1 A matrix polynomial N(λ) ∈ C[λ]^{n×p} of normal rank p is called a minimal polynomial basis if the sum of the degrees of its columns, called the order of the basis, is minimal among all polynomial bases of the range of N(λ), i.e., the vector space of all C(λ)-linear combinations of the columns of N(λ). Its ordered column degrees are called the minimal indices of the basis.
It is known [3] that the ordered list of indices is independent of the choice of minimal basis of the space. One can define the right null space ker R(λ) and left null space ker R(λ)^T of an m × n rational matrix R(λ) of normal rank r as the vector spaces of rational vectors x(λ) and y(λ) annihilated by R(λ) on the respective sides:

    ker R(λ) := { x(λ) ∈ C(λ)^n : R(λ) x(λ) = 0 },  ker R(λ)^T := { y(λ) ∈ C(λ)^m : y(λ)^T R(λ) = 0 }.

Then the minimal indices of any minimal polynomial basis for these spaces are called the right and left minimal indices of R(λ). The dimensions of ker R(λ) and ker R(λ)^T are, respectively, n − r and m − r, and the right and left minimal indices are denoted by {ε_1, …, ε_{n−r}} and {η_1, …, η_{m−r}}.
It is known [3] that for any minimal basis N(λ), the constant matrix N(λ_0) has full column rank for all λ_0 ∈ C, and the highest-column-degree coefficient matrix of N(λ) also has full column rank.
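A small example of ours: the single column

```latex
N(\lambda)=\begin{bmatrix}1\\ \lambda\end{bmatrix},\qquad
N(\lambda_0)=\begin{bmatrix}1\\ \lambda_0\end{bmatrix}\neq 0\ \ \forall\,\lambda_0\in\mathbb{C},\qquad
N_{hc}=\begin{bmatrix}0\\ 1\end{bmatrix},
```

is a minimal polynomial basis (order 1, minimal index ε_1 = 1) of the rational subspace it spans, and both its evaluation N(λ_0) and its highest-column-degree coefficient N_{hc} (our notation) have full column rank, as the quoted result requires. By contrast, [λ; λ²] spans the same subspace but has order 2 and loses rank at λ = 0, so it is not minimal.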

The compact local Smith-McMillan form and the compact local Smith form
The local Smith-McMillan form not only contains information on the structural indices σ_i, i = 1, …, r; the invertible matrices M(λ) and N(λ) in (1) also contain bases for the left and right null spaces of the rational matrix R(λ). It follows indeed from (1) that

    ker R(λ) = Im ( N(λ) [ 0 ; I_{n−r} ] ),   ker R(λ)^T = Im ( M(λ)^T [ 0 ; I_{m−r} ] ).

These block columns are invertible bases, in the sense of [12], of the modules ker R(λ) and ker R(λ)^T, respectively, because they have full column rank for all finite λ ∈ C (see [12, Corollary 4.2] for an analogous argument in the case of analytic matrix functions), but they are not necessarily minimal polynomial bases.
The following more compact equation discards the information on the null spaces and focuses only on the structural indices at λ_0. We call the diagonal matrix Λ^(r)(λ) containing the nonzero local invariant factors a compact local Smith-McMillan form, in analogy with the compact SVD of a constant matrix A, which likewise discards the zero singular values and the singular vectors related to the left and right null spaces of A. We say that a rational matrix R(λ) is left (resp. right) invertible at λ_0 if λ_0 is not a pole and the constant matrix R(λ_0) is left (resp. right) invertible.

Theorem 2.2 Let R(λ) ∈ C(λ)^{m×n} have normal rank r and structural indices σ_1 ≤ ⋯ ≤ σ_r at λ_0 ∈ C. Then there exist a polynomial matrix N_r(λ) ∈ C[λ]^{n×r} with full column rank for all λ ∈ C, and a rational matrix M̃_r(λ) ∈ C(λ)^{m×r} that is left invertible at λ_0, such that

    R(λ) N_r(λ) = M̃_r(λ) Λ^(r)(λ).    (2)

Dually, there exist a polynomial matrix M_ℓ(λ) ∈ C[λ]^{r×m} with full row rank for all λ ∈ C, and a rational matrix Ñ_ℓ(λ) ∈ C(λ)^{r×n} that is right invertible at λ_0, such that

    M_ℓ(λ) R(λ) = Λ^(r)(λ) Ñ_ℓ(λ).    (3)
Proof. We only give a proof for the first form (2), since the form (3) is dual to it. We start from the decomposition (1), where we can make the choice that N(λ) is unimodular and M(λ) is rational and such that M(λ_0) is invertible. If we then multiply it on the left with M̃(λ) := M^{-1}(λ), and define N_r(λ) and M̃_r(λ) as the first r columns of N(λ) and M̃(λ), respectively, then the result follows, since the property of N_r(λ) follows from the unimodularity of N(λ), and the existence and left invertibility of M̃_r(λ_0) follows from the invertibility of M(λ_0). If R(λ) happens to be polynomial, then M̃_r(λ) and Ñ_ℓ(λ) can also be chosen to be polynomial, which then yields a compact local Smith form, as stated in the following theorem.

Connection with root vectors and root polynomials
The Smith form and the Smith-McMillan form, respectively, are closely related to the concepts of (left and right) root polynomials of a polynomial matrix [2,6,11,14] and (left and right) root vectors of a general rational matrix [13]. The definition of the right vectors requires an invertible basis [12], i.e., an arbitrary polynomial basis for the right null space of the matrix that, upon evaluation at λ = λ_0, spans the same subspace as a minimal basis. For instance, N(λ) [ 0 ; I_{n−r} ], where N(λ) is as in (1), is such a basis; indeed, it is an example of what was called an invertible basis of a module in [12]: even though it may not be a minimal basis, it nevertheless has full rank upon evaluation at any finite point. It is then useful to introduce some notation for those column vectors of the matrices N_r(λ) and M̃_r(λ) in Theorems 2.2 and 2.3 that correspond to the positive structural indices: let x_i(λ) and v_i(λ) denote the i-th columns of N_r(λ) and M̃_r(λ), respectively, for the indices i with σ_i ≥ 1 (say there are s of them). The following properties of the vectors x_i(λ) follow from the compact local Smith form at the zero λ_0 of a polynomial matrix P(λ). The vectors x_i(λ) satisfy the equations

    P(λ) x_i(λ) = (λ − λ_0)^{σ_i} v_i(λ),    (5)

where the constant matrices

    [ x_1(λ_0) ⋯ x_s(λ_0) ]  and  [ v_1(λ_0) ⋯ v_s(λ_0) ]    (6)

have full column rank; in particular, the rank of the matrix on the left of (6) is equal to dim ker P(λ_0). These properties of the vectors x_i(λ) are precisely the defining properties of a complete set of root polynomials of P(λ) at λ_0, and they follow directly from the compact local Smith form at λ_0. Moreover, one can also show that such vectors are maximal sets of root polynomials [2].
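As a concrete example of ours for these defining properties, take P(λ) = diag(1, λ²) at λ_0 = 0, with normal rank 2 and structural indices σ = (0, 2). The vector x(λ) = e_2 is a root polynomial of order 2:

```latex
P(\lambda)\,x(\lambda)=\begin{bmatrix}1&0\\ 0&\lambda^{2}\end{bmatrix}\begin{bmatrix}0\\ 1\end{bmatrix}
=\lambda^{2}\begin{bmatrix}0\\ 1\end{bmatrix}=(\lambda-\lambda_0)^{\sigma_2}\,v(\lambda),\qquad v(\lambda_0)=e_2\neq 0,
```

and the single evaluated vector x(λ_0) = e_2 has full column rank equal to dim ker P(0) = 1.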
For a rational matrix R(λ), one again looks only at the positive structural indices σ_i ≥ 1, and the same definition holds for the column vectors x_i(λ) and v_i(λ); again we have

    R(λ) x_i(λ) = (λ − λ_0)^{σ_i} v_i(λ),

where the full-rank conditions of (6) still hold. These properties also follow directly from the compact local Smith-McMillan form. Again, one can show that the x_i(λ) are maximal sets of root vectors, see [13]. We point out in particular that ker R(λ_0) can still be defined even when λ_0 is a pole [13, Definition 3.8], and that it does not contain the directions in which R(λ) tends to infinity when λ → λ_0.
The link with root polynomials and root vectors is one of the main motivations to construct a compact Smith-McMillan form. While in the proof of Theorem 2.2 N_r(λ) and M̃_r(λ) were constructed starting from a full local Smith-McMillan form, algorithmically it is more efficient to compute a compact local Smith-McMillan form directly, as opposed to computing the full one and only later discarding some columns. This leads to the question of whether the rightmost columns of a matrix N_r(λ) satisfying (2) are still a complete set of root vectors, i.e., satisfy the rank condition in (6). The following lemma implies that they do.
To conclude this section, we note that one can give definitions for the left root vectors or root polynomials that are dual to those of the right ones, using the left compact local decompositions instead. Details are therefore left out.

Constructing a compact local Smith-McMillan form
In this section, we describe an algorithm to compute the compact local Smith-McMillan form as in Theorem 2.2.

Retrieving the structural indices
We first recall here an important connection between the structural indices {σ_i, i = 1, …, r} of a pole/zero λ_0 ∈ C of a general rational matrix R(λ) and its Laurent expansion around that point [16]. Let us assume that the pole λ_0 is of order ℓ, and that it is possibly also a zero. Then R(λ) has a Laurent expansion about the point λ_0, with leading coefficient R_{−ℓ}:

    R(λ) = Σ_{i=−ℓ}^{∞} R_i (λ − λ_0)^i,  R_{−ℓ} ≠ 0.    (7)

The following theorem derives the structural indices at λ_0 from the expansion (7).
Theorem 3.1 ([16]) For k ≥ −ℓ, let T_{λ_0,k}(R) denote the block lower-triangular block-Toeplitz matrix

    T_{λ_0,k}(R) := [ R_{−ℓ} ; R_{−ℓ+1}  R_{−ℓ} ; ⋮  ⋱ ; R_k  ⋯  R_{−ℓ+1}  R_{−ℓ} ],

built on the coefficients R_{−ℓ}, …, R_k of the expansion (7). Let their ranks and rank increments be denoted by r_k := rank T_{λ_0,k}(R) and ρ_k := r_k − r_{k−1}, where we set r_k = 0 for k < −ℓ. Then the number e_i of indices σ_j that are equal to i is given by e_i = ρ_i − ρ_{i−1}. Moreover, the rank increments ρ_k form a non-decreasing sequence, and d′ := max_i(σ_i) is the smallest index k for which ρ_k = r, the normal rank of R(λ).
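As a sanity check of these rank relations, the following sketch (our own illustration in numpy; the example polynomial diag(1, λ, λ³) and the helper name are assumptions, not from the paper) builds the block-Toeplitz matrices for a matrix with no pole at λ_0 = 0 and recovers the multiplicities e_i from the rank increments.

```python
import numpy as np

def block_toeplitz_ranks(coeffs):
    """Ranks r_k of the block lower-triangular block-Toeplitz matrices
    whose (i, j) block entry is the coefficient P_{i-j}; here the pole
    order is zero, so the expansion starts at P_0."""
    m, n = coeffs[0].shape
    d = len(coeffs) - 1
    ranks = []
    for k in range(d + 1):
        T = np.zeros(((k + 1) * m, (k + 1) * n))
        for i in range(k + 1):          # block row
            for j in range(i + 1):      # block column holds P_{i-j}
                T[i*m:(i+1)*m, j*n:(j+1)*n] = coeffs[i - j]
        ranks.append(int(np.linalg.matrix_rank(T)))
    return ranks

# Illustrative example: P(lambda) = diag(1, lambda, lambda^3),
# structural indices (0, 1, 3) at lambda_0 = 0.
coeffs = [np.diag([1.0, 0.0, 0.0]), np.diag([0.0, 1.0, 0.0]),
          np.zeros((3, 3)), np.diag([0.0, 0.0, 1.0])]
r = block_toeplitz_ranks(coeffs)                         # [1, 3, 5, 8]
rho = [r[0]] + [r[k] - r[k-1] for k in range(1, 4)]      # [1, 2, 2, 3]
e = [rho[0]] + [rho[k] - rho[k-1] for k in range(1, 4)]  # [1, 1, 0, 1]
# e[i] = number of structural indices equal to i, so sigma = (0, 1, 3);
# rho is non-decreasing and first reaches the normal rank 3 at k = d' = 3.
```

Note how the sequence ρ_k is indeed non-decreasing and first reaches the normal rank at k = d′, as the theorem states.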
For simplicity, when no ambiguity arises, we will denote the Toeplitz matrices T_{λ_0,k}(R) by just T_k(R) or T_k. It follows from the above theorem that one only has to compute the ranks of the sequence {T_k, −ℓ ≤ k ≤ d′}, and hence one only needs to know the coefficients R_{−ℓ}, …, R_{d′}. Even though d′ is not known in advance, we will see that this index is also discovered by the algorithm, provided the normal rank of R(λ) is known. The latter can be estimated, for example, by evaluating the rank of the transfer function at some randomly generated points.
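That last estimation step can be sketched as follows (our own illustration; the function name and the 2 × 2 example are assumptions): the rank of R(λ) evaluated at a random point equals the normal rank with probability one, since it only drops at finitely many points.

```python
import numpy as np

def estimate_normal_rank(R_eval, trials=5, seed=0):
    """Estimate the normal rank of R(lambda) as the largest rank of R
    evaluated at a few random complex points; the rank drops below the
    normal rank only at finitely many points."""
    rng = np.random.default_rng(seed)
    pts = rng.standard_normal(trials) + 1j * rng.standard_normal(trials)
    return max(int(np.linalg.matrix_rank(R_eval(z))) for z in pts)

# Illustrative example: R(lambda) = [[1, lambda], [lambda, lambda^2]]
# has normal rank 1 (the second row is lambda times the first).
R = lambda lam: np.array([[1.0, lam], [lam, lam**2]])
nr = estimate_normal_rank(R)   # 1
```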

Toeplitz rank search
In [16] a Toeplitz rank search algorithm was proposed to compute the rank increments of Theorem 3.1, while exploiting the block-Toeplitz structure of the matrices T_k(R). In this paper we slightly modify this algorithm so that it also constructs a compact local Smith-McMillan decomposition, by keeping track of the intermediate transformations.
To simplify the derivation, we first consider the case where R(λ) does not have a pole at the finite point λ_0 but only a zero. Then R(λ) has a Taylor expansion at that point, and the corresponding Toeplitz matrices

    T_k = [ R_0 ; R_1  R_0 ; ⋮  ⋱ ; R_k  ⋯  R_1  R_0 ]    (8)

then have the leading coefficient R_0 on the diagonal. Note that polynomial matrices are a special case of such rational matrices having no poles at λ_0, since all their poles are at infinity. Moreover, the polynomial matrix P(λ) obtained by truncating the Taylor expansion of the rational matrix R(λ) after its first d′ + 1 coefficients {R_i, 0 ≤ i ≤ d′} has the same structural indices, according to Theorem 3.1. We therefore focus first on polynomial matrices. We recall the algorithm derived in [16] for computing the structural indices of a rational matrix at a pole/zero λ_0 from its Laurent expansion. We apply it here to the expansion about (λ − λ_0) of a polynomial matrix

    P(λ) = Σ_{i=0}^{d} P_i (λ − λ_0)^i.

To simplify our notation, we will assume in this section that λ_0 = 0; this does not affect the generality of the results. The algorithm computes a rank factorization of the Toeplitz matrices T_k given in (8). It operates on the stacked array of coefficients P_i using a sequence of invertible transformations, followed by shifts of sub-blocks.
Here N_0 is an invertible column transformation compressing the columns of P_0, i.e.,

    P_0 N_0 = [ L_0^(0)  0 ],

where L_0^(0) has full column rank r_0 = ρ_0 and the zero block has ν_0 := n − ρ_0 columns. If we apply the same invertible transformation to all block columns of the Toeplitz matrix T_d(P), and permute block rows and block columns appropriately, the columns containing L_0^(0) become independent of the rest, and shifting the remaining sub-blocks up by one block yields the Toeplitz matrix of a new polynomial matrix P^(0)(λ). This shows that

    rank T_d(P) = ρ_0 + rank T_{d−1}(P^(0)),    (9)

and is the basis of a recursive computation of the successive ranks of a block-Toeplitz matrix T_d and its submatrices. We repeat this on the polynomial matrix P^(0)(λ) and its corresponding Toeplitz matrix T_{d−1}(P^(0)), which turns out to be a submatrix of the transformed Toeplitz matrix underlying (9). This induction step is repeated on the subsequent polynomial matrices P^(k)(λ), and shows that we finally compress the column space of T_d by induction, using an invertible transformation N that is a product of the individual invertible block-diagonal transformations and permutations, where the suffixes (i) refer to iteration step i of the Toeplitz rank search. Here each "diagonal" block L_0^(i) has rank ρ_i, and

    ρ_0 ≤ ρ_1 ≤ ⋯ ≤ ρ_{d′} = r.

These rank inequalities follow easily from the above algorithmic construction, since L_0^(i+1) is a column compression of the compound matrix [ L_0^(i)  R_1^(i) ], and it was shown in [16] that the increments e_i := ρ_i − ρ_{i−1} count the structural indices equal to i, as in Theorem 3.1. In order to link this to a compact local Smith form, we write these operations as a polynomial matrix equation (i.e., where P^(0)(λ) is polynomial as well):

    P(λ) N_0 = P^(0)(λ) Λ_0(λ),  Λ_0(λ) := diag( I_{ρ_0}, λ I_{ν_0} ),  P^(0)(0) = [ L_0^(0)  R_1^(0) ].    (10)

We will show that the first ρ_0 columns of (10) already match those of the compact local Smith form. Let us now look at the factorization after the next step, yielding

    P^(0)(λ) N_1 = P^(1)(λ) Λ_1(λ),  Λ_1(λ) := diag( I_{ρ_1}, λ I_{ν_1} ).    (11)

It follows from (10) and (11) that

    P(λ) N_0 ( Λ_0^{-1}(λ) N_1 Λ_0(λ) ) = P^(1)(λ) Λ_1(λ) Λ_0(λ).

This clearly goes in the right direction, provided the matrix Λ_0^{-1}(λ) N_1 Λ_0(λ) is unimodular. In [16] the Toeplitz rank search was implemented with unitary transformations N_i, in order to guarantee good numerical stability properties. This allowed the partial multiplicities σ_j at the considered root to be reconstructed, but if one also wants to reconstruct a compact local Smith form, then one also needs to satisfy the different conditions described in Theorems 2.2 and 2.3. Therefore one needs to constrain the rank search to a special set of transformations, as explained below. The column rank compression of the compound matrix [ L_0^(i−1)  R_1^(i−1) ], whose compression [ L_0^(i)  0 ] has full column rank ρ_i ≥ ρ_{i−1}, can be implemented as a factored transformation with a simple inverse:

    N_i = [ I_{ρ_{i−1}}  −X_i ; 0  I_{ν_{i−1}} ] · [ I_{ρ_{i−1}}  0 ; 0  Q_i ],    (12)

where X_i is the least squares solution of L_0^(i−1) X_i ≈ R_1^(i−1), and Q_i is a unitary transformation compressing the columns of (I − L_0^(i−1) (L_0^(i−1))^+) R_1^(i−1), which, by construction, are also independent from those of L_0^(i−1). If every column of R_1^(i−1) lies in the span of L_0^(i−1), then ρ_i = ρ_{i−1} and the matrix Q_i = I_{ν_{i−1}}. Note also that in the special case i = 0 the equations also hold with R_1^(−1) = P_0, ρ_{−1} = 0 and ν_{−1} = n. For this particular choice of transformations, we now have the following result.
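A minimal sketch of one compression-and-shift step (our own illustration: the helper `compress_columns` and the 2 × 2 example are assumptions, and we use a plain SVD rather than the factored transformation (12)):

```python
import numpy as np

def compress_columns(P0, tol=1e-10):
    """Return an invertible (here even unitary) N0 with P0 @ N0 = [L0, 0],
    where L0 has full column rank rho_0, together with rho_0 itself."""
    U, s, Vh = np.linalg.svd(P0)
    rho = int(np.sum(s > tol))
    return Vh.conj().T, rho          # right singular vectors as N0

# Example: P(lambda) = P0 + lambda*P1 with a rank-1 constant term P0.
P0 = np.array([[1.0, 0.0], [1.0, 0.0]])
P1 = np.array([[0.0, 1.0], [0.0, 1.0]])
N0, rho0 = compress_columns(P0)      # rho0 = 1
# The last n - rho0 columns of P(lambda) @ N0 have a zero constant term,
# so those columns are divisible by lambda: this is the "shift" of
# sub-blocks that produces the next polynomial matrix in the recursion.
ok = np.allclose((P0 @ N0)[:, rho0:], 0)   # True
```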
Theorem 3.2 The choice of transformations N_i given in (12) for the Toeplitz rank search algorithm produces the factorization

    P(λ) N(λ) = M̂(λ) Λ(λ),  Λ(λ) := diag( I_{e_0}, λ I_{e_1}, …, λ^{d′} I_{e_{d′}}, λ^{d′+1} I_{n−r} ),    (13)

where N(λ) is unimodular, e_i := ρ_i − ρ_{i−1} for i ≥ 0, and the constant matrix M̂(0) = [ L_0^(d′)  0 ] has rank r and nullity n − r.
Proof. It follows from the recursive rank search algorithm that it stops as soon as

    P^(d′−1)(λ) N_{d′} = P^(d′)(λ) Λ_{d′}(λ),

with the constant matrix P^(d′)(0) = [ L_0^(d′)  0 ] having rank r. If we multiply the chained relations (10), (11), … through by the accumulated matrices Λ_i(λ), then we also obtain the factorization (13), where for i > 0 the matrices Ñ_i(λ) := ( Λ_{i−1}(λ) ⋯ Λ_0(λ) )^{-1} N_i ( Λ_{i−1}(λ) ⋯ Λ_0(λ) ) are unimodular, because the transformations N_i in (12) are block upper triangular with invertible constant diagonal blocks. Hence N(λ) := N_0 Ñ_1(λ) ⋯ Ñ_{d′}(λ) is unimodular as well.

Extracting compact decompositions
It is now easy to see that if we discard the last n − r columns of the factorization (13), and replace λ again by (λ − λ_0), then we obtain the form

    P(λ) N_r(λ) = M_r(λ) Λ^(r)(λ),

where N_r(λ) consists of the first r columns of the unimodular matrix N(λ) and the constant matrix M_r(λ_0) = L_0^(d′) has full rank r. This is the desired compact local Smith form described in Theorem 2.3.
If we apply the Toeplitz rank search algorithm to a rational matrix R(λ) ∈ C(λ)^{m×n} without any poles at λ_0, and hence with a Taylor expansion R(λ) = Σ_{i≥0} R_i (λ − λ_0)^i, then we only need the leading terms R_0, …, R_{d′} of the expansion to obtain the same factorization as in (13), except that the polynomial matrix P^(d′)(λ) is now replaced by a rational matrix R^(d′)(λ) whose Taylor expansion starts with the constant matrix R^(d′)(λ_0) = [ L_0^(d′)  0 ]. This leads to the following decomposition for a rational matrix R(λ) which has zeros at λ_0 but no poles:

    R(λ) N_r(λ) = M̃_r(λ) Λ^(r)(λ),

where N_r(λ) consists of the first r columns of the unimodular matrix N(λ), M̃_r(λ) is a rational matrix, and the constant matrix M̃_r(λ_0) = L_0^(d′) has full rank r. This is the desired compact local Smith-McMillan form described in Theorem 2.2 for a rational matrix with a zero at λ_0 which is not a pole.
We finally consider the case of a coalescent pole/zero. As pointed out in Theorem 3.1, the Toeplitz matrices T_{λ_0,k}(R), constructed with the coefficients of the Laurent expansion around the point λ_0, yield the structural indices σ_i, i = 1, …, r of R(λ) at the pole/zero λ_0. The way to reduce this to the rational case without a pole at λ_0 is to consider the scaled rational matrix R̃(λ) := (λ − λ_0)^ℓ R(λ). It is obvious that the structural indices σ_i, i = 1, …, r of R(λ) and σ̃_i, i = 1, …, r of R̃(λ) are related by a constant shift: σ̃_i = σ_i + ℓ ≥ 0. The Laurent expansion of R(λ) then becomes a Taylor expansion of R̃(λ), to which we can apply the results of the previous sections. After dividing R̃(λ) and Λ̃^(r)(λ) again by (λ − λ_0)^ℓ, this leads to the following local decomposition for a general rational matrix R(λ):

    R(λ) N_r(λ) = M̃_r(λ) Λ^(r)(λ),

where M̃_r(λ) is a rational matrix and the constant matrix M̃_r(λ_0) = L_0^(d′) has full rank r. This is the desired compact local Smith-McMillan form described in Theorem 2.2 for a general rational matrix with a pole/zero at λ_0.
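As a small illustration of the index shift (our own example): for a matrix with a simple pole and a double zero at λ_0,

```latex
R(\lambda)=\begin{bmatrix}(\lambda-\lambda_0)^{-1} & 0\\ 0 & (\lambda-\lambda_0)^{2}\end{bmatrix},
\qquad \ell=1,\qquad
\widetilde R(\lambda)=(\lambda-\lambda_0)\,R(\lambda)
=\begin{bmatrix}1 & 0\\ 0 & (\lambda-\lambda_0)^{3}\end{bmatrix},
```

so the indices σ = (−1, 2) of R(λ) become σ̃ = (0, 3) = (σ_1 + ℓ, σ_2 + ℓ) for R̃(λ), which has no pole at λ_0; dividing the computed Λ̃^(r)(λ) = diag(1, (λ − λ_0)³) by (λ − λ_0)^ℓ recovers Λ^(r)(λ).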

Numerical examples
In this section we give a number of numerical results for the computation of the compact local Smith form at the eigenvalue λ_0 = 0, computed using the algorithm described in Section 3. The test matrices were polynomial matrices of dimensions 4 × 5, of normal rank 3, and with given invariant factors λ^0, λ^1, λ^3 at the eigenvalue 0. The matrices were then constructed using the product P(λ) = M(λ) Λ(λ) N(λ), where Λ(λ) is the 4 × 5 matrix containing these invariant factors on its diagonal, and where M(λ) and N(λ) are random polynomial matrices of respective dimensions 4 × 4 and 5 × 5, and of degree 2, which implies that the polynomial matrix P(λ) has degree 8. The coefficients of the matrices M(λ) and N(λ) were generated using the i-th power of randn, the random generator of Matlab with normal distribution. As a consequence, the dynamic range of the coefficients grows with the power i, and the norm of the matrix P(λ) typically grows as well. We used the Frobenius norm of a polynomial matrix P(λ) of degree d, which is defined as follows:

    ‖P(λ)‖_F := ( Σ_{i=0}^{d} ‖P_i‖_F^2 )^{1/2}.

In Table 1 we show the results of our algorithm applied to the polynomial matrices P(λ) generated for the powers i going from 1 to 10. Columns 2 and 4 give the Frobenius norms of the polynomial matrix P(λ) and the unimodular matrix N(λ). The accuracy of the computations is then verified using the Frobenius norm of the residual

    Res_P(λ) = P(λ) N_r(λ) − M̃_r(λ) Λ^(r)(λ).
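Both the polynomial Frobenius norm and the residual norm can be evaluated coefficient-wise; the sketch below (the helper names are ours) illustrates this on a trivial example.

```python
import numpy as np

def poly_frobenius_norm(coeffs):
    """||P||_F = sqrt(sum_i ||P_i||_F^2) for P(lambda) = sum_i P_i lambda^i."""
    return float(np.sqrt(sum(np.linalg.norm(Pi, 'fro')**2 for Pi in coeffs)))

def poly_mul(A, B):
    """Coefficient list of the product A(lambda) B(lambda)."""
    m, n = A[0].shape[0], B[0].shape[1]
    C = [np.zeros((m, n)) for _ in range(len(A) + len(B) - 1)]
    for i, Ai in enumerate(A):
        for j, Bj in enumerate(B):
            C[i + j] += Ai @ Bj
    return C

# Example: for P(lambda) = I + 2*lambda*I (2 x 2), ||P||_F = sqrt(2 + 8).
P = [np.eye(2), 2.0 * np.eye(2)]
nP = poly_frobenius_norm(P)        # sqrt(10) = 3.1622...
# The residual Res_P(lambda) = P(lambda) N_r(lambda) - M_r(lambda) Lambda(lambda)
# is itself a polynomial matrix: form the two products with poly_mul,
# subtract coefficient-wise, and take poly_frobenius_norm of the result.
```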
Column 3 gives the relative norm ‖Res_P‖ / ‖P‖. The structural indices were recovered correctly for all the test examples. It can be observed from these results that the accuracy of the algorithm is quite satisfactory, even for matrices with a large dynamic range in the coefficients. The loss of accuracy observed in some cases is probably due to the non-orthogonal Gram-Schmidt elimination step of our algorithm, but this could perhaps be improved by a single step of iterative refinement [1].
In the second experiment, we check the robustness of our algorithm on polynomial matrices and Smith forms of high degree. We generated 10 matrices with a given local Smith form, for k = 1, …, 10, and transformation matrices M(λ) and N(λ) of degree 10. The degree of the polynomial matrices is therefore 22 + k. The relative precision of the obtained decomposition is again verified using the ratio ‖Res_P‖ / ‖P‖ of the Frobenius norm of the residual and the Frobenius norm of the matrix P(λ); the results are reported in Table 2. The structural indices were again recovered correctly for all the test examples. This experiment shows that the method has remarkable stability properties, even for polynomial matrices of large degree.

Conclusions
In this paper we revisited the Toeplitz rank search algorithm developed in [16] and showed that an appropriately modified variant also constructs a compact local Smith-McMillan form, together with root polynomials of a polynomial matrix and root vectors of a rational matrix, at a given expansion point λ_0.

Proof (of Theorem A.2). Since N has full column rank, we can write it as N = U [ T ; 0 ], where U ∈ R^{m×m} is unimodular and T ∈ R^{r×r} (e.g. using the Hermite normal form [4]). Moreover, T is invertible over F because rank(T) = rank(N) = r. Hence, partitioning U^{-1} C = [ C_0 ; C_1 ] conformably, the matrix [ N  C ] is equivalent over R to [ T  C_0 ; 0  C_1 ]; but then r ≥ rank(T) + rank(C_1), which implies C_1 = 0. By Lemma A.1, this implies in turn that C_0 = T Y for some Y ∈ R^{r×(n−r)}. Lemma 2.4 then follows by applying Theorem A.2 to the matrix [ π(λ) R(λ) N_r(λ)  π(λ) R(λ) Ñ(λ) ].

Note that σ_1 ≤ ⋯ ≤ σ_r. The negative indices refer to poles of the transfer function and the positive indices refer to zeros of the transfer function. An index σ_j = 0 is not associated with any dynamical behaviour and corresponds to neither a pole nor a zero. The standard Smith-McMillan form has the same structure, but the transformation matrices M(λ) and N(λ) are then unimodular and the diagonal elements are the so-called invariant factors e_i(λ)/f_i(λ) ∈ C(λ), where the polynomials e_i(λ) ∈ C[λ] and f_i(λ) ∈ C[λ] are monic and satisfy the divisibility chains e_1(λ) | e_2(λ) | ⋯ | e_r(λ) and f_r(λ) | ⋯ | f_2(λ) | f_1(λ).

Theorem A.2 Let R be an elementary divisor domain with field of fractions F, and suppose that the Smith forms over R of [ N  C ] and [ N  0 ] are the same, where N ∈ R^{m×r} has full column rank and C ∈ R^{m×(n−r)}. Then C = NY for some Y ∈ R^{r×(n−r)}.

Table 1: Recovery of the local Smith form for increasing powers i of the random elements.

Table 2: Recovery of the local Smith form for increasing degrees of P(λ).