Decision-oriented two-parameter Fisher information sensitivity using symplectic decomposition

The eigenvalues and eigenvectors of the Fisher information matrix (FIM) can reveal the most and least sensitive directions of a system, and this spectral analysis has wide application across science and engineering. We present a symplectic variant of the eigenvalue decomposition of the FIM and use it to extract sensitivity information with respect to two-parameter conjugate pairs. The symplectic approach decomposes the FIM onto an even-dimensional symplectic basis. This symplectic structure can reveal additional sensitivity information between two-parameter pairs that is otherwise concealed in the orthogonal basis of the standard eigenvalue decomposition. The proposed sensitivity approach can be applied to naturally paired two-parameter distributions, or to decision-oriented pairings obtained by re-grouping or re-parameterizing the FIM. It can be utilised in tandem with the standard eigenvalue decomposition and offers additional insight into the sensitivity analysis at negligible extra cost.


1 Introduction

1.1 Background
Sensitivity analysis is an integral part of mathematical modelling, and in particular a crucial element of decision making in the presence of uncertainties. The Fisher information, first introduced by Fisher [1] and widely used for parameter estimation and statistical inference, has found increasing application in many areas of science and engineering for probabilistic sensitivity analysis. For example, the Fisher Information Matrix (FIM) has been applied to the parametric sensitivity study of stochastic biological systems [2]; the FIM has been used to study sensitivity, robustness, and parameter identifiability in stochastic chemical kinetics models [3]; through its link with relative entropy, the Fisher information has been used to assess the most sensitive directions for climate change given a model for the present climate [4]; used in conjunction with the principle of maximum entropy, the FIM has been used to identify the pivotal voters that could perturb collective voting outcomes in social systems [5]; and more recently, in [6] the Fisher information has been proposed as one of the process-tailored sensitivity metrics for engineering design. Despite the wide scope, the applications mentioned above all utilise the spectral analysis of the FIM, i.e., the eigenvalues and eigenvectors of the FIM reveal the most and least sensitive directions of the system.
In this paper, we apply a symplectic spectral analysis of the FIM, and demonstrate that the resulting symplectic eigenvalues and eigenvectors are oriented towards better decision support by extracting sensitivity information with respect to two-parameter pairs (e.g., the mean and standard deviation of a Normal distribution). Consider a general function y = h(x); probabilistic sensitivity analysis characterises the uncertainties of the output y that are induced by the random input x [7,8]. When the input can be described by parametric probability distributions, i.e., x ∼ p(x|b), the FIM can be estimated as the covariance matrix of the random gradient vector ∂ ln p(y|b)/∂b, with the jk-th entry of the FIM given as (e.g., [6]):

F_jk = E[ (∂ ln p(y|b)/∂b_j) (∂ ln p(y|b)/∂b_k) ]    (1)

The eigenvalues of the FIM represent the magnitudes of the sensitivities with respect to simultaneous variations of the parameters b, and the relative magnitudes and directions of the variations are given by the corresponding eigenvectors.
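For a concrete illustration of Eq 1, the FIM can be estimated by Monte Carlo as the sample covariance of the score vector. The following sketch is a hedged example, not the paper's code (the paper's scripts are in Matlab): it takes a single Normal input with the trivial map y = x, where the score with respect to (µ, σ) is known in closed form and the exact FIM is diag(1/σ², 2/σ²).

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.45, 0.045          # values of the length L in the motivating example
x = rng.normal(mu, sigma, 200_000)

# Score vector d ln p(x|b)/db for b = (mu, sigma) of a Normal distribution
score = np.stack([(x - mu) / sigma**2,
                  ((x - mu)**2 - sigma**2) / sigma**3])

# Monte Carlo estimate of Eq 1: covariance of the score (its mean is zero)
F = score @ score.T / x.size
# Analytical FIM of a Normal w.r.t. (mu, sigma): diag(1/sigma^2, 2/sigma^2)
```

With a few hundred thousand samples the estimate agrees with the analytical diagonal FIM to within about one percent.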
The FIM depends on the parametrization used. Suppose b_j = g_j(θ_i), i = 1, 2, . . ., s; then the FIM with respect to the parameters θ is [9]:

F_θ = J^T F J    (2)

where J is the Jacobian matrix with J_ji = ∂b_j/∂θ_i.
It should be noted that sensitivity analysis based on the FIM is fundamentally different from the commonly used variance-based analysis [10]. The Fisher sensitivity examines the perturbation of the entire joint probability density function (PDF) of the output, more specifically the entropy of the output uncertainty. Moreover, the sensitivity measures from the Fisher analysis are the eigenvectors, which can be regarded as the principal directions for a simultaneous variation of the input parameters. This is in contrast to variance-based ranking, where it is assumed that the uncertainty of the input factors can be completely reduced to zero [7]. As pointed out in [11], using principal sensitivity directions is based on the pragmatic view that, given a finite budget to change the parameters, maximizing the impact on the output follows the principal sensitivity directions, which tend to be a simultaneous variation of the parameters because their effects on the output are likely to be correlated. The constrained maximization view also leads to the symplectic eigenvectors in a symplectic basis, as discussed in Section 3.1.
The Fisher sensitivity is based on partial derivatives, but it is different from the derivative-based global sensitivity measure [12], which is defined as the integral of the squared derivatives of the function output. The Fisher information, on the other hand, is defined as the variance of the partial derivatives of the log probability of the uncertain function output, as seen in Eq 1. Moreover, this differentiation is with respect to the distribution parameters of the uncertain input, not with respect to the uncertain variables themselves. Therefore, the Fisher sensitivity examines the impact of perturbing the input probability distribution, and as the input distributions are often estimated from data, this is equivalent to assessing which uncertain dataset to focus on. A sensitivity index based on modification of the input PDF has been proposed in [13] for reliability sensitivity analysis, where the input perturbation is derived by minimising the probability divergence under constraints. In contrast, we consider parametric uncertain inputs in this paper to form the Fisher information matrix (FIM), and the resulting eigenvectors provide the principal directions for the input perturbation.
Many widely applied parametric distributions are in the two-parameter families, e.g., the location-scale families including the Normal distribution and the Gamma distribution. Although the Fisher sensitivity is with respect to these distribution parameters b, the quantities of interest for decision-making are ultimately the uncertain variables x themselves, e.g., when ranking the relative importance of x. We will demonstrate in this paper that the symplectic decomposition of the FIM identifies the influential two-parameter pairs, or equivalently the corresponding variables, and can be used in tandem with the standard eigenvalue decomposition for better decision support.

1.2 A motivating example
As a motivating example, we consider an engineering design problem under uncertainties. Consider a simple cantilever beam where the Young's modulus E and the length L are uncertain, i.e., x = (E, L), and the uncertainties can be described by Normal distributions with E ∼ N(µ_1 = 69e9, σ_1^2 = 11.5e9^2) and L ∼ N(µ_2 = 0.45, σ_2^2 = 0.045^2). To keep this motivating example analytically tractable, we assume a trivial function y = x (a random vibration problem is considered in Section 5). Assuming the two random variables are independent, the Fisher information matrix in this case is a diagonal matrix [14]:

F = diag(1/σ_1^2, 2/σ_1^2, 1/σ_2^2, 2/σ_2^2)    (3)

The eigenvalues (the diagonal entries of the FIM in Eq 3 in this case) and the corresponding eigenvectors then provide the sensitivity information of the uncertain output y with respect to (w.r.t) the input parameter vector b = (µ_1, σ_1, µ_2, σ_2); more specifically, the sensitivity of the entropy of the random output, as discussed in Section 3.
For practical utilisation of the Fisher sensitivity information, there are two issues that need to be addressed, and these motivate our research in this paper. First, the FIM needs to be normalised. On one hand, the un-normalised FIM given in Eq 3 tends to be ill conditioned. For example, the condition number is in the order of 10^22 given that σ_1 = 11.5e9 and σ_2 = 0.045. On the other hand, as the Young's modulus E and the length L are of different units, the FIM needs to be normalised so that the sensitivities w.r.t the different parameters are comparable. One option is to consider sensitivity w.r.t a percentage change of the parameter; this is called the proportionally [6] or logarithmically [15] normalised FIM.
Normalization is equivalent to a re-parametrization. In the case of proportional normalization, the change of parameters is b_j = b̄_j θ_j with b̄_j the nominal value used for normalization, and the Jacobian matrix in Eq 2 is just a diagonal matrix with b̄_j on the diagonal.
However, the proportional normalisation might provide unrealistic sensitivity information for practical applications. For example, unless the assumed probability distribution of the input variables is far from the real distribution, it is most likely that any change of the mean should be within one or two standard deviations. The FIM can instead be normalised by the standard deviations, which implies that the allowable range of the mean value is limited to a local region quantified by the standard deviation. Normalising the FIM from Eq 3 by the standard deviations, i.e., b̄_j equal to the corresponding σ, we have the normalised FIM:

F_nor = diag(1, 2, 1, 2)    (4)

where it is evident that the condition number of the normalised FIM is much smaller. However, F_nor in Eq 4 has repeated eigenvalues, and as a result the corresponding eigenvectors are not unique. Although the situation with repeated eigenvalues might seem extreme, as will be seen in further examples, the eigenvalues of the normalised FIM tend to be of similar magnitudes. In other words, the sensitivity information has been compressed by normalization (in exchange for better conditioning). As we shall see, the symplectic decomposition of the FIM has a unique symplectic structure that tends to mitigate this issue by making the sensitivity information for different variables more distinctive (by pairing the parameters).
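The compression effect described above can be reproduced numerically. A minimal sketch, assuming the parameter ordering b = (µ_1, σ_1, µ_2, σ_2) and the values of the motivating example:

```python
import numpy as np

s1, s2 = 11.5e9, 0.045           # sigma_E and sigma_L from the motivating example
F = np.diag([1 / s1**2, 2 / s1**2, 1 / s2**2, 2 / s2**2])   # Eq 3

# The un-normalised FIM is severely ill conditioned
cond_raw = np.linalg.cond(F)

# Normalising by the standard deviations is Eq 2 with a diagonal Jacobian
J = np.diag([s1, s1, s2, s2])
F_nor = J @ F @ J                # Eq 4: diag(1, 2, 1, 2)

eigs = np.sort(np.linalg.eigvalsh(F_nor))   # repeated eigenvalues {1, 1, 2, 2}
```

The normalised FIM is perfectly conditioned within each pair, but its repeated eigenvalues {1, 1, 2, 2} leave the eigenvectors non-unique, which is exactly the issue the symplectic decomposition addresses.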
The second issue with the Fisher sensitivity with respect to the distribution parameters is the gap to decision making. The purpose of the sensitivity analysis is to identify the influential variables so that informed decisions can be made.
Although it is possible to make changes to the mean and standard deviation independently, the quantities of interest are ultimately the variables themselves, i.e., E and L in this case. As will be demonstrated, the symplectic approach naturally puts parameters in pairs, e.g., (µ, σ) as a conjugate pair for a random input with a Normal distribution, and provides more direct support for decision making. It is noted in passing that even if the true distribution of the uncertain input is not Normal, a common practice is still to use the mean and standard deviation as summary statistics for the dataset at hand. As a result, the two-parameter pair sensitivity proposed in this paper still applies.

1.3 Summary and paper outline
In summary, the use of the Fisher information as a sensitivity measure has wide application across science and engineering. Nevertheless, practical issues can hinder the translation of the sensitivity information into actionable decisions. In this paper, we propose a new approach using the symplectic decomposition to extract the Fisher sensitivity information. The symplectic decomposition utilises Williamson's theorem [16,17], a key theorem in Gaussian quantum information theory [18]. Originating in Hamiltonian mechanics, symplectic transformations preserve Hamilton's equations in phase space [19]. In analogy to the conjugate coordinates of phase space, i.e., position and momentum, we regard the input parameters as conjugate pairs and use a symplectic matrix for the decomposition of the Fisher information matrix (FIM). The symplectic eigenvalues of large magnitude, and the corresponding symplectic eigenvectors of the FIM, then reveal the most sensitive two-parameter pairs.
It should be noted that the proposed symplectic decomposition is only applicable to parameter spaces of even dimension, i.e., b ∈ R^2n, with the requirement that the parameters can be regarded as two-parameter pairs. For two-parameter families of probability distributions, such as the widely used location-scale families, there is a natural pairing of the parameters. For other cases, a decision-oriented pairing might be needed, for example a re-parametrization with respect to (w.r.t) the mean and standard deviation, or two moments of the random variables, using Eq 2 to transform the Fisher information matrix (FIM) into even dimensions. Once the FIM is obtained w.r.t parameters of even dimension, it is envisaged that the proposed symplectic decomposition is best utilised in tandem with the standard eigenvalue decomposition for sensitivity analysis using the FIM. This offers additional insight into the sensitivity analysis and, as the main computational burden is often in estimating the FIM, at negligible extra cost.
In what follows, we first review the approach of symplectic decomposition using Williamson's theorem in Section 2. The details of finding the symplectic eigenvalues and eigenvectors are given in the Supplementary Material together with the corresponding Matlab script. In Section 3, we give a theoretical comparison between the symplectic decomposition and the standard eigenvalue decomposition, in terms of the sensitivity of entropy and also from an optimization point of view using trace maximization. A benchmark study is conducted in Section 4, where the similarities and differences between the Fisher sensitivity and the main-effect indices used in variance-based analysis are discussed. In Section 5, a numerical example using a simple cantilever beam is used to demonstrate the effect of the symplectic decomposition. Concluding remarks are given in Section 6.

2 Symplectic decomposition
From elementary linear algebra, we know that a real symmetric matrix F can be diagonalized by orthogonal matrices:

F = Q Λ Q^T    (5)

where Q is the orthogonal eigenvector matrix, i.e., Q^T = Q^{-1}, and Λ = diag(λ_1, λ_2, . . .) contains the real eigenvalues.
The solution to Eq 5 can be found using the standard eigenvalue equation:

F Q = Q Λ    (6)

Williamson's theorem provides a symplectic variant of the results above. Let F ∈ R^{2n×2n} be a symmetric and positive definite matrix; Williamson's theorem says that F can be diagonalized using symplectic matrices [16,20]:

S^T F S = D, D = diag(d_1, . . ., d_n, d_1, . . ., d_n)    (7)

where D is a diagonal matrix with positive entries (d_j may be zero if F is semidefinite). The d_j, j = 1, 2, . . ., n are said to be the symplectic eigenvalues of the matrix F [21] and are in general not equal to the eigenvalues given in Eq 5. The matrix S = [u_1, . . ., u_n, v_1, . . ., v_n] is a real symplectic matrix that satisfies the condition:

S^T J S = J, J = [0, I_n; −I_n, 0]    (8)

The matrix I_n is the identity matrix of size n and the matrix J is itself a symplectic matrix. From Eq 8, we have the following expression for S^{−T}, i.e., the inverse transpose:

S^{−T} = J S J^{−1}    (9)

Substituting the expression for S^{−T} into Eq 7, we have:

F S = −J S J D    (10)

where the identity J^{−1} = −J is used. Taking the analogy with Eq 6, the matrix S is called the symplectic eigenvector matrix of F.
From Eq 10, we can see that each symplectic eigenvalue d_j corresponds to a pair of eigenvectors u_j, v_j ∈ R^{2n}:

F u_j = d_j J v_j, F v_j = −d_j J u_j    (11)

These eigenvector pairs can be normalized so that they form an orthonormal basis for the symplectic vector space:

u_j^T J v_k = δ_jk, u_j^T J u_k = v_j^T J v_k = 0    (12)

From Eq 10, the symplectic decomposition of F can also be written as:

F = S^{−T} D S^{−1} = (J S J^{−1}) D (J S J^{−1})^T    (13)

where the expression in Eq 9 is substituted for S^{−T}, and its transpose for S^{−1}, in the second-to-last step. From Eq 13, it is clear that the symmetric form of the matrix F is preserved. In addition, as the determinant of a symplectic matrix is always one, from Eq 7 the determinant of the FIM can be expressed as the product of its squared symplectic eigenvalues:

det F = ∏_{j=1}^{n} d_j^2    (14)

As the determinant of the FIM is also equal to the product of its standard eigenvalues, i.e., det F = ∏_{j=1}^{2n} λ_j, Eq 14 implies that the total sensitivity volume is conserved in the symplectic decomposition. The volume is in terms of the relative entropy, and more details will be given in Section 3.2. It should be noted that the symplectic eigenvectors in Eq 10 cannot be obtained using the standard eigenvalue algorithms directly. One approach uses the Schur form of skew-symmetric matrices [22]; the details are given in the Supplementary Material together with the corresponding Matlab script.
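The symplectic eigenvalues themselves are easy to obtain numerically: for positive definite F, the eigenvalues of JF occur in conjugate pairs ±i d_j. The sketch below uses this property in NumPy; it is an alternative route, not the Schur-form algorithm of the Supplementary Material (whose Matlab script is the reference implementation), and it assumes the parameters are arranged in paired order, e.g. (µ_1, µ_2, σ_1, σ_2), so that J has the block form of Eq 8.

```python
import numpy as np

def symplectic_eigenvalues(F):
    """Symplectic eigenvalues of a 2n x 2n positive definite matrix F.

    The eigenvalues of J @ F occur in pairs +/- i d_j, so each d_j can be
    read off as a positive imaginary part (each value appears twice).
    """
    n = F.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    d = np.sort(np.abs(np.linalg.eigvals(J @ F).imag))
    return d[::2]                 # keep one copy of each duplicated pair

# Normalised FIM of the motivating example, paired order b = (mu1, mu2, s1, s2)
F_nor = np.diag([1.0, 1.0, 2.0, 2.0])
d = symplectic_eigenvalues(F_nor)   # both symplectic eigenvalues equal sqrt(2)
```

Note that the repeated standard eigenvalues of Eq 4 cause no ambiguity here: each (µ, σ) pair combines into a single symplectic eigenvalue √(1·2) = √2, consistent with Eq 14 since det F_nor = 4 = (√2)²·(√2)².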

3 Discussion
The procedure given in the previous section shows that there exist symplectic matrices that can decompose the Fisher information matrix (FIM). Taking the analogy with the standard eigenvalue problem, in this section we first show that the symplectic eigenvector matrix maximises a matrix trace subject to a symplectic constraint, and then demonstrate that the symplectic eigenvalues of the FIM indicate the sensitivity of the Kullback-Leibler (K-L) divergence in a symplectic basis.

3.1 Constrained maximization
The standard eigenvalue equation can be obtained from a trace maximization problem subject to an orthogonality constraint:

max_X tr(X^T A X) subject to X^T X = I    (15)

To solve the constrained optimization in Eq 15, the method of Lagrange multipliers can be used. In this case, the Lagrangian and its first derivative are:

L(X, Λ) = tr(X^T A X) − tr(Λ^T (X^T X − I)), ∂L/∂X = 2 A X − 2 X Λ    (16)

where the matrix Λ is the Lagrange multiplier and we have assumed the matrix A is symmetric. In addition, since the constraint X^T X is symmetric, the Lagrange multiplier Λ is also symmetric [23]. Setting the first-order optimality condition for the Lagrangian, the orthogonality constraint leads to the standard eigenvalue problem A X = X Λ, as in Eq 6 when the Fisher information matrix F is the symmetric matrix of interest.
Taking the analogy, the maximization problem can be formulated subject to a symplectic constraint [22]:

max_X tr(X^T F X) subject to X^T J X = J    (17)

where both the matrix J and the Lagrange multiplier Λ are skew-symmetric, i.e., J^T = −J. The optimality condition then results in the following symplectic eigenvalue problem:

F X = J X Λ    (18)

which has exactly the same form as Eq 10.
The standard and the symplectic eigenvectors thus provide the directions that maximise the matrix trace in an orthogonal and a symplectic basis respectively, and the corresponding eigenvalues indicate the sensitivities. A special case with the Fisher information matrix (FIM) links its eigenvalues to the sensitivities of the Kullback-Leibler (K-L) divergence, aka the relative entropy, as described in the next section.
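The orthogonal-constraint case of Eq 15 is easy to verify numerically: the trace over the top-k eigenvectors equals the sum of the k largest eigenvalues and is never beaten by any other orthonormal basis. A small illustrative sketch, with a random symmetric matrix standing in for the FIM:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
A = A @ A.T                         # random symmetric positive semidefinite matrix
w, Q = np.linalg.eigh(A)            # eigenvalues in ascending order

k = 2
best = Q[:, -k:]                    # eigenvectors of the two largest eigenvalues
t_best = np.trace(best.T @ A @ best)        # equals w[-1] + w[-2]

X, _ = np.linalg.qr(rng.normal(size=(6, k)))  # a random orthonormal competitor
t_rand = np.trace(X.T @ A @ X)                # always <= t_best
```

The same constrained-maximization logic carries over to Eq 17, with the orthogonality constraint replaced by the symplectic one.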

3.2 Sensitivity of entropy
As mentioned in the introduction, for a general function y = h(x), probabilistic sensitivity analysis characterises the uncertainties of the output y that are induced by the random input x. When the joint probability distribution of the output is known, the entropy of the uncertainty can be estimated as [14]:

H(b) = −∫ p(y|b) ln p(y|b) dy    (21)

The perturbation of the entropy, defined as a relative entropy quantified using the K-L divergence, can be approximated by a quadratic form in the Fisher information matrix [6]:

D_KL( p(y|b) ∥ p(y|b + ∆b) ) ≈ (1/2) ∆b^T F ∆b    (22)

where the perturbed probability is approximated using its second-order Taylor expansion (e.g., see the appendix of [24]). It is noted in passing that, even without the quadratic approximation to the entropy, the Fisher information can be used to quantify the distribution perturbation in its own right [25].
Consider the standard eigenvalue decomposition of the FIM and substitute Eq 5 into the expression for the relative entropy in Eq 22:

D_KL ≈ (1/2) ∆b^T Q Λ Q^T ∆b = (1/2) Σ_{j=1}^{2n} λ_j ξ_j^2    (23)

where ξ_j = (Q^{−1} ∆b)_j, and it is clear that the eigenvalues λ_j indicate the magnitude of the entropy sensitivity. It can be seen from Eq 23 that the relative entropy in this quadratic form can be regarded geometrically as an ellipsoid, i.e., Σ_j λ_j ξ_j^2 = 1. This is a consequence of the semi-positive definiteness of the FIM, and the ellipsoid is proper when the FIM is positive definite. The eigenvectors of the FIM define the principal axes, and the inverses of the square roots of the corresponding eigenvalues, i.e., 1/√λ_j, are the principal radii of the ellipsoid. Since the principal axes are orthogonal to each other, there is no direct relationship between any pair of coordinates, say (ξ_j, ξ_{n+j}), even if they are dominated by the two-parameter pairs for the same variable of interest as discussed in the introduction. Similarly, the relative entropy in the symplectic basis can be expressed as:

D_KL ≈ (1/2) Σ_{j=1}^{n} d_j (α_j^2 + β_j^2)    (24)

where α_j = (S^{−1} ∆b)_j and β_j = (S^{−1} ∆b)_{j+n}. In contrast to Eq 23, it can be seen that the coordinate pair (α_j, β_j) is now forced to form a circle with radius 1/√d_j. The consequence is that if (α_j, β_j) corresponds to the two-parameter pair of interest, the two parameters are symplectically equivalent, in analogy to the conjugate pair, position and momentum, in Hamiltonian mechanics.
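The quadratic approximation of Eq 22 can be sanity-checked against the closed-form K-L divergence between two Normal distributions. A minimal sketch, assuming the parameter values of the variable L in the motivating example and a small hypothetical perturbation:

```python
import numpy as np

def kl_normal(mu0, s0, mu1, s1):
    """Closed-form KL divergence between two univariate Normal distributions."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

mu, s = 0.45, 0.045              # mean and std of L in the motivating example
dmu, ds = 0.001, 0.0005          # small perturbation of (mu, sigma), illustrative

F = np.diag([1 / s**2, 2 / s**2])    # FIM of a Normal w.r.t. (mu, sigma)
db = np.array([dmu, ds])
quad = 0.5 * db @ F @ db             # quadratic form of Eq 22

exact = kl_normal(mu, s, mu + dmu, s + ds)
```

For this perturbation the quadratic form agrees with the exact divergence to within a few percent, consistent with the second-order Taylor expansion.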

4 Benchmark study
The Fisher information was introduced in [6] for sensitivity analysis with respect to distribution parameters. A benchmark study of the Fisher sensitivity, using a linear function with decreasing coefficients and a product function with constant coefficients, was conducted in [11]. In this section, we apply the Fisher sensitivity analysis, using both the standard eigenvalue decomposition and the proposed symplectic decomposition, to a high dimensional function (Eq 25). This function has a 15-dimensional input vector x and has been used in [7] for variance-based sensitivity analysis. The function's coefficients, a_1, a_2 and a_3, are chosen so that the first five input variables have almost no effect on the output variance, x_6 to x_10 have a much larger effect, and the remaining five contribute significantly to the output variance. All input variables are assumed to be independent and to follow a standard Gaussian distribution, i.e., x_i ∼ N(0, 1).
The results from the standard eigenvalue analysis of the Fisher information matrix are shown in Figure 1 and Figure 2.
The eigenvalue spectrum has been computed using different numbers of Monte Carlo samples as a convergence check, and the eigenvector results in Figure 2 are from 20000 samples. Only the 1st eigenvector is shown in Figure 2, as the eigenvalues corresponding to the rest of the eigenvectors are of much smaller amplitude, as seen in Figure 1. The sensitivity to the mean parameters of the input variables in Figure 2 clearly indicates that there are three importance groups, with x_11 to x_15 being the most important and x_1 to x_5 the least important. This is in good agreement with the variance-based sensitivity analysis of [7]. The sensitivity to the standard deviations, on the other hand, does not show a clearly clustered trend, although it is clear that the first few variables have almost no effect. Different from the variance-based analysis, where only the amplitudes of the importance are measured, the Fisher sensitivity vectors also provide the relative phases of the sensitivity to the distribution parameters. For example, in Figure 2 it is clear that the effects of the input mean parameters on the output PDF uncertainty are in the opposite direction to the effects due to the perturbation of the standard deviations. Note that the absolute sign of an eigenvector is arbitrary.
The symplectic analysis of the FIM, for the benchmark function in Eq 25, is shown in Figure 3 and Figure 4, in a similar format to the results presented above for the standard analysis. Different from the standard eigenvalue results, the symplectic eigenvalue spectrum has half the dimension of the standard one, but the symplectic eigenvectors always come in pairs. The symplectic sensitivity results in Figure 4 present a similar picture to the standard results; in particular, the sensitivity to the standard deviations for the u_1 vector indicates the clear group importance discussed above. However, in this case, the symplectic results do not provide any new insights. This is because the FIM is dominated by its first eigenvector. For this benchmark function, no normalisation is required for the sensitivity analysis as all input variables are equivalent. As a result, there is no compression of the sensitivity information as discussed in the motivating example and the engineering example to be studied in Section 5. For the purpose of benchmarking, the sensitivity vectors from the Fisher analysis can be further summarised for the individual variables and the results compared to the true main-effect indices given in [7]. The sensitivity vectors from the FIM provide the principal directions for a simultaneous variation of the input parameters. To look at the effect of individual parameters, Eq 23 and Eq 24 can be used. For example, assuming only parameter b_j is varied, from Eq 23:

D_KL ≈ (1/2) λ_1 [ q_j1 ∆b_j ]^2

where only the dominant first pair, as seen in Figure 3, is shown, and q_jk is the j-th element of the eigenvector q_k. The term inside the square bracket can be regarded as the contribution to the entropy change due to the perturbation of b_j alone. The contributions from the mean and the standard deviation parameters can then be further aggregated for the corresponding variables, assuming the perturbations are independent.
The resulting relative importance of the variables from the Fisher analysis, using the dominant first eigenvector (Fisher Eig) and the first pair of symplectic eigenvectors (Fisher S-Eig), can then be compared to the variance-based main effects; the comparison is shown in Figure 5. Although there are small deviations, the relative importance of the three groups of variables and the order of the differences are clearly identified by the Fisher sensitivity analysis. Furthermore, the ratio of the first eigenvalue to the sum of all eigenvalues, as seen in Figure 1, is about 0.75 in this case, and that can be regarded as the contribution of the first eigenvector to the entropy change. Although not directly comparable, 0.75 is similar to the 72% main-effects contribution to the output variance reported in [7]. This offers a plausible suggestion that, in this case, the dominant first eigenvector of the FIM corresponds to the main effects from the variance-based sensitivity analysis. It should be noted that although the contributions from perturbations of individual parameters are useful for benchmarking against variance-based main effects, the purpose of the Fisher sensitivity analysis is to look at simultaneous variations of the input parameters. In contrast to the total effects from variance-based analysis, the eigenvectors and symplectic eigenvectors of the FIM provide principal sensitivity directions based on the impact on the output PDF, or more specifically the entropy of the output uncertainty. Not only do the eigenvectors indicate the relative amplitudes, they also provide the relative phase information of the input parameter variations, as discussed earlier.
In the next section, we will consider an engineering example where the input variables are typically of different units. As their values tend to be of different orders of magnitude, normalisation is required for the Fisher sensitivity analysis. In addition, different from the scalar output of this benchmark function, engineering problems tend to have multiple outputs, as in the example given below.

5 Demonstrating application to an engineering example

In this section, we consider an engineering design example where the Fisher information is used for parametric sensitivity analysis of a cantilever beam. The beam is subject to a white noise excitation at the mid-span position, see Figure 6, where the excitation is band-limited and only the first three modes are excited. In this case, the quantities of interest are the peak r.m.s responses, i.e., the maximum responses along the beam, for both acceleration and strain (the output y in Eq 21 is 2-dimensional). The frequency response functions for both acceleration and strain responses, at different positions along the beam, are obtained via modal summation, and the modal damping is assumed to be 0.1 for all modes; see the Supplementary Material for details.
It is assumed that five input variables are random, x = (E, ρ, L, w, t), and can be described by Normal distributions, b = (µ_m, σ_m), m = 1, 2, . . ., 5, as listed in Table 1. Two cases with different values for the standard deviations of the input random variables are considered, case-1 with small variance and case-2 with big variance, with the mean values the same for both cases. To estimate the Fisher information matrix (FIM) in Eq 1, an efficient numerical method based on Monte Carlo sampling and the Likelihood Ratio method is used here. The Likelihood Ratio (aka score function) method obtains a gradient estimate of a performance measure w.r.t continuous parameters in a single simulation run. More details of the method can be found in [6].
For the numerical results below, the FIM is normalised by the standard deviations (for completeness, the results with the proportionally normalised FIM are given in the Supplementary Material):

(F_nor)_jk = σ_m σ_n F_jk

where j, k = 1, 2, . . ., 10, and m = j/2, n = k/2 when j, k are even, and m = (j + 1)/2, n = (k + 1)/2 when j, k are odd. As discussed in the introduction, the normalisation is necessary for practical applications as the input variables are of different units and often differ by orders of magnitude. Moreover, it largely improves the condition number of the FIM. For example, in this case study, the condition number of the FIM is in the order of 10^27 for both case-1 and case-2, and it reduces to the order of 10^2 for both cases after normalisation.
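The effect of this normalisation on conditioning can be illustrated with a stand-in matrix. In the sketch below the σ values and the random core matrix are hypothetical numbers for illustration, not the beam data; the FIM entries are scaled by 1/(σ_m σ_n) to mimic parameters with wildly different units:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sigmas (not Table 1 values), interleaved order b = (mu_m, sigma_m)
sig = np.array([11.5e9, 870.0, 0.045, 0.004, 0.0006])
scale = 1.0 / np.repeat(sig, 2)          # both mu_m and sigma_m scale with 1/sigma_m

# Hypothetical stand-in for an estimated 10x10 FIM
A = rng.normal(size=(10, 10))
F = np.outer(scale, scale) * (A @ A.T + 10 * np.eye(10))

cond_raw = np.linalg.cond(F)             # astronomically large

# Normalise by the standard deviations (diagonal Jacobian in Eq 2)
D = np.diag(np.repeat(sig, 2))
F_nor = D @ F @ D

cond_nor = np.linalg.cond(F_nor)         # modest after normalisation
```

The congruence transform removes the unit-driven scale disparity, which is what collapses the condition number from the order of 10^27 to 10^2 in the beam study.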
Once the FIM is estimated and normalised, the standard approach is to compute the eigenvalues and eigenvectors of the FIM for sensitivity analysis [6]. As discussed, the eigenvalues of the FIM represent the magnitudes of the sensitivities with respect to simultaneous parameter variations, and the most sensitive directions are given by the eigenvectors corresponding to the largest eigenvalues. The standard eigenvalues and eigenvectors, denoted 'EigValue' and 'EigVector', are shown in Figure 7, Figure 8 (a) and Figure 9 (a). In Figure 7, the eigenvalues are ordered from large to small, with the first four of similar magnitude but much larger than the rest. Note that the spectrum here is quite different from the benchmark case shown in Figure 1, where one eigenvalue dominates. The corresponding first four eigenvectors are displayed in Figure 8 and Figure 9. The results for case-1 in Figure 8 (a) are relatively straightforward, as it can be seen that the influential parameters are σ_L, σ_t, µ_L, µ_t from the first to the fourth eigenvectors. As the first eigenvalue is almost twice the second one, this implies we can focus on the parameter σ_L to change the entropy the most, as discussed in Section 3.2. However, the sensitivity information from the eigenvectors for case-2 in Figure 9 (a) is less clear. This is because the four eigenvalues in this case are of very similar magnitude. Furthermore, there are three different parameters that are important for the 2nd eigenvector, µ_E, µ_ρ, µ_t, and for the 4th eigenvector, σ_E, σ_ρ, σ_t.
If we take a closer look at the eigenvector results in Figure 8 (a) and Figure 9 (a), there appears to be a split phenomenon between the mean and standard deviation of the same variables. In Figure 8 (a), the 1st and 2nd eigenvectors point us to the standard deviations of the variables L and t, while it is the mean values of the two variables that are important for the 3rd and 4th eigenvectors. Similarly, in Figure 9 (a) for case-2, σ_L and µ_L, the standard deviation and mean of the variable L, dominate the 1st and the 3rd eigenvectors respectively, while the means and standard deviations of E, ρ, t are the influential parameters for the 2nd and 4th eigenvectors. In other words, the dominance of the sensitivity to the mean and the standard deviation of the same variable splits into different eigenvectors, e.g., σ_L dominates the 1st eigenvector while µ_L dominates the 3rd. This split phenomenon can be understood as a consequence of the normalisation mentioned for the motivating example in Section 1.2. The normalisation compresses the relative magnitudes of the eigenvalues so that the Fisher information matrix is better conditioned. This makes the ellipsoid for the relative entropy (c.f. Section 3.2) closer to a sphere. The orthogonality of the principal axes can then result in a split between the mean and standard deviation parameters of the same variable, as their influences are similar. As a result, it is difficult to identify the most influential variables. On the contrary, the symplectic decomposition enforces a symplectic structure that tends to mitigate this issue, making the sensitivity information more distinctive by pairing the parameters of the same variable.
As described in Section 2, the same Fisher information matrix (FIM) can also be decomposed onto a symplectic basis. The symplectic eigenvalues and eigenvectors are denoted as 'S-EigValue' and 'S-EigVector' and are shown in Figure 7 alongside the standard eigenvalues. The symplectic eigenvectors come in pairs, (u_1, v_1) and (u_2, v_2), as shown in Figure 8 (b) and Figure 9 (b), and each pair corresponds to the same symplectic eigenvalue. Compared to the standard eigenvectors, the split parameters are grouped together in the symplectic eigenvectors. For example, for the case-1 results in Figure 8 (b), the 1st symplectic eigenvector pair identifies L as the influential variable, with its mean and standard deviation dominating u_1 and v_1 respectively. The same can be found for the variable t in the 2nd pair of symplectic eigenvectors (u_2, v_2) in Figure 8 (b). A similar conclusion can be drawn for the case-2 results in Figure 9 (b). The grouping of the parameters is a consequence of the symplectic structure, where parameters are regarded as two-parameter pairs, e.g., (µ, σ) in this case. This is particularly pertinent for sensitivity analysis as it makes the influential variables, or two-parameter pairs, very distinctive.
It is interesting to note that in both cases, the square of the 1st symplectic eigenvalue is almost the same as the product of the two standard eigenvalues that split. For example, for case-1, d_1² = 1.96, which is about the same as the product λ_1 × λ_3 = 1.97, corresponding to the 1st and 3rd eigenvectors that are dominated by σ_L and µ_L respectively. This is a consequence of Eq 14: when two of the standard eigenvectors split, the product of their eigenvalues tends to be conserved in the corresponding symplectic decomposition. This also occurs for the decision-oriented pairings presented in Figure 10. For case-1, the 1st and 2nd eigenvectors are dominated by σ_L and σ_t. When these two parameters are paired together, as in Figure 10 (a), d_1² = 2.56 is very similar to the product λ_1 × λ_2 = 2.6 in Figure 8 (a). Although in this simple example the parameter split found from the standard eigenvalue analysis can be easily identified, this becomes more difficult with a larger number of parameters. On the contrary, the parameter pairing structure is enforced by the symplectic decomposition. The same conclusion can also be made for the proportionally normalised FIM, as presented in the Supplementary Material.
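A minimal numerical sketch of the symplectic spectrum (via Williamson's theorem): for a symmetric positive-definite 2n × 2n matrix M, the eigenvalues of JM are purely imaginary pairs ±i d_k, and the moduli d_k are the symplectic eigenvalues. The diagonal example below uses hypothetical values with parameters ordered (µ_L, µ_t, σ_L, σ_t); in this diagonal case each d_k is exactly the geometric mean of the paired standard eigenvalues, which illustrates the product conservation discussed above:

```python
import numpy as np

def symplectic_eigenvalues(M):
    """Symplectic eigenvalues of a symmetric positive-definite 2n x 2n
    matrix M (Williamson's theorem). The eigenvalues of J @ M come in
    purely imaginary pairs +/- i d_k; return the d_k, largest first."""
    n = M.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    d = np.sort(np.abs(np.linalg.eigvals(J @ M).imag))[::-1]
    return d[::2]  # each d_k appears twice (as +i d_k and -i d_k)

# Hypothetical diagonal FIM, parameters ordered (mu_L, mu_t, sigma_L, sigma_t);
# the standard eigenvalues are then just the diagonal entries.
lam = np.array([1.3, 0.8, 1.5, 1.1])
d = symplectic_eigenvalues(np.diag(lam))

# d_1^2 equals the product of the paired standard eigenvalues:
# here the (mu_L, sigma_L) pair gives d_1^2 = 1.3 * 1.5, close to 1.95.
print(d[0] ** 2)
```

For a non-diagonal FIM the relation is no longer exact, consistent with the approximate equality d_1² ≈ λ_1 × λ_3 reported for case-1.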
In addition, contrary to the standard eigenvalue decomposition, where the sensitivity information is fixed for a given FIM, the symplectic variant takes into account user input for the pairing decisions. As an example, two different pairing decisions are considered for the same FIM from case-1 presented in Figure 8. The symplectic eigenvectors are shown in Figure 10, with the rows and columns of the FIM rearranged as per the pairing requirement. It should be noted that while the standard eigenvalue analysis is invariant under such symmetric row/column permutations, the symplectic spectra are different, as shown in Figure 10.
Instead of using the mean and standard deviation as natural pairs for the same variable, we consider pairing the mean and standard deviation parameters of two different variables. In Figure 10 (a), we pair L and t, i.e., (µ_L, µ_t) and (σ_L, σ_t), while in Figure 10 (b), we pair L and w, i.e., (µ_L, µ_w) and (σ_L, σ_w). It is noted in passing that, although used here mainly for demonstration purposes, such pairing decisions can arise in practice when an action to reduce the uncertainties of two independent variables impacts both. For example, modifying the production line can have the same effect on the uncertainties of the length L and the thickness t, and this would prompt a decision-oriented sensitivity analysis with respect to the parameter pairs. It is clear from Figure 10 that the sensitivities to the paired parameters are grouped together as before. For example, the sensitivity to the pair (σ_L, σ_t) dominates the first group of symplectic eigenvectors in Figure 10 (a), and (σ_L, σ_w) are grouped together in the 2nd symplectic eigenvector pair in Figure 10 (b). It is interesting to note that the 1st symplectic eigenvector pair in Figure 10 (b) is very similar to the 2nd symplectic eigenvector pair in Figure 8 (b). This is mostly because the pairings for E, ρ and t are the same in the two figures. However, with the L and w pairing in Figure 10 (b), the dominance of L seen in Figure 8 disappears and t becomes the most sensitive variable. This demonstrates that the symplectic decomposition is decision oriented: even for the same FIM, it extracts different sensitivity information according to different pairing strategies.
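The pairing dependence can be sketched numerically: a symmetric row/column permutation of the FIM leaves the standard eigenvalues unchanged, but because the symplectic form fixes which coordinates are conjugate, it changes the symplectic spectrum. The 4 × 4 matrix and the (µ, σ) block-ordering convention below are illustrative assumptions, not the paper's actual case-1 FIM:

```python
import numpy as np

def symplectic_eigenvalues(M):
    # Symplectic eigenvalues d_k of a symmetric positive-definite matrix:
    # moduli of the (purely imaginary) eigenvalues of J @ M, largest first.
    n = M.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    return np.sort(np.abs(np.linalg.eigvals(J @ M).imag))[::-1][::2]

# Hypothetical diagonal FIM with coordinates (mu_L, mu_t, sigma_L, sigma_t):
# this block ordering pairs mu_L with sigma_L and mu_t with sigma_t.
M = np.diag([2.0, 0.5, 1.5, 1.0])

# Re-pairing decision: swap the sigma_L and sigma_t rows/columns, so that
# mu_L is now paired with sigma_t (and mu_t with sigma_L).
P = np.eye(4)[[0, 1, 3, 2]]
M2 = P @ M @ P.T

# Standard eigenvalues are invariant under the symmetric permutation...
assert np.allclose(np.sort(np.linalg.eigvalsh(M)),
                   np.sort(np.linalg.eigvalsh(M2)))

# ...but the symplectic spectrum depends on the pairing decision.
print(symplectic_eigenvalues(M))   # pairs (2.0, 1.5) and (0.5, 1.0)
print(symplectic_eigenvalues(M2))  # pairs (2.0, 1.0) and (0.5, 1.5)
```

In the same way, rearranging the FIM for the (L, t) or (L, w) pairings in Figure 10 produces different symplectic spectra from the single FIM of case-1.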
While only an engineering design example is considered here, the benefits of the symplectic decomposition are expected to carry over to general decision problems, whenever the spectral analysis of the FIM is used for sensitivity analysis. As the additional computational cost is negligible once the Fisher information matrix is obtained, the symplectic decomposition can be used in tandem with the standard eigenvalue decomposition to extract more useful sensitivity information.

Conclusions
A new probabilistic sensitivity metric has been proposed based on the symplectic spectral analysis of the Fisher information matrix (FIM). In contrast to the standard eigenvalue decomposition, the symplectic decomposition of the FIM naturally identifies the sensitivity information with respect to two-parameter pairs, e.g., the mean and standard deviation of a random input. The symplectic eigenvalues of large magnitude, and the corresponding symplectic eigenvectors of the FIM, then reveal the most sensitive two-parameter pairs. Through an engineering design example using a simple cantilever beam, it is observed that the normalisation of the FIM tends to compress the relative magnitudes of the eigenvalues. Geometrically, the relative entropy ellipsoid becomes near-spherical (c.f. Section 3.2) due to the normalisation, and this results in a split of the different distribution parameters of the same variable across eigenvectors. It is demonstrated that the proposed symplectic decomposition can reveal this concealed sensitivity information between the parameter pairs. Contrary to the standard eigendecomposition, where the sensitivity information is fixed for a given FIM, the symplectic variant takes into account user input for the pairing decisions. As the additional computational cost is negligible once the Fisher information matrix is obtained, the symplectic decomposition can thus be used in tandem with the standard eigenvalue decomposition to gain more insight into the sensitivity information, and to orient towards better decision support under uncertainties.
The proposed symplectic decomposition is only applicable to parameter spaces of even dimension. For distribution parameters that belong to a two-parameter family of probability distributions, such as the widely used location-scale families, there is a natural pairing of the parameters. For more general cases, a decision-oriented two-parameter re-parametrization of the Fisher information matrix is necessary, and this is one of the future research directions to be explored.

Figure 1: Eigenvalue spectrum of the FIM, for the sensitivity of the benchmark function in Eq 25. Results are given for different numbers of MC samples as a convergence check.

Figure 2: The first eigenvector of the FIM for the benchmark function in Eq 25 with respect to the distribution parameters, mean and standard deviation (Std Dev) in this Gaussian input case, of the 15 input variables. Only the dominant first eigenvector, as seen in Figure 1, is shown here.

Figure 3: The symplectic eigenvalue spectrum of the FIM (S-Eig), for the sensitivity of the benchmark function in Eq 25. Results are given for different numbers of MC samples as a convergence check. Note that the dimension of the symplectic spectrum is 15, half the size of the standard eigenvalue spectrum shown in Figure 1.

Figure 4: The first pair of symplectic eigenvectors (S-EigVector) of the FIM for the benchmark function in Eq 25 with respect to the distribution parameters, mean and standard deviation (Std Dev) in this Gaussian input case, of the 15 input variables. Only the dominant first pair, as seen in Figure 3, is shown here.

Figure 5: Variable importance ranking using three different indices: the 1st set of S-EigVectors, the 1st EigVector, and the true main effect indices given in [7]. The results are normalised by the largest value.

Figure 6: A cantilever beam subject to white noise excitation of unit amplitude; the responses consist of peak r.m.s. acceleration and strain. 'Peak' indicates the maximum response along the beam for each sample of the random input. The two types of responses are normalised by their maximum values across the ensemble of random samples.

Figure 8: Eigenvectors (EigVector) and symplectic eigenvectors (S-EigVector) of the FIM for Case-1. The symplectic eigenvectors come in pairs, (u_1, v_1) and (u_2, v_2), and each pair corresponds to the same symplectic eigenvalue.

Figure 9: Eigenvectors (EigVector) and symplectic eigenvectors (S-EigVector) of the FIM for Case-2.

Figure 10: Symplectic eigenvectors for the decision-oriented pairings: (a) S-EigVector (L and t pairing); (b) S-EigVector (L and w pairing).

Table 1: Mean (µ) and Coefficient of Variation (CoV) for the random variables. The CoV differs between the two cases considered. 20000 Monte Carlo samples are used for both Case-1 and Case-2.