
ABSTRACT

We present an offline, iterated particle filter to facilitate statistical inference in general state space hidden Markov models. Given a model and a sequence of observations, the associated marginal likelihood L is central to likelihood-based inference for unknown statistical parameters. We define a class of “twisted” models: each member is specified by a sequence of positive functions ψ and has an associated ψ-auxiliary particle filter that provides unbiased estimates of L. We identify a sequence ψ* that is optimal in the sense that the ψ*-auxiliary particle filter’s estimate of L has zero variance. In practical applications, ψ* is unknown so the ψ*-auxiliary particle filter cannot straightforwardly be implemented. We use an iterative scheme to approximate ψ* and demonstrate empirically that the resulting iterated auxiliary particle filter significantly outperforms the bootstrap particle filter in challenging settings. Applications include parameter estimation using a particle Markov chain Monte Carlo algorithm.

1. Introduction

Particle filtering, or sequential Monte Carlo (SMC), methodology involves the simulation over time of an artificial particle system $(\xi_t^i;\ t \in \{1, \ldots, T\},\ i \in \{1, \ldots, N\})$. It is particularly suited to numerical approximation of integrals of the form
$$Z := \int_{\mathsf{X}^T} \mu_1(x_1)\, g_1(x_1) \prod_{t=2}^{T} f_t(x_{t-1}, x_t)\, g_t(x_t)\, dx_{1:T}, \qquad (1)$$
where $\mathsf{X} = \mathbb{R}^d$ for some $d \in \mathbb{N}$, $T \in \mathbb{N}$, $x_{1:T} := (x_1, \ldots, x_T)$, $\mu_1$ is a probability density function on $\mathsf{X}$, each $f_t$ a transition density on $\mathsf{X}$, and each $g_t$ is a bounded, continuous, and nonnegative function. Algorithm 1 describes a particle filter, using which an estimate of (1) can be computed as
$$Z^N := \prod_{t=1}^{T} \frac{1}{N} \sum_{i=1}^{N} g_t(\xi_t^i). \qquad (2)$$

Algorithm 1

A Particle Filter

1.

Sample $\xi_1^i \sim \mu_1$ independently for $i \in \{1, \ldots, N\}$.

2.

For $t = 2, \ldots, T$, sample independently
$$\xi_t^i \sim \frac{\sum_{j=1}^{N} g_{t-1}(\xi_{t-1}^j)\, f_t(\xi_{t-1}^j, \cdot)}{\sum_{j=1}^{N} g_{t-1}(\xi_{t-1}^j)}, \qquad i \in \{1, \ldots, N\}.$$
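A minimal runnable sketch of Algorithm 1, assuming the model is supplied through generic callables (the names sample_mu1, sample_f, and g are chosen here purely for illustration and do not come from the article); the returned value corresponds to the estimate (2).

```python
# Illustrative sketch of Algorithm 1: sample_mu1 draws the initial particles,
# sample_f(t, x, rng) propagates them through f_t, and g(t, x) evaluates g_t.
import numpy as np

def particle_filter(sample_mu1, sample_f, g, T, N, seed=0):
    rng = np.random.default_rng(seed)
    xi = sample_mu1(rng, N)                      # step 1: xi_1^i ~ mu_1
    Z = 1.0
    for t in range(1, T + 1):
        w = g(t, xi)                             # g_t(xi_t^i), i = 1, ..., N
        Z *= w.mean()                            # factor (1/N) sum_i g_t(xi_t^i)
        if t < T:
            # step 2: resample proportionally to g_t, then propagate through f_{t+1}
            anc = rng.choice(N, size=N, p=w / w.sum())
            xi = sample_f(t + 1, xi[anc], rng)
    return Z

# toy usage: a bootstrap-style filter, as in (4), for a univariate linear
# Gaussian model with mu = N(0, 1), f(x, .) = N(.; 0.9 x, 1), g(x, y_t) = N(y_t; x, 1)
y = np.random.default_rng(42).normal(size=20)    # stand-in observations (toy data)
Z_hat = particle_filter(
    sample_mu1=lambda rng, N: rng.normal(size=N),
    sample_f=lambda t, x, rng: 0.9 * x + rng.normal(size=x.shape),
    g=lambda t, x: np.exp(-0.5 * (y[t - 1] - x) ** 2) / np.sqrt(2 * np.pi),
    T=20, N=1000)
print(Z_hat)
```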

Particle filters were originally applied to statistical inference for hidden Markov models (HMMs) by Gordon, Salmond, and Smith (1993), and this setting remains an important application. Letting $\mathsf{Y} = \mathbb{R}^{d'}$ for some $d' \in \mathbb{N}$, an HMM is a Markov chain $(X_t, Y_t)_{t \in \mathbb{N}}$ evolving on $\mathsf{X} \times \mathsf{Y}$, where $(X_t)_{t \in \mathbb{N}}$ is itself a Markov chain and, for $t \in \{1, \ldots, T\}$, each $Y_t$ is conditionally independent of all other random variables given $X_t$. In a time-homogeneous HMM, letting $\mathbb{P}$ denote the law of this bivariate Markov chain, we have
$$\mathbb{P}(X_{1:T} \in A,\, Y_{1:T} \in B) := \int_{A \times B} \mu(x_1)\, g(x_1, y_1) \prod_{t=2}^{T} f(x_{t-1}, x_t)\, g(x_t, y_t)\, dx_{1:T}\, dy_{1:T}, \qquad (3)$$
where $\mu: \mathsf{X} \to \mathbb{R}_+$ is a probability density function, $f: \mathsf{X} \times \mathsf{X} \to \mathbb{R}_+$ a transition density, $g: \mathsf{X} \times \mathsf{Y} \to \mathbb{R}_+$ an observation density, and $A$ and $B$ are measurable subsets of $\mathsf{X}^T$ and $\mathsf{Y}^T$, respectively. Statistical inference is often conducted on the basis of a realization $y_{1:T}$ of $Y_{1:T}$ for some finite $T$, which we consider to be fixed throughout the remainder of the article. Letting $\mathbb{E}$ denote expectation with respect to $\mathbb{P}$, our main statistical quantity of interest is $L := \mathbb{E}\big[\prod_{t=1}^{T} g(X_t, y_t)\big]$, the marginal likelihood associated with $y_{1:T}$. In the above, we take $\mathbb{R}_+$ to be the nonnegative real numbers, and assume throughout that $L > 0$.

Running Algorithm 1 with
$$\mu_1 = \mu, \qquad f_t = f, \qquad g_t(x) = g(x, y_t), \qquad (4)$$
corresponds exactly to running the bootstrap particle filter (BPF) of Gordon, Salmond, and Smith (1993), and we observe that when (4) holds, the quantity $Z$ defined in (1) is identical to $L$, so that $Z^N$ defined in (2) is an approximation of $L$. In applications where $L$ is the primary quantity of interest, there is typically an unknown statistical parameter $\theta \in \Theta$ that governs $\mu$, $f$, and $g$, and in this setting the map $\theta \mapsto L(\theta)$ is the likelihood function. We continue to suppress the dependence on $\theta$ from the notation until Section 5.

The accuracy of the approximation $Z^N$ has been studied extensively. For example, the expectation of $Z^N$, under the law of the particle filter, is exactly $Z$ for any $N \in \mathbb{N}$, and $Z^N$ converges almost surely to $Z$ as $N \to \infty$; these can be seen as consequences of Del Moral (2004, Theorem 7.4.2). For practical values of $N$, however, the quality of the approximation can vary considerably depending on the model and/or observation sequence. When used to facilitate parameter estimation using, for example, particle Markov chain Monte Carlo (Andrieu, Doucet, and Holenstein 2010), it is desirable that the accuracy of $Z^N$ be robust to small changes in the model, and this is not typically the case.

In Section 2 we introduce a family of "twisted HMMs," parameterized by a sequence of positive functions $\psi := (\psi_1, \ldots, \psi_T)$. Running a particle filter associated with any of these twisted HMMs provides unbiased and strongly consistent estimates of $L$. Some specific definitions of $\psi$ correspond to well-known modifications of the BPF, and the algorithm itself can be viewed as a generalization of the auxiliary particle filter (APF) of Pitt and Shephard (1999). Of particular interest is a sequence $\psi^*$ for which $Z^N = L$ with probability 1. In general, $\psi^*$ is not known and the corresponding APF cannot be implemented, so our main focus in Section 3 is approximating the sequence $\psi^*$ iteratively, and defining final estimates through use of a simple stopping rule. In the applications of Section 5, we find that the resulting estimates significantly outperform the BPF, and exhibit some robustness to both increases in the dimension of the latent state space $\mathsf{X}$ and changes in the model parameters. There are some restrictions on the class of transition densities and the functions $\psi_1, \ldots, \psi_T$ that can be used in practice, which we discuss.

This work builds upon a number of methodological advances, most notably the twisted particle filter (Whiteley and Lee 2014), the APF (Pitt and Shephard 1999), block sampling (Doucet, Briers, and Sénécal 2006), and look-ahead schemes (Lin et al. 2013). In particular, the sequence ψ* is closely related to the generalized eigenfunctions described in Whiteley and Lee (2014), but in that work the particle filter as opposed to the HMM was twisted to define alternative approximations of L. For simplicity, we have presented the BPF in which multinomial resampling occurs at each timestep. Commonly employed modifications of this algorithm include adaptive resampling (Kong, Liu, and Wong 1994; Liu and Chen 1995) and alternative resampling schemes (see, e.g., Douc, Cappé, and Moulines 2005). Generalization to the time-inhomogeneous HMM setting is fairly straightforward, so we restrict ourselves to the time-homogeneous setting for clarity of exposition.

2. Twisted Models and the ψ-Auxiliary Particle Filter

Given an HMM $(\mu, f, g)$ and a sequence of observations $y_{1:T}$, we introduce a family of alternative twisted models based on a sequence of real-valued, bounded, continuous, and positive functions $\psi := (\psi_1, \psi_2, \ldots, \psi_T)$. Letting, for an arbitrary transition density $f$ and function $\psi$, $f(x, \psi) := \int_{\mathsf{X}} f(x, x')\, \psi(x')\, dx'$, we define a sequence of normalizing functions $(\tilde{\psi}_1, \tilde{\psi}_2, \ldots, \tilde{\psi}_T)$ on $\mathsf{X}$ by $\tilde{\psi}_t(x_t) := f(x_t, \psi_{t+1})$ for $t \in \{1, \ldots, T-1\}$, $\tilde{\psi}_T \equiv 1$, and a normalizing constant $\tilde{\psi}_0 := \int_{\mathsf{X}} \mu(x_1)\, \psi_1(x_1)\, dx_1$. We then define the twisted model via the following sequence of twisted initial and transition densities:
$$\mu_1^\psi(x_1) := \frac{\mu(x_1)\, \psi_1(x_1)}{\tilde{\psi}_0}, \qquad f_t^\psi(x_{t-1}, x_t) := \frac{f(x_{t-1}, x_t)\, \psi_t(x_t)}{\tilde{\psi}_{t-1}(x_{t-1})}, \quad t \in \{2, \ldots, T\}, \qquad (5)$$
and the sequence of positive functions
$$g_1^\psi(x_1) := \frac{g(x_1, y_1)\, \tilde{\psi}_1(x_1)}{\psi_1(x_1)}\, \tilde{\psi}_0, \qquad g_t^\psi(x_t) := \frac{g(x_t, y_t)\, \tilde{\psi}_t(x_t)}{\psi_t(x_t)}, \quad t \in \{2, \ldots, T\}, \qquad (6)$$
which play the role of observation densities in the twisted model. Our interest in this family is motivated by the following invariance result. We denote by $\mathbf{1}$ the sequence of constant functions equal to 1 everywhere.

Proposition 1.

If $\psi$ is a sequence of bounded, continuous, and positive functions, and
$$Z^\psi := \int_{\mathsf{X}^T} \mu_1^\psi(x_1)\, g_1^\psi(x_1) \prod_{t=2}^{T} f_t^\psi(x_{t-1}, x_t)\, g_t^\psi(x_t)\, dx_{1:T},$$
then $Z^\psi = L$.

Proof.

We observe that
$$\mu_1^\psi(x_1)\, g_1^\psi(x_1) \prod_{t=2}^{T} f_t^\psi(x_{t-1}, x_t)\, g_t^\psi(x_t) = \frac{\mu(x_1)\psi_1(x_1)}{\tilde{\psi}_0} \cdot \frac{g(x_1, y_1)\tilde{\psi}_1(x_1)}{\psi_1(x_1)}\tilde{\psi}_0 \cdot \prod_{t=2}^{T} \frac{f(x_{t-1}, x_t)\psi_t(x_t)}{\tilde{\psi}_{t-1}(x_{t-1})} \cdot \frac{g(x_t, y_t)\tilde{\psi}_t(x_t)}{\psi_t(x_t)} = \mu(x_1)\, g(x_1, y_1) \prod_{t=2}^{T} f(x_{t-1}, x_t)\, g(x_t, y_t),$$
since the $\tilde{\psi}_t$ terms telescope and $\tilde{\psi}_T \equiv 1$, and the result follows.

From a methodological perspective, Proposition 1 makes clear a particular sense in which the L.H.S. of (1) is common to an entire family of $\mu_1$, $(f_t)_{t \in \{2, \ldots, T\}}$, and $(g_t)_{t \in \{1, \ldots, T\}}$. The BPF associated with the twisted model corresponds to choosing
$$\mu_1 = \mu_1^\psi, \qquad f_t = f_t^\psi, \qquad g_t = g_t^\psi, \qquad (7)$$
in Algorithm 1; to emphasize the dependence on $\psi$, we provide the corresponding algorithm as Algorithm 2 and denote approximations of $L$ by $Z_\psi^N$. We demonstrate below that the BPF associated with the twisted model can also be viewed as an APF associated with the sequence $\psi$, and so refer to this algorithm as the $\psi$-APF. Since the class of $\psi$-APFs is very large, it is natural to consider whether there is an optimal choice of $\psi$, in terms of the accuracy of the approximation $Z_\psi^N$: the following proposition describes such a sequence.

Algorithm 2

ψ-Auxiliary Particle Filter

1.

Sample $\xi_1^i \sim \mu_1^\psi$ independently for $i \in \{1, \ldots, N\}$.

2.

For $t = 2, \ldots, T$, sample independently
$$\xi_t^i \sim \frac{\sum_{j=1}^{N} g_{t-1}^\psi(\xi_{t-1}^j)\, f_t^\psi(\xi_{t-1}^j, \cdot)}{\sum_{j=1}^{N} g_{t-1}^\psi(\xi_{t-1}^j)}, \qquad i \in \{1, \ldots, N\}.$$

Proposition 2.

Let $\psi^* := (\psi_1^*, \ldots, \psi_T^*)$, where $\psi_T^*(x_T) := g(x_T, y_T)$ and
$$\psi_t^*(x_t) := g(x_t, y_t)\, \mathbb{E}\Big[ \prod_{p=t+1}^{T} g(X_p, y_p) \,\Big|\, X_t = x_t \Big], \qquad x_t \in \mathsf{X}, \qquad (8)$$
for $t \in \{1, \ldots, T-1\}$. Then $Z_{\psi^*}^N = L$ with probability 1.

Proof.

It can be established that
$$g(x_t, y_t)\, \tilde{\psi}_t^*(x_t) = \psi_t^*(x_t), \qquad t \in \{1, \ldots, T\}, \quad x_t \in \mathsf{X},$$
and so we obtain from (6) that $g_1^{\psi^*} \equiv \tilde{\psi}_0^*$ and $g_t^{\psi^*} \equiv 1$ for $t \in \{2, \ldots, T\}$. Hence,
$$Z_{\psi^*}^N = \prod_{t=1}^{T} \frac{1}{N} \sum_{i=1}^{N} g_t^{\psi^*}(\xi_t^i) = \tilde{\psi}_0^*,$$
with probability 1. To conclude, we observe that
$$\tilde{\psi}_0^* = \int_{\mathsf{X}} \mu(x_1)\, \psi_1^*(x_1)\, dx_1 = \int_{\mathsf{X}} \mu(x_1)\, \mathbb{E}\Big[ \prod_{t=1}^{T} g(X_t, y_t) \,\Big|\, X_1 = x_1 \Big]\, dx_1 = \mathbb{E}\Big[ \prod_{t=1}^{T} g(X_t, y_t) \Big] = L.$$

Implementation of Algorithm 2 requires that one can sample according to $\mu_1^\psi$ and $f_t^\psi(x, \cdot)$ and compute $g_t^\psi$ pointwise. This imposes restrictions on the choice of $\psi$ in practice, since one must be able to compute both $\psi_t$ and $\tilde{\psi}_t$ pointwise. In general models, the sequence $\psi^*$ cannot be used for this reason, as (8) cannot be computed explicitly. However, since Algorithm 2 is valid for any sequence of positive functions $\psi$, we can interpret Proposition 2 as motivating the effective design of a particle filter by solving a sequence of function approximation problems.

Alternatives to the BPF have been considered before (see, e.g., the "locally optimal" proposal in Doucet, Godsill, and Andrieu 2000 and the discussion in Del Moral 2004, Section 2.4.2). The family of particle filters we have defined using $\psi$ is unusual, however, in that $g_t^\psi$ is a function only of $x_t$ rather than of $(x_{t-1}, x_t)$; other approaches in which the particles are sampled according to a transition density other than $f$ typically require this extension of the domain of these functions. This is again a consequence of the fact that the $\psi$-APF can be viewed as a BPF for a twisted model. This feature is shared by the fully adapted APF of Pitt and Shephard (1999), when recast as a standard particle filter for an alternative model as in Johansen and Doucet (2008), and which is obtained as a special case of Algorithm 2 when $\psi_t(\cdot) \equiv g(\cdot, y_t)$ for each $t \in \{1, \ldots, T\}$. We view the approach here as generalizing that algorithm for this reason.

It is possible to recover other existing methodological approaches as BPFs for twisted models. In particular, when each element of $\psi$ is a constant function, we recover the standard BPF of Gordon, Salmond, and Smith (1993). Setting $\psi_t(x_t) = g(x_t, y_t)$ gives rise to the fully adapted APF. By taking, for some $k \in \mathbb{N}$ and each $t \in \{1, \ldots, T\}$,
$$\psi_t(x_t) = g(x_t, y_t)\, \mathbb{E}\Big[ \prod_{p=t+1}^{(t+k) \wedge T} g(X_p, y_p) \,\Big|\, X_t = x_t \Big], \qquad x_t \in \mathsf{X}, \qquad (9)$$
$\psi$ corresponds to a sequence of look-ahead functions (see, e.g., Lin et al. 2013), and one can recover idealized versions of the delayed sample method of Chen, Wang, and Liu (2000) (see also the fixed-lag smoothing approach in Clapp and Godsill 1999) and the block sampling particle filter of Doucet, Briers, and Sénécal (2006). When $k \geq T - 1$, we obtain the sequence $\psi^*$. Just as $\psi^*$ cannot typically be used in practice, neither can the exact look-ahead strategies obtained by using (9) for some fixed $k$. In such situations, the proposed look-ahead particle filtering strategies are not $\psi$-APFs, and their relationship to the $\psi^*$-APF is consequently less clear. We note that the offline setting we consider here affords us the freedom to define twisted models using the entire data record $y_{1:T}$. The APF was originally introduced to incorporate a single additional observation, and could therefore be implemented in an online setting, that is, the algorithm could run while the data record was being produced.

3. Function Approximations and the Iterated APF

3.1. Asymptotic Variance of the ψ-APF

Since it is not typically possible to use the sequence $\psi^*$ in practice, we propose to use an approximation of each member of $\psi^*$. To motivate such an approximation, we provide a central limit theorem, adapted from a general result due to Del Moral (2004, Chap. 9). It is convenient to make use of the fact that the estimate $Z_\psi^N$ is invariant to rescaling of the functions $\psi_t$ by constants, and we adopt now a particular scaling that simplifies the expression of the asymptotic variance. In particular, we replace $\psi_t$ and $\psi_t^*$ by the rescaled versions
$$\psi_t(x) \leftarrow \frac{\psi_t(x)}{\mathbb{E}\big[\psi_t(X_t) \mid Y_{1:t-1} = y_{1:t-1}\big]}, \qquad \psi_t^*(x) \leftarrow \frac{\psi_t^*(x)}{\mathbb{E}\big[\psi_t^*(X_t) \mid Y_{1:t-1} = y_{1:t-1}\big]}.$$

Proposition 3.

Let $\psi$ be a sequence of bounded, continuous, and positive functions. Then
$$\sqrt{N}\left( \frac{Z_\psi^N}{Z} - 1 \right) \xrightarrow{d} N(0, \sigma_\psi^2),$$
where
$$\sigma_\psi^2 := \sum_{t=1}^{T} \left\{ \mathbb{E}\left[ \frac{\psi_t^*(X_t)}{\psi_t(X_t)} \,\middle|\, Y_{1:T} = y_{1:T} \right] - 1 \right\}. \qquad (10)$$

We emphasize that Proposition 3, whose proof can be found in the Appendix, follows straightforwardly from existing results for Algorithm 1, since the $\psi$-APF can be viewed as a BPF for the twisted model defined by $\psi$. For example, in the case where $\psi$ consists only of constant functions, we obtain the standard asymptotic variance for the BPF,
$$\sigma^2 = \sum_{t=1}^{T} \left\{ \mathbb{E}\left[ \psi_t^*(X_t) \mid Y_{1:T} = y_{1:T} \right] - 1 \right\}.$$
From Proposition 3, we can deduce that $\sigma_\psi^2$ tends to 0 as $\psi$ approaches $\psi^*$ in an appropriate sense. Hence, Propositions 2 and 3 together provide some justification for designing particle filters by approximating the sequence $\psi^*$.

3.2. Classes of f and ψ

While the $\psi$-APF described in Section 2 and the asymptotic results just described are valid very generally, practical implementation of the $\psi$-APF does impose some restrictions jointly on the transition densities $f$ and the functions in $\psi$. Here we consider only the case where the HMM's initial distribution is a mixture of Gaussians and $f$ is a member of $\mathcal{F}$, the class of transition densities of the form
$$f(x, \cdot) = \sum_{k=1}^{M} c_k(x)\, \mathcal{N}(\cdot\,; a_k(x), b_k(x)), \qquad (11)$$
where $M \in \mathbb{N}$, $(a_k)_{k \in \{1, \ldots, M\}}$ and $(b_k)_{k \in \{1, \ldots, M\}}$ are sequences of mean and covariance functions, respectively, and $(c_k)_{k \in \{1, \ldots, M\}}$ is a sequence of $\mathbb{R}_+$-valued functions with $\sum_{k=1}^{M} c_k(x) = 1$ for all $x \in \mathsf{X}$. Let $\Psi$ denote the class of functions of the form
$$\psi(x) = C + \sum_{k=1}^{M} c_k\, \mathcal{N}(x; a_k, b_k), \qquad (12)$$
where $M \in \mathbb{N}$, $C \in \mathbb{R}_+$, and $(a_k)_{k \in \{1, \ldots, M\}}$, $(b_k)_{k \in \{1, \ldots, M\}}$, and $(c_k)_{k \in \{1, \ldots, M\}}$ are sequences of means, covariances, and positive real numbers, respectively. When $f \in \mathcal{F}$ and each $\psi_t \in \Psi$, it is straightforward to implement Algorithm 2 since, for each $t \in \{1, \ldots, T\}$, both $\psi_t(x)$ and $\tilde{\psi}_{t-1}(x) = f(x, \psi_t)$ can be computed explicitly, and $f_t^\psi(x, \cdot)$ is a mixture of normal distributions whose component means and covariance matrices can also be computed. Alternatives to this particular setting are discussed in Section 6.
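To make the closed-form computations concrete, here is a small univariate sketch, assuming a single Gaussian component for $f$ and a constant-plus-Gaussian $\psi \in \Psi$; the parameter values are arbitrary and chosen only for illustration, not taken from the article.

```python
# Illustrative sketch: for f(x, .) = N(.; a*x, b) and psi(x) = C + c * N(x; m, s),
# both f(x, psi) and the twisted transition f^psi(x, .) are available in closed form.
import numpy as np
from scipy.stats import norm

a, b = 0.9, 1.0                      # transition mean coefficient and variance (assumed)
C, c, m, s = 0.1, 1.0, 0.5, 2.0      # psi(x) = C + c * N(x; m, s) (assumed)

def f_psi_integral(x):
    """psi_tilde(x) = int f(x, x') psi(x') dx' = C + c * N(a*x; m, b + s)."""
    return C + c * norm.pdf(a * x, loc=m, scale=np.sqrt(b + s))

def sample_twisted_transition(x, rng):
    """Draw from f^psi(x, .) proportional to f(x, x') psi(x'): a two-component
    mixture of the untwisted N(a*x, b) and the Gaussian-product component."""
    w_const = C                                          # weight of the untwisted component
    w_gauss = c * norm.pdf(a * x, loc=m, scale=np.sqrt(b + s))
    if rng.uniform() < w_const / (w_const + w_gauss):
        return rng.normal(a * x, np.sqrt(b))
    # product of N(.; a*x, b) and N(.; m, s) is Gaussian with these moments
    post_var = 1.0 / (1.0 / b + 1.0 / s)
    post_mean = post_var * (a * x / b + m / s)
    return rng.normal(post_mean, np.sqrt(post_var))

rng = np.random.default_rng(0)
print(f_psi_integral(1.0), sample_twisted_transition(1.0, rng))
```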

3.3. Recursive Approximation of ψ*

The ability to compute $f(\cdot, \psi_t)$ pointwise when $f \in \mathcal{F}$ and $\psi_t \in \Psi$ is also instrumental in the recursive function approximation scheme we now describe. Our approach is based on the following observation.

Proposition 4.

The sequence $\psi^*$ satisfies $\psi_T^*(x_T) = g(x_T, y_T)$, $x_T \in \mathsf{X}$, and
$$\psi_t^*(x_t) = g(x_t, y_t)\, f(x_t, \psi_{t+1}^*), \qquad x_t \in \mathsf{X}, \quad t \in \{1, \ldots, T-1\}. \qquad (13)$$

Proof.

The definition of $\psi^*$ provides that $\psi_T^*(x_T) = g(x_T, y_T)$. For $t \in \{1, \ldots, T-1\}$,
$$g(x_t, y_t)\, f(x_t, \psi_{t+1}^*) = g(x_t, y_t) \int_{\mathsf{X}} f(x_t, x_{t+1})\, \mathbb{E}\Big[ \prod_{p=t+1}^{T} g(X_p, y_p) \,\Big|\, X_{t+1} = x_{t+1} \Big]\, dx_{t+1} = g(x_t, y_t)\, \mathbb{E}\Big[ \prod_{p=t+1}^{T} g(X_p, y_p) \,\Big|\, X_t = x_t \Big] = \psi_t^*(x_t).$$

Let $(\xi_1^{1:N}, \ldots, \xi_T^{1:N})$ be random variables obtained by running a particle filter. We propose to approximate $\psi^*$ by Algorithm 3, for which we define $\psi_{T+1} \equiv 1$. This algorithm mirrors the backward sweep of the forward filtering backward smoothing recursion which, if it could be calculated, would yield exactly $\psi^*$.

Algorithm 3

Recursive function approximations

For t = T, …, 1:

1.

Set $\psi_t^i \leftarrow g(\xi_t^i, y_t)\, f(\xi_t^i, \psi_{t+1})$ for $i \in \{1, \ldots, N\}$.

2.

Choose $\psi_t$ as a member of $\Psi$ on the basis of $\xi_t^{1:N}$ and $\psi_t^{1:N}$.

One choice in Step 2 of Algorithm 3 is to define $\psi_t$ using a nonparametric approximation such as a Nadaraya–Watson estimate (Nadaraya 1964; Watson 1964), as sketched below. Alternatively, a parametric approach is to choose $\psi_t$ as the minimizer, over some subset of $\Psi$, of some function of $\psi_t$, $\xi_t^{1:N}$, and $\psi_t^{1:N}$. Although a number of choices are possible, we focus in Section 5 on a simple parametric approach that is computationally inexpensive.
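A small sketch of the nonparametric option, assuming a univariate state, a Gaussian kernel, and a bandwidth h chosen here purely for illustration (this is not the implementation used in Section 5).

```python
# Illustrative Nadaraya-Watson estimate of psi_t built from particle locations
# xi (support points) and the values psi_vals computed in Step 1 of Algorithm 3.
import numpy as np

def nadaraya_watson(x, xi, psi_vals, h):
    """Kernel-weighted average of psi_vals at query points x (1-d state)."""
    x = np.atleast_1d(x)
    # Gaussian kernel weights K_h(x - xi_j) for every (query, support) pair
    w = np.exp(-0.5 * ((x[:, None] - xi[None, :]) / h) ** 2)
    return (w @ psi_vals) / w.sum(axis=1)

# toy usage with hypothetical particle output
rng = np.random.default_rng(1)
xi = rng.normal(size=100)                  # particle positions xi_t^{1:N}
psi_vals = np.exp(-0.5 * xi ** 2) + 0.1    # values psi_t^{1:N} from Step 1
print(nadaraya_watson([0.0, 1.0], xi, psi_vals, h=0.3))
```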

3.4. The Iterated Auxiliary Particle Filter

The iterated auxiliary particle filter (iAPF), Algorithm 4, is obtained by iteratively running a ψ-APF and estimating ψ* from its output. Specifically, after each ψ-APF is run, ψ* is reapproximated using the particles obtained, and the number of particles is increased according to a well-defined rule. The algorithm terminates when a stopping rule is satisfied.

Algorithm 4

An iterated auxiliary particle filter with parameters (N0, k, τ)

1.

Initialize: set $\psi^0$ to be a sequence of constant functions, and set $l \leftarrow 0$.

2.

Repeat:

(a)

Run a $\psi^l$-APF with $N_l$ particles, and set $\hat{Z}_l \leftarrow Z_{\psi^l}^{N_l}$.

(b)

If $l > k$ and $\mathrm{sd}(\hat{Z}_{l-k:l}) / \mathrm{mean}(\hat{Z}_{l-k:l}) < \tau$, go to 3.

(c)

Compute $\psi^{l+1}$ using a version of Algorithm 3 with the particles produced.

(d)

If $N_{l-k} = N_l$ and the sequence $\hat{Z}_{l-k:l}$ is not monotonically increasing, set $N_{l+1} \leftarrow 2 N_l$. Otherwise, set $N_{l+1} \leftarrow N_l$.

(e)

Set $l \leftarrow l + 1$ and go back to 2(a).

3.

Run a $\psi^l$-APF and return $\hat{Z} := Z_{\psi^l}^{N_l}$.

The rationale for Step 2(d) of Algorithm 4 is that if the sequence $\hat{Z}_{l-k:l}$ is monotonically increasing, there is some evidence that the approximations $\psi^{l-k:l}$ are improving, and so increasing the number of particles may unnecessarily increase computational cost. However, if the approximations $\hat{Z}_{l-k:l}$ both have a high relative standard deviation in comparison to $\tau$ and are oscillating, then reducing the variance of the approximation of $Z$ and/or improving the approximation of $\psi^*$ may require an increased number of particles. Some support for this procedure can be obtained from the log-normal CLT of Bérard, Del Moral, and Doucet (2014): under regularity assumptions, $\log(Z_\psi^N / Z)$ is approximately a $N(-\delta_\psi^2/2,\, \delta_\psi^2)$ random variable, and so $\mathbb{P}(Z_{\psi'}^N \geq Z_\psi^N) \approx 1 - \Phi\big( [\delta_{\psi'}^2 - \delta_\psi^2] \,/\, [2\sqrt{\delta_\psi^2 + \delta_{\psi'}^2}] \big)$, which is close to 1 when $\delta_{\psi'}^2 \ll \delta_\psi^2$.
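For intuition, this comparison is easy to evaluate numerically; the snippet below uses made-up values of the variance parameters $\delta^2$ and relies on the probability expression as reconstructed above.

```python
# Illustrative evaluation of P(Z_new >= Z_old) under the log-normal approximation,
# where log(Z_psi^N / Z) ~ N(-d2/2, d2) with d2 the relevant variance parameter.
from math import sqrt
from scipy.stats import norm

def prob_new_beats_old(d2_new, d2_old):
    """1 - Phi((d2_new - d2_old) / (2 sqrt(d2_new + d2_old))), assuming independence."""
    return 1.0 - norm.cdf((d2_new - d2_old) / (2.0 * sqrt(d2_new + d2_old)))

print(prob_new_beats_old(0.1, 2.0))   # much smaller variance: probability near 1
print(prob_new_beats_old(2.0, 2.0))   # equal variances: probability 0.5
```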

4. Approximations of Smoothing Expectations

Thus far, we have focused on approximations of the marginal likelihood, $L$, associated with a particular model and data record $y_{1:T}$. Particle filters are also used to approximate so-called smoothing expectations, that is, $\pi(\varphi) := \mathbb{E}[\varphi(X_{1:T}) \mid Y_{1:T} = y_{1:T}]$ for some $\varphi: \mathsf{X}^T \to \mathbb{R}$. Such approximations can be motivated by a slight extension of (1),
$$\gamma(\varphi) := \int_{\mathsf{X}^T} \varphi(x_{1:T})\, \mu_1(x_1)\, g_1(x_1) \prod_{t=2}^{T} f_t(x_{t-1}, x_t)\, g_t(x_t)\, dx_{1:T},$$
where $\varphi$ is a real-valued, bounded, continuous function. We can write $\pi(\varphi) = \gamma(\varphi)/\gamma(1)$, where $1$ denotes the constant function $x \mapsto 1$. We define below a well-known, unbiased, and strongly consistent estimate $\gamma^N(\varphi)$ of $\gamma(\varphi)$, which can be obtained from Algorithm 1. A strongly consistent approximation of $\pi(\varphi)$ can then be defined as $\gamma^N(\varphi)/\gamma^N(1)$.

The definition of $\gamma^N(\varphi)$ is facilitated by a specific implementation of Step 2 of Algorithm 1 in which one samples
$$A_{t-1}^i \sim \mathrm{Categorical}\left( \frac{g_{t-1}(\xi_{t-1}^1)}{\sum_{j=1}^{N} g_{t-1}(\xi_{t-1}^j)}, \ldots, \frac{g_{t-1}(\xi_{t-1}^N)}{\sum_{j=1}^{N} g_{t-1}(\xi_{t-1}^j)} \right), \qquad \xi_t^i \sim f_t\big(\xi_{t-1}^{A_{t-1}^i}, \cdot\big),$$
for each $i \in \{1, \ldots, N\}$ independently. Use of, for example, the Alias algorithm (Walker 1974, 1977) gives the algorithm $O(N)$ computational complexity, and the random variables $(A_t^i;\ t \in \{1, \ldots, T-1\},\ i \in \{1, \ldots, N\})$ provide ancestral information associated with each particle. By defining recursively for each $i \in \{1, \ldots, N\}$, $B_T^i := i$ and $B_{t-1}^i := A_{t-1}^{B_t^i}$ for $t = T, \ldots, 2$, the $\{1, \ldots, N\}^T$-valued random variable $B_{1:T}^i$ encodes the ancestral lineage of $\xi_T^i$ (Andrieu, Doucet, and Holenstein 2010). It follows from Del Moral (2004, Theorem 7.4.2) that the approximation
$$\gamma^N(\varphi) := \left\{ \frac{1}{N} \sum_{i=1}^{N} g_T(\xi_T^i)\, \varphi\big(\xi_1^{B_1^i}, \xi_2^{B_2^i}, \ldots, \xi_T^{B_T^i}\big) \right\} \times \prod_{t=1}^{T-1} \frac{1}{N} \sum_{i=1}^{N} g_t(\xi_t^i)$$
is unbiased and strongly consistent, and a strongly consistent approximation of $\pi(\varphi)$ is
$$\pi^N(\varphi) := \frac{\gamma^N(\varphi)}{\gamma^N(1)} = \frac{1}{\sum_{i=1}^{N} g_T(\xi_T^i)} \sum_{i=1}^{N} \varphi\big(\xi_1^{B_1^i}, \xi_2^{B_2^i}, \ldots, \xi_T^{B_T^i}\big)\, g_T(\xi_T^i). \qquad (14)$$
The $\psi^*$-APF is optimal in terms of approximating $\gamma(1) \equiv Z$ and not $\pi(\varphi)$ for general $\varphi$. Asymptotic variance expressions akin to Proposition 3, but for $\pi_\psi^N(\varphi)$, can be derived using existing results (see, e.g., Del Moral and Guionnet 1999; Chopin 2004; Künsch 2005; Douc and Moulines 2008) in the same manner. These could be used to investigate the influence of $\psi$ on the accuracy of $\pi_\psi^N(\varphi)$, or the interaction between $\varphi$ and the sequence $\psi$ that minimizes the asymptotic variance of the estimator of its expectation.
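The backward recursion defining the lineages $B_{1:T}^i$ is straightforward to implement; the short sketch below uses hypothetical ancestor indices chosen only for illustration.

```python
# Illustrative trace-back of ancestral lineages from stored ancestor variables.
import numpy as np

def trace_lineages(A):
    """A: array of shape (T-1, N) with A[t-1, i] the ancestor index A_{t}^i used to
    generate particle i at time t+1. Returns B of shape (T, N), where B[t-1, i]
    is the time-t index in the lineage of particle i at time T."""
    Tm1, N = A.shape
    T = Tm1 + 1
    B = np.empty((T, N), dtype=int)
    B[T - 1] = np.arange(N)              # B_T^i := i
    for t in range(T - 1, 0, -1):        # B_{t-1}^i := A_{t-1}^{B_t^i}
        B[t - 1] = A[t - 1, B[t]]
    return B

# toy usage with hypothetical ancestor indices for T = 4, N = 3
A = np.array([[0, 0, 2],
              [1, 1, 0],
              [2, 0, 1]])
print(trace_lineages(A))
```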

Finally, we observe that when the optimal sequence $\psi^*$ is used in an APF in conjunction with an adaptive resampling strategy (see Algorithm 5), the weights are all equal, no resampling occurs, and the $\xi_t^i$ are all iid samples from $\mathbb{P}(X_t \in \cdot \mid Y_{1:T} = y_{1:T})$. This at least partially justifies the use of iterated $\psi$-APFs to approximate $\psi^*$: the asymptotic variance $\sigma_\psi^2$ in (10) is particularly affected by discrepancies between $\psi^*$ and $\psi$ in regions of relatively high conditional probability given the data record $y_{1:T}$, which is why we have chosen to use the particles as support points to define approximations of $\psi^*$ in Algorithm 3.

5. Applications and Examples

The purpose of this section is to demonstrate that the iAPF can provide substantially better estimates of the marginal likelihood $L$ than the BPF at the same computational cost. This is exemplified by its performance when $d$ is large, recalling that $\mathsf{X} = \mathbb{R}^d$. When $d$ is large, the BPF typically requires a large number of particles to approximate $L$ accurately. In contrast, the $\psi^*$-APF computes $L$ exactly, and we investigate below the extent to which the iAPF is able to provide accurate approximations in this setting. Similarly, when there are unknown statistical parameters $\theta$, we show empirically that the accuracy of iAPF approximations of the likelihood $L(\theta)$ is more robust to changes in $\theta$ than that of their BPF counterparts.

Unbiased, nonnegative approximations of likelihoods L(θ) are central to the particle marginal Metropolis–Hastings algorithm (PMMH) of Andrieu, Doucet, and Holenstein (2010), a prominent parameter estimation algorithm for general state space hidden Markov models. An instance of a pseudo-marginal Markov chain Monte Carlo algorithm (Beaumont 2003; Andrieu and Roberts 2009), the computational efficiency of PMMH depends, sometimes dramatically, on the quality of the unbiased approximations of L(θ) (Andrieu and Vihola 2015; Lee and Łatuszyński 2014; Sherlock et al. 2015; Doucet et al. 2015) delivered by a particle filter for a range of θ values. The relative robustness of iAPF approximations of L(θ) to changes in θ, mentioned above, motivates their use over BPF approximations in PMMH.

5.1. Implementation Details

In our examples, we use a parametric optimization approach in Algorithm 3. Specifically, for each $t \in \{1, \ldots, T\}$, we compute numerically a regularized version of
$$(m_t^*, \Sigma_t^*, \lambda_t^*) = \arg\min_{m, \Sigma, \lambda} \sum_{i=1}^{N} \left[ \mathcal{N}(\xi_t^i; m, \Sigma) - \lambda\, \psi_t^i \right]^2, \qquad (15)$$
and then set
$$\psi_t(x_t) := \mathcal{N}(x_t; m_t^*, \Sigma_t^*) + c(N, m_t^*, \Sigma_t^*), \qquad (16)$$
where $c$ is a positive real-valued function, which ensures that $f_t^\psi(x, \cdot)$ is a mixture of densities with some nonzero weight associated with the mixture component $f(x, \cdot)$. This is intended to guard against terms in the asymptotic variance $\sigma_\psi^2$ in (10) being very large or unbounded. We chose (15) for its simplicity and low computational cost, and it provided good performance in our simulations. For the stopping rule, we used $k = 5$ for the application in Section 5.2, and $k = 3$ for the applications in Sections 5.3 and 5.4. We observed empirically that the relative standard deviation of the likelihood estimate tended to be close to, and often smaller than, the chosen level for $\tau$. A value of $\tau = 1$ should therefore be sufficient to keep the relative standard deviation around 1, as desired (see, e.g., Doucet et al. 2015; Sherlock et al. 2015). We set $\tau = 0.5$ as a conservative choice for all our simulations apart from the multivariate stochastic volatility model of Section 5.4, where we set $\tau = 1$ to improve speed. We performed the minimization in (15) under the restriction that $\Sigma$ is a diagonal matrix, as this was considerably faster, and preliminary simulations suggested that it was adequate for the examples considered.
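As an illustration of (15)–(16), here is a simplified univariate, unregularized sketch of the fit; the regularization and the exact form of the function $c$ used in the experiments are not reproduced, and a small constant is used in their place.

```python
# Illustrative fit: choose a Gaussian N(.; m, Sigma) and a scale lambda minimizing
# the squared error against the Step-1 values psi_t^i at the particle locations
# xi_t^i, then add a small constant to keep the resulting psi_t positive.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_psi(xi, psi_vals, c=1e-3):
    """Return a callable approximation of psi_t from particles and Step-1 values."""
    def objective(theta):
        m, log_sd, log_lam = theta
        resid = norm.pdf(xi, loc=m, scale=np.exp(log_sd)) - np.exp(log_lam) * psi_vals
        return np.sum(resid ** 2)

    # moment-matched starting point, weighting particles by psi_vals
    w = psi_vals / psi_vals.sum()
    m0 = np.sum(w * xi)
    sd0 = np.sqrt(np.sum(w * (xi - m0) ** 2)) + 1e-6
    res = minimize(objective, x0=[m0, np.log(sd0), 0.0], method="Nelder-Mead")
    m, sd = res.x[0], np.exp(res.x[1])
    return lambda x: norm.pdf(x, loc=m, scale=sd) + c

# toy usage with hypothetical particle output
rng = np.random.default_rng(2)
xi = rng.normal(size=200)
psi_vals = norm.pdf(xi, loc=0.3, scale=0.8)
psi_hat = fit_psi(xi, psi_vals)
print(psi_hat(np.array([0.0, 0.3, 1.0])))
```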

We used an effective sample size (ESS) based resampling scheme (Kong, Liu, and Wong 1994; Liu and Chen 1995), described in Algorithm 5, with a user-specified parameter $\kappa \in [0, 1]$. The effective sample size is defined as $\mathrm{ESS}(W^1, \ldots, W^N) := \big(\sum_{i=1}^{N} W^i\big)^2 / \sum_{i=1}^{N} (W^i)^2$, and the estimate of $Z$ is
$$Z^N := \prod_{t \in \mathcal{R} \cup \{T\}} \frac{1}{N} \sum_{i=1}^{N} W_t^i, \qquad \mathcal{R} := \left\{ t \in \{1, \ldots, T-1\} : \mathrm{ESS}(W_t^1, \ldots, W_t^N) \leq \kappa N \right\},$$
where $\mathcal{R}$ is the set of "resampling times." This reduces to Algorithm 2 when $\kappa = 1$ and to a simple importance sampling algorithm when $\kappa = 0$; we use $\kappa = 0.5$ in our simulations. The use of adaptive resampling is motivated by the fact that when the effective sample size is large, resampling can be detrimental to the quality of the approximation $Z^N$. A short sketch of the resulting procedure is given after Algorithm 5.

Algorithm 5

ψ-Auxiliary Particle Filter with κ-adaptive resampling

1.

Sample $\xi_1^i \sim \mu_1^\psi$ independently, and set $W_1^i \leftarrow g_1^\psi(\xi_1^i)$, for $i \in \{1, \ldots, N\}$.

2.

For t = 2, …, T:

(a)

If $\mathrm{ESS}(W_{t-1}^1, \ldots, W_{t-1}^N) \leq \kappa N$, sample independently
$$\xi_t^i \sim \frac{\sum_{j=1}^{N} W_{t-1}^j\, f_t^\psi(\xi_{t-1}^j, \cdot)}{\sum_{j=1}^{N} W_{t-1}^j}, \qquad i \in \{1, \ldots, N\},$$
and set $W_t^i \leftarrow g_t^\psi(\xi_t^i)$, $i \in \{1, \ldots, N\}$.

(b)

Otherwise, sample $\xi_t^i \sim f_t^\psi(\xi_{t-1}^i, \cdot)$ independently, and set $W_t^i \leftarrow W_{t-1}^i\, g_t^\psi(\xi_t^i)$ for $i \in \{1, \ldots, N\}$.
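A generic sketch of Algorithm 5, assuming the twisted quantities $\mu_1^\psi$, $f_t^\psi$, and $g_t^\psi$ are supplied through callables (for instance via closed-form computations such as those sketched in Section 3.2); weights are kept on the natural scale for readability, whereas a careful implementation would use log-weights.

```python
# Illustrative psi-APF with kappa-adaptive multinomial resampling.
import numpy as np

def psi_apf(sample_mu1, sample_f, g, T, N, kappa=0.5, seed=0):
    rng = np.random.default_rng(seed)
    Z = 1.0
    xi = sample_mu1(rng, N)                 # xi_1^{1:N} ~ mu_1^psi
    W = g(1, xi)                            # W_1^i = g_1^psi(xi_1^i)
    for t in range(2, T + 1):
        ess = W.sum() ** 2 / np.sum(W ** 2)
        if ess <= kappa * N:                # resample, then move
            Z *= W.mean()                   # contribute (1/N) sum_i W_{t-1}^i
            anc = rng.choice(N, size=N, p=W / W.sum())
            xi = sample_f(t, xi[anc], rng)
            W = g(t, xi)
        else:                               # move only, carry the weights
            xi = sample_f(t, xi, rng)
            W = W * g(t, xi)
    return Z * W.mean(), xi, W              # include the final time-T factor
```

With $\kappa = 1$ the function resamples at every step and so reduces to Algorithm 2; the returned first value corresponds to the estimate $Z^N$ defined above.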

5.2. Linear Gaussian Model

A linear Gaussian HMM is defined by the following initial, transition, and observation Gaussian densities: $\mu(\cdot) = \mathcal{N}(\cdot\,; m, \Sigma)$, $f(x, \cdot) = \mathcal{N}(\cdot\,; Ax, B)$, and $g(x, \cdot) = \mathcal{N}(\cdot\,; Cx, D)$, where $m \in \mathbb{R}^d$, $\Sigma, A, B \in \mathbb{R}^{d \times d}$, $C \in \mathbb{R}^{d' \times d}$, and $D \in \mathbb{R}^{d' \times d'}$. For this model, it is possible to implement the fully adapted APF (FA-APF) and to compute explicitly the marginal likelihood, filtering, and smoothing distributions using the Kalman filter, facilitating comparisons. We emphasize that implementation of the FA-APF is possible only for a restricted class of analytically tractable models, while the iAPF methodology is applicable more generally. Nevertheless, the iAPF exhibited better performance than the FA-APF in our examples.
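As a point of reference for the comparisons below, the exact marginal likelihood in this model is available from the Kalman filter; a compact sketch of the standard recursions, assuming observations stored as the rows of a (T, d') array, is given here.

```python
# Illustrative Kalman-filter computation of log L for the linear Gaussian HMM above.
import numpy as np
from scipy.stats import multivariate_normal

def kalman_loglik(y, m, Sigma, A, B, C, D):
    """y: (T, d') array of observations; returns log p(y_{1:T})."""
    logL = 0.0
    mean_pred, cov_pred = m, Sigma                   # predictive for X_1
    for t in range(y.shape[0]):
        # observation predictive: Y_t | y_{1:t-1} ~ N(C mean_pred, C cov_pred C' + D)
        S = C @ cov_pred @ C.T + D
        logL += multivariate_normal.logpdf(y[t], mean=C @ mean_pred, cov=S)
        # filtering update
        K = cov_pred @ C.T @ np.linalg.inv(S)
        mean_filt = mean_pred + K @ (y[t] - C @ mean_pred)
        cov_filt = cov_pred - K @ C @ cov_pred
        # one-step prediction for X_{t+1}
        mean_pred = A @ mean_filt
        cov_pred = A @ cov_filt @ A.T + B
    return logL
```

The ratio of a particle filter's estimate to exp of this quantity gives the relative errors reported below.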

Relative Variance of Approximations of Z When d is Large

We consider a family of linear Gaussian models where $m = 0$, $\Sigma = B = C = D = I_d$, and $A_{ij} = \alpha^{|i-j|+1}$ for $i, j \in \{1, \ldots, d\}$ and some $\alpha \in (0, 1)$. Our first comparison is between the relative errors of the approximations $\hat{Z}$ of $L = Z$ using the iAPF, the BPF, and the FA-APF. We consider configurations with $d \in \{5, 10, 20, 40, 80\}$ and $\alpha = 0.42$, and we simulated a sequence of $T = 100$ observations $y_{1:T}$ for each configuration. We ran 1000 replicates of the three algorithms for each configuration and report box plots of the ratio $\hat{Z}/Z$ in Figure 1.

Figure 1. Boxplots of $\hat{Z}/Z$ for different dimensions using 1000 replicates. The crosses indicate the mean of each sample.

For all the simulations, we ran an iAPF with $N_0 = 1000$ starting particles, a BPF with $N = 10{,}000$ particles, and an FA-APF with $N = 5000$ particles. The BPF and FA-APF both had slightly larger average computational times than the iAPF with these configurations. The average number of particles for the final iteration of the iAPF was greater than $N_0$ only in dimensions $d = 40$ (1033) and $d = 80$ (1142). For $d > 10$, it was not possible to obtain reasonable estimates with the BPF in a feasible computational time (similarly for the FA-APF for $d > 20$). The standard deviations of the samples and the average resampling counts across the chosen set of dimensions are reported in Tables 1 and 2.

Table 1. Empirical standard deviation of the quantity $\hat{Z}/Z$ using 1000 replicates.

Table 2. Average resampling count for the 1000 replicates.

Fixing the dimension $d = 10$ and the simulated sequence of observations $y_{1:T}$ with $\alpha = 0.42$, we now consider the variability of the relative error of the estimates of the marginal likelihood using the iAPF and the BPF for different values of the parameter $\alpha \in \{0.3, 0.32, \ldots, 0.48, 0.5\}$. In Figure 2, we report box plots of $\hat{Z}/Z$ over 1000 replications. For the iAPF, the lengths of the boxes are significantly less variable across the range of values of $\alpha$. In this case, we used $N = 50{,}000$ particles for the BPF, giving a computational time at least five times larger than that of the iAPF. This demonstrates that the approximations of the marginal likelihood $L(\alpha)$ provided by the iAPF are relatively insensitive to small changes in $\alpha$, in contrast to those of the BPF. Similar simulations, which we do not report, show that the FA-APF for this problem performs slightly worse than the iAPF at double the computational time.

Figure 2. Boxplots of $\hat{Z}/Z$ for different values of the parameter $\alpha$ using 1000 replicates. The crosses indicate the mean of each sample.

Particle Marginal Metropolis–Hastings.

We consider a linear Gaussian model with $m = 0$, $\Sigma = B = C = I_d$, and $D = \delta I_d$ with $\delta = 0.25$. We used the lower-triangular matrix
$$A = \begin{pmatrix} 0.9 & 0 & 0 & 0 & 0 \\ 0.3 & 0.7 & 0 & 0 & 0 \\ 0.1 & 0.2 & 0.6 & 0 & 0 \\ 0.4 & 0.1 & 0.1 & 0.3 & 0 \\ 0.1 & 0.2 & 0.5 & 0.2 & 0 \end{pmatrix},$$
and simulated a sequence of $T = 100$ observations. Assuming only that $A$ is lower triangular, for identifiability, we performed Bayesian inference for the 15 unknown parameters $\{A_{i,j} : i, j \in \{1, \ldots, 5\},\ j \leq i\}$, assigning each parameter an independent uniform prior on $[-5, 5]$. From the initial point $A_1 = I_5$, we ran three Markov chains, $A^{1:L}_{\mathrm{BPF}}$, $A^{1:L}_{\mathrm{iAPF}}$, and $A^{1:L}_{\mathrm{Kalman}}$, of length $L = 300{,}000$ to explore the parameter space, updating one of the 15 parameters at a time with a Gaussian random walk proposal with variance 0.1. The chains differ in how the acceptance probabilities are computed, corresponding to unbiased estimates of the marginal likelihood obtained from the BPF, the iAPF, or the Kalman filter, respectively. In the latter case, this corresponds to running a Metropolis–Hastings (MH) chain in which the marginal likelihood is computed exactly. We started every run of the iAPF with $N_0 = 500$ particles; the resulting average number of particles used to compute the final estimate was 500.2. The number of particles for the BPF, $N = 20{,}000$, was chosen so that its computational time was greater; in this case $A^{1:L}_{\mathrm{BPF}}$ took 50% more time than $A^{1:L}_{\mathrm{iAPF}}$ to simulate.

In Figure 3, we plot posterior density estimates obtained from the three chains for 3 of the 15 entries of the transition matrix A. The posterior means associated with the entries of the matrix A were fairly close to A itself, the largest discrepancy being around 0.2, and the posterior standard deviations were all around 0.1. A comparison of estimated Markov chain autocorrelations for these same parameters is reported in Figure 4, which indicates little difference between the iAPF-PMMH and Kalman-MH Markov chains, and substantially worse performance for the BPF-PMMH Markov chain. The integrated autocorrelation time of the Markov chains provides a measure of the asymptotic variance of the individual chains’ ergodic averages, and in this regard the iAPF-PMMH and Kalman-MH Markov chains were practically indistinguishable, while the BPF-PMMH performed between 3 and 4 times worse, depending on the parameter. The relative improvement of the iAPF over the BPF does seem empirically to depend on the value of δ. In experiments with larger δ, the improvement was still present but less pronounced than for δ = 0.25. We note that in this example, ψ* is outside the class of possible ψ sequences that can be obtained using the iAPF: the approximations in Ψ are functions that are constants plus a multivariate normal density with a diagonal covariance matrix, while the functions in ψ* are multivariate normal densities whose covariance matrices have nonzero, off-diagonal entries.

Figure 3. Linear Gaussian model: density estimates for the specified parameters from the three Markov chains.

Figure 4. Linear Gaussian model: autocorrelation function estimates for the BPF-PMMH (crosses), iAPF-PMMH (solid lines), and Kalman-MH (circles) Markov chains.

5.3. Univariate Stochastic Volatility Model

A simple stochastic volatility model is defined by $\mu(\cdot) = \mathcal{N}(\cdot\,; 0, \sigma^2/(1-\alpha^2))$, $f(x, \cdot) = \mathcal{N}(\cdot\,; \alpha x, \sigma^2)$, and $g(x, \cdot) = \mathcal{N}(\cdot\,; 0, \beta^2 \exp(x))$, where $\alpha \in (0, 1)$, $\beta > 0$, and $\sigma^2 > 0$ are statistical parameters (see, e.g., Kim, Shephard, and Chib 1998). To compare the efficiency of the iAPF and the BPF within a PMMH algorithm, we analyzed a sequence of $T = 945$ observations $y_{1:T}$, which are mean-corrected daily returns computed from weekday close exchange rates $r_{1:T+1}$ for the pound/dollar from 1/10/81 to 28/6/85. These data have been analyzed previously using different approaches, for example, in Harvey, Ruiz, and Shephard (1994) and Kim, Shephard, and Chib (1998).
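For readers wishing to experiment, a small simulator for this model is sketched below, assuming the stationary initialization as reconstructed above, $\mathcal{N}(0, \sigma^2/(1-\alpha^2))$, and using the parameter values $(0.95, 0.02, 0.5)$ that appear below as the PMMH starting point.

```python
# Illustrative simulator for the univariate stochastic volatility model.
import numpy as np

def simulate_sv(T, alpha, sigma, beta, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    y = np.empty(T)
    x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - alpha ** 2))   # stationary start
    for t in range(T):
        if t > 0:
            x[t] = alpha * x[t - 1] + sigma * rng.normal()
        y[t] = beta * np.exp(x[t] / 2.0) * rng.normal()          # sd = beta * exp(x/2)
    return x, y

x, y = simulate_sv(T=945, alpha=0.95, sigma=0.02, beta=0.5)
```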

We wish to infer the model parameters $\theta = (\alpha, \sigma, \beta)$ using a PMMH algorithm and compare the two cases where the marginal likelihood estimates are obtained using the iAPF and the BPF. We placed independent inverse Gamma prior distributions $\mathcal{IG}(2.5, 0.025)$ and $\mathcal{IG}(3, 1)$ on $\sigma^2$ and $\beta^2$, respectively, and an independent Beta(20, 1.5) prior distribution on the transition coefficient $\alpha$. We used $(\alpha_0, \sigma_0, \beta_0) = (0.95, 0.02, 0.5)$ as the starting point of three chains: $X^{1:L}_{\mathrm{iAPF}}$, $X^{1:L}_{\mathrm{BPF}}$, and $X^{1:L'}_{\mathrm{BPF}'}$. All the chains updated one component at a time with a Gaussian random walk proposal with variances $(0.02, 0.05, 0.1)$ for the parameters $(\alpha, \sigma, \beta)$. $X^{1:L}_{\mathrm{iAPF}}$ has a total length of $L = 150{,}000$, and for the estimates of the marginal likelihood that appear in the acceptance probability we use the iAPF with $N_0 = 100$ starting particles. For $X^{1:L}_{\mathrm{BPF}}$ and $X^{1:L'}_{\mathrm{BPF}'}$ we use BPFs: $X^{1:L}_{\mathrm{BPF}}$ is a shorter chain with more particles ($L = 150{,}000$ and $N = 1000$), while $X^{1:L'}_{\mathrm{BPF}'}$ is a longer chain with fewer particles ($L' = 1{,}500{,}000$, $N = 100$). All chains required similar overall running time to simulate. Figure 5 shows estimated marginal posterior densities for the three parameters using the different chains.

Figure 5. Stochastic volatility model: PMMH density estimates for each parameter from the three chains.

In Table 3, we provide the adjusted sample size of the Markov chains associated with each of the parameters, obtained by dividing the length of the chain by the estimated integrated autocorrelation time associated with each parameter. We can see an improvement using the iAPF, although we note that the BPF-PMMH algorithm appears to be fairly robust to the variability of the marginal likelihood estimates in this particular application.

Table 3. Sample size adjusted for autocorrelation for each parameter from the three chains.

Since particle filters provide approximations of the marginal likelihood in HMMs, the iAPF can also be used in alternative parameter estimation procedures, such as simulated maximum likelihood (Lerman and Manski 1981; Diggle and Gratton 1984). Particle filter based approximate maximum likelihood estimation (see, e.g., Kitagawa 1998; Hürzeler and Künsch 2001) has recently been used to fit macroeconomic models (Fernández-Villaverde and Rubio-Ramírez 2007). In Figure 6, we show the variability of the BPF and iAPF estimates of the marginal likelihood at points in a neighborhood of the approximate MLE $(\alpha, \sigma, \beta) = (0.984, 0.145, 0.69)$. The iAPF with $N_0 = 100$ particles used 100 particles in the final iteration to compute the likelihood in all simulations, and took slightly more time than the BPF with $N = 1000$ particles, but far less time than the BPF with $N = 10{,}000$ particles. The results indicate that the iAPF estimates are significantly less variable than their BPF counterparts and may therefore be more suitable for simulated maximum likelihood approximations.

Figure 6. Log-likelihood estimates in a neighborhood of the MLE. Boxplots correspond to 100 estimates at each parameter value given by three particle filters, from left to right: BPF (N = 1000), BPF (N = 10, 000), iAPF (N0 = 100).

5.4. Multivariate Stochastic Volatility Model

We consider a version of the multivariate stochastic volatility model defined for $\mathsf{X} = \mathbb{R}^d$ by $\mu(\cdot) = \mathcal{N}(\cdot\,; m, \Sigma)$, $f(x, \cdot) = \mathcal{N}(\cdot\,; m + \mathrm{diag}(\phi)(x - m), U)$, and $g(x, \cdot) = \mathcal{N}(\cdot\,; 0, \exp(\mathrm{diag}(x)))$, where $m, \phi \in \mathbb{R}^d$ and the covariance matrix $U \in \mathbb{R}^{d \times d}$ are statistical parameters, and $\Sigma$ is the stationary covariance matrix associated with $(\phi, U)$. This is the basic MSV model in Chib, Omori, and Asai (2009, Sec. 2), with the exception that we consider a nondiagonal transition covariance matrix $U$ and a diagonal observation matrix.
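Under the reconstruction above, in which the initial covariance is the stationary covariance of the latent AR(1) process, $\Sigma$ solves $\Sigma = \mathrm{diag}(\phi)\, \Sigma\, \mathrm{diag}(\phi) + U$ and is available elementwise in closed form; a short helper sketch follows, with toy values chosen only for illustration.

```python
# Illustrative computation of the stationary covariance Sigma from (phi, U):
# Sigma_ij = U_ij / (1 - phi_i * phi_j).
import numpy as np

def stationary_cov(phi, U):
    phi = np.asarray(phi)
    return U / (1.0 - np.outer(phi, phi))

# toy usage with a small tridiagonal U, mirroring the band-diagonal setup below
phi = np.array([0.95, 0.9, 0.8])
U = np.array([[0.20, 0.05, 0.00],
              [0.05, 0.20, 0.05],
              [0.00, 0.05, 0.20]])
Sigma = stationary_cov(phi, U)
# check the fixed-point equation
assert np.allclose(np.diag(phi) @ Sigma @ np.diag(phi) + U, Sigma)
```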

We analyzed two 20-dimensional sequences of observations, $y_{1:T}$ and $y'_{1:T'}$, where $T = 102$ and $T' = 90$. The sequences correspond to the monthly returns for the exchange rate with respect to the US dollar of a range of 20 different international currencies, in the periods 3/2000–8/2008 ($y_{1:T}$, pre-crisis) and 9/2008–2/2016 ($y'_{1:T'}$, post-crisis), as reported by the Federal Reserve System (available at http://www.federalreserve.gov/releases/h10/hist/). We infer the model parameters $\theta = (m, \phi, U)$ using the iAPF to obtain marginal likelihood estimates within a PMMH algorithm. A similar study using a different approach and with a set of six currencies can be found in Liu and West (2001).

The aim of this study is to showcase the potential of the iAPF in a scenario where, due to the relatively high dimensionality of the state space, the BPF systematically fails to provide reasonable marginal likelihood estimates in a feasible computational time. To reduce the dimensionality of the parameter space, we consider a band-diagonal covariance matrix $U$ with nonzero entries on the main, upper, and lower diagonals. We placed independent inverse Gamma prior distributions with mean 0.2 and unit variance on each entry of the diagonal of $U$, and independent symmetric triangular prior distributions on $[-1, 1]$ on the correlation coefficients $\rho \in \mathbb{R}^{19}$ corresponding to the upper and lower diagonal entries. We placed independent Uniform(0, 1) prior distributions on each component of $\phi$ and an improper, constant prior density on $m$. This results in a 79-dimensional parameter space. As the starting point of the chains, we used $\phi_0 = 0.95 \cdot \mathbf{1}$, $\mathrm{diag}(U_0) = 0.2 \cdot \mathbf{1}$, and, for the 19 correlation coefficients, $\rho_0 = 0.25 \cdot \mathbf{1}$, where $\mathbf{1}$ denotes a vector of 1s whose length can be determined by context. Each entry of $m_0$ was set to the logarithm of the standard deviation of the observation sequence for the corresponding currency.

We ran two Markov chains, $X^{1:L}$ and $X'^{1:L}$, corresponding to the data sequences $y_{1:T}$ and $y'_{1:T'}$, both updated one component at a time with Gaussian random walk proposals with standard deviations $(0.2 \cdot \mathbf{1}, 0.005 \cdot \mathbf{1}, 0.02 \cdot \mathbf{1}, 0.02 \cdot \mathbf{1})$ for the parameters $(m, \phi, \mathrm{diag}(U), \rho)$. The total number of updates for each parameter is $L = 12{,}000$, and the iAPF with $N_0 = 500$ starting particles is used to estimate marginal likelihoods within the PMMH algorithm. In Figure 7, we report the estimated smoothed posterior densities corresponding to the parameters for the Pound Sterling/U.S. Dollar exchange rate series. Most of the posterior densities differ from their respective prior densities, and we also observe qualitative differences between the pre- and post-crisis regimes. For the same parameters, sample sizes adjusted for autocorrelation are reported in Table 4; considering the high-dimensional state and parameter spaces, these are satisfactory. In the later steps of the PMMH chain, we recorded an average number of iAPF iterations of around 5 and an average number of particles in the final $\psi$-APF of around 502. To compare this with the BPF, we found that with $N = 10^7$ particles, at a cost of roughly 200 times the iAPF-PMMH per step, a BPF-PMMH Markov chain accepted only 4 out of 1000 proposed moves, each of which attempted to update a single parameter.

Figure 7. Multivariate stochastic volatility model: density estimates for the parameters related to the Pound Sterling. Pre-crisis chain (solid line), post-crisis chain (dashed line), and prior density (dotted line). The prior densities for (a) and (b) are constant.

Table 4. Sample size adjusted for autocorrelation.

The aforementioned qualitative change of regime is also evident in the differences between the posterior expectations of the parameter $m$ for the post-crisis and pre-crisis chains, reported in Figure 8. The parameter $m$ can be interpreted as the period average of the mean-reverting latent process of the log-volatilities for the exchange rate series. Positive differences for nearly all of the currencies suggest a generally higher volatility during the post-crisis period.

Figure 8. Multivariate stochastic volatility model: differences between post-crisis and pre-crisis posterior expectation of the parameter m for the 20 currencies.

6. Discussion

In this article, we have presented the iAPF, an offline algorithm that approximates an idealized particle filter, whose marginal likelihood estimates have zero variance. The main idea is to iteratively approximate a particular sequence of functions, and an empirical study with an implementation using parametric optimization for models with Gaussian transitions showed reasonable performance in some regimes for which the BPF was not able to provide adequate approximations. We applied the iAPF to Bayesian parameter estimation in general state-space HMMs by using it as an ingredient in a PMMH Markov chain. It could also conceivably be used in similar, but inexact, noisy Markov chains; Medina-Aguayo, Lee, and Roberts (2015) showed that control on the quality of the marginal likelihood estimates can provide theoretical guarantees on the behavior of the noisy Markov chain. The performance of the iAPF marginal likelihood estimates also suggests they may be useful in simulated maximum likelihood procedures. In our empirical studies, the number of particles used by the iAPF was orders of magnitude smaller than would be required by the BPF for similar approximation accuracy, which may be relevant for models in which space complexity is an issue.

In the context of likelihood estimation, the perspective brought by viewing the design of particle filters as essentially a function approximation problem has the potential to significantly improve the performance of such methods in a variety of settings. There are, however, a number of alternatives to the parametric optimization approach described in Section 5.1, and it would be of particular future interest to investigate more sophisticated schemes for estimating $\psi^*$, that is, specific implementations of Algorithm 3. We have used nonparametric estimates of the sequence $\psi^*$ with some success, but the computational cost of this approach was much larger than that of the parametric approach. Alternatives to the classes $\mathcal{F}$ and $\Psi$ described in Section 3.2 could be obtained using other conjugate families (see, e.g., Vidoni 1999). We also note that although we restricted the matrix $\Sigma$ in (15) to be diagonal in our examples, the resulting iAPF marginal likelihood estimators performed fairly well in some situations where the optimal sequence $\psi^*$ contained functions that could not be perfectly approximated by any function in the corresponding class. Finally, the stopping rule in the iAPF, described in Algorithm 4, which requires multiple independent marginal likelihood estimates, could be replaced with a stopping rule based on the variance estimators proposed in Lee and Whiteley (2015). For simplicity, we have discussed particle filters in which multinomial resampling is used; a variety of other resampling strategies (see Douc, Cappé, and Moulines 2005, for a review) can be used instead.


Acknowledgment

The authors would like to thank the Editor, Associate Editor, and two anonymous referees for comments that helped improve the article.

    References

  • Andrieu, C., Doucet, A., and Holenstein, R. (2010), "Particle Markov Chain Monte Carlo Methods," Journal of the Royal Statistical Society, Series B, 72, 269–342.
  • Andrieu, C., and Roberts, G. O. (2009), "The Pseudo-Marginal Approach for Efficient Monte Carlo Computations," The Annals of Statistics, 697–725.
  • Andrieu, C., and Vihola, M. (2015), "Convergence Properties of Pseudo-Marginal Markov Chain Monte Carlo Algorithms," The Annals of Applied Probability, 25, 1030–1077.
  • Beaumont, M. A. (2003), "Estimation of Population Growth or Decline in Genetically Monitored Populations," Genetics, 164, 1139–1160.
  • Bérard, J., Del Moral, P., and Doucet, A. (2014), "A Lognormal Central Limit Theorem for Particle Approximations of Normalizing Constants," Electronic Journal of Probability, 19, 1–28.
  • Chen, R., Wang, X., and Liu, J. S. (2000), "Adaptive Joint Detection and Decoding in Flat-Fading Channels via Mixture Kalman Filtering," IEEE Transactions on Information Theory, 46, 2079–2094.
  • Chib, S., Omori, Y., and Asai, M. (2009), "Multivariate Stochastic Volatility," in Handbook of Financial Time Series, eds. T. G. Andersen, R. A. Davis, J.-P. Kreiss, and T. V. Mikosch, New York: Springer, pp. 365–400.
  • Chopin, N. (2004), "Central Limit Theorem for Sequential Monte Carlo Methods and Its Application to Bayesian Inference," Annals of Statistics, 32, 2385–2411.
  • Clapp, T. C., and Godsill, S. J. (1999), "Fixed-Lag Smoothing Using Sequential Importance Sampling," in Bayesian Statistics 6: Proceedings of the Sixth Valencia International Meeting, 6, 743–752.
  • Del Moral, P. (2004), Feynman–Kac Formulae, New York: Springer.
  • Del Moral, P., and Guionnet, A. (1999), "Central Limit Theorem for Nonlinear Filtering and Interacting Particle Systems," The Annals of Applied Probability, 9, 275–297.
  • Diggle, P. J., and Gratton, R. J. (1984), "Monte Carlo Methods of Inference for Implicit Statistical Models," Journal of the Royal Statistical Society, Series B, 46, 193–227.
  • Douc, R., Cappé, O., and Moulines, E. (2005), "Comparison of Resampling Schemes for Particle Filtering," in Image and Signal Processing and Analysis, 2005. ISPA 2005. Proceedings of the 4th International Symposium, IEEE, pp. 64–69.
  • Douc, R., and Moulines, E. (2008), "Limit Theorems for Weighted Samples With Applications to Sequential Monte Carlo Methods," The Annals of Statistics, 36, 2344–2376.
  • Doucet, A., Briers, M., and Sénécal, S. (2006), "Efficient Block Sampling Strategies for Sequential Monte Carlo Methods," Journal of Computational and Graphical Statistics, 15, 693–711.
  • Doucet, A., Godsill, S., and Andrieu, C. (2000), "On Sequential Monte Carlo Sampling Methods for Bayesian Filtering," Statistics and Computing, 10, 197–208.
  • Doucet, A., and Johansen, A. M. (2011), "A Tutorial on Particle Filtering and Smoothing: Fifteen Years Later," in The Oxford Handbook of Nonlinear Filtering, eds. D. Crisan and B. Rozovsky, Oxford, UK: Oxford University Press, pp. 656–704.
  • Doucet, A., Pitt, M., Deligiannidis, G., and Kohn, R. (2015), "Efficient Implementation of Markov Chain Monte Carlo When Using an Unbiased Likelihood Estimator," Biometrika, 102, 295–313.
  • Fernández-Villaverde, J., and Rubio-Ramírez, J. F. (2007), "Estimating Macroeconomic Models: A Likelihood Approach," The Review of Economic Studies, 74, 1059–1087.
  • Gordon, N. J., Salmond, D. J., and Smith, A. F. (1993), "Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation," IEE Proceedings-Radar, Sonar and Navigation, 140, 107–113.
  • Harvey, A., Ruiz, E., and Shephard, N. (1994), "Multivariate Stochastic Variance Models," The Review of Economic Studies, 61, 247–264.
  • Hürzeler, M., and Künsch, H. R. (2001), "Approximating and Maximising the Likelihood for a General State-Space Model," in Sequential Monte Carlo Methods in Practice, eds. A. Doucet, N. de Freitas, and N. Gordon, New York: Springer, pp. 159–175.
  • Johansen, A. M., and Doucet, A. (2008), "A Note on Auxiliary Particle Filters," Statistics & Probability Letters, 78, 1498–1504.
  • Kim, S., Shephard, N., and Chib, S. (1998), "Stochastic Volatility: Likelihood Inference and Comparison With ARCH Models," The Review of Economic Studies, 65, 361–393.
  • Kitagawa, G. (1998), "A Self-Organizing State-Space Model," Journal of the American Statistical Association, 93, 1203–1215.
  • Kong, A., Liu, J. S., and Wong, W. H. (1994), "Sequential Imputations and Bayesian Missing Data Problems," Journal of the American Statistical Association, 89, 278–288.
  • Künsch, H. (2005), "Recursive Monte Carlo Filters: Algorithms and Theoretical Analysis," The Annals of Statistics, 33, 1983–2021.
  • Lee, A., and Łatuszyński, K. (2014), "Variance Bounding and Geometric Ergodicity of Markov Chain Monte Carlo Kernels for Approximate Bayesian Computation," Biometrika, 101, 655–671.
  • Lee, A., and Whiteley, N. (2015), "Variance Estimation and Allocation in the Particle Filter," arXiv:1509.00394.
  • Lerman, S., and Manski, C. (1981), "On the Use of Simulated Frequencies to Approximate Choice Probabilities," in Structural Analysis of Discrete Data With Econometric Applications, Cambridge, MA: The MIT Press, pp. 305–319.
  • Lin, M., Chen, R., and Liu, J. S. (2013), "Lookahead Strategies for Sequential Monte Carlo," Statistical Science, 28, 69–94.
  • Liu, J. S., and Chen, R. (1995), "Blind Deconvolution via Sequential Imputations," Journal of the American Statistical Association, 90, 567–576.
  • Liu, J. S., and West, M. (2001), "Combined Parameter and State Estimation in Simulation-Based Filtering," in Sequential Monte Carlo Methods in Practice, eds. A. Doucet, N. de Freitas, and N. Gordon, New York: Springer, pp. 197–223.
  • Medina-Aguayo, F. J., Lee, A., and Roberts, G. O. (2015), "Stability of Noisy Metropolis–Hastings," arXiv:1503.07066.
  • Nadaraya, E. A. (1964), "On Estimating Regression," Theory of Probability & Its Applications, 9, 141–142.
  • Pitt, M. K., and Shephard, N. (1999), "Filtering via Simulation: Auxiliary Particle Filters," Journal of the American Statistical Association, 94, 590–599.
  • Sherlock, C., Thiery, A. H., Roberts, G. O., and Rosenthal, J. S. (2015), "On the Efficiency of Pseudo-Marginal Random Walk Metropolis Algorithms," The Annals of Statistics, 43, 238–275.
  • Vidoni, P. (1999), "Exponential Family State Space Models Based on a Conjugate Latent Process," Journal of the Royal Statistical Society, Series B, 61, 213–221.
  • Walker, A. J. (1974), "New Fast Method for Generating Discrete Random Numbers With Arbitrary Frequency Distributions," Electronics Letters, 10, 127–128.
  • Walker, A. J. (1977), "An Efficient Method for Generating Discrete Random Variables With General Distributions," ACM Transactions on Mathematical Software, 3, 253–256.
  • Watson, G. S. (1964), "Smooth Regression Analysis," Sankhyā: The Indian Journal of Statistics, Series A, 26, 359–372.
  • Whiteley, N., and Lee, A. (2014), "Twisted Particle Filters," The Annals of Statistics, 42, 115–141.

Appendix A: Expression for the Asymptotic Variance in the CLT

Proof of Proposition 3.

We define a sequence of densities by
$$\pi_k^\psi(x_{1:T}) := \frac{\mu_1^\psi(x_1) \prod_{t=2}^{T} f_t^\psi(x_{t-1}, x_t) \prod_{t=1}^{k} g_t^\psi(x_t)}{\int_{\mathsf{X}^T} \mu_1^\psi(x_1) \prod_{t=2}^{T} f_t^\psi(x_{t-1}, x_t) \prod_{t=1}^{k} g_t^\psi(x_t)\, dx_{1:T}}, \qquad x_{1:T} \in \mathsf{X}^T,$$
for each $k \in \{1, \ldots, T\}$. We also define the marginals $\pi_k^\psi(x_j) := \int \pi_k^\psi(x_{1:j-1}, x_j, x_{j+1:T})\, dx_{-j}$ for $j \in \{1, \ldots, T\}$, where $x_{-j} := (x_1, \ldots, x_{j-1}, x_{j+1}, \ldots, x_T)$. Combining equation (24.37) of Doucet and Johansen (2011) with elementary manipulations provides
$$\sigma_\psi^2 = \sum_{t=1}^{T} \left\{ \int_{\mathsf{X}} \frac{\pi_T^\psi(x_t)^2}{\pi_{t-1}^\psi(x_t)}\, dx_t - 1 \right\} = \sum_{t=1}^{T} \left\{ \int_{\mathsf{X}} \frac{\psi_t^*(x_t)}{\psi_t(x_t)}\, \pi_T^\psi(x_t)\, dx_t \cdot \frac{\int_{\mathsf{X}} \psi_t(x_t)\, \pi_{t-1}^{\mathbf{1}}(x_t)\, dx_t}{\int_{\mathsf{X}} \psi_t^*(x_t)\, \pi_{t-1}^{\mathbf{1}}(x_t)\, dx_t} - 1 \right\} = \sum_{t=1}^{T} \left\{ \mathbb{E}\left[ \frac{\psi_t^*(X_t)}{\psi_t(X_t)} \,\middle|\, Y_{1:T} = y_{1:T} \right] \times \frac{\mathbb{E}\big[\psi_t(X_t) \mid Y_{1:t-1} = y_{1:t-1}\big]}{\mathbb{E}\big[\psi_t^*(X_t) \mid Y_{1:t-1} = y_{1:t-1}\big]} - 1 \right\},$$
and the expression (10) involving the rescaled terms $\psi_t^*$ and $\psi_t$ then follows.