Probability weighting functions obtained from Hong Kong index option market

In this paper we estimate the pricing kernel from the Hong Kong index option market and obtain the empirical probability weighting functions based on rank-dependent expected utility. The empirical pricing kernel is estimated semi-parametrically as the ratio of the risk-neutral and objective densities. We employ a two-step estimation procedure to estimate the objective and risk-neutral densities under a consistent parametric framework of the non-affine generalised autoregressive conditional heteroskedasticity (G.A.R.C.H.) diffusion model. In the first step, we develop a continuous particle filters-based maximum likelihood estimation method to estimate the objective parameters of the G.A.R.C.H. diffusion model using Hang Seng Index (H.S.I.) returns. In the second step, we depart from the usual pure calibration approach and use H.S.I. option prices to estimate the risk-neutral parameters of the G.A.R.C.H. diffusion model while constraining certain parameters to be consistent with the time-series behaviour of H.S.I. returns. Based on the estimated objective and risk-neutral parameters, the objective and risk-neutral densities are obtained by inverting the corresponding characteristic functions. Empirical results indicate that the empirical pricing kernel estimated from the Hong Kong index option market is non-monotonic and that the estimated probability weighting functions are S-shaped, which implies that investors underweight small-probability events and overweight large ones.

ARTICLE HISTORY Received 22 October 2017; Accepted 9 October 2018


Introduction
The behaviour of market investors has long been a focus of the financial economics literature. Naturally, it involves the pricing kernel (Rosenberg & Engle, 2002), or stochastic discount factor (Cochrane, 2001). In standard economic theory, the pricing kernel is monotonically decreasing in investor wealth or market return and corresponds to a positive risk aversion function. However, there has been much discussion about the reliability of this theory. In the past decade, a large number of empirical studies have provided evidence of a non-monotonically decreasing pricing kernel, a finding referred to as the 'pricing kernel puzzle' or 'risk aversion puzzle' (Aït-Sahalia & Lo, 2000; Jackwerth, 2000; Rosenberg & Engle, 2002). Beare and Schmidt (2016) and Golubev et al. (2014) provide further empirical evidence of a non-monotonic pricing kernel by conducting formal statistical tests. Using data on an exchange-traded fund replicating the S&P 500 index, Figlewski and Malik (2014) also confirm the pricing kernel puzzle. Recently, Cuesdeanu and Jackwerth (2016) use a novel test, based on forward-looking information only, to confirm the presence of the pricing kernel puzzle in the S&P 500 index option market.
Many researchers have tried to explain the pricing kernel puzzle with several approaches, including investors' heterogeneous beliefs (Detlefsen et al., 2007; Bakshi et al., 2010; Ziegler, 2007), misspecification of the underlying state space (Chabi-Yo, 2012; Chabi-Yo et al., 2008; Christoffersen et al., 2013), ambiguity aversion (Gollier, 2011) and investor sentiment (Barone-Adesi et al., 2017). In this paper we consider a pricing kernel based on the rank-dependent expected utility of Quiggin (1993), one of the most important generalisations of expected utility theory, with a probability weighting function. We show that rank-dependent expected utility with a probability weighting function is able to explain the properties of the empirical pricing kernel estimated from the Hong Kong index option market.
Over the last few decades, a large literature has developed on the estimation of the pricing kernel. A number of earlier papers estimate the pricing kernel using aggregate consumption data (Chapman, 1997; Hansen & Jagannathan, 1991); problems with imprecise measurement of aggregate consumption can weaken the empirical results of these papers. Thus, many authors have used option prices to estimate the pricing kernel, an approach that avoids the use of aggregate consumption data. Among others, Rosenberg and Engle (2002) emphasise the advantages of option prices over consumption data. Based on option prices, three types of estimation approaches have been proposed: the parametric approach (Audrino & Meier, 2012; Rosenberg & Engle, 2002), the nonparametric approach (Aït-Sahalia & Lo, 2000; Belomestny et al., 2017; Härdle et al., 2015; Jackwerth, 2000; Song & Xiu, 2016) and the semiparametric approach (Chernov, 2003; Detlefsen et al., 2007). In this paper, we follow a semiparametric approach to derive the pricing kernel and construct the implied probability weighting function by estimating the ratio of the risk-neutral and objective densities. The advantage of the semiparametric approach is that it avoids both a parametric pricing kernel specification, which imposes a structure on the kernel that is too restrictive to account for the dynamics of risk preferences, and bandwidth selection, which influences the shape of the pricing kernel. The semiparametric approach for estimating the pricing kernel is flexible and simple to implement.
Previous econometric studies derive the empirical pricing kernel by estimating the objective and risk-neutral densities separately, relying mainly on the discrete-time generalised autoregressive conditional heteroskedasticity (G.A.R.C.H.) model of Bollerslev (1986, 1987) and/or the affine stochastic volatility model of Heston (1993). Our estimation procedure is based on objective and risk-neutral densities derived within a consistent parametric stochastic volatility framework, the non-affine G.A.R.C.H. diffusion model, and from these densities we construct the corresponding pricing kernel. The G.A.R.C.H. diffusion model is a non-affine stochastic volatility model that arises as a weak limit of the discrete-time G.A.R.C.H.(1,1) model and has been found to capture the dynamics of financial time series much better than the affine Heston stochastic volatility model. A number of recent papers have provided strong evidence for the G.A.R.C.H. diffusion model, not only for returns data but also for options data (Christoffersen et al., 2010; Kaeck & Alexander, 2013; Wu et al., 2012, forthcoming). Thus, the model is well suited for our estimation of the pricing kernel, and hence the probability weighting function.
In this paper, the objective and risk-neutral densities are derived by estimating the objective and risk-neutral parameters of the G.A.R.C.H. diffusion model in a way that maintains the internal consistency of the objective and risk-neutral measures. To achieve this goal, we employ a two-step estimation procedure. In the first step, we develop a continuous particle filters-based maximum likelihood estimation method to estimate the objective parameters of the G.A.R.C.H. diffusion model using the Hong Kong Hang Seng Index (H.S.I.) returns over a long time span. The continuous particle filters-based maximum likelihood estimation method is easy to implement and can efficiently estimate nonlinear models that include unobservable state variables (Christoffersen et al., 2010; Duan & Fulop, 2009; Malik & Pitt, 2011; Pitt et al., 2014). In the second step of our estimation, we depart from the usual pure calibration approach and use the H.S.I. option prices to estimate the risk-neutral parameters of the G.A.R.C.H. diffusion model while constraining certain parameters to be consistent with the time-series behaviour of H.S.I. returns. More precisely, the volatility of variance and the leverage parameters should be equal under the objective and risk-neutral measures (Broadie et al., 2007). We impose this constraint for both pragmatic and theoretical reasons. First, there is little disagreement in the literature over these parameter values. Second, joint estimation using both option and underlying asset prices is a computationally demanding task. Based on the estimated objective and risk-neutral parameters, the objective and risk-neutral densities can be obtained by inverting the corresponding characteristic functions of the G.A.R.C.H. diffusion model. The pricing kernel is finally obtained as the ratio of the risk-neutral and objective densities.
Probability weighting functions have been studied extensively in the experimental literature over the past decades (Barberis & Huang, 2008; Polkovnichenko, 2005; Prelec, 1998; Shefrin & Statman, 2000; Tversky & Kahneman, 1992). Empirical papers investigating probability weighting functions include Chabi-Yo and Song (2013), Dierkes (2009, 2013), Kliger and Levy (2009), Polkovnichenko and Zhao (2013) and Wang (2017). In these papers, the authors obtain the probability weighting functions mainly from option prices. The main difference between our paper and theirs lies in our use of a consistent parametric framework of the popular non-affine G.A.R.C.H. diffusion model for estimation. To the best of our knowledge, the G.A.R.C.H. diffusion model has not been used to estimate probability weighting functions. Also, unlike previous empirical studies of probability weighting functions, which focus mainly on the U.S. S&P 500 index option market, this paper investigates the probability weighting functions empirically for the Hong Kong index option market. We estimate the pricing kernel from H.S.I. options and obtain the empirical probability weighting functions based on rank-dependent expected utility. We then employ the estimated probability weighting functions to examine the characteristics of investors' decision weights in the Hong Kong stock market.
This paper contributes to the existing literature in several ways. First, a two-step estimation procedure for the popular non-affine G.A.R.C.H. diffusion model is developed that ensures consistency between the objective and risk-neutral measures, which is crucial for obtaining reasonable results. Second, an empirical non-monotonic pricing kernel and S-shaped probability weighting functions are obtained from the Hong Kong index option market. Third, the results reveal that investors in the Hong Kong stock market underweight small-probability events (tail events) and overweight large ones. Finally, the S-shaped probability weighting function, combined with a utility function exhibiting constant relative risk aversion (C.R.R.A.) under rank-dependent utility, explains the non-monotonicity of the pricing kernel.
The rest of the paper is organised as follows. In Section 2, we describe the theoretical link between the pricing kernel, the probability weighting function and the risk-neutral and objective densities. In Section 3, we present the G.A.R.C.H. diffusion model under the objective measure and derive the corresponding system under the risk-neutral measure, which serves as the basis for the estimation of the objective and risk-neutral densities. We describe the two-step estimation procedure used to estimate the objective and risk-neutral densities in Section 4. Section 5 discusses the empirical results obtained from the H.S.I. option market, and Section 6 concludes.

Pricing kernel and probability weighting function
In this section, we present the theoretical link between the pricing kernel and the probability weighting function based on rank-dependent expected utility. See Polkovnichenko and Zhao (2013) for a detailed discussion of the properties of the pricing kernel with a probability weighting function.
Consider an index as a proxy for all wealth in the economy; let S_T be the future price of the index and S_t its current price. In the absence of arbitrage, there exists a positive random variable M_{t,T} that prices the index:

S_t = E^P[ M_{t,T} S_T | F_t ],   (1)

where E^P[· | F_t] is the expectation with respect to the objective measure P conditional on the information set F_t available at time t, and M_{t,T} is the projection of the pricing kernel onto S_T, which has the same pricing implications as the original one (Rosenberg & Engle, 2002). According to the risk-neutral valuation principle, the price S_t of the index with payoff S_T can be equivalently represented as

S_t = e^{−rτ} E^Q[ S_T | F_t ],   (2)

where E^Q[· | F_t] is the expectation with respect to the risk-neutral measure Q conditional on F_t, r is the risk-free interest rate and τ = T − t. Let p_{t,T}(S_T) be the objective density (the probability density function under the objective measure P) of S_T, and q_{t,T}(S_T) the risk-neutral density (the probability density function under the risk-neutral measure Q) of S_T. From Equation (2), we have

S_t = e^{−rτ} ∫ S_T q_{t,T}(S_T) dS_T = E^P[ e^{−rτ} (q_{t,T}(S_T)/p_{t,T}(S_T)) S_T | F_t ].   (3)

Comparing Equations (1) and (3), we get

M_{t,T}(S_T) = e^{−rτ} q_{t,T}(S_T)/p_{t,T}(S_T).   (4)

It is obvious from Equation (4) that we can obtain the empirical pricing kernel by estimating the ratio of the risk-neutral and objective densities. Reasonable estimates of the pricing kernel in Equation (4) should be non-increasing in S_T, thereby implying a risk-averse investor. However, many studies document a hump-shaped pricing kernel that may be increasing over some range of market returns (Bakshi et al., 2010; Rosenberg & Engle, 2002). Under expected utility theory, it is difficult to understand the reasons for this shape of the pricing kernel. In this paper, we relax the assumptions of expected utility theory and attempt to explain the non-monotonic pricing kernel (the pricing kernel puzzle) under the rank-dependent expected utility of Quiggin (1993), one of the most important extensions of expected utility theory.
Under rank-dependent expected utility, instead of the true cumulative distribution function P_{t,T}(S_T), the representative investor uses the following distorted one to make investment decisions:

P̃_{t,T}(S_T) = w(P_{t,T}(S_T)),   (5)

where w(·): [0, 1] → [0, 1] is the probability weighting function, satisfying w(0) = 0 and w(1) = 1. Moreover, w is nonlinear, differentiable, continuous and non-decreasing. Given the distorted distribution function P̃_{t,T}, the rank-dependent expected utility is calculated as

∫ u(S_T) z(P_{t,T}(S_T)) p_{t,T}(S_T) dS_T,   (6)

where z(P_{t,T}) = w′(P_{t,T}). Following Polkovnichenko and Zhao (2013), under the assumption of complete markets, we have

M_{t,T} = c_{t,T} z(P_{t,T}(S_T)) u′(S_T),   (7)

where c_{t,T} > 0 is a constant that does not depend on S_T. It is obvious from Equation (7) that the pricing kernel M_{t,T} is linked to the derivative of the probability weighting function, z(P_{t,T}) = w′(P_{t,T}), via the marginal utility u′(S_T). Considering the influence of initial wealth, we use R_T = S_T/S_t as a proxy for the gross return on total investor wealth and assume that initial wealth is one; we can then rewrite Equations (4) and (7) as

M_{t,T}(R_T) = e^{−rτ} q_{t,T}(R_T)/p_{t,T}(R_T)   (8)

and

M_{t,T}(R_T) = c_{t,T} z(P_{t,T}(R_T)) u′(R_T),   (9)

where p_{t,T} and q_{t,T} now are the objective and risk-neutral densities of R_T, P_{t,T} now is the cumulative distribution function of R_T, and P′_{t,T} = p_{t,T}. For any given return R⁰_T, let P_{t,T}(R⁰_T) = P⁰_{t,T}; then, equating Equations (8) and (9) and integrating, we have

w(P⁰_{t,T}) = (1/A) ∫₀^{R⁰_T} [q_{t,T}(R_T)/u′(R_T)] dR_T,   (10)

where A = ∫₀^∞ [q_{t,T}(R_T)/u′(R_T)] dR_T is the normalising constant such that w(1) = 1.
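As a numerical illustration of Equation (8), the sketch below computes a pricing kernel as the discounted ratio of risk-neutral to objective densities on a grid of gross returns. The lognormal densities here are hypothetical stand-ins for the model-implied densities estimated later in the paper; only the density-ratio construction is taken from the text.

```python
import numpy as np
from scipy.stats import lognorm

r, tau = 0.02, 1.0 / 12                      # risk-free rate and horizon
R = np.linspace(0.7, 1.4, 1000)              # grid of gross returns R_T
p = lognorm.pdf(R, s=0.07, scale=1.005)      # objective density p_{t,T} (stand-in)
q = lognorm.pdf(R, s=0.09, scale=0.995)      # risk-neutral density q_{t,T} (stand-in)

# Equation (8): pricing kernel as the discounted risk-neutral/objective ratio
M = np.exp(-r * tau) * q / p
```

Because the stand-in risk-neutral density has fatter tails and a lower centre than the objective one, the resulting M is elevated in both tails, mimicking the qualitative shape discussed in the empirical section.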

The model
We adopt the non-affine G.A.R.C.H. diffusion model to characterise the dynamics of the underlying asset prices (the H.S.I.) and to form the basis for the estimation of the objective and risk-neutral densities. In the G.A.R.C.H. diffusion model, the dynamics under the objective measure P of the underlying asset price, S_t, and the associated variance, V_t, are assumed to be given by

dS_t = μ S_t dt + √V_t S_t dW^P_{1,t},   (11)

dV_t = κ^P (θ^P − V_t) dt + σ V_t dW^P_{2,t},   (12)

where μ is the mean of the underlying asset returns, θ^P is the long-run mean of the variance, κ^P is the mean-reversion rate of the variance, σ is the volatility of variance, and W^P_{1,t} and W^P_{2,t} are two standard Brownian motions with Corr_t(dW^P_{1,t}, dW^P_{2,t}) = ρ. The correlation parameter ρ is typically found to be negative, which captures the well-known 'leverage effect' first documented by Black (1976): when asset prices decrease or returns are negative, the firm becomes more risky due to an increase in its debt-equity ratio, leading to an increase in its volatility (see also Christie (1982)). The G.A.R.C.H. diffusion model in Equations (11) and (12) has attracted a great deal of attention in the financial econometrics literature in recent years. A number of papers have shown that the model provides realistic volatility dynamics and good option valuation performance (Christoffersen et al., 2010; Kaeck & Alexander, 2013; Wu et al., 2012; Yang & Kanniainen, 2017).
Following Chernov and Ghysels (2000), we assume that the dynamics of the underlying asset prices have the same form under the risk-neutral measure Q as under the objective measure P, so that the dynamics of (S_t, V_t) under the risk-neutral measure Q are of the form

dS_t = r S_t dt + √V_t S_t dW^Q_{1,t},   (13)

dV_t = κ^Q (θ^Q − V_t) dt + σ V_t dW^Q_{2,t},   (14)

where r is the risk-free interest rate, and W^Q_{1,t} and W^Q_{2,t} are two standard Brownian motions under the risk-neutral measure with Corr_t(dW^Q_{1,t}, dW^Q_{2,t}) = ρ. Whereas prior studies constrain κ^Q θ^Q = κ^P θ^P, here we specify more flexible risk-neutral dynamics in that we allow κ^Q θ^Q ≠ κ^P θ^P, which implies a more flexible variance risk premium and enhances the model's ability to fit market option prices. The characteristic function of the log price X_T = log S_T under the objective measure P, f^P_{t,T}(φ) = E^P[e^{iφX_T} | F_t], is derived in Wu et al. (2012). The characteristic function of X_T under the risk-neutral measure Q, denoted f^Q_{t,T}, is analogous to the objective characteristic function f^P_{t,T} and can be obtained by replacing the objective parameters μ, κ^P and θ^P with the risk-neutral parameters r, κ^Q and θ^Q. The objective and risk-neutral densities of R_T = S_T/S_t can then be obtained by inverting the characteristic functions f^P_{t,T} and f^Q_{t,T}, respectively. Specifically, writing x = log(R_T S_t) for the log price, we have

p_{t,T}(R_T) = (1/(π R_T)) ∫₀^∞ Re[ e^{−iφx} f^P_{t,T}(φ) ] dφ,   (17)

q_{t,T}(R_T) = (1/(π R_T)) ∫₀^∞ Re[ e^{−iφx} f^Q_{t,T}(φ) ] dφ.   (18)

The integrals in Equations (17) and (18) can easily be computed numerically. Based on the theoretical links between the pricing kernel, the probability weighting function and the objective and risk-neutral densities in Equations (8) and (10), the pricing kernel and probability weighting function can finally be obtained.
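The Fourier inversion in Equations (17) and (18) can be sketched numerically. Since the G.A.R.C.H.-diffusion characteristic function of Wu et al. (2012) is lengthy, the example below inverts a Gaussian characteristic function instead (a hypothetical stand-in), for which the recovered density can be checked against the known normal density.

```python
import numpy as np

mu, sig = 0.01, 0.15                          # parameters of the stand-in law

def cf(phi):
    # Gaussian characteristic function, standing in for f^P_{t,T}
    return np.exp(1j * phi * mu - 0.5 * phi**2 * sig**2)

def invert_density(x, cf, n=40001, upper=200.0):
    # p(x) = (1/pi) * integral_0^inf Re[exp(-i*phi*x) * cf(phi)] dphi,
    # computed with a trapezoidal rule on a truncated grid
    phi = np.linspace(1e-9, upper, n)
    integrand = np.real(np.exp(-1j * phi * x) * cf(phi))
    h = phi[1] - phi[0]
    return (integrand[0] / 2 + integrand[1:-1].sum() + integrand[-1] / 2) * h / np.pi

x = 0.05
p_hat = invert_density(x, cf)
p_true = np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))
```

The truncation bound and grid size are chosen so the integrand has decayed to numerical zero before the cutoff; for a heavier-tailed characteristic function these would need to be enlarged.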

Estimation methodology
To compute the objective and risk-neutral densities from Equations (17) and (18), we still need to estimate the objective and risk-neutral parameters of the G.A.R.C.H. diffusion model. In this section, we describe how to estimate these parameters in a way that maintains the internal consistency of the objective and risk-neutral measures. We employ a two-step estimation procedure.
First, we apply the stabilising transformation X_t = log S_t, h_t = log V_t. By Itô's lemma, we have

dX_t = (μ − V_t/2) dt + √V_t dW^P_{1,t},   (21)

dh_t = [κ^P (θ^P/V_t − 1) − σ²/2] dt + σ dW^P_{2,t}.   (22)

In the empirical literature, this continuous-time model must be discretised to facilitate parameter estimation. A simple Euler discretisation leads to the following discrete-time stochastic processes:

y_t = (μ − e^{h_{t−1}}/2) Δt + √(e^{h_{t−1}} Δt) ε_t,   (23)

h_t = h_{t−1} + [κ^P (θ^P e^{−h_{t−1}} − 1) − σ²/2] Δt + σ √Δt η_t,   (24)

where y_t = X_t − X_{t−1} is the log return of the underlying asset, Δt is the time interval,¹ ε_t = (W^P_{1,t} − W^P_{1,t−1})/√Δt and η_t = (W^P_{2,t} − W^P_{2,t−1})/√Δt. It can be shown that ε_t and η_t are independent and identically distributed (i.i.d.) standard normal random variables with Corr_t(ε_t, η_t) = ρ. Equations (23) and (24) constitute a nonlinear and non-Gaussian state-space model that cannot be estimated with the standard Kalman filter. To overcome this problem, we adopt the continuous particle filters-based maximum likelihood estimation method to estimate the model parameters (objective parameters). The log likelihood of the model is given by

log L(Θ^P) = log p(y_1, …, y_T | Θ^P) = Σ_{t=0}^{T−1} log p(y_{t+1} | F_t; Θ^P)   (25)

via the prediction decomposition, where Θ^P = (μ, κ^P, θ^P, σ, ρ)′ are the objective parameters of the G.A.R.C.H. diffusion model. In the above equation, the predictive density (likelihood) p(y_{t+1} | F_t; Θ^P) can be written as

p(y_{t+1} | F_t; Θ^P) = ∫ p(y_{t+1} | h_{t+1}; Θ^P) p(h_{t+1} | F_t; Θ^P) dh_{t+1}.   (26)

The expression in Equation (26) is crucial for maximum likelihood estimation via particle filters. In fact, the predictive density p(y_{t+1} | F_t; Θ^P) can be approximated by the Monte Carlo method, that is,

p(y_{t+1} | F_t; Θ^P) ≈ (1/N) Σ_{i=1}^N p(y_{t+1} | h^i_{t+1}; Θ^P),   (27)

where h^i_{t+1}, i = 1, …, N, are sampled from the predictive density p(h_{t+1} | F_t; Θ^P) via particle filters, a sequential Monte Carlo technique that uses simulated samples to represent prediction and filtering distributions. Updating from the prediction distribution to the filtering distribution is carried out by Bayes' rule.
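A minimal bootstrap particle filter for the Euler-discretised system in Equations (23) and (24) might look as follows. It uses plain multinomial resampling rather than the continuous (C.S.I.R.) scheme adopted in the paper, and the parameter values in the demonstration call are illustrative, not estimates.

```python
import numpy as np

def pf_loglik(y, mu, kappa, theta, sigma, rho, dt=1.0 / 250, N=500, seed=0):
    """Bootstrap particle filter log likelihood for Equations (23)-(24)."""
    rng = np.random.default_rng(seed)
    h = np.full(N, np.log(theta))             # particles for h_t = log V_t
    loglik = 0.0
    for yt in y:
        eta = rng.standard_normal(N)          # variance shock eta_t
        V = np.exp(h)                         # V_{t-1} for each particle
        # y_t | h_{t-1}, eta_t is normal: eps_t | eta_t ~ N(rho*eta_t, 1 - rho^2)
        m = (mu - 0.5 * V) * dt + np.sqrt(V * dt) * rho * eta
        s2 = V * dt * (1.0 - rho**2)
        w = np.exp(-0.5 * (yt - m) ** 2 / s2) / np.sqrt(2.0 * np.pi * s2)
        loglik += np.log(w.mean() + 1e-300)   # Monte Carlo predictive density
        # propagate h_t with the same eta_t, as in Equation (24)
        h = h + (kappa * (theta / V - 1.0) - 0.5 * sigma**2) * dt \
              + sigma * np.sqrt(dt) * eta
        h = h[rng.choice(N, size=N, p=w / w.sum())]  # multinomial resampling
    return loglik

# illustrative call on synthetic returns (parameter values are not estimates)
rng = np.random.default_rng(1)
y_sim = 0.01 * rng.standard_normal(250)
ll = pf_loglik(y_sim, mu=0.05, kappa=2.3, theta=0.05, sigma=1.0, rho=-0.5)
```

Because the return and variance shocks are correlated, the filter conditions the return density on the drawn η_t, which is one standard way of handling the leverage effect in a bootstrap filter.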
Specifically, we have

p(h_{t+1} | F_{t+1}; Θ^P) ∝ p(y_{t+1} | h_{t+1}; Θ^P) p(h_{t+1} | F_t; Θ^P),   (28)

where p(h_{t+1} | F_{t+1}; Θ^P) is the filtering density. The principle of Bayesian updating implies that the density of the state conditional on all available information can be constructed by combining a prior with a likelihood; recursive implementation of this forms the basis for particle filtering. The particle filtering algorithm thus propagates and updates the samples or 'particles' via Equation (28), where the prediction density can be approximated by

p̂(h_{t+1} | F_t; Θ^P) = (1/N) Σ_{i=1}^N p(h_{t+1} | h^i_t; Θ^P),   (29)

with h^i_t, i = 1, …, N, the equally weighted samples or 'particles' from the density p(h_t | F_t; Θ^P). To sample from the density in Equation (29), we use the sampling importance resampling (S.I.R.) algorithm of Gordon et al. (1993). However, the resampling step in the standard S.I.R. algorithm creates discontinuities in the likelihood function, which is not conducive to numerical optimisation and statistical inference and makes estimation by maximum likelihood problematic. To overcome this problem, we adopt the continuous S.I.R. (C.S.I.R.) scheme proposed by Malik and Pitt (2011) to compute the likelihood function and conduct the maximum likelihood estimation.
Based on the above C.S.I.R. algorithm, the prediction likelihood may be estimated as

p̂(y_{t+1} | F_t; Θ^P) = (1/N) Σ_{i=1}^N ω^i_{t+1},   (31)

where ω^i_{t+1}, i = 1, …, N, are simply the unnormalised weights calculated in Step 2 of the C.S.I.R. algorithm. The resulting likelihood function in Equation (31) is a smooth function of the parameters. The log likelihood can then be estimated by

log L̂(Θ^P) = Σ_{t=0}^{T−1} log p̂(y_{t+1} | F_t; Θ^P).   (32)

Note that the log likelihood in Equation (32) is not unbiased. Following Malik and Pitt (2011), the original (biased) log likelihood can be bias-corrected using the sampling variability of the simulated likelihood. Given the log likelihood approximation log L̂, the maximum likelihood estimates of the model parameters (objective parameters) are obtained as

Θ̂^P = arg max_{Θ^P} log L̂(Θ^P).

Based on the estimated parameters, the spot variances can be obtained using the C.S.I.R. filtering algorithm.
Next, we turn to the estimation of the risk-neutral parameters Θ^Q = (κ^Q, θ^Q, σ, ρ)′. We impose consistency between the objective and risk-neutral measures in estimation. Specifically, we let the volatility of variance, σ, and the leverage parameter, ρ, be equal under the objective and risk-neutral measures. As the parameters σ and ρ have been estimated in the first step of our estimation, only two risk-neutral parameters (κ^Q and θ^Q) remain to be estimated in the second step. We optimise over κ^Q and θ^Q to fit the observed option prices. Specifically, to obtain the estimates of κ^Q and θ^Q, we minimise the sum of squared differences between model and market option prices, that is,

(κ̂^Q, θ̂^Q) = arg min Σ_{j=1}^M [C_j − C_j(Θ^Q)]²,

where M is the number of option contracts, C_j is the market option price of contract j, and C_j(Θ^Q) is the corresponding model option price, which can be calculated with the fast Fourier transform approach based on the risk-neutral characteristic function of the G.A.R.C.H. diffusion model (Carr & Madan, 1999).
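The second-step calibration can be sketched as a least-squares fit over (κ^Q, θ^Q). The paper prices options with the Carr-Madan fast Fourier transform under the G.A.R.C.H. diffusion model; as a simplified stand-in, the sketch below prices with a Black-Scholes formula fed an illustrative expected-average-variance proxy, so the pricer, parameter values and market quotes are all hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def bs_call(S, K, r, tau, vol):
    # Black-Scholes call price, used here only as a stand-in pricer
    d1 = (np.log(S / K) + (r + 0.5 * vol**2) * tau) / (vol * np.sqrt(tau))
    d2 = d1 - vol * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

def model_price(params, S, K, r, tau, V0):
    kappa_q, theta_q = params
    # illustrative proxy: expected average variance under mean reversion
    ev = theta_q + (V0 - theta_q) * (1.0 - np.exp(-kappa_q * tau)) / (kappa_q * tau)
    return bs_call(S, K, r, tau, np.sqrt(ev))

S, r, tau, V0 = 100.0, 0.02, 1.0 / 12, 0.0185
strikes = np.array([95.0, 100.0, 105.0])
market = bs_call(S, strikes, r, tau, 0.15)        # synthetic market quotes

def sse(params):
    # second-step objective: sum of squared pricing errors
    return np.sum((market - model_price(params, S, strikes, r, tau, V0)) ** 2)

res = minimize(sse, x0=np.array([1.0, 0.04]), method="Nelder-Mead",
               bounds=[(0.01, 10.0), (0.001, 0.5)])
kappa_q_hat, theta_q_hat = res.x
```

Swapping `model_price` for an F.F.T.-based G.A.R.C.H.-diffusion pricer leaves the calibration loop unchanged; only the objective function's inner pricer differs.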

Empirical results
In contrast to many previous studies, which have focused mainly on the S&P 500 index option market, we investigate the empirical pricing kernel and probability weighting functions in the H.S.I. option market. The H.S.I. serves as an approximation to the Hong Kong economy and can be used as a proxy for the market portfolio.

The data
Descriptive statistics of the H.S.I. return series are reported in Table 1. It can be seen that the H.S.I. returns are skewed and leptokurtic. The Jarque-Bera statistic suggests that the assumption of normality is rejected for the H.S.I. return series, which is also confirmed by the Q-Q plot of the index returns in Figure 1. The H.S.I. option data are summarised in Table 2. The option data are the most actively traded option contracts with maturity of about one month on 4 October 2017. Finally, we use the annualised 1-month Hong Kong Interbank Offered Rate as a proxy for the risk-free interest rate. All of the data are obtained from the Wind Database of China.

Estimation results
Based upon the data on the H.S.I. returns, the objective parameters of the G.A.R.C.H. diffusion model can be estimated with the C.S.I.R.-based maximum likelihood estimation method described in Section 4. Table 3 reports the estimation results. Our results show that, under the objective measure, the long-run mean of the variance is θ^P = 0.0522, with a mean-reversion speed of κ^P = 2.3044. The estimate of the 'leverage effect' parameter ρ is significantly negative, indicating that the return and variance processes are negatively correlated during the sample period, a well-known empirical fact. Based on the estimated (objective) parameters of the G.A.R.C.H. diffusion model, the spot variances can be estimated via the C.S.I.R. filtering algorithm. Figure 2 shows the filtered variances. Specifically, the spot variance on 4 October 2017 is 0.0185.
Using the H.S.I. option data in Table 2, the risk-neutral parameters of the G.A.R.C.H. diffusion model can be estimated. The estimates are reported in Table 3. It can be seen from the table that, under the risk-neutral measure, the long-run mean of the variance is θ^Q = 0.0180, with a mean-reversion speed of κ^Q = 0.1990, both obviously lower than the corresponding estimates under the objective measure. Given the estimated objective and risk-neutral parameters, we use Equations (17) and (18) to compute the objective and risk-neutral densities of R_T = S_T/S_t, respectively. We plot the estimates of the objective and risk-neutral densities for the one-month time horizon τ = T − t = 1/12 in Figure 3. It can be seen that there are obvious discrepancies between the estimated objective and risk-neutral densities.
The estimates of the objective and risk-neutral densities allow us to estimate the empirical pricing kernel using Equation (4). Figure 4 displays our estimate of the pricing kernel. It can be seen from the figure that the estimated pricing kernel is not monotonically decreasing but exhibits a hump around the gross return R_T = S_T/S_t = 1. This is not in accordance with classical economic theory and is referred to as the 'pricing kernel puzzle'.
Using the estimated objective and risk-neutral densities, we can construct the probability weighting function for a given utility function. We use the standard C.R.R.A. utility functions u(R) = R^{1−γ}/(1−γ) for γ = 0 (linear utility), γ = 1 (logarithmic utility, u(R) = log R) and γ = 2 (power utility). We present estimates of the probability weighting function w(P_{t,T}) for the one-month time horizon in Figure 5. It can be seen from the figure that the probability weighting function estimates are S-shaped, implying that investors in the Hong Kong stock market underweight small-probability events (tail events) and overweight large ones. The probability weighting functions obtained from the Hong Kong index option market differ from those obtained from the U.S. index option market, which typically have an inverse-S shape (see Polkovnichenko & Zhao, 2013). The results call for further efforts to integrate models that can account for S-shaped probability weighting into portfolio theory, asset pricing and risk management.
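Under C.R.R.A. marginal utility u′(R) = R^{−γ}, the weighting-function construction in Equation (10) reduces to a normalised running integral of q(R) R^γ. The sketch below evaluates it for the three utility cases in the text; the densities are hypothetical lognormal stand-ins for the model-implied ones.

```python
import numpy as np
from scipy.stats import lognorm

R = np.linspace(0.6, 1.5, 4000)               # gross-return grid
dR = R[1] - R[0]
p = lognorm.pdf(R, s=0.07, scale=1.005)       # objective density (stand-in)
q = lognorm.pdf(R, s=0.09, scale=0.995)       # risk-neutral density (stand-in)
P = np.cumsum(p) * dR                         # objective C.D.F. on the grid

weighting = {}
for gamma in (0.0, 1.0, 2.0):
    integrand = q * R**gamma                  # q / u'(R) with u'(R) = R^(-gamma)
    w = np.cumsum(integrand) * dR             # running integral of Equation (10)
    w /= w[-1]                                # normalise so that w(1) = 1
    weighting[gamma] = w                      # w evaluated at the grid of P(R)
```

Plotting each `weighting[gamma]` against `P` would reproduce the kind of curves shown in the paper's Figure 5; with these stand-in densities the exact shapes are illustrative only.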

Conclusion
The probability weighting function has long been a focus of the financial economics literature and has been extensively employed to model investor behaviour in financial markets. It is informative about the tail risk or crash risk of return dynamics and can provide an explanation for the tail risk premium. It can also be used to construct measures of investor sentiment toward tail events and of investors' decision weights toward tail events. In addition, the probability weighting function can be used to understand asset price dynamics and the equity premium puzzle.
In this paper, we semi-parametrically estimate the pricing kernel from the Hong Kong index option market and obtain the empirical probability weighting functions based on rank-dependent expected utility, using the non-affine G.A.R.C.H. diffusion model in a way that maintains the internal consistency of the objective and risk-neutral measures. Our results show that the empirical pricing kernel estimated from the Hong Kong index option market is non-monotonic, deviating from expected utility theory, and that the estimated probability weighting functions are S-shaped, which implies that investors underweight small-probability events (tail events) and overweight large ones. The S-shaped probability weighting function, combined with a C.R.R.A. utility function under rank-dependent utility, can explain the non-monotonicity of the pricing kernel. The results point to theoretical models with S-shaped probability weighting functions as a promising direction for understanding asset price dynamics and for exploring implications for many economic issues, such as portfolio choice, asset pricing and risk management.

Note
1. Assuming that there are 250 trading days per year, the time interval Δt for one trading day is 1/250 year; in this case the discretisation bias of the Euler scheme is expected to be negligible.
In the standard S.I.R. algorithm, the resampling is based on the discontinuous empirical distribution function

F̂(h) = Σ_{i=1}^N π_i I(h − h^i),

where the particles h^i, i = 1, …, N, are sorted in ascending order, Σ_{i=1}^N π_i = 1, and I(z) is an indicator function satisfying I(z) = 1 for z ≥ 0 and I(z) = 0 for z < 0. To produce samples of the state variables in a continuous way, Malik and Pitt (2011) approximate F̂(h) by a continuous empirical distribution function F̃(h), given by

F̃(h) = (π_1/2) I(h − h^1) + Σ_{i=1}^{N−1} ((π_i + π_{i+1})/2) G_i((h − h^i)/(h^{i+1} − h^i)) + (π_N/2) I(h − h^N),

where, for i = 1, …, N−1, G_i(z) is a monotonically non-decreasing distribution function on [0, 1] such that

G_i(z) = 0 for z < 0; G_i(z) = z for 0 ≤ z ≤ 1; G_i(z) = 1 for z > 1.

The continuous distribution function F̃(h) defined above is easy to invert, so sampling from it is very simple and quick. It can be shown that, as N → ∞, F̃(h) → F̂(h) → F(h), with F(h) being the true distribution function. In practice the difference between F̃(h) and F̂(h) becomes negligible for moderate N; we typically choose N = 500.
Continuous stratified resampling algorithm. First, generate a single uniform u ~ U(0, 1) and propagate the sorted uniforms

u_k = (k − 1 + u)/N, k = 1, …, N.

Then, sample the indices of the regions of F̃ into which the u_k fall, sorted as r_1, r_2, …, r_N, and produce a new set of uniforms u*_1, u*_2, …, u*_N recording the relative position of each u_k within its region. For the selected regions r_1, r_2, …, r_N, the resampled particles are obtained by linear interpolation within each region:

h^{j*} = h^1 if r_j = 0; h^{j*} = h^N if r_j = N; h^{j*} = h^{r_j} + (h^{r_j+1} − h^{r_j}) u*_j otherwise.
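The steps above can be sketched in code, under the piecewise-linear F̃ described earlier: region probabilities average adjacent particle weights, a single set of stratified uniforms is inverted through the continuous distribution, and draws are interpolated linearly within each region. This is a sketch of the Malik and Pitt (2011) scheme, not their reference implementation.

```python
import numpy as np

def csir(h, w, rng):
    """Continuous stratified resampling (sketch of Malik & Pitt, 2011)."""
    N = len(h)
    order = np.argsort(h)
    h, w = h[order], w[order] / w.sum()       # sorted particles, normalised weights
    # region probabilities: pi_1/2 and pi_N/2 at the two ends,
    # (pi_i + pi_{i+1})/2 between adjacent particles
    probs = np.empty(N + 1)
    probs[0] = w[0] / 2.0
    probs[-1] = w[-1] / 2.0
    probs[1:-1] = (w[:-1] + w[1:]) / 2.0
    cdf = np.cumsum(probs)
    u = (np.arange(N) + rng.uniform()) / N    # stratified sorted uniforms u_k
    regions = np.searchsorted(cdf, u)         # region index r_k of each u_k
    regions = np.minimum(regions, N)          # guard against floating-point edge
    lo = np.concatenate(([0.0], cdf[:-1]))
    ustar = (u - lo[regions]) / probs[regions]  # relative position u*_k in region
    out = np.empty(N)
    for k, (j, uu) in enumerate(zip(regions, ustar)):
        if j == 0:
            out[k] = h[0]                     # boundary region below h^1
        elif j == N:
            out[k] = h[-1]                    # boundary region above h^N
        else:
            out[k] = h[j - 1] + (h[j] - h[j - 1]) * uu  # linear interpolation
    return out

# illustrative call on arbitrary particles and weights
rng = np.random.default_rng(42)
h0 = rng.standard_normal(200)
w0 = rng.uniform(0.1, 1.0, 200)
h_new = csir(h0, w0, rng)
```

Because the interpolated draws move smoothly as the particle locations and weights move, a likelihood computed from them is a continuous function of the model parameters, which is the property the C.S.I.R. scheme is designed to deliver.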