
I have a two-factor shot noise process as follows (sorry, the original was a picture of text):

[image: the pair of SDEs defining $x^+_t$ and $x^-_t$; per the discussion below, each factor decays exponentially (at rate $a$ in the equations, alpha in the code) and jumps at the arrivals of a Poisson process]

I want to simulate from this process. Is it valid to do so in a discretized way? The solution to these SDEs is an exponentially decaying compound Poisson process, so I simulated it as follows:

shot_noise <- function(T, n, alpha, lambda, gamma, x0){
  dt = T/n                      # length of each time step
  dN = rpois(n, lambda*dt)      # number of jumps falling in each step
  x = numeric(n + 1)            # preallocate the path; x[1] is the initial value
  x[1] = x0
  j = rep(0, n)                 # per-step jump contributions
  for (i in 2:(n+1)){
    # exponential jump sizes, each decayed from a latent arrival time drawn
    # uniformly within the step (sum(numeric(0)) == 0 covers the no-jump case)
    jumps = sum(rexp(dN[i-1], rate = gamma)*exp(-alpha * runif(dN[i-1], max = dt)))
    j[i-1] = jumps
    # Euler step for the decay; the exact transition multiplies by exp(-alpha*dt)
    # (see the comments below)
    x[i] = x[i-1] - alpha*x[i-1]*dt + jumps
  }
  return(x)
}
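
A quick usage sketch (the parameter values are made up purely for illustration; the two factors are simulated independently):

set.seed(1)
x_plus  <- shot_noise(T = 10, n = 1000, alpha = 0.5, lambda = 2, gamma = 1, x0 = 0)
x_minus <- shot_noise(T = 10, n = 1000, alpha = 0.5, lambda = 2, gamma = 1, x0 = 0)
plot(seq(0, 10, length.out = 1001), x_plus, type = "l", xlab = "t", ylab = "x")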

This is the code I use to simulate $x_+$ and $x_-$, given the parameters. It seems correct to me, and the output looks the way I expect. However, the other examples of simulating compound Poisson processes that I have seen tend not to discretize. I discretize because this simulation will be used to test whether maximum likelihood estimation for this model is valid, and in that setting the observed data are discretized ($\epsilon_t$ at each time step).

Is the method I've used valid and correct? Is there a better way?

  • Your simulation seems nearly good, but I think that the "skeleton part" of the transition is $x(t_i) = e^{-\alpha \Delta} x(t_{i-1})$ where $\alpha$ (an inverse time) is $a$ in the equations, and $\Delta$ stands for the time interval dt in the code. Your question could be much simplified by using only one process $x(t)$ (as in the simulation) and possibly the notation $x(t) = \sum_{\{i:\, S_i \leq t\}} \eta_i$ where the $S_i$ are the Poisson arrivals. Note that $[x(t)]_t$ is continuous-time Markov, and $[x(t_i)]_i$ is discrete-time Markov, both with a gamma margin. – Yves Jul 06 '17 at 11:35
  • Sorry, I missed the impulse in my formula: $x(t) = \sum_{\{i:\, S_i \leq t\}} e^{-\alpha (t- S_i)} \eta_i$. – Yves Jul 06 '17 at 11:44
  • @Yves Thank you! I modified that line in the code to x[i] = x[i-1]*exp(-alpha*dt) + jumps. I should also have mentioned that in my case the jump times are latent, which is where the uniform r.v. appears (I am using an EM algorithm). – TotalFailure Jul 06 '17 at 14:17
  • Conditional on their number $n_{i-1} := N(t_i) - N(t_{i-1})$, the successive arrivals $S_k$ falling in the interval $(t_{i-1},\,t_i)$ are such that the r.v.s $S_k - t_{i-1}$ have the same distribution as the order statistics of a sample from the uniform distribution on $(0,\,\Delta)$. This is a classical property of the PP. Note that your code works when $n_{i-1} = 0$ because sum(numeric(0)) == 0. – Yves Jul 06 '17 at 14:55
  • I appreciate your exposition. – TotalFailure Jul 17 '17 at 22:40

1 Answer


Let us derive the simulation. It is enough to consider one of the two processes $x^{+}_t$ and $x^{-}_t$, which we denote by $x(t)$. It has the representation $$ x(t) = \sum_{\{k: \, T_k \leq t\}} e^{- \alpha [t - T_k ]} \eta_k $$ where the $T_k$ are the arrivals of the Poisson process. For $\Delta > 0$ we have $$ x(t + \Delta) = \sum_{\{k: \, T_k \leq t\}} e^{- \alpha [t + \Delta - T_k ]} \eta_k + \sum_{\{k: \, t < T_k \leq t + \Delta\}} e^{- \alpha [t + \Delta - T_k ]} \eta_k = e^{-\alpha \Delta} x(t) + \varepsilon . $$

In the second sum, denoted $\varepsilon$, the arrivals $T_k$ and the jumps $\eta_k$ are independent of the history $\{x(s);\, s \leq t \}$, hence so is $\varepsilon$. Therefore $x(t)$ is a Markov process. Moreover, conditional on the number of arrivals in $(t,\,t+ \Delta)$, say $J$, the r.v.s $T_k - t$ are distributed as the order statistics of a sample of size $J$ from the uniform distribution on $(0,\,\Delta)$ (a classical property of the Poisson process). Since the sum is unchanged by reordering its terms, the ordered uniforms can be replaced by plain i.i.d. uniforms, so

\begin{equation} \varepsilon = \sum_{j=1}^J e^{-\alpha U_j} \eta_j' \tag{1} \end{equation}

where the $\eta_j'$ are i.i.d. from the distribution of $\eta_k$ and the $U_j$ are uniform on $(0,\,\Delta)$. The sum is understood as $0$ when $J =0$.
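
In code, a single draw of $\varepsilon$ from (1) might look like this (a minimal sketch, assuming the exponential jump distribution used in the question, with Delta, alpha, lambda and gamma already defined):

J   <- rpois(1, lambda * Delta)   # number of arrivals in the interval
eps <- sum(rexp(J, rate = gamma) * exp(-alpha * runif(J, max = Delta)))
# a sum over zero terms is 0, so J == 0 needs no special case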

Therefore $x(t + \Delta)$ can be simulated conditional on $\{x(s);\, s \leq t \}$ as follows (a complete R sketch follows the steps):

  • Draw the number of jumps $J$ on $(t,\,t+ \Delta)$ from the Poisson distribution with mean $\lambda \Delta$.

  • Draw $J$ i.i.d. r.v.s $U_j$ from the uniform distribution on $(0,\,\Delta)$ and $J$ i.i.d. r.v.s $\eta_j'$ from the jump distribution.

  • Take $x(t + \Delta) = e^{-\alpha \Delta} x(t) + \varepsilon$ where $\varepsilon$ is obtained by (1).
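
Putting the three steps together gives a minimal R sketch (the function name is mine; the exponential jump distribution with rate gamma carries over from the question's code):

shot_noise_exact <- function(T, n, alpha, lambda, gamma, x0){
  dt = T/n
  x = numeric(n + 1)
  x[1] = x0
  for (i in 2:(n + 1)){
    J = rpois(1, lambda*dt)                              # step 1: jump count
    eps = sum(rexp(J, rate = gamma)*
              exp(-alpha * runif(J, max = dt)))          # step 2: decayed jump sizes
    x[i] = exp(-alpha*dt)*x[i-1] + eps                   # step 3: exact transition
  }
  return(x)
}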

If $t_i^\star$ is an increasing sequence of observation times, independent of the process $x(t)$, then we can simulate the sequence $x(t_i^\star)$ recursively, drawing each $x(t_i^\star)$ conditional on the history at $t = t_{i-1}^\star$ for $i > 0$, as in the code of the question. The sequence $x(t_i^\star)$ is a Markov chain, and it is time-homogeneous if $t_i^\star - t_{i-1}^\star$ is constant.

Interestingly, it can be shown that when the jump distribution is exponential with scale $1/ \gamma$, the stationary distribution of $x(t)$ is gamma with shape $\lambda / \alpha$ and scale $1 / \gamma$. This distribution can be used to draw the initial value $x(t_0^\star)$, giving an example of a Markov chain with a gamma margin, as in this question. A classical reference for shot noise is Section 5.6 of D. R. Cox and V. Isham, Point Processes, Chapman & Hall (1980).
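
For instance, with parameter values already set, one could draw the initial state from the stationary law and run a rough Monte Carlo check of the long-run mean (the check itself is only a sanity test, not part of the derivation):

x0 = rgamma(1, shape = lambda/alpha, rate = gamma)   # stationary draw
path = shot_noise_exact(100, 1e5, alpha, lambda, gamma, x0)
mean(path)                                           # should be near (lambda/alpha)/gamma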

Yves
  • Thanks @Yves. Seems like that's more or less what I did, with the suggestion you made earlier. Much appreciated. – TotalFailure Jul 17 '17 at 22:43
  • @ArvindShrivats Yes I just detailed the algorithm that you implemented, up to the autoregression coefficient. – Yves Jul 21 '17 at 10:17