
I am reading a paper by Rodriguez-Iturbe et al. (1986) and am confused by the derivation below. The model is a Poisson process with rate $\lambda$ in which each occurrence corresponds to a storm. Each storm has a random duration $L$ and a random intensity $X$, so the total rainfall intensity at time $t$ is the sum of the intensities of all storms active at that time. The durations are i.i.d., the intensities are i.i.d., and all of these random variables are mutually independent.

In the derivation below, I understand where the first integral comes from, and I understand the definition of $X_{t - u}(u)$, but I have no clue where the expectation formula comes from (and, by extension, where the variance and autocovariance formulas come from). Any explanation of any of it would be appreciated.

[Image: excerpt from the paper showing the integral expression for $Y(t)$, the distribution (2.4) of $X_{t-u}(u)$, and the mean, variance, and autocovariance formulas]

1 Answer


The integral expression for $Y(t)$ involves some stochastic calculus with respect to the Poisson process. However, it can be understood as a sum of random infinitesimal contributions and handled as such on intuitive grounds, provided that the assumptions are carefully stated and checked in the derivation.

Note that for the specific problem discussed here, an equivalent representation of $Y(t)$ relies on three infinite sequences of random variables: the Poisson event times $T_i$, the pulse depths $X_i$, and the pulse lengths $L_i$:

$$ Y(t) = \sum_i X_i \, 1_{\{T_i < t < T_i + L_i\}}. $$

This expression can be used to find the expectation $$ \mathbb{E}\{Y(t)\} = \sum_i \mathbb{E}[X_i]\, \mathbb{E}[1_{\{T_i < t < T_i + L_i\}}] = \mathbb{E}[X] \, \sum_i \mathbb{E}[1_{\{T_i < t < T_i + L_i\}}], $$ where the independence of $X_i$ and $[T_i,\,L_i]$ has been used in the first step. Since $T_i$ and $L_i$ are independent, each $L_i$ inside the indicator can be replaced by a single copy $L$ of the duration, independent of all the $T_i$, without changing the expectation. The sum is then the expected number of events $T_i$ falling in the random interval $(t - L, \, t)$, which, unsurprisingly, can be shown to be $\lambda \mu_L$.
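A quick Monte Carlo check of $\mathbb{E}\{Y(t)\} = \lambda\,\mu_X\,\mu_L$ using the sum representation above. This is only a sketch with illustrative parameter values: it assumes exponential depths and durations and truncates the past at a window $W \gg \mu_L$, since pulses born before $t - W$ almost never survive to time $t$.

```python
import random

random.seed(42)

lam = 2.0    # storm arrival rate (lambda) -- illustrative value
mu_X = 1.5   # mean pulse depth, X ~ Exp   -- illustrative value
mu_L = 0.8   # mean pulse duration, L ~ Exp -- illustrative value
t = 0.0      # evaluation time
W = 20.0     # look-back window, 25 mean durations: truncation error is negligible

def sample_Y():
    """One realization of Y(t) = sum_i X_i 1{T_i < t < T_i + L_i}."""
    y, s = 0.0, t - W
    while True:
        s += random.expovariate(lam)       # next Poisson event time T_i
        if s >= t:
            break
        L = random.expovariate(1.0 / mu_L)
        if s + L > t:                      # pulse still active at time t
            y += random.expovariate(1.0 / mu_X)
    return y

n_sim = 50_000
mean_Y = sum(sample_Y() for _ in range(n_sim)) / n_sim
print(mean_Y, lam * mu_X * mu_L)   # both ≈ 2.4
```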

Coming back to the integral expression, we can use the equivalent form $$ Y(t) = \int_{s = -\infty}^t X_s(t-s) \,\text{d}N_s. $$ Exchanging the expectation and the integral gives
$$ \mathbb{E}\{Y(t)\} = \int_{s =-\infty}^t \mathbb{E}\{X_{s}(t - s) \,\text{d}N_{s}\} = \int_{s =-\infty}^t \mathbb{E}\{X_{s}(t - s)\} \, \lambda\, \text{d}s. $$ To justify the second expression: the expectation in the integral factors into a product of expectations owing to the independence of the sequence $[X_i,\,L_i]$ of pulse variables and the sequence $T_i$ of event times; moreover $\mathbb{E}\{\text{d}N_{s}\} = \lambda\, \text{d}s$ is the expected increment of $N_s$ on the time interval $(s, \, s+\text{d}s)$. Now, using the (mixed-type) distribution of the r.v. $X_{s}(t - s)$ as provided in (2.4) and the independence of $X$ and $L$, we get $\mathbb{E}\{X_{s}(t - s)\} = \mathbb{E}[X] \,\mathcal{F}_L(t-s)$, where $\mathcal{F}_L(u) := \Pr\{L > u\}$ is the survival function of $L$: the pulse born at time $s$ still contributes at time $t$ only if its length exceeds $t - s$. The result then follows from $\int_{u=0}^\infty \mathcal{F}_L(u)\, \text{d}u = \mathbb{E}[L]$.
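A one-line numeric sanity check of the final identity $\int_0^\infty \mathcal{F}_L(u)\,\text{d}u = \mathbb{E}[L]$, where $\mathcal{F}_L(u) = \Pr\{L > u\}$ is the survival function, taking $L$ exponential with mean $\mu_L$ for concreteness:

```python
import math

mu_L = 0.8                              # illustrative mean duration
surv = lambda u: math.exp(-u / mu_L)    # survival function of Exp(1/mu_L)

# left Riemann sum of the survival function over (0, 20), i.e. 25 mean durations
du = 1e-4
integral = sum(surv(k * du) for k in range(200_000)) * du
print(integral)   # ≈ mu_L = 0.8
```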

The formula for the autocovariance can be derived by writing the product $Y(t)\,Y(t + \tau)$ as a double integral and then taking the expectation. We can use the rule $$ \mathbb{E}[\text{d}N_s\, \text{d}N_{s'}] = \begin{cases} \lambda \, \text{d}s & s= s',\\ 0 & s \neq s', \end{cases} $$ which relates to the independent-increments property of $N_t$.

EDIT: As highlighted by the comments from @lmnop, the final part of my answer concerning the autocovariance is misleading. See Section 9.6 of D.R. Cox and H.D. Miller (1965), *The Theory of Stochastic Processes*, on the linear filtering of a point process, including the Poisson case. The simplest case is the linear filtering of a Poisson process

$$ Y(t) = \int_{s= -\infty}^{t} g(t- s) \, \text{d}N_s $$

where $g(t)$ is a deterministic function. The covariance of $Y(t)$ and $Y(t+\tau)$ can be obtained by writing the product as a double integral, using the bilinearity of the covariance and

$$ \text{Cov}\{\text{d}N_s,\, \text{d}N_{s'}\} = \begin{cases} \lambda \, \text{d}s & s= s',\\ 0 & s \neq s'. \end{cases} $$

We get

$$ \text{Cov}\{Y(t),\, Y(t+ \tau)\} = \lambda \, \int_{s= -\infty}^t g(t -s) g(t + \tau -s) \,\text{d}s = \lambda \int_{0}^\infty g(u) g(u + \tau)\,\text{d}u $$
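To see this deterministic formula in action, here is a Monte Carlo sketch (illustrative parameters) with the exponential kernel $g(u) = e^{-u}$ for $u \ge 0$, for which the right-hand side is $\lambda\, e^{-\tau} \int_0^\infty e^{-2u}\,\text{d}u = \lambda\, e^{-\tau}/2$:

```python
import random, math

random.seed(0)
lam, tau, W = 2.0, 0.5, 20.0     # rate, lag, truncation window (illustrative)
g = lambda u: math.exp(-u)       # deterministic filter g(u), u >= 0

def sample_pair():
    """Joint sample of (Y(0), Y(tau)) from one Poisson realization on (-W, tau)."""
    y0 = ytau = 0.0
    s = -W
    while True:
        s += random.expovariate(lam)
        if s >= tau:
            break
        if s < 0:
            y0 += g(-s)          # event at s < 0 contributes g(0 - s) to Y(0)
        ytau += g(tau - s)       # every event before tau contributes to Y(tau)
    return y0, ytau

n = 50_000
pairs = [sample_pair() for _ in range(n)]
m0 = sum(p for p, _ in pairs) / n
m1 = sum(q for _, q in pairs) / n
cov = sum((p - m0) * (q - m1) for p, q in pairs) / n
print(cov, lam * math.exp(-tau) / 2)   # both ≈ 0.607
```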

Under some assumptions, which hold in the application paper cited in the OP, this extends to the case where $g(t)$ is a stochastic process related to the Poisson process $N_t$, but we must then use an expectation inside the integral, as in

$$ \text{Cov}\{Y(t),\, Y(t+ \tau)\} = \lambda \int_{s= -\infty}^t \mathbb{E}\{g(t-s) g(t+\tau-s)\}\,\text{d}s. $$
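For the rectangular pulses of the rainfall model, $g(u) = X\,1_{\{0 < u < L\}}$, so $\mathbb{E}\{g(u)\,g(u+\tau)\} = \mathbb{E}[X^2]\,\Pr\{L > u + \tau\}$ and, for exponential $L$ with rate $\eta = 1/\mu_L$, the last formula gives $\text{Cov} = \lambda\,\mathbb{E}[X^2]\,\mu_L\, e^{-\eta\tau}$. A simulation sketch under those (illustrative) assumptions, with exponential depths and durations:

```python
import random, math

random.seed(1)
lam, mu_X, mu_L = 2.0, 1.5, 0.8    # rate, mean depth, mean duration (illustrative)
tau, W = 0.5, 20.0                 # lag and truncation window
eta = 1.0 / mu_L

def sample_pair():
    """Joint sample of (Y(0), Y(tau)) for the rectangular-pulse model."""
    y0 = ytau = 0.0
    s = -W
    while True:
        s += random.expovariate(lam)   # next pulse birth time
        if s >= tau:
            break
        X = random.expovariate(1.0 / mu_X)
        L = random.expovariate(eta)
        if s < 0.0 < s + L:            # pulse covers time 0
            y0 += X
        if s < tau < s + L:            # pulse covers time tau (same X: the
            ytau += X                  #   shared pulses create the covariance)
    return y0, ytau

n = 50_000
pairs = [sample_pair() for _ in range(n)]
m0 = sum(p for p, _ in pairs) / n
m1 = sum(q for _, q in pairs) / n
cov = sum((p - m0) * (q - m1) for p, q in pairs) / n

EX2 = 2.0 * mu_X ** 2              # E[X^2] for exponential X
theory = lam * EX2 * mu_L * math.exp(-eta * tau)
print(cov, theory)                 # theory ≈ 3.85
```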

See the book for a derivation based on infinitesimal increments. I guess that some more recent references could also be found, in relation to the formal rule $\text{d}N_s\, \text{d}N_s = \text{d}N_s$. Moreover, the autocovariance can be found using discrete sums, as done above for the expectation.

Yves
  • This is wonderfully helpful, thank you so much! –  Feb 18 '20 at 19:43
  • Thank you. Hope this will help at reading this inspiring article. – Yves Feb 19 '20 at 06:26
  • After reading through this more thoroughly, I have a follow-up question. It appears that the covariance formula they cite is actually $E[Y(t) Y(t + \tau)]$, not $E[Y(t)Y(t + \tau)] - E[Y(t)]E[Y(t + \tau)]$. This assumes that Y(t) has mean zero, which it doesn't. Am I missing something? –  Feb 24 '20 at 22:45
  • The covariance in (2.6) cannot be $E[Y(t) Y(t+ \tau)]$; it tends to $0$ for large $\tau$, while the expectation of the product tends to $E[Y(t)]^2$, because $Y(t)$ and $Y(t+ \tau)$ are uncorrelated for large $\tau$. – Yves Feb 25 '20 at 12:29
  • That's true, but I feel like the formula should be $$\text{Cov}(Y(t),\, Y(t + \tau)) = \int_{s = -\infty}^{t} \int_{r = -\infty}^{t + \tau} E[X_s(t - s) X_r(t + \tau - r) dN_s dN_r] - E[Y(t)]^2$$ whereas they use (I think) that $$\text{Cov}(Y(t),\, Y(t + \tau)) = \int_{s = -\infty}^{t} \int_{r = -\infty}^{t + \tau} E[X_s(t - s) X_r(t + \tau - r) dN_s dN_r]$$ –  Feb 27 '20 at 22:24