
There is a large body of literature on econometric models such as ARIMA, ARIMAX and VAR. Yet, to the best of my knowledge, practically nobody makes use of them in Quantitative Finance. Yes, there is a paper here and there, and sometimes you find an example where stock prices are used for illustrative purposes, but this is far from the mainstream.

My question
Is there a good reason for that? Is it just because of tradition and different schools of thought or is there a good technical explanation?

(By the way, I was pleased to find an arima tag here... but this is again a case in point: only 8 out of nearly 7,000 questions (~0.1%!) use this tag! ...ok, make that 9 now ;-)

vonjd
  • You do have a lot of reputation and badges, so I suppose you are well inside the quantitative finance world: do you use ARIMA & co. models in your practice? ;) – simmy May 11 '16 at 06:28
  • @simmy: Fair question, see my answer: http://quant.stackexchange.com/a/25964/12 – vonjd May 12 '16 at 05:53
  • I'm not in this field. But it's my feeling that econometric methods might be based on too idealised (and hence unrealistic) assumptions. Modelling is generally more expensive and inaccurate than fitting when the system is super complicated. – Vim May 13 '16 at 04:05
  • @Vim: I don't think that it is because of simplistic assumptions as such. Look at Black Scholes, still one of the cornerstones of Quant Finance: It assumes a normal distribution for stock returns which is also a huge (and dangerous) oversimplification. – vonjd May 13 '16 at 20:17
  • @vonjd and don't forget it is based on returns being i.i.d., no skew, and IV being stable. But hey, why do we need reality when we have models. – drobertson Sep 20 '16 at 18:49

5 Answers


It's an interesting question.

I particularly agree with the $\mathbb{Q}-\mathbb{P}$ dichotomy mentioned by many.

I would add to the other answers that, come to think of it, the Geometric Brownian Motion postulated by Black-Scholes can be interpreted as an AR(1) process on the logarithm of the stock price once you discretise the SDE of which it is a solution, which is exactly what you do when running Monte Carlo simulations (the same holds for the Ornstein-Uhlenbeck process, as explained here and noted by @Richard).
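To make the discretisation point concrete, here is a minimal Python sketch (all parameter values are made up for illustration): iterating the exact recursion for the log price of a GBM, which is an AR(1)-type equation (a random walk with drift) on $\log S$, precisely what a Monte Carlo engine does.

```python
import numpy as np

# Discretising the Black-Scholes SDE dS = mu*S dt + sigma*S dW gives, for
# the log price x = log(S), the exact recursion
#   x[t+1] = x[t] + (mu - 0.5*sigma^2)*dt + sigma*sqrt(dt)*eps[t],
# i.e. an AR(1)-type equation (random walk with drift) on the log price.
# This is exactly what a Monte Carlo simulation iterates.

rng = np.random.default_rng(0)
mu, sigma, dt, n_steps, n_paths = 0.05, 0.2, 1 / 252, 252, 10_000

x = np.full(n_paths, np.log(100.0))   # log of the initial price S0 = 100
for _ in range(n_steps):
    eps = rng.standard_normal(n_paths)
    x += (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * eps

S_T = np.exp(x)
print(round(S_T.mean(), 2))           # close to S0 * exp(mu) ≈ 105.13
```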

Actually, when taking the continuous-time limit, many more econometric models can be shown to correspond to stochastic processes frequently used by $\Bbb{Q}$ quants (see this paper for instance, as well as the comment by @Kiwiakos below, discussed here with interesting references).

So why do we, at least on the sell-side, tend to favour (jump-)diffusion models over econometric models, while the latter have the advantage that volatility/variance is an observable quantity and not a hidden variable, making them easier to calibrate to historical time series, that is, to information observed under $\mathbb{P}$?

Well... essentially because derivatives pricing happens under a risk-neutral measure $\mathbb{Q}$ and not the physical measure $\mathbb{P}$.

When working under $\mathbb{Q}$, we do relative valuation. Voluntarily over-simplifying the situation, we appeal to the absence of arbitrage opportunity to claim that any financial instrument can be priced solely by looking at the prices of other securities (typically listed options) that can be combined to perfectly replicate the former instrument's behaviour (or used as a perfect hedge, which is equivalent).

Therefore, it is not important to have a model which can be easily calibrated to historical time series, hence under $\mathbb{P}$ (which is the key feature of econometric models, IMHO). It is essential, however, to have a model that leads to nice closed-form formulas for the prices of simple instruments that can be used as a relative pricing basis under $\mathbb{Q}$ (which is the key feature of most jump-diffusion models used by quants, IMHO).

Consider, for instance, the GARCH pricing model proposed by Duan. True, it is easily calibrated to historical time series, but:

  1. Is the past really useful to understand what will happen in the future, which is the crux of derivatives pricing? Not necessarily, especially since we are in a relative valuation framework: it is the evolution of the market prices at which we can trade the elementary replication blocks that matters, not the historical behaviour of the underlying asset.

  2. You need Monte Carlo simulations to compute European option prices under this model: think about how much computational resource would be needed to calibrate such a model to 1,000 vanilla prices several times a day in a live production environment (especially compared to something like Heston, where Fast Fourier Transform techniques can be implemented).
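To illustrate the computational point, here is a rough Python sketch (not Duan's exact specification; parameter values are made up) of Monte Carlo pricing of a single European call under a risk-neutralised GARCH(1,1). Every single price needs full simulated variance paths, and a calibration would repeat this for roughly a thousand quoted options, several times a day.

```python
import numpy as np

# Rough sketch: Monte Carlo pricing of one European call under a
# risk-neutralised GARCH(1,1). All parameter values are illustrative.

rng = np.random.default_rng(1)
S0, K, T_days = 100.0, 100.0, 63              # roughly a 3-month option
r = 0.02 / 252                                # daily risk-free rate
omega, alpha, beta = 1e-6, 0.08, 0.90         # GARCH(1,1) parameters
n_paths = 50_000

h = np.full(n_paths, omega / (1 - alpha - beta))  # start at long-run variance
x = np.zeros(n_paths)                             # cumulative log return
for _ in range(T_days):
    eps = rng.standard_normal(n_paths)
    x += r - 0.5 * h + np.sqrt(h) * eps           # risk-neutral drift
    h = omega + alpha * h * eps**2 + beta * h     # GARCH variance recursion

payoff = np.maximum(S0 * np.exp(x) - K, 0.0)
price = np.exp(-r * T_days) * payoff.mean()
print(round(price, 2))
```

There is no closed form to fall back on here: repricing after every parameter update inside a calibration loop multiplies this simulation cost by the number of optimiser iterations.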

Summarising (again, with a voluntary over-simplification):

  • Econometric models: easily calibrated under $\mathbb{P}$ (discrete time + observable volatility/variance), yet need simulation methods à la Monte Carlo to be able to compute option prices, even for the most basic types of options.

  • (Jump-)Diffusion models: painful to calibrate to time series (continuous time + hidden Markov models) but admittedly lead to (semi-)closed form formulas for many benchmark instruments (or at least the popular models are the ones that do...), making them easy to calibrate/use under $\mathbb{Q}$.

Quantuple
  • The discretization is a good point. You could add that the Ornstein-Uhlenbeck process in continuous time is an AR(1) in discrete time. There is a question on QSE about this. – Richi Wa May 11 '16 at 11:46
  • Thanks Richard, I've added OU as an additional example as you suggested. – Quantuple May 11 '16 at 12:27
  • The limiting behaviour of discrete time processes and their continuous time counterparts is a very tricky subject. As Nelson showed in his famous 1990 paper, the limits are not unique and diffusions spring up from nowhere. For example a GARCH(1,1), which has one Gaussian source of uncertainty, can converge to a stochastic volatility continuous-time process driven by two Brownian motions. One of my favourite interview topics. – Kiwiakos May 11 '16 at 19:19
  • Thanks for this comment which made me delve back into this interesting topic. – Quantuple May 12 '16 at 08:56
  • Funny that you edited it 2 hours ago since I was rereading some of the answers just yesterday evening ;-) – vonjd Mar 03 '17 at 15:23
  • Would you mind replacing "perfectly replicate" with "perfectly statically replicate" above? I think that one shouldn't expect a "perfect dynamic replication" when the actual price process is an Arithmetic Brownian Motion while the model thinks it is GBM. – zer0hedge Jun 24 '17 at 14:39
  • @zer0hedge I don't understand your point, the BS PDE is obtained through a dynamic replication argument. Of course this is purely theoretical, but it is what guarantees the existence of a unique pricing measure $\Bbb{Q}$ in that case. – Quantuple Aug 23 '17 at 17:13
  • @Quantuple Only static replication can be perfect as we all know. So we say either "perfectly statically replicate" or "approximately dynamically replicate". I believe you should state that and then see whether your logic works. – zer0hedge Aug 24 '17 at 07:06
  • @zer0hedge - Ah I see, so you're basically unhappy with the "voluntarily over-simplifying" part although it's explicitly written. This level of detail is irrelevant for the question at hand but let me make this clear for you. Static replication => 1 price. Dynamic replication in a complete market => 1 price. Dynamic replication in an incomplete market => not 1 price. Of course when I talk about dynamic replication I'm in the theoretical realm of a market with no frictions, continuous trading, spot prices evolve as per your model, AOA etc. which is at the heart of the derivation of $\Bbb{Q}$. – Quantuple Aug 24 '17 at 07:31
  • @Quantuple When you say "voluntarily over-simplifying" it appears that there is another explanation, which is "complex but correct". In reality, there is no such explanation. The Q-world is pure theory; there is no logical justification why, for example, the price of some derivative calculated in the Q-world is good for trading, i.e. it is not the 'real' price in any sense. The same is even more true for greeks etc. The Q-world, in my opinion, just doesn't want to see the inconvenient reality of the P-world, so no need to analyse the historical data, to try to forecast the real future, etc. – zer0hedge Aug 24 '17 at 10:37
  • Congratulations, you've discovered that while P is the only real measure it remains unknown while on the contrary Q doesn't exist per se but can be pinned down mathematically, by making some assumptions. You should maybe write a book about it since you're the first one. You'll probably then realise that "all models are wrong, some models are useful". – Quantuple Aug 24 '17 at 11:57
  • @Quantuple We are back at square one. Suppose the real (unknown) model is arithmetic Brownian motion and you are pricing an option using geometric Brownian motion. Is this price useful? I don't think so. No one is trying to pin down the Q-world to reality, unfortunately, and that is the difference between the P and Q worlds. – zer0hedge Aug 24 '17 at 15:18
  • GBM is not the analog of log-AR(1). It has exploding variance and isn't mean reverting. – Konstantin Oct 21 '21 at 07:57

I think you need to differentiate between Q-quants and P-quants. The former might not use econometrics, but P-quants use it a lot.

Kiwiakos

Traditional econometric (time series) models are of little or no value in forecasting market prices for purposes of "making money", i.e., generating excess return over a benchmark in an asset management setting. They have some limited value in strategic and tactical asset allocation.

The ineffectiveness of time-series modeling in asset management stems primarily from the non-stationary and non-linear nature of financial markets. There are regime-switching models that can partially address these phenomena, but in my experience they are too simplistic to be of any lasting value. Furthermore, any predictive power of such transparent and easily replicated models would be fleeting in largely efficient markets. Even without such complications, statistically significant estimates of risk premia are virtually impossible -- even if such stable parameters existed.

For example, consider as simple a task as estimating the expected return of a hypothetical asset whose price follows, say, a geometric Brownian motion -- so that returns over non-overlapping intervals are independent. We have the true return distribution $N(\mu \delta t, \sigma^2 \delta t),$ where $\mu$ and $\sigma$ are the annualized expected return and volatility, respectively. If we observe period returns $r_1,r_2 \ldots, r_N$, sampled over intervals of length $\delta t$, then the unbiased or MLE estimators $\hat{\mu}$ and $\hat{\sigma}$ have sampling distributions

$$\hat{\mu} \delta t \sim N\left(\mu \delta t, \frac{\sigma^2 \delta t}{N}\right)\\ \frac{(N-1)\hat{\sigma}^2 \delta t}{\sigma^2 \delta t} \sim \chi^2(N-1).$$

The relative error in the estimate for the expected return is

$$RE = \frac{\sigma \sqrt{\delta t/N}}{\mu \delta t}= \frac{\sigma}{\mu \sqrt{T}},$$

where $T = N \delta t$ is the total length of the sampling period. For fixed $T$, say 3 years, the relative error cannot be improved by increasing the sampling frequency, regardless of how many additional samples are taken. In other words, in order to improve the accuracy of the estimated return by a factor of 5, we must increase the sampling period by a factor of 25, i.e. to 75 years -- clearly problematic.
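A short simulation illustrating the formula above (drift and volatility values are made up): the standard error of the annualised drift estimate is $\sigma/\sqrt{T}$ whether we sample daily or minutely over the same three years.

```python
import numpy as np

# The standard error of the annualised drift estimate is sigma/sqrt(T),
# independent of the sampling frequency: compare daily vs. minute bars
# over the same T = 3 years of simulated i.i.d. returns.

rng = np.random.default_rng(2)
mu, sigma, T = 0.08, 0.20, 3.0                       # annualised drift and vol

for n_per_year in (252, 252 * 390):                  # daily bars vs minute bars
    dt = 1.0 / n_per_year
    n = int(T * n_per_year)                          # total number of samples
    ret = rng.normal(mu * dt, sigma * np.sqrt(dt), n)  # i.i.d. period returns
    mu_hat = ret.mean() / dt                         # annualised estimate
    print(f"{n:>7d} samples: mu_hat = {mu_hat:+.3f}")

se = sigma / np.sqrt(T)
print(f"theoretical standard error in both cases: {se:.3f}")  # ~0.115, > mu!
```

Both estimates scatter around the true drift of 0.08 with the same standard error of about 0.115, larger than the drift itself, so neither is usable; only a longer $T$ helps.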

RRL
  • I think this is the best answer. It's pretty obvious why time-series models are not used in derivatives pricing. It's not so obvious why they aren't used much more in asset management. – Chris Taylor May 12 '16 at 13:03
  • I could agree with the first part of this answer if "traditional" were replaced with "linear". But the example which follows seems wrong. It is a well-known fact that asset prices do not follow Geometric Brownian Motion. Furthermore, it is also well-known that ARMA is not suitable for modelling of a GBM process. – zer0hedge Jun 24 '17 at 14:30
  • I am not asserting that real asset prices do or should follow GBM. The point is that even if a price series were truly a sample from such a process, the expected return is virtually impossible to estimate with reasonable accuracy unless the history is very long. This conflicts with the fact that price behavior is not likely to be stationary. This is referred to as the "Record Problem", originally described by Merton, I believe. – RRL Jun 24 '17 at 17:09

My answer is very much in the spirit of Kiwiakos' answer.

E.g. in this paper (of which I am a coauthor) we use VMA (vector moving average) models in the multivariate case and AR models in the univariate case to calculate the proper scaling of volatility, or of its contributions, when there are (cross-)autocorrelations.

This happens in the P world due to asynchronous markets. It also happens if you value a stock in another currency, as stocks have a daily close price and currencies don't (and sometimes you don't get the FX quote of the currency at the same timestamp as the close of the stock).

I would add that if we relate Q to hedging (risk-neutral pricing is only possible if I can hedge), then we have a bridge to the P world: similar problems arise, and solutions from the P-guys can be helpful.

Richi Wa

Having thought about this, I think the following reason is also important and hasn't been mentioned so far:

When you look at the inner workings of this whole class of econometric models, it all boils down to the following: it is possible (under some reasonable assumptions) to express any $MA(q)$ model as an $AR(\infty)$ model (and vice versa for expressing $AR(p)$ models as $MA(\infty)$ models). So the $ARMA(0,\infty)$ and $ARMA(\infty,0)$ representations are equivalent. (For the exact mathematical details, Wold's representation theorem is relevant.)

What this means is that in practice you can increase the number of lags in an $AR(p)$ model until any moving-average component has disappeared from the autocorrelation function (in practice these components usually decay rapidly, and by the way this has the additional advantage that you can use least squares instead of ML estimation). So all of these models are basically just linear combinations of a certain number of prior data points in the respective time series (in the univariate case!). Said another way: these models depend on a stable autocorrelation structure.
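A minimal sketch of the MA-to-AR($\infty$) inversion for an invertible MA(1) (the value of $\theta$ is arbitrary): the implied AR weights decay geometrically, which is why a finite $AR(p)$ with modest $p$ can absorb the moving-average component.

```python
import numpy as np

# For an invertible MA(1), x_t = eps_t + theta * eps_{t-1} with |theta| < 1,
# the AR(infinity) representation is x_t = sum_j pi_j * x_{t-j} + eps_t
# with pi_j = -(-theta)^j, so the AR weights decay geometrically.

theta = 0.6                                     # arbitrary invertible MA(1)
pi = np.array([-((-theta) ** j) for j in range(1, 11)])
print(np.round(pi, 4))
# first weights 0.6, -0.36, 0.216, ...: negligible after ~10 lags
```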

Now, one of the well-known stylized facts, at least in the equity space, is that there is next to no autocorrelation in returns! So all of these models are bound to fail when you try to use them for forecasting returns (even if the assumption of stationarity were met. NB: going from prices to returns is basically the $I$ part - or the differencing parameter $d$ - in $ARIMA(p,d,q)$).

Another stylized fact is of course that there is autocorrelation in the volatility structure, which gives us the whole class of $ARCH(q)$ models. And these are far more successful and used a lot by quants as we all know!

vonjd
  • I believe it should be emphasized that ARCH(q), GARCH etc are econometric models too. – zer0hedge Jun 24 '17 at 13:34
  • @zer0hedge: Please read my last paragraph: "Another stylized fact is of course that there is autocorrelation in the volatility structure, which gives us the whole class of ARCH(q) models. And these are far more successful and used a lot by quants as we all know!" – vonjd Jun 25 '17 at 07:31