I was introduced to the issue of long memory at Gresham Investment Management, where Benoit Mandelbrot had advised the principals; this was around a decade ago. It took me that decade to understand concretely what long memory in option prices means, and to produce closed-form stochastic volatility models from it that extend Heston/Bates and the other affine models. Research papers on the topic go back to 2000-2005, but a concrete closed-form model that actually outperforms affine-type models with jumps in price and volatility was new. We are now looking into whether these models can consistently and universally make profits from mispricings by the market makers in liquid options. This is not a hard problem if one wants to produce a trading strategy for a single option by hammering together something reasonable. It is a hard problem if one wants a system that can profit against the market makers' mispricing universally.
To this end, I have built a system that generates tradeable strategies with a few tunable parameters. The idea behind the strategies is to combine a VOLATILITY/VOLATILITY SURFACE PREDICTION with an ARBITRAGE OF MARKET MAKER MISPRICING.
One can use arbitrarily sophisticated methods for volatility and volatility surface prediction. I use an ARFIMA model to predict the log(return^2) volatility of the underlying and a LASSO prediction of the multivariate time series of the 9 parameters of the Zulf SV model. The LASSO model does not croak when the lookback period is small, which is why I use it rather than a basic non-regularized linear model; empirically I found that an 80/20 mix of the historical average of the parameters and the LASSO prediction produces better forecasts. This part is not particularly optimized, but it is not the central problem.
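As a sketch of the parameter-forecast step, the following assumes a (days x 9) history of calibrated Zulf SV parameters and blends the historical average with a LASSO one-step-ahead forecast at the 80/20 weights mentioned above. The lag structure, the penalty `alpha`, and the hand-rolled coordinate-descent LASSO are my illustrative assumptions, not taken from the attached code.

```python
import numpy as np

def lasso_fit(X, y, alpha=0.01, n_iter=200):
    """Minimal coordinate-descent LASSO; centering the data handles the intercept."""
    n, p = X.shape
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    beta = np.zeros(p)
    z = (Xc ** 2).sum(axis=0) / n               # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            if z[j] == 0.0:                     # constant column: leave coefficient at 0
                continue
            r_j = yc - Xc @ beta + Xc[:, j] * beta[j]   # partial residual for coordinate j
            rho = Xc[:, j] @ r_j / n
            beta[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z[j]  # soft-threshold
    return beta, y_mean - x_mean @ beta

def blended_forecast(param_hist, n_lags=5, alpha=0.01, w_hist=0.8):
    """80/20 blend of historical average and LASSO forecast of each SV parameter."""
    T, k = param_hist.shape
    hist_avg = param_hist.mean(axis=0)
    # predictors: the previous n_lags days of all k parameters, flattened
    X = np.array([param_hist[t - n_lags:t].ravel() for t in range(n_lags, T)])
    x_next = param_hist[T - n_lags:T].ravel()
    lasso_pred = np.empty(k)
    for j in range(k):
        beta, b0 = lasso_fit(X, param_hist[n_lags:, j], alpha=alpha)
        lasso_pred[j] = x_next @ beta + b0
    return w_hist * hist_avg + (1 - w_hist) * lasso_pred
```

The regularization is what keeps this from croaking on short lookbacks: with only a few dozen rows and 45 lagged predictors, an unpenalized regression is hopelessly ill-conditioned, while the L1 penalty zeroes most coefficients out.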
More important is the problem of understanding what the PRICE THRESHOLD of mispricing should be. Here the results are far more nontrivial: we find that the ERROR OF FITTING VOLATILITY SURFACES is the key ingredient of the price threshold we should use. The objective function used to fit the volatility surfaces is sqrt(sum((ModelCallPx - MarketCallPx)^2)/N), the root-mean-square dollar mispricing per surface. The best surface fit attains a minimal error of this type, which we then use to set the mispricing level. This mispricing level seems to be central to systematically profiting from the option markets despite a big bid-ask spread!
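Concretely, the fit error and the threshold derived from it can be sketched as below. The function names and the `const` multiplier (the tunable constant, roughly in [1, 5]) are illustrative assumptions; the actual objective lives in the attached calibration code.

```python
import numpy as np

def surface_fit_error(model_px, market_px):
    """Root-mean-square dollar error of the surface fit: sqrt(sum((model-market)^2)/N)."""
    model_px, market_px = np.asarray(model_px), np.asarray(market_px)
    return np.sqrt(np.mean((model_px - market_px) ** 2))

def mispriced(model_px, market_px, const=2.0):
    """Flag options whose |model - market| gap exceeds const * fit error."""
    err = surface_fit_error(model_px, market_px)
    gap = np.abs(np.asarray(model_px) - np.asarray(market_px))
    return gap > const * err
```

The point of scaling by the fit error rather than a fixed dollar amount is that the threshold adapts per name and per day: an option is only called mispriced when its deviation is large relative to how well the best surface fit explains that day's quotes.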
Let me repeat this so it is clear. The POWER of the ZULF STOCHASTIC VOLATILITY model is not just that it fits call prices better than other popular models (like Bates, which improves on Heston by adding jumps in price and volatility) but that, for some of the liquid options in 2015, the ERROR OF FIT can be used to define nontrivial MISPRICING THRESHOLDS with systematically good performance despite wide bid-ask spreads. Let's take a look at some graphs without any stop losses to appreciate the importance of this result.
Ok, these graphs may not look pretty, but they have no stop losses or other artificial smoothing. I know from a great deal of experience that it is not hard to produce strategies that are winners 65% of the time. These are strategies with 80-90% winners, which is not as easy.
Both of these are produced by tuning three parameters. First is the mispricing threshold, defined as CONST*ErrorOfVolSurfaceFit for that day's calibration; this is what actually produces the reasonable backtest results above. Second is a pair of VOLATILITY DIRECTION thresholds, which essentially ensure that one does not short an option when volatility is expected to rise. This is also crucial: we only want to take trades consistent with the volatility prediction. Profitability of a volatility surface prediction strategy of this kind depends on not doing 'full delta hedging', which means the strategies are not viable if they short options when volatility is expected to rise.
Finally, here is a harder example: EEM, where the results are not as clean, but you can see that this problem of producing universal results is tractable, at least for 2015.
Of course, if I put in a stop loss these would look much better, but that is not so useful because we want to understand what is driving the profits; we want universal results over all liquid options, because we believe the Zulf SV model is the best option pricing model in the world and much better than whatever the option market makers are doing. My explanation for why this works is precisely that the ERROR OF FIT of the volatility surface is used as the threshold (modulo a constant in [1,5], say). In fact, you can play around with fixed constant thresholds, which is what I did until I realized the above, and you will find the results are much less steady. On the problematic side, the universality I would like to see does not come without tuning the constants for each stock.
The attached code shows you the details of what produces these results: the valuation code is in cmlf.pyx and the details of the arbitrage strategy are in predVSParamsStrategy.R.
I would like to propose to the world that the formalization of STOCHASTIC VOLATILITY MODELS is a SOLVED PROBLEM. The hard, nontrivial problem is the optimal implementation of stochastic volatility models in a way that enforces arbitrage-freeness. In other words, thus far this problem has been left to the domain of PRACTICE, while in fact it seems to me as nontrivial a theoretical issue as the SV models themselves. Who knows? Maybe this is the equivalent of Google's search engine problem ….
The attachments include some of the code (for which you will need actual historical options data), as well as files marked *-ret.txt, which are the extracted returns, and *-vsps.txt, which is the detailed output of the volatility surface prediction strategy that used PRECALIBRATED ZULF MODEL parameters, all contained in the file All-calibrations.txt. You can examine takeArbitragePositions and the other functions in the R code to verify that these are serious strategies.