## BRUTE-FORCE LONG-MEMORY MODELS USING MITTAG-LEFFLER AUTOCORRELATIONS

The fits of the model $(a,b,c)\rightarrow E_{a,b}(-ct^a)$ to volatility autocorrelations are extremely good, and since wavelet-thresholded ARFIMA models seem to give decent volatility predictions, I thought I'd try to brute-force my way using only autocorrelations and an autoregressive model with as many coefficients as the training period's historical data allows.  The problem of producing autoregressive models from a given covariance matrix was addressed by Durbin and Levinson.  Given $z_n = (z_1,\dots,z_n)$ and the autocorrelation matrix $\Gamma_n = (\gamma_{i-j})_{i,j=1}^n$, you solve $\Gamma_n \phi = (\gamma_1,\dots,\gamma_n)$ and then use $\phi = (\phi_1,\dots,\phi_n)$ to predict $\hat z_{n+1} = \phi_1 z_n + \cdots + \phi_n z_1$ (see McLeod, Yu and Krougly for fast implementations in the ltsa package).  There is no reason to be fancy here, since we have a solid model for the autocorrelations, and we focus first on checking whether the large autoregressive model predicts volatility any better than a wavelet-thresholded ARFIMA.  Of course, I coded the Mittag-Leffler function in Python, so I need 'praw' and 'rPython' to do this in R …
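In Python this is just a Toeplitz solve: `scipy.linalg.solve_toeplitz` implements the Levinson recursion, and the Mittag-Leffler function can be sketched by its power series (a quick implementation, adequate for the moderate arguments that arise from autocorrelation lags, not a production-grade one):

```python
import numpy as np
from math import gamma
from scipy.linalg import solve_toeplitz

def mittag_leffler(z, a, b, n_terms=100):
    """Two-parameter Mittag-Leffler function E_{a,b}(z) via its power
    series sum_k z^k / Gamma(a k + b); unstable for large |z|."""
    return sum(z ** k / gamma(a * k + b) for k in range(n_terms))

def ar_coefficients(acf):
    """Given autocorrelations (gamma_0, ..., gamma_n), solve the
    Yule-Walker system Gamma_n phi = (gamma_1, ..., gamma_n) with the
    Levinson recursion in O(n^2), exploiting the Toeplitz structure."""
    n = len(acf) - 1
    return solve_toeplitz(acf[:n], acf[1:])

# Sanity check: an exponential acf (the a = b = 1 Mittag-Leffler case,
# since E_{1,1}(-ct) = exp(-ct)) is exactly an AR(1), so the solution
# should be phi = (r, 0, ..., 0).
acf = 0.5 ** np.arange(6)
phi = ar_coefficients(acf)
```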

The algorithm, given a training-period log-vol series, is: fit the autocorrelation model parameters, create a large Toeplitz matrix from the model autocorrelations, get it into R and solve the system using the McLeod-Yu-Krougly algorithms, and predict using the lagged coefficients.
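Staying in Python for illustration (the actual pipeline goes through R and ltsa), a hedged end-to-end sketch of those steps, with scipy's `curve_fit` standing in for whichever fitting routine is actually used:

```python
import numpy as np
from math import gamma
from scipy.linalg import solve_toeplitz
from scipy.optimize import curve_fit

def E_ab(z, a, b, terms=100):
    # Mittag-Leffler power series; fine for the moderate |z| used here
    return sum(z ** k / gamma(a * k + b) for k in range(terms))

def ml_acf(t, a, b, c):
    # model autocorrelation rho(t) = E_{a,b}(-c t^a)
    return np.array([E_ab(-c * tt ** a, a, b) for tt in np.atleast_1d(t)])

def fit_ml_params(emp_acf, p0=(0.5, 1.0, 0.5)):
    """Step 1: fit (a, b, c) to empirical autocorrelations at lags 1..m.
    The bounds keep the gamma-function arguments positive."""
    lags = np.arange(1, len(emp_acf) + 1)
    params, _ = curve_fit(ml_acf, lags, emp_acf, p0=p0,
                          bounds=([0.1, 0.5, 0.01], [2.0, 2.0, 5.0]))
    return params

def predict_next(logvol, a, b, c):
    """Steps 2-4: build the Toeplitz autocorrelation system, solve it by
    Levinson recursion, and form the one-step-ahead prediction."""
    n = len(logvol) - 1                      # use every available lag
    acf = np.concatenate([[1.0], ml_acf(np.arange(1, n + 1), a, b, c)])
    phi = solve_toeplitz(acf[:n], acf[1:])   # Gamma_n phi = gamma
    return float(phi @ logvol[::-1][:n])     # phi_1 weights the newest obs
```

In the $a=b=1$ case the model autocorrelation collapses to $e^{-ct}$, so the prediction reduces to the AR(1) forecast $e^{-c}\, z_n$, which makes a convenient check of the whole chain.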