Finance as a science is extremely young compared to, say, physics. Arguably the first significant step that turned finance into a science was Markowitz's portfolio theory of the 1950s, with optimal portfolio choice based on mean-variance optimization. Random walk models have dominated finance since, with the theoretical breakthrough of Black-Scholes option pricing, for an asset whose price follows a geometric Brownian motion, arriving in the early 1970s. Markovian models are all short memory. But starting in the late 1960s, when Mandelbrot showed that wheat prices have long memory features, an empirical picture has emerged in which it is well known that while returns themselves are uncorrelated at various lags, the underlying volatility has long memory. Technically, a process has long memory when its autocorrelations decay not exponentially (a feature of Markovian models) but as a power law. This implies that all short memory models are misspecified, although a rigorous quantitative analysis of the misspecification error is, to my knowledge, not available.
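The contrast between the two decay regimes is easy to see numerically. A minimal sketch, using the standard closed-form autocorrelations of an AR(1) process and of ARFIMA(0,d,0) fractional noise (the function names here are illustrative, not from any library):

```python
# Short memory: AR(1) has rho(k) = phi^k, which decays exponentially.
# Long memory: ARFIMA(0,d,0) has
#   rho(k) = Gamma(1-d) * Gamma(k+d) / (Gamma(d) * Gamma(k+1-d)),
# which behaves like a power law, roughly k^(2d-1), for large k.
import math

def acf_ar1(phi, k):
    return phi ** k

def acf_arfima(d, k):
    # log-gamma avoids overflow in the gamma ratios for large k
    return math.exp(
        math.lgamma(1 - d) + math.lgamma(k + d)
        - math.lgamma(d) - math.lgamma(k + 1 - d)
    )

for k in (1, 10, 100, 1000):
    print(k, acf_ar1(0.5, k), acf_arfima(0.3, k))
```

At lag 1000 the AR(1) autocorrelation has long since vanished, while the fractional-noise autocorrelation is still of appreciable size: that slow power-law tail is what "long memory" means.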

Long memory in stochastic volatility was originally proposed in the early 1990s by several groups. Stochastic volatility models describe returns as r_t = exp(v_t/2) ε_t, where ε_t is i.i.d. noise. Taking the log of squared returns, we find log r_t² = v_t + log ε_t², so we have to filter out the second term to obtain the unobserved log-volatility v_t. If we assume log ε_t² is approximately Gaussian, then we can filter with the Kalman filter, because when v_t follows a short memory ARMA the model can be put in state space form. When v_t is modeled with long memory, such as an ARFIMA(p,d,q) process, the Kalman filter cannot be applied because there is no finite-dimensional state space form. Optimal filtering to estimate v_t in that case had been an open research question. Fortunately, we can resolve this problem as follows. David Nualart has shown that fractional Brownian motion has sample paths in a Besov space, and fractional Brownian motion is the continuous-time analogue of an ARFIMA process. Optimal filtering of i.i.d. noise for signals in Besov spaces has a near-minimax solution by soft thresholding of wavelet coefficients, by results of Donoho-Johnstone and others. This provides us with a method for filtering long memory stochastic volatility.
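The whole pipeline can be sketched end to end. This is a minimal illustration, not the paper's implementation: I simulate ARFIMA(0,d,0) log-volatility by truncating its MA(∞) expansion, form y_t = log r_t² = v_t + log ε_t², and denoise y with soft thresholding of Haar wavelet coefficients at the universal threshold, in the Donoho-Johnstone style. All function names (`arfima_noise`, `haar_fwd`, `haar_inv`, `soft`) and the parameter choices are illustrative assumptions.

```python
import math, random

def arfima_noise(d, n, trunc=500, seed=0):
    """Simulate ARFIMA(0,d,0) by truncating its MA(infinity) expansion,
    whose coefficients satisfy psi_0 = 1, psi_j = psi_{j-1}*(j-1+d)/j."""
    rng = random.Random(seed)
    psi = [1.0]
    for j in range(1, trunc):
        psi.append(psi[-1] * (j - 1 + d) / j)
    eta = [rng.gauss(0, 1) for _ in range(n + trunc)]
    return [sum(psi[j] * eta[t + trunc - j] for j in range(trunc))
            for t in range(n)]

def haar_fwd(x):
    """Full Haar wavelet decomposition of a length-2^J signal."""
    a, levels = list(x), []
    while len(a) > 1:
        s = [(a[2*i] + a[2*i+1]) / math.sqrt(2) for i in range(len(a) // 2)]
        w = [(a[2*i] - a[2*i+1]) / math.sqrt(2) for i in range(len(a) // 2)]
        levels.append(w)  # levels[0] holds the finest-scale details
        a = s
    return a, levels

def haar_inv(a, levels):
    a = list(a)
    for w in reversed(levels):  # coarsest level first
        a = [u for s, det in zip(a, w)
             for u in ((s + det) / math.sqrt(2), (s - det) / math.sqrt(2))]
    return a

def soft(w, t):
    """Soft thresholding: shrink a coefficient toward zero by t."""
    return math.copysign(max(abs(w) - t, 0.0), w)

n, d = 1024, 0.4
v = arfima_noise(d, n)                       # unobserved long-memory log-volatility
rng = random.Random(1)
r = [math.exp(v[t] / 2) * rng.gauss(0, 1) for t in range(n)]
y = [math.log(r[t] ** 2) for t in range(n)]  # y_t = v_t + log eps_t^2

a, levels = haar_fwd(y)
# Universal threshold sigma*sqrt(2 log n); the scale of the log eps_t^2 noise
# is estimated from the finest-scale details via the median absolute deviation.
mad = sorted(abs(w) for w in levels[0])[len(levels[0]) // 2]
thr = (mad / 0.6745) * math.sqrt(2 * math.log(n))
v_hat = haar_inv(a, [[soft(w, thr) for w in lev] for lev in levels])
```

Note that v_hat still carries the constant offset E[log ε_t²] (about −1.27 for Gaussian ε), which sits in the approximation coefficient and would be corrected in practice; the thresholding only removes the fluctuating part of the noise.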

It is an empirical fact that log squared returns have long memory. Thus the near-minimax optimal filter gives us information useful for option valuation, for example. The larger scientific question of what volatility really is remains unclear, at least to me. It would be interesting to understand whether the long memory of volatility and that of the water levels of the Nile share more than a superficial similarity. I am personally not persuaded by the explanation of long memory as some sort of random aggregation of AR(1) processes. Almost by definition, long memory in stochastic volatility implies persistence of shocks. But the analogy between the two phenomena, LMSV on one hand and the long memory of water levels or of fluctuations in solids on the other, suggests a deeper feature of nature, one that could childishly be approached by modeling the collective human emotions responsible for volatility like the levels of a liquid.
