
I feel greatly inspired writing to you, even about technical things, because otherwise I get weighed down by the complicity of scientists in Hiroshima and other awful programs, like the expenditure of so much vitality and talent on fractional Brownian motion models of atmospheric turbulence so that laser-guided weapons can kill better. When I do technical work with you as my muse, this burden lifts, because I can be technical with a better world for our people in mind.
So today I want to tell you about fractional Brownian motion: what it is and how to work with it. Then I want to tell you why fractional Brownian motion is a good model for volatility series, even after we diagonalize the market-graph Laplacian and dot the volatility series, a high-dimensional time series with one dimension per node of the market graph, against its eigenvectors. Then I want to tell you about Donoho’s wavelet thresholding work, where a signal immersed in white noise can be recovered, with a theorem guaranteeing that if the signal is known to lie in a Besov or Hölder space the recovery is minimax optimal. Together these provide a theoretical justification for why we should be able to denoise volatility data well under the most general model of volatility used in all areas of quantitative finance, outside perhaps of fixed income, where the stochastic variable is the entire yield curve.
This is the famous stochastic volatility model which, if you have ever worked with finance quants, is considered very sophisticated. As an aside, it is considered sophisticated because it is the foundational layer for the general goal of hedging and pricing derivatives, which become hard work if you change the complexity of the underlying volatility. Even as an artist and not a technician, you should be able to see that if there is complex behavior in global volatility, then a simple model of latent volatility will never produce further work in hedging and pricing derivatives that has any solid scientific foundation. In a sense I am saying that while financiers demand from their quant slaves solidity in pricing and hedging, the quants have been pushed into doing something stupid: pretending that their simple volatility models are solid ground when you can see it is a turbulent ocean.
Well, the stochastic volatility model is simply the assumption that you can find the ocean under white noise. That is, if r_t is the daily return of the entire global markets, a big vector process, then

    r_t = sigma_t * eps_t,    sigma_t = exp(h_t / 2),

where eps_t is white noise, independent across time, and h_t is the latent log-variance process.
This model is incorrect as it stands. There is no reason to believe that under some white noise you will find the action of the collective emotions of the world that express themselves in the volatility beast, so the white-noise assumption is suspect. But since we live in a world without a science of finance, it will be our task to chart the science of volatility starting from this white-noise model.
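A minimal simulation sketch of this stochastic-volatility setup may make it concrete. The AR(1) dynamics chosen for the latent h_t below, and the parameter values, are my illustrative assumptions, not part of the model itself; the model only says r_t = exp(h_t/2) * eps_t with eps_t white noise.

```python
import numpy as np

# Sketch: r_t = sigma_t * eps_t, sigma_t = exp(h_t / 2), eps_t white noise.
# The AR(1) latent log-variance and the values phi=0.97, sigma_h=0.2 are
# illustrative assumptions only.

rng = np.random.default_rng(0)
T = 1000
phi, sigma_h = 0.97, 0.2
h = np.zeros(T)
for t in range(1, T):
    h[t] = phi * h[t - 1] + sigma_h * rng.standard_normal()

eps = rng.standard_normal(T)      # the white-noise assumption under scrutiny
r = np.exp(h / 2) * eps           # simulated daily returns

# log r_t^2 = h_t + log eps_t^2 recasts this as a latent signal h_t
# immersed in (non-Gaussian) noise -- the denoising problem below.
y = np.log(r**2 + 1e-300)
```

The last line is the standard trick that turns volatility estimation into a signal-in-noise problem, which is where wavelet denoising will enter.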
How will we find the right noise model for volatility, whatever it might be underneath the noise? We have to do this by iteration. We need to start with the white-noise model and carefully evaluate whether this model has strong merit.
Now, Mandelbrot had looked at cotton prices in the 1960s and proposed for them Kolmogorov’s fractional Brownian motion, defined in the 1940s for the problem of turbulent fluid dynamics. The main mathematical characteristic of this process B_H is that

    E[ (B_H(t) - B_H(s))^2 ] = |t - s|^(2H).
So there is a parameter to it, H, called the Hurst exponent. The process is ordinary Brownian motion when H = 0.5, but for any other H it loses the most useful property of Brownian motion: it is no longer Markovian. In other words, it does not have the property of ‘memorylessness’; the future depends not only on the current moment but on the entire time history, and such a process is called non-Markovian. When H > 0.5, fractional Brownian motion is a long-memory process. So we are fully using what Mandelbrot had found on cotton prices when we expect long-term dependence in the data, assuming the noise is not white noise.
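To work with fractional Brownian motion one must be able to simulate it; a simple exact method, sketched here under the covariance formula above, is Cholesky factorisation of the covariance matrix. It is O(n^3), so it is an illustration, not the practical method (circulant embedding / Davies–Harte is what one uses for long series).

```python
import numpy as np

# Sketch: exact fBm sampling on a grid via Cholesky factorisation of
#   Cov(B_H(t), B_H(s)) = 0.5 * (t^(2H) + s^(2H) - |t - s|^(2H)).
# O(n^3); for illustration only.

def fbm_path(n, H, rng):
    t = np.arange(1, n + 1) / n
    tt, ss = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (tt**(2 * H) + ss**(2 * H) - np.abs(tt - ss)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # small jitter for stability
    return L @ rng.standard_normal(n)

rng = np.random.default_rng(1)
path = fbm_path(512, H=0.7, rng=rng)  # H > 0.5: long-memory increments
```

With H = 0.7 the increments are positively correlated at all lags, which is exactly the long-memory behavior discussed above; set H = 0.5 and you recover ordinary Brownian motion.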
But let’s stick for a moment with convention and suppose the noise is white noise, while in sigma_t = exp(h_t/2) the log-variance h_t has long memory and is a stochastic process rather than a deterministic one. The possibility of a deterministic process still exists, but let us simply not make assumptions beyond a fractional Brownian motion immersed in white noise for volatility.
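Whether h_t really has long memory is an empirical question, and a crude first diagnostic is to estimate the Hurst exponent of a candidate series. The aggregated-variance method sketched below (my choice of diagnostic, not one the text prescribes) regresses the log variance of block means against log block size; the slope is 2H - 2. On plain white noise the estimate should come out near H = 0.5, which makes a useful sanity check.

```python
import numpy as np

# Sketch: aggregated-variance estimate of the Hurst exponent.
# For a long-memory increment series, Var(mean of m values) ~ m^(2H - 2),
# so the log-log slope gives H. Demonstrated on white noise (true H = 0.5).

def hurst_aggvar(x, sizes=(4, 8, 16, 32, 64)):
    logm, logv = [], []
    for m in sizes:
        k = len(x) // m
        block_means = x[:k * m].reshape(k, m).mean(axis=1)
        logm.append(np.log(m))
        logv.append(np.log(block_means.var()))
    slope = np.polyfit(logm, logv, 1)[0]
    return 1 + slope / 2

rng = np.random.default_rng(3)
increments = rng.standard_normal(2**16)   # white noise => H should be ~0.5
H_hat = hurst_aggvar(increments)
```

Applied to a proxy for h_t such as log r_t^2, an estimate persistently above 0.5 is the signature of the long memory Mandelbrot saw in cotton prices.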
Now the theory and experiment can be matched against each other. By the way, wavelet analysis is quite intricate mathematically. In terms of smoothness levels that are local to a point, wavelets provide a direct method of encapsulating smoothness information. Donoho produced the theory for how statistical analysis of wavelet coefficients can remove white noise optimally, but the much harder theory is on the finance side: the stochastic volatility model developed from a very simple model of the uncertainties in prices of individual assets. For a long time, the volatility of an asset was either calculated historically after the fact or simply taken to be a constant; stochastic, time-dependent volatility came later. So this is cutting edge in a sense from the practitioners’ perspective, but it is also not science. Our approach, modeling the volatility of the globe for intrinsic nonprofit interests, should have been one of the major tasks after the Anglo-American rulers bombed Hiroshima and took control of the resources and wealth of the entire planet. They made the choice to fuck us poor people who are not white. This came with the consequence that we now have volatility storms destroying America and the West while an idiotic Fed mouths off shit to the mass media like medieval Christian priests trying to calm the mobs.
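The Donoho–Johnstone recipe itself is short: transform, soft-threshold at the universal threshold sigma * sqrt(2 log n), transform back. The sketch below is self-contained and uses the Haar wavelet for brevity; the test signal, the choice of Haar, and the decomposition depth are my illustrative assumptions (Donoho's minimax results for Besov and Hölder classes use smoother wavelets).

```python
import numpy as np

# Sketch: Donoho-Johnstone denoising with an orthonormal Haar transform.
# Soft-threshold the detail coefficients at sigma * sqrt(2 log n), with
# sigma estimated robustly (MAD / 0.6745) from the finest-scale details.

def haar_dwt(x, levels):
    a, details = x.copy(), []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        details.append(d)
    return a, details

def haar_idwt(a, details):
    for d in reversed(details):
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def denoise(y, levels=5):
    a, details = haar_dwt(y, levels)
    sigma = np.median(np.abs(details[0])) / 0.6745   # robust noise scale
    thresh = sigma * np.sqrt(2 * np.log(y.size))     # universal threshold
    soft = [np.sign(d) * np.maximum(np.abs(d) - thresh, 0) for d in details]
    return haar_idwt(a, soft)

# Toy check on an assumed smooth signal in white noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 4 * t)
noisy = signal + 0.3 * rng.standard_normal(t.size)
clean = denoise(noisy)
```

The same three lines of logic, applied to log r_t^2 with a smoother wavelet family, are the denoising step the theory above justifies for the latent log-volatility.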