Archive for July, 2014

During 2009 I reacted emotionally to the people killed in Gaza. This time around, I reacted with a detached sympathy. I realized that empathy for human suffering is not natural at all; it is a conditioned response. We are programmed by ideology and external factors. The various tribal mythologies of ‘us’ versus ‘them’ that allow us to shut off human sympathy all depend on manipulating precisely that empathy. It is strange to me, from a naturalistic point of view, that there is a discrepancy between the dominant ideologies of the planet today and the instillation of universal human sympathy. There are many stories of who the bad guys are, but this is a serious question that has not really been answered seriously. I was thinking about the genocide of the Native Americans and the various ways in which people have dealt with it. It is quite possible that ‘might makes right’ is the true mantra of nature, but I would like to believe that there is more to the story. I just don’t believe that the logical impossibility of a creator God has anything to do with any serious answer.

Read Full Post »

Physicists Challet and Zhang produced a statistical-physics-type model, the ‘Minority Game’, in which individual agents choose between two strategies based on past performance.  With their model they could explain long-range correlations in volatility.  Here is a paper: MarsiliMinorityGameModel.  In my opinion, this is a good direction.  The vague analogy is that volatility is a sort of temperature, and these spin-glass-type models, which introduce micro-interactions and consider thermodynamic limits, are consistent with that analogy.  A quantitative analogy comes from the fact that fluctuations in solids and long memory in stochastic volatility are modeled with the same tool: Mandelbrot’s fractional Brownian motion.
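For intuition, here is a minimal sketch of the vanilla game in R (the basic Challet-Zhang setup with two strategies per agent; the long-range volatility correlations studied in the paper come from richer variants, so this is an illustration, not a reproduction):

set.seed(1)
N <- 301; m <- 5; Tmax <- 2000        # odd N so the minority is always defined
P <- 2^m                              # number of possible m-bit histories
strat <- array(sample(c(-1, 1), N * 2 * P, replace = TRUE), dim = c(N, 2, P))
score <- matrix(0, N, 2)              # virtual scores of each agent's two strategies
h <- sample.int(P, 1)                 # current history index
A <- numeric(Tmax)                    # aggregate action, the 'excess demand'
for (t in seq_len(Tmax)) {
  best <- max.col(score, ties.method = "random")   # each agent plays its better strategy
  a <- strat[cbind(seq_len(N), best, h)]           # actions given the current history
  A[t] <- sum(a)
  w <- -sign(A[t])                                 # the minority side wins
  score <- score + w * strat[, , h]                # reward strategies that chose the minority
  h <- (2 * (h - 1) + (w + 1) / 2) %% P + 1        # append the winning bit to the history
}
# A[t] plays the role of price changes; var(A) over windows is the 'volatility'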

So the Minority Game is not a full explanation of what volatility is; however, it is an excellent toy model that gives us some insight into how fractional noises could arise.  With this one explanation in mind, we can focus on the quantitative aspects of using long memory correctly: here it is extremely nice that the Donoho-Johnstone wavelet thresholding technique produces minimax guarantees for recovery of Besov functions, which allows us to filter stochastic volatility observed with i.i.d. noise.
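As a toy check of that claim (a sketch using the ‘fracdiff’ and ‘wavethresh’ packages; the filter choice and d = 0.35 are arbitrary assumptions, not from the post):

library(fracdiff)                             # fracdiff.sim, fracdiff
library(wavethresh)                           # wd, threshold, wr
set.seed(42)
n <- 1024                                     # power of 2 for the pyramid scheme
h <- fracdiff.sim(n, d = 0.35)$series         # long memory 'volatility' signal
x <- h + rnorm(n)                             # observed: signal plus i.i.d. noise
z <- threshold(wd(x, filter.number = 10, family = "DaubLeAsymm"),
               type = "soft", policy = "universal")
xhat <- wr(z)                                 # denoised estimate of h
c(noisy = fracdiff(x)$d, denoised = fracdiff(xhat)$d)  # d estimate should move toward 0.35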

I am not sure that the Minority Game is necessarily a correct model that fully captures long memory in stochastic volatility, since it makes the restrictive assumption of a choice between two strategies, but I agree strongly with the idea that the quantitative behavior of stochastic volatility is identical to that of fluctuations in solids or river levels.

Read Full Post »

The main technical issue for long memory stochastic volatility models that I have resolved theoretically is that the Donoho-Johnstone wavelet thresholding method can be used for optimal filtering, in a Besov space, of the noisy observations of log stochastic volatility.  I have implemented this using the R package wavethresh, whose functions ‘wd’, ‘threshold.wd’ and ‘wr’ perform decomposition, thresholding and reconstruction with various thresholding options.  We filter with this method, fit an ARFIMA model, predict T days ahead from the ARFIMA, and then use the fact that the option price is Black-Scholes with the average of the future volatilities from the prediction.  The code (assigned the name ‘lmsvoptionprice’ here; the helpers ‘djthresh’ and ‘lmsvcallprice2’ are defined elsewhere) is this:

lmsvoptionprice <- function(histr, P0, K, T) {
  # histr: historical returns; P0: spot price; K: strike; T: days to expiration
  histlogr2 <- log(histr^2 + 0.000001)            # log return-squared, kept away from log(0)
  histlogr2[is.nan(histlogr2)] <- log(0.000001)
  n <- length(histr)
  # Donoho-Johnstone wavelet thresholding of the centered, rescaled series
  z <- djthresh((as.numeric(histlogr2) + 1.27) / 1.5)
  # z <- as.numeric(histlogr2) + 1.27             # unfiltered alternative
  fit <- arfima(z * 1.5, order = c(1, 0, 1))      # long memory fit ('arfima' package)
  futsv <- predict(fit, n.ahead = T, bootpred = FALSE)
  # average the forecasted stochastic volatility into a Black-Scholes price
  lmsvcallprice2(futsv[[1]]$Forecast / 1.5, P0, K, T)
}

The function ‘djthresh’ does the Donoho-Johnstone wavelet thresholding to filter, and then ‘lmsvcallprice2’ simply averages the forecasted stochastic volatility and applies Black-Scholes.  To calibrate, we consider a randomly chosen stock, ‘MMM’, at date 2009-12-31 with spot P0=82.67 and strike K=85 with time to expiration T=74 days.  The last market price for the option is 2.65, while the long-memory price is 3.84, which is fairly close.
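Since ‘djthresh’ is a helper whose definition is not shown, here is a minimal sketch of what such a function could look like with wavethresh’s universal soft thresholding (assumes the input length is a power of 2; padding is omitted), together with a hypothetical call using the MMM numbers above:

library(wavethresh)
djthresh <- function(x) {
  z <- wd(as.numeric(x), filter.number = 10, family = "DaubLeAsymm")
  wr(threshold(z, type = "soft", policy = "universal"))
}
# histr: vector of historical daily returns up to 2009-12-31 (hypothetical data)
# lmsvoptionprice(histr, P0 = 82.67, K = 85, T = 74)   # about 3.84 vs. 2.65 market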

The next step is to dig into the finer effects of the thresholding strategy on long memory option prices.

 

Read Full Post »

Finance as a science is extremely young compared to physics, for example.  Probably the first significant act that turned finance into a science was Markowitz portfolio theory, with optimal portfolio choice based on mean-variance optimization; this is from the 1950s.  Random walk models have dominated finance since then, with the theoretical breakthrough of Black-Scholes option pricing for a geometric Brownian motion price process coming in the early 1970s.  Markovian models are all short memory.  But starting in the late 1960s, when Mandelbrot showed that wheat prices have long memory features, an empirical picture has emerged in which it is well known that while returns themselves are uncorrelated at various lags, the underlying volatility has long memory.  Technically, long memory can be defined for a process X_t by autocorrelations declining not exponentially (a feature of Markovian models) but as a power law.  This implies that all short memory models are misspecified, although a rigorous quantitative analysis of the misspecification error is not, to my knowledge, available.
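To make the power-law definition concrete (standard notation, not specific to this post): for the autocorrelation \rho_X(k) of a stationary process X_t,

\rho_X(k) \le C r^k, \ 0 < r < 1 \quad \text{(short memory, ARMA/Markovian)}

versus

\rho_X(k) \sim C k^{-\alpha}, \ 0 < \alpha < 1 \quad \text{(long memory)},

so that \sum_k \rho_X(k) diverges in the long memory case; for an ARFIMA(p,d,q) process with 0 < d < 1/2 one has \alpha = 1 - 2d.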

Long memory in stochastic volatility was originally proposed in the early 1990s by several groups.  Stochastic volatility models express returns as r_t = \exp(h_t/2) \epsilon_t where \epsilon_t is i.i.d. N(0,1).  Taking log return-squared, we find \log(r_t^2) = h_t + \log(\epsilon_t^2), so we have to filter out the second term to obtain the unobserved stochastic volatility.  If we assume the noise term is approximately Gaussian, then we can filter with the Kalman filter whenever the model for h_t is a short memory ARMA, which can be put in state space form.  When h_t is modeled with long memory, such as ARFIMA(p,d,q), the Kalman filter cannot be applied because there is no finite state space form.  Optimal filtering to estimate h_t in that case had been a research question.  Fortunately, we can resolve this problem as follows.  David Nualart has shown that fractional Brownian motion has sample paths in a Besov space, and fractional Brownian motion is the continuous-time analogue of an ARFIMA process.  Optimal filtering of i.i.d. noise in Besov spaces has an almost minimax solution by soft thresholding of wavelet coefficients, by results of Donoho-Johnstone and others.  This provides us with a method for filtering long memory stochastic volatility.
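To spell out the two standard facts this relies on (the exact moments of the log \chi^2_1 noise, and why no state space exists; these are textbook items, not the post’s own derivation):

\log(r_t^2) = h_t + \log(\epsilon_t^2), \quad E[\log \epsilon_t^2] = \psi(1/2) + \log 2 \approx -1.27, \quad Var[\log \epsilon_t^2] = \pi^2/2 \approx 4.93,

and for ARFIMA the fractional differencing operator expands as

(1-B)^d = \sum_{j \ge 0} \binom{d}{j} (-B)^j,

an infinite series in the backshift B, so h_t carries no finite-dimensional state vector for the Kalman recursion.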

It is an empirical fact that log return-squared has long memory.  Thus the almost-minimax optimal filter gives us information for option valuation, for example.  The larger scientific question of what volatility really is remains unclear, at least to me.  It would be interesting to find some understanding of whether the long memory of volatility and that of the water levels of the Nile have more than a superficial similarity.  I am personally not compelled by the explanation of long memory as some sort of random aggregation of AR(1) processes.  Almost by definition, long memory for stochastic volatility implies persistence of shocks.  But the analogy between the two phenomena, LMSV and the long memory of water levels or of fluctuations in solids, suggests a deeper feature of nature, which could childishly be approached as collective human emotions responsible for volatility behaving like the levels of a liquid.

Read Full Post »

Given that Donoho-Johnstone studied their wavelet shrinkage denoising approach on Besov and Triebel spaces, and we want to apply their results to long memory in stochastic volatility, it is useful to consider what would happen if the model were fractional Brownian motion with Hurst exponent H.  In this case, David Nualart has shown more generally that the stochastic integral \int_0^t u_s dW_s^H lies in a Besov space; when u_s \in L^{\delta,1} for some large \delta>0, the Besov space is B^H_{\infty,\infty}.  Here is the paper:  NualartBesovRegularityFBM.  This is extremely convenient because it brings us closer to a rigorous application of the Donoho-Johnstone denoising theory, where minimax rates were established over Besov spaces.  In actual applications to long memory stochastic volatility one uses an ARFIMA model rather than fractional Brownian motion, but one suspects that the pathwise regularity has similar properties in terms of the Hurst exponent H = d + 1/2.
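For reference, the standard identification behind that suspicion (a textbook fact about Besov spaces, not a claim of the paper): for non-integer 0 < H < 1,

B^H_{\infty,\infty}([0,1]) = C^H([0,1]), \quad \text{i.e.} \quad \sup_{s \neq t} |f(t)-f(s)| / |t-s|^H < \infty,

the Hölder space of exponent H, and for ARFIMA with d \in (0, 1/2) the corresponding Hurst exponent is H = d + 1/2 \in (1/2, 1).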

Read Full Post »

If you take the log return-squared of a typical asset you will find that there is long memory, by estimating the fractional differencing parameter d>0, which corresponds to a Hurst exponent H > 1/2.  Now the stochastic volatility model is

r_t = \exp( h_t/2 ) \epsilon_t

where \epsilon_t \sim N(0,1).  The problem is that the stochastic volatility series h_t is latent, so we work with the observable x_t = \log(r_t^2 + c_0).  What we have is x_t = h_t + Q_t where Q_t \sim \log(\chi^2_1), which can be approximated by N(-1.27, \pi^2/2).  It would be excellent if we could use a denoising transformation on the observed x_t to obtain the stochastic volatility h_t.  In the short memory stochastic volatility model this is done via the Kalman filter, but in the long memory case no state space representation exists.  However, we can still apply the Donoho-Johnstone optimal denoising by shrinking the wavelet transform.  An excellent account of this is in Donoho’s 1992 report:  DonohoDenoiseSoftThreshold.  This allows us to filter out the stochastic volatility, which can then be used to do things like calculate option prices.
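A quick sanity check of that Gaussian approximation (a throwaway snippet, not part of the filtering pipeline):

set.seed(1)
q <- log(rchisq(1e6, df = 1))                              # draws of Q_t ~ log(chi^2_1)
c(sim_mean = mean(q), exact_mean = digamma(1/2) + log(2))  # both about -1.27
c(sim_var = var(q), exact_var = trigamma(1/2))             # both about pi^2/2 = 4.93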

I am rather pleased with this direction because I have solved a very useful general problem here: how one goes about filtering a long memory stochastic volatility model when the Kalman filter is not available.

So how does one implement the Donoho-Johnstone approach in practice?  They use a specific version of the wavelet transform due to Cohen-Daubechies-Jawerth-Vial, adapted to intervals.  The basic result is that for y_t = x_t + \sigma \epsilon_t with standard white noise \epsilon_t \sim N(0,1), the soft threshold for the wavelet coefficients is \sqrt{2 \log(n)} \sigma \gamma_1 / \sqrt{n}, where \gamma_1 is determined by the maximum singular value of a matrix arising from the pyramid wavelet scheme.  Unfortunately I don’t know of a packaged R function that does exactly this.  To proceed, we can use the ‘wavelets’ package functions ‘dwt’ and ‘idwt’.  Let x_t be log return-squared, which we transform as (x_t + 1.27)/\sqrt{n}, and take the dwt with, say, 3 levels.  The wavelet coefficients can be accessed as z@W$W1, z@W$W2, z@W$W3, each of which we soft threshold appropriately before applying ‘idwt’ to reconstruct the denoised signal.  One can check that the estimated Hurst parameter increases, which is reasonable because the Hurst parameter controls the Hölder continuity of the sample paths of fractional Brownian motion.  By the Donoho-Johnstone theory this should approximately do the right denoising, so the reconstructed series is an estimate of the latent stochastic volatility h_t without the i.i.d. return noise.
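A sketch of that recipe in R (the universal threshold \sqrt{2 \log(n)} \sigma / \sqrt{n} stands in for the exact Donoho-Johnstone constant, i.e. \gamma_1 is not computed; the input length is assumed divisible by 2^3, and ‘soft’ and ‘denoise_logr2’ are names chosen here, not package functions):

library(wavelets)
soft <- function(w, lam) sign(w) * pmax(abs(w) - lam, 0)   # soft thresholding rule
denoise_logr2 <- function(r, sigma = pi / sqrt(2)) {       # sigma = sd of log(chi^2_1) noise
  n <- length(r)
  x <- (log(r^2 + 1e-6) + 1.27) / sqrt(n)                  # center and rescale as above
  z <- dwt(x, filter = "la8", n.levels = 3, boundary = "periodic")
  lam <- sqrt(2 * log(n)) * sigma / sqrt(n)                # universal threshold
  for (j in 1:3) z@W[[j]] <- soft(z@W[[j]], lam)           # threshold W1, W2, W3
  as.numeric(idwt(z)) * sqrt(n) - 1.27                     # reconstruct, undo scaling
}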

 

Read Full Post »

I have been successful to some extent in weaning myself away from geopolitical issues for a short period, and then a couple of days ago I found that Israel has decided on tens of thousands of airstrikes on Gaza, killing more than 120, mostly civilians, in retaliation for rocket attacks.  I don’t know the fatalities from the rocket attacks, but I would be surprised if there were any at all.  Netanyahu then used the rhetoric of being ‘unbowed under international pressure’, as though attacking poor people with jet fighters were some act of heroism.  This is, no less than the Cast Lead attack on Gaza, a totally criminal act in my view.  The fundamental reason it is even accepted in the West is that racism is endemic: it’s okay to kill people so long as they are not white in this brave new world; just consider the ‘international threat’ that would be generated if there were 11,000 airstrikes in a European country with hundreds dead.  What was surprising to me is that I saw some footage on a commercial network television channel which actually gave the death toll and had a reporter in Gaza.  This is a slow and welcome change in America, a change that has come through the struggle of many people.  It is a surprise even if it turns out that it is not very difficult to make the case that Zionists did 9/11 (and for fairly clear geopolitical objectives involving the Greater Israel project).

Of course, from my point of view the human race is a single race, which implies that there are significant problems with Zionism as well as all other tribalisms and nationalisms.  ‘National defense’ doctrines are mostly without significant content as actual justification for savage, barbarous and murderous behavior, whether by Israel or by Palestinians or anyone else.  One thing that happened during the Second World War is that the fascism of the Nazis was squelched, but fascism, racist fascism, was embraced by Zionists.  Americans generally react badly to oppression and fascism, while they have been trained to view all actions of Israel through extremely dense propaganda (of which 9/11 itself is a brilliant centerpiece).  Tribal fervor might explain Zionist enthusiasm for destroying Palestinians and Muslims, but it is not any natural tribal fervor: when Shlomo Sand dug into the issue of the ‘Jewish people’, he did not find the Zionist historiography to be trustworthy.  The legitimacy of Israel’s violent mistreatment of the Palestinians lies ultimately in nothing but ‘might is right’.  As a bizarre antiquarian exercise this might be understandable; the Huns might have justified their conquest and dominion through similar reasoning, but this is criminal behavior for a nuclear power of the twenty-first century.  Of course I condemn the rocket strikes, but to be frank these rocket strikes are not particularly lethal, and certainly nowhere comparable to killing hundreds of civilians by attacking densely populated areas.

The solution is not very complicated: there should be a single-state democratic solution.  That would end the ‘defense problems’ for Israelis and produce a positive future for both Israelis and Palestinians.  Zionism has to be dismantled because it is a serious problem for the entire world.  Racist fascists massacring non-white people at will, essentially as though not being white were itself the crime, is a very bad precedent for the human race.

 

Read Full Post »
