
## On the unknown model of possibly small set of parameters that drive global volatility

Dear Professor Donoho,

Since it’s not much effort, I will directly model the volatility process as

v = A*x

where A is an n×N Gaussian random matrix with n ≈ 300, and x is an unknown sparse vector in a much higher-dimensional space, say N = 3000, and we try to recover x by convex relaxation. This is a problem I can solve with cvxopt in a few lines of code. I will have some results on this small project by tonight. The expectation is that this will give us ideas for the much more delicate problem of determining the laws governing volatility on the entire unknown market graph.

### Zulfikar Ahmed<wile.e.coyote.006@gmail.com>

1:15 PM (1 minute ago)

 to David
This is basic code for the test, using machinery I’d used before for you. This small project will not take much time, and I can have some results, if not today, then by tomorrow. The purpose of this exercise is to get a feel for what sort of sparse model of volatility could be recovered by L1 minimization, pretending that volatility is the signal that has been obscured. Since L1 minimization favours sparse solutions, and since our b is O(size of market graph), we are hoping that this process leads to the observation of sparse model parameters in a much higher-dimensional (N = 10n) space.
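For concreteness, the convex relaxation referred to above is the standard basis-pursuit program, and the linear-program form that an LP solver actually handles is (my restatement, not from the original email):

```latex
\min_{x \in \mathbb{R}^N} \|x\|_1 \ \text{ s.t. } \ Ax = b
\qquad\Longleftrightarrow\qquad
\min_{x,\,t \in \mathbb{R}^N} \mathbf{1}^\top t \ \text{ s.t. } \ -t \le x \le t,\ Ax = b
```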

```python
import numpy as np
import pandas as pd
import cvxopt as opt

def sparsity(x, tol=1e-6):
    # fraction of numerically nonzero entries
    return np.mean(np.abs(x) > tol)

def l1_solve(A, b):
    # min ||x||_1 s.t. Ax = b, posed as an LP over (x, t):
    # min 1't  s.t.  -t <= x <= t,  Ax = b
    n, N = A.shape
    c = opt.matrix(np.concatenate([np.zeros(N), np.ones(N)]))
    I = np.eye(N)
    G = opt.matrix(np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])]))
    h = opt.matrix(np.zeros(2 * N))
    Aeq = opt.matrix(np.hstack([A, np.zeros((n, N))]))
    sol = opt.solvers.lp(c, G, h, A=Aeq, b=opt.matrix(b))
    return np.array(sol['x'])[:N].ravel()

data = pd.read_csv('volatility.csv')
n = 300
N = n * 10
b = np.asarray(data.iloc[0, 1:n + 1], dtype=float)
A = np.random.randn(n, N)
x0 = l1_solve(A, b)
print(sparsity(x0))
```

## Turbulent fluid dynamics on a graph as an idea

Incompressible fluid dynamics is usually studied in physics in the mathematical setting of space and time, $(x,t)$, where the velocity $u(x,t)$ and pressure $p(x,t)$ follow a set of transport equations: the incompressibility constraint $\mathrm{div}(u)=0$ and

$(\partial_t - \Delta)u = - R\,(u \cdot \nabla)u - \nabla p$

which we can simplify by setting $p \equiv 0$. Here $R$ is the Reynolds number. The physicists’ intuition is that laminar flow is possible when $R$ is small, but the solutions are turbulent when $R \gg 1$.

Weak solutions to Navier-Stokes on $R^3$ were proved to exist by Leray in 1933 (attached), and there is also a recent paper on their regularity. As a curious novice in this direction, I am still confused about these weak solutions, especially in connection with the observed phase transition to turbulence. In turbulent flow there does not seem to be any chance of $C^\infty$-type behavior for the fluid, and the physics literature provides many metrics for this. I know something about elliptic regularity in a related context, where weak solutions of elliptic equations bootstrap to smoothness. But what are weak solutions whose regularity is consistent with turbulence? Apologies for airing my confusions; I will try to clarify for myself what a weak solution is, exactly, mathematically. Clearly this is a delicate issue worth investigating. John Nash, may he rest in peace, always emphasized physical intuition, even in his paper on parabolic kernels of uniformly elliptic operators, and as a novice I would like to sharpen mine.
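For reference, the standard weak (Leray) formulation in question can be stated as follows (my paraphrase of the textbook definition, not taken from the attached papers): a divergence-free field $u \in L^\infty(0,T;L^2) \cap L^2(0,T;H^1)$ is a weak solution with initial data $u_0$ if, for every smooth, compactly supported, divergence-free test field $\varphi$ with $\varphi(\cdot,T)=0$,

```latex
\int_0^T \int_{\mathbb{R}^3} \Big( -\, u \cdot \partial_t \varphi
    + \nabla u : \nabla \varphi
    + \big( (u \cdot \nabla) u \big) \cdot \varphi \Big) \, dx \, dt
  \;=\; \int_{\mathbb{R}^3} u_0 \cdot \varphi(x,0) \, dx
```

The pressure term drops out against divergence-free test fields, which is part of why the definition can tolerate so little regularity.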

2 Attachments:

- weak-solutions-navier-stokes-1933.pdf
- regularity-weak-solutions-navier-stokes.pdf

### Zulfikar Ahmed<wile.e.coyote.006@gmail.com>

11:32 AM (1 minute ago)

 to David

Historically, Leray’s introduction to his seminal paper of 1933 mentions that he believed in the development of irregularity in finite time for what he calls Navier’s equations, the equations for velocity and pressure, which I had been mangling for simplicity. These are actually equations for a velocity u together with a pressure term:

(d/dt u – Laplacian(u)) = – <u, grad(u)> – grad(pressure) + (external forces, set to zero)

div(u) == 0  (incompressibility)

The useful way, according to Chorin, to rewrite these is to introduce the Reynolds number R directly and put

d/dt u + R <u, grad u> = Laplacian(u) – grad(pressure)

div(u) = 0

We know generally that turbulence in this system can happen when R is much greater than 1. If we just set the pressure to zero, we have the form that I was using for my exploratory thinking about the case of volatility on a graph instead of u:

(d/dt – Laplacian) u = – R<u, grad u> =: F(u)

In the intuition of physicists, laminar flow should result when R is small, and there can be a phase transition to turbulence when R crosses some threshold. By the early 1940s Onsager was using Navier-Stokes with high Reynolds number to explain the formation of vortices, so I am trying to relate the lack of singularity-formation examples that Leray alluded to in his 1933 paper to the Onsager model of around eight years later.

Navier-Stokes is a favourite of numerical analysts and engineers, so I can look at this problem directly as well. The motivation for me, ultimately, is to understand financial market volatility with an exact model of this type, if possible. I feel comfortable now that the graph of the market will serve well as the x-variable for this problem.
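As a first numerical experiment, the simplified equation above can be discretized on a graph by replacing the Laplacian with the graph Laplacian. Below is a minimal sketch with explicit Euler time stepping; since there is no canonical graph analog of the convective term $u \cdot \nabla u$, the elementwise stand-in `u * (L @ u)` is purely my assumption:

```python
import numpy as np

def graph_laplacian(A):
    # combinatorial Laplacian L = D - A for an adjacency matrix A
    return np.diag(A.sum(axis=1)) - A

def simulate(A, u0, R=0.1, dt=1e-3, steps=1000):
    # explicit Euler for  du/dt = -L u - R * u * (L u)
    # the heat term -L u smooths; the nonlinear term can destabilize for large R
    L = graph_laplacian(A)
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * (-L @ u - R * u * (L @ u))
    return u

# small test: cycle graph on 5 nodes, random initial "volatility"
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
u = simulate(A, np.random.default_rng(0).normal(size=n))
```

With small R the dynamics should relax toward a near-constant vector, the graph analog of laminar flow; sweeping R upward is one way to probe the threshold behavior discussed above.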

Attachment:

- chorin-numerical-navier-stokes.pdf

## data for financial net model and code for graph laplacian and degree distribution


### Zulfikar Ahmed<wile.e.coyote.006@gmail.com>

4:29 PM (34 minutes ago)

 to David
1. Code for creating and dumping a single large R-analyzable table, attached, along with a snapshot showing that the column means seem sensible. This code needs a minor change to point at the directory where all the .csv flat files with historical stock price data exist.

I will get you the full dataset, analyzable with R, along with graphs of the Laplacian and degree spectrum within the hour. With the R package igraph these are simple exercises once the large matrix of daily volatilities is available.
We create a covariance matrix, threshold at a single standard deviation to remove noise, and then use the non-zero values to create an adjacency matrix, deleting the diagonal. Then we just use eigen() to get the spectrum. igraph has a degree.distribution function for a graph, so that part is trivial.
2 Attachments

### Zulfikar Ahmed<wile.e.coyote.006@gmail.com>

4:43 PM (20 minutes ago)

 to David
Attached is the entire large matrix containing closing prices for all the stocks, in a large table that one can gunzip and pull into R. A scan showed that, minus expected missing data, this looks good.

Here is my analysis code again in R:

```r
# code to generate the fundamental VOLATILITY data
# V is the large table of closing prices loaded from the attached file
n <- dim(as.matrix(V))[2]
prices <- as.matrix(V)[, 2:n]
returns <- diff(log(prices))
eps <- 10e-6
noisy.volatility <- log(returns^2 + eps)
```

```r
# we create a graph from correlated volatilities: zero out the diagonal,
# then soft-threshold by the standard deviation, and set all nonzero entries to 1
library(igraph)
C <- cov(noisy.volatility)
diag(C) <- 0
C <- C - sd(as.vector(C))
C[C < 0] <- 0
A <- C
A[A > 0] <- 1
# now create a graph from these data using A as adjacency matrix
graph <- graph.adjacency(A, mode="undirected")
# plot the Laplace spectrum
H <- graph.laplacian(graph)
```
```r
E <- eigen(H)$values
plot(E)
# plot the degree distribution
D <- degree.distribution(graph)
plot(D)
```

Now we can check these against the degree distribution of a power law (which is not happening here) and against the Laplace spectrum, which is quite different from that of Erdos-Renyi random graphs and others; that is the interesting thing I found in the smaller dataset of 188 nodes.

## The Ultrawealthy are insects who need to be wiped out

I’m sick and tired of population reductionists in the west. Go to hell. We should kill the ultrawealthy, who are moronic fuckfaces who can make no decisions properly besides their sexual pleasures. Wipe out these lowlife insects.

http://www.gutenberg.org/files/833/833-h/833-h.htm

## Right and wrong reasons for ‘detecting chaos’ in financial volatility

Chaos has fascinated academics since the late 1980s. I happen to know this because, as a poor kid living in Queens, New York, I had a subscription to one of those book clubs that sent me, among other technical and popular books, Edward Witten’s two volumes on superstring theory, which I never understood, and James Gleick’s Chaos: The Making of a New Science. The popularity of Mandelbrot and chaos theory generated a little cottage industry in academia, and in finance too.

The problem with it, from my honestly very limited perspective on dynamical systems, since I have no training at all in these things, is that it is too cool. I mean, look at turbulence in actual hydrodynamics. It’s hard science. Onsager worked out the vortex creation problem in the depths of physics. Navier-Stokes with low viscosity is not a joke. Onsager, now there’s a man one can trust on this stuff. He actually knew a real system well enough to break new insights. Fluid dynamics is hard science. The fluffiness of the pretty pictures in dynamical systems is probably good for something, but it’s not good for a real science of volatility. Why am I so confident about this direction?
Probably just because I am too arrogant and will fail in another mission impossible. Here is my proposal for getting to a true real science of finance: study phase transitions of high-dimensional nonlinear dynamical systems and their behavior with regard to phases of laminar versus turbulent behavior. The specific class of dynamical systems of interest to us is the Navier-Stokes type, which on a graph becomes a delay differential equation since the Laplacian is just a numerical matrix. The guidance for this study should come from Kolmogorov’s ICM paper of 1954, which led Arnold and Moser to prove the famous KAM theorem. With any clear understanding of the parameters and thresholds that divide the laminar and chaotic phases, we can use statistical technology to model the world’s volatility directly as an intrinsic dynamical system.

## Graph Analysis Pipeline

Given a graph with n nodes (n ~ 3000 in our case), probably the simplest code is to loop directly. If A is the adjacency matrix:

```r
library(igraph)
n <- dim(A)[1]
deg <- degree(graph)
tdeg <- sum(deg)
# create the maximum-likelihood implicit log-linear Bernoulli probabilities
# for edge i <-> j
probs <- matrix(0, nrow=n, ncol=n)
for (i in 1:n){
  for (j in 1:i){
    const <- deg[i]*deg[j]/tdeg
    probs[i,j] <- const
    probs[j,i] <- const
  }
}
diag(probs) <- 0  # no self-loops

# modularize the sampling of the graph
sample.graph <- function(p){
  n <- dim(p)[1]
  adj.new <- matrix(0, nrow=n, ncol=n)
  for (i in 1:n){
    for (j in 1:i){
      const <- p[i,j]
      q <- sample(x=c(0,1), size=1, prob=c(1-const, const))
      adj.new[i,j] <- q
      adj.new[j,i] <- q
    }
  }
  graph.adjacency(adj.new, mode="undirected")
}

graph.analysis <- function(g){
  # plot the degree distribution
  ddist <- degree.distribution(g)
  plot(ddist)
  # create the Laplacian matrix and plot its spectrum
  H <- graph.laplacian(g)
  E <- eigen(H)
  plot(E$values)
}

# test sampling
for (r in 1:100){
  graph.analysis(sample.graph(probs))
}
```
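For checking the pipeline outside R, the degree-based edge probabilities and the sampling step can be sketched in Python as well (numpy-only; this is my own restatement of the loop above, and `edge_probs` and `sample_graph` are hypothetical helper names):

```python
import numpy as np

def edge_probs(deg):
    # Bernoulli edge probabilities p_ij = d_i * d_j / sum(d), as in the R loop,
    # clipped to [0, 1] and with the diagonal zeroed (no self-loops)
    deg = np.asarray(deg, dtype=float)
    P = np.outer(deg, deg) / deg.sum()
    np.fill_diagonal(P, 0.0)
    return np.clip(P, 0.0, 1.0)

def sample_graph(P, rng):
    # sample a symmetric 0/1 adjacency matrix from the upper triangle of P
    n = P.shape[0]
    U = rng.random((n, n))
    A = np.triu((U < P).astype(int), k=1)
    return A + A.T

rng = np.random.default_rng(0)
P = edge_probs([3, 2, 2, 1])
A = sample_graph(P, rng)
```

The vectorized upper-triangle draw replaces the double loop but samples the same Bernoulli model, so degree distributions of the sampled graphs should match the R pipeline in expectation.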