Feeds:
Posts
Comments

Stocks: “ABX” “NEM” “GG” “AEM” “RGLD” “KGC” “GFI” “PVG” “HMY” “RIC” “GORO”

Before we get to the actual code, the idea is simple: compute the past 100-day correlations of the stocks, and pairs-trade the stock with the maximum correlation against the portfolio, with a holding period of 4 days.  This gives a Sharpe versus the S&P 500 of > 18.0, estimated roughly by dividing sum(strategy return) - sum(SP500 return) by sigma(strategy return)*sqrt(254).  More useful is the graph.  These types of strategies (with good returns) are actually not extremely hard to construct in backtests, although they are unstable and finicky.  In this case, using a correlation window much longer or shorter than 100 days makes the performance much worse, almost unusable.  So there is the question of whether there are fundamental laws governing such things.  If there are, they are not connected to concepts of daily correlations.
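The rough figure of merit above can be written out explicitly.  A minimal Python sketch (not part of the original R code; the function name is mine, and the inputs are just two daily-return series):

```python
import numpy as np

def rough_sharpe_vs_benchmark(strategy, benchmark, trading_days=254):
    """Rough Sharpe described above:
    (sum(strategy) - sum(benchmark)) / (sd(strategy) * sqrt(254))."""
    strategy = np.asarray(strategy, dtype=float)
    benchmark = np.asarray(benchmark, dtype=float)
    # floor the denominator, as the R code does with max(0.0001, ...)
    denom = max(1e-4, strategy.std(ddof=1) * np.sqrt(trading_days))
    return (strategy.sum() - benchmark.sum()) / denom
```

This is not an annualized Sharpe ratio in the usual sense (it divides a cumulative excess return by a volatility), which is why the number comes out so large.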

April-2017-gold-strategy

library(quantmod)
symbols <- c("ABX","NEM","GG","AEM","RGLD","KGC","GFI","PVG","HMY","RIC",
             "GORO","SPY")
getSymbols(symbols)
rets <- do.call(merge, lapply(symbols, function(s) dailyReturn(get(s))))

cstrat <- function(thresh, days, corrdays){
  meanport <- rowMeans(rets)
  meanport[is.na(meanport)] <- 0
  N <- nrow(rets)
  v <- rep(0, N)
  dirs <- rep(0, N)
  for (a in (corrdays+1):(N-days-1)){
    m <- mean(meanport[(a-10):a])
    if (is.nan(m)){ m <- 0 }

    # choose the stock with the highest correlation
    # with the portfolio over the past corrdays days
    CC <- rep(0, ncol(rets))
    for (bb in 1:ncol(rets)){
      CC[bb] <- cor(meanport[(a-corrdays):a], rets[(a-corrdays):a, bb])
    }
    b <- which.max(CC)
    print(paste(b, CC[b]))

    # trade when the stock's recent mean return diverges from the portfolio's
    R <- mean(rets[(a-10):a, b])
    if (abs(R-m) > thresh){
      direction <- -sign(R-m)
      # reverse after three consecutive losing days in the same direction
      if (v[a]<0 && v[a-1]<0 && v[a-2]<0 && dirs[a]==dirs[a-1] && dirs[a-2]==dirs[a-1]){
        direction <- -dirs[a]
      }
      dirs[a+1] <- direction
      v[a+1] <- direction * mean(meanport[a + (1:days)] - rets[a + (1:days), b])
    }
  }
  print(paste('SD=', sd(v), 'mean=', mean(v), 'av sd=',
              sd(meanport), 'av mean=', mean(meanport)))
  plot(cumsum(v))
  # rough Sharpe versus SPY (column 12 of rets)
  (sum(v) - sum(rets[, 12])) / max(0.0001, sd(v) * sqrt(254))
}

 

It seems clearer now that the Einstein static universe can remove dark energy and dark matter, solve the cosmological constant problem, and explain the CMB anisotropy as well.  Euclidean quantum field theory via the Osterwalder-Schrader axioms can be formulated for it following Jaffe-Ritter.  This leaves only the hierarchy problem, which is definitely easier to address in this situation: the hierarchy between gravity and electromagnetism should be a simple function of the radius of the static universe.  A more interesting issue is whether my initial (and naively excited) idea is true: that quantization itself is a CONSEQUENCE of a static Einstein universe and perhaps determines quantum field theory.  So this is a great problem.  While it may seem as naive to say that a static spherical geometry can describe quantum field theory as the pre-Keplerian idea that the orbits of planets must be perfect circles, in this case, since Planck’s constant is a universal quantization of ALL energy in the universe and not only the energy that emanates from a particular atom, this is actually a NATURAL CONJECTURE if we rewind the clock back to 1900.  If we returned to 1900 armed with the knowledge that the universe IS an Einstein static universe and showed Planck the cosmic background radiation, he might have decided that the entire universe must be a compact blackbody, and that his quantum hypothesis must then apply to the entire universe.  In other words, a static Einstein (or S4) universe then justifies Planck’s constant itself naturally.

The main hypothesis is that the matter distribution in a static universe, in a static pattern, is sufficient to produce the observed CMB anisotropy patterns (which have been used to justify the entire Inflation/Big Bang model).  The exercise here is not to fit the data exactly but to produce a simple simulation model of galaxy distributions on a rectangle of latitudes/longitudes, graph the frequency distributions in the same way as the CMB anisotropy graph (using spherical harmonics coefficients), and note the pattern of peaks.  To keep the problem computationally tractable, we consider a 100 by 100 grid.  We use the Cox model to distribute galaxies: choose points randomly according to a Poisson distribution and then replace each point by a line segment whose length is drawn from an exponential distribution and whose rotation angle is chosen uniformly on the circle.
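For readers who want to cross-check the construction independently of the Octave script below, here is a compact Python sketch of the same Cox-type segment process (a rough re-implementation, not the original code; the function name and the 41-sample discretization of each segment are my choices):

```python
import numpy as np

def cox_segment_field(q0=100, lam=100, seed=0):
    """Poisson counts on a q0 x q0 grid; each nonzero cell is smeared
    along a line segment with exponential half-length and a rotation
    angle chosen uniformly on the circle, as described above."""
    rng = np.random.default_rng(seed)
    h = rng.poisson(lam, size=(q0, q0)).astype(float)
    g = h.copy()
    for k in range(q0):
        for m in range(q0):
            if h[k, m] <= 0:
                continue
            half = 2.0 * rng.exponential(1.0)      # segment half-length
            theta = rng.uniform(0.0, 2.0 * np.pi)  # rotation angle
            # unique integer offsets along the segment [-half, half]
            offs = {(int(round(half * np.cos(theta) * d)),
                     int(round(half * np.sin(theta) * d)))
                    for d in np.linspace(-1.0, 1.0, 41)}
            for a, b in offs:
                if (a or b) and 0 < k + a < q0 - 1 and 0 < m + b < q0 - 1:
                    g[k + a, m + b] += h[k, m]
    return g

field = cox_segment_field(q0=30, lam=10)
```

As in the Octave version, each cell's mass is added to every grid cell the segment passes through, so heavy cells seed elongated overdense filaments.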

pkg load statistics
pkg load gsl
pkg load image
global q0
q0=100 

function pts=fillPoints( A, B)
 pts = [];
 if isempty(B),
 return
 endif
 dx = B(1)-A(1);
 dy = B(2)-A(2);
 for delta = 0:0.001:1,
 pts = [pts;round(A(1)+dx*delta),round(A(2)+dy*delta)];
 endfor
 pts=unique(pts,"rows");
endfunction

g1 = poissrnd(100,q0,q0);
g2 = 0;
g = g1+g2;
h=g;
# choose a random angle and draw a line proportional to mass
alpha = 1.7
for k=1:q0,
 for m=1:q0,
 if h(k,m) > 0,
 mass = exprnd(1)*2;
 theta = unifrnd(0,1)*2*pi;
 ptx = mass*cos(theta);
 pty = mass*sin(theta);
 fillPts = fillPoints( [-ptx,-pty],[ptx,pty] );
 if length(fillPts)>0,
 for r=1:rows(fillPts),
 a=fillPts(r,1);
 b=fillPts(r,2);
 if abs(a)>0 && abs(b)>0 && k+a>1 && m+b>1 && k+a<q0 && m+b<q0,
 g(k+a,m+b) = g(k+a,m+b)+h(k,m);
 endif
 endfor
 disp([k,m,g(k,m)])
 fflush(stdout)
 endif
 endif
 endfor
endfor
 
g=imsmooth(g,"Gaussian",2,2);
sum(sum(g))
#g = (g-min(min(g)));
g=g/sum(sum(g));
g=g*100/max(max(g));
g=g-mean(mean(g));
save g.mat g;
# value at a spherical harmonic

function v=sphCoeff( l, g)
 global q0;
 v = 0;
 for j=1:q0,
 for k=1:q0,
 w1 = cos(2*pi*j/q0);
 w2 = cos(pi*k/q0);
 x = gsl_sf_legendre_array( 1,l, w1, -1 );
 fnval = mean(x*w2);
 if ~isnan(fnval*g(j,k)),
 v = v + fnval*g(j,k);
 endif
 endfor
 endfor
 disp([l,v])
 fflush(stdout)
endfunction

function v=sumSphCoeff( l, g)
 v=sphCoeff(l,g);
 v=abs(v)^2
endfunction


valsByL = zeros(1,100);

for k=2:30,
 p=k
 valsByL(1,k)=sqrt(p*(p+1))*sumSphCoeff(p,g);
endfor

plot( 2:30, valsByL(2:30));

The graph has a sequence of peaks that is qualitatively quite similar to the observed anisotropy of the CMB.

simCMBanisotropy

Actual CMB anisotropies have peaks at much higher frequencies, which in principle can be matched by a larger-scale simulation.  What is clear is that the qualitative pattern of relative peak heights can arise from the mass/galaxy distributions alone in a static universe.  The mechanisms for the CMB anisotropy are gravitational scattering and the gravitational redshifts due to matter.

CMB-LCDM-w-WMAP
So the picture that makes the most sense is a static universe with a static distribution of matter.  What is missing between our simulation and the actual data is the use of the actual distribution of matter to determine the anisotropy.

The major issue is that the Standard Model makes the wrong deduction: that the matter in the universe was CAUSED by anisotropy in the CMB, which has some primordial existence.  Much more sensible is the opposite conclusion: the CMB is nothing other than thermal equilibrium in a static universe, and the matter distribution perturbs it.  This explanation is far more conservative and parsimonious.  There are no deep secrets of the origin of the universe in the CMB.  The universe is a static steady-state system which produces the CMB through starlight.  In fact, there need have been no difference from the 3K temperature in the past at all.

 

SUCH A CHILDISH PLAY

Sometimes it does seem like my entire life is a childish play, because I simply lacked the genius that led Shakespeare’s characters to be immortalized as names of Uranus’s moons.  On the other hand, the great Titan Prometheus only got a little moon of Saturn, and the universe is very large.  I am definitely closer to salvaging Eternity from Oblivion by the overthrow of the Big Bang, even though I never seriously entered the arena of science in a proper way, except for brief positions at Biospect/Predicant, where I had reached the rank of Scientist II before being ingloriously ejected by the ambitions of some punk kid, returning to New York as a quant who failed to produce anything better than a prediction model for commodities based on inflation, which was a result of note.  It is clear to me now that EVERY true result is of a great deal of interest.  My life is a disaster in too many ways, but such a childish play indeed.

My first job was at Lehman Brothers Fixed Income Research, in Andy Morton’s group.  I was hard-working and naive about what was required then for vertical ascent.  I came with a mathematical background but untrained in science, and I spent my time coding.  My perspective on science has matured since, so now I appreciate the hard work of getting nature to follow quantitative models, as I attempt to get a static Einstein universe to produce the CMB anisotropy.  What is clear is that the hard part of numerical work remains as much drudgery as I had on the trading floor of Lehman: 13 hours in the neon light punctuated by dark skies.  What has not changed when making Octave models for CMB anisotropy (as opposed to derivative pricing models) is that the annoyance of the coding problems is dwarfed by the difficulty of producing a model that can fit the observed anisotropy.  Tonight, I can’t solve the problem: find a distribution of point masses on a three-sphere such that the gravitational redshifts alone reproduce the anisotropy.  So much for the coding problem.

The sample-frequency-concentrated-distribution looks quite good but not in a realistic FREQUENCY SCALE.

pkg load statistics
pkg load gsl
pkg load image

global n=10000
global q0=100
global points=zeros(n,2)
global X = mvnrnd( [0,0,0,0],diag([1,1,1,1]),n);

for i=1:n,
 X(i,:)=X(i,:)/norm(X(i,:),2);
endfor

# stereographic projection + normalize

function z=pwrrnd(k,n)
 u=unifrnd(0,1,n,1);
 z=(1-u).^(1/(1-k))*0.001;
endfunction

global masses = lognrnd(0,4,n,1)*50;

function d=direction(x1,x2)
 d=x2-x1;
 d=d/norm(d,2);
endfunction

function d=distance(x1,x2)
 d=acos(dot(x1,x2));
endfunction
 
function f=gforce( x1,x2, m1,m2)
 f=m1*m2*direction(x1,x2)/distance(x1,x2)^2;
 #f=0
endfunction

timestep=10

if 0,
for a=1:n,
 force_on_x = 0;
 
 x1 = X(a,:);
 m1 = masses(a);
 for b=1:n,
 x2 = X(b,:);
 d=distance(x1,x2);
 m2=masses(b);
 if d>0 && d<pi,
 force_on_x = force_on_x + gforce(x1,x2,m1,m2);
 endif
 endfor
 x = x1 + 0.5*force_on_x*timestep^2/m1;
 x = x/norm(x,2);
 X(a,:)=x;
endfor
endif
 
global z = zeros(n,3);
for i=1:n,
 y=X(i,:);
 y1=y(1);
 y2=y(2);
 y3=y(3);
 y4=y(4);
 zp=[ y1/(1-y4), y2/(1-y4),y3/(1-y4) ];
 z(i,:) = zp/norm(zp,2);
 
 # select from log-normal
 #u = lognrnd(0,1);
 #if u>5,
 #z(i,:) = [0,0,0];
 #endif
endfor

function t=fitangle(x)
 t=x;
 while t>2*pi,
 t=t-2*pi;
 endwhile
 while t<0,
 t=t+2*pi;
 endwhile
endfunction
 
 

function b=bin_s2point( z )
 global q0;
 if norm(z)<0.001,
 b=[0,0];
 return
 endif
 theta = atan2( z(2), z(1));
 theta = fitangle(theta);
 phi = atan2( sqrt(z(1)^2+z(2)^2), z(3));
 phi = fitangle(phi);
 thetan = floor(theta*q0/(2*pi));
 phin = floor(phi*q0/(2*pi));
 b=[thetan,phin];
endfunction
# create a spherical grid in longitudes/latitudes
global g = zeros(q0,q0);

global refpoint = [1,0,0,0]

function dox()
 global n;
 global g;
 global X;
 global masses;
 for i = 1:n,
 global refpoint;
 global z;
 global q0;
 global points;
 dg = zeros(q0,q0);

 cds = bin_s2point(z(i,:));
 cds(1)=mod(cds(1),q0);
 cds(2)=mod(cds(2),q0);
 disp([cds(1),cds(2)])
 points(i,:)=cds;
 fflush(stdout)
 #sdir = sign(direction(refpoint,X(i,:));
 dist = distance(refpoint,X(i,:));
 #g(cds(1)+1,cds(2)+1) = g(cds(1)+1,cds(2)+1)+1;
 m1=masses(i);
 dg(cds(1)+1,cds(2)+1) = m1/dist;
 dg = imsmooth( dg, "Gaussian", ceil(2),ceil(2));
 #disp(sum(sum(dg)))
 #fflush(stdout)
 dg(1,1)=0;
 g = g+dg;
 endfor
endfunction

dox()
sum(sum(g))
#g = (g-min(min(g)));
g=g/sum(sum(g));


# value at a spherical harmonic

function v=sphCoeff( l,m, g)
 v = 0;
 q0=20;
 for j=1:q0,
 for k=1:q0,
 w1 = cos(2*pi*j/q0);
 w2 = exp(1i*2*pi*k/q0);
 fnval = gsl_sf_legendre_sphPlm( l, m, w1 )*w2;
 if ~isnan(fnval*g(j,k)),
 v = v + fnval*g(j,k);
 endif
 endfor
 endfor
 disp([l,m,v])
 fflush(stdout)
endfunction

function v=sumSphCoeff( l, g)
 v=0;
 for m=0:1
 v = v+sphCoeff(l,m,g);
 endfor
 v=abs(v)^2
endfunction


valsByL = zeros(1,1000);

for k=1:1000,
 valsByL(1,k)=sumSphCoeff(200*k,g);
endfor

plot( 1:1000, valsByL);

The simplest explanation of the uniformity of the cosmic background radiation is thermal equilibrium in a STATIC universe; in fact, the Einstein static universe model is infinitely more plausible than inflation theories where things happened during the beginning of the universe (whatever that means).  The Standard Model of Cosmology is completely incredible in the sense that it makes very little sense.  I bet that a relatively simple model, where the anisotropy of the CMB is explained by effects like gravitational redshift due to mass clusters, could fit the anisotropies with less work than what went into fitting inflation models.  Unlike the Standard Model of Particle Physics, the cosmological models are not based on experiments at all.

In my last blog, I considered the radius of a static Einstein universe containing only CMB radiation at 2.7K.  Here we are interested in the radius of static Einstein universes with realistic values for the matter density.  Published estimates of the matter density (for references see this) are \rho=10^{-30} g/cm^3 (Tipler 1987), \rho=4-18\times 10^{-30} g/cm^3 (Guth 1987), and \rho=5\times 10^{-30} g/cm^3.  We need to convert to kg/m^3, so the order becomes 10^{-30} g/cm^3 = 10^{-27} kg/m^3, and solve the equation

\frac{8}{3c^2} \pi G \rho R^2 = 1 = k

  • \rho=10^{-27} kg/m^3 corresponds to a radius of 13.38 Gpc
  • \rho=5\times 10^{-27} kg/m^3 corresponds to a radius of 6 Gpc
  • \rho=18\times 10^{-27} kg/m^3 corresponds to a radius of 3.154 Gpc
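These radii follow from solving the equation above for R, i.e. R = sqrt(3c^2 / (8 pi G rho)).  A quick back-of-the-envelope check in Python (my own sketch, with standard SI constants; it reproduces radii close to the quoted values, with small differences depending on the constants used):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
GPC = 3.0857e25  # metres per gigaparsec

def static_radius_matter(rho):
    """Solve (8 pi G rho / (3 c^2)) R^2 = 1 = k for R, in Gpc."""
    return math.sqrt(3 * c * c / (8 * math.pi * G * rho)) / GPC

for rho in (1e-27, 5e-27, 18e-27):
    print(rho, static_radius_matter(rho), "Gpc")
```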

Here we have to worry about the negative-pressure issue, i.e. whether the second Friedmann equation for a static universe is satisfied.

\frac{\ddot{R}}{R} = -\frac{4\pi G}{3}(\rho+3p/c^2) = 0

Unlike radiation, for which \rho_{rad}+3p/c^2=0, we don’t automatically get the cancellation for pure matter, where p=0.  So here we need a cosmological constant to produce the negative pressure, and we can invoke quantum field theory on a static Einstein spacetime to obtain one.  The Friedmann equations with a cosmological constant are

0 = \frac{8\pi G}{3}\rho - k c^2/R^2 + \Lambda/3

0 = -\frac{4\pi G}{3}(\rho + 3p) + \Lambda/3

In the case of interest to us, where \rho \sim 10^{-27}, the pressure due to radiation is much smaller: for the 2.7K CMB, p \sim 10^{-31} (obtained from the Stefan-Boltzmann law and dividing by c^2).  We fix p = 10^{-31}, let \rho vary, and, eliminating \Lambda between the two equations, solve for R from

kc^2/R^2 = \frac{4\pi G}{3}( 3\rho+3p)

In this case we get the following radii for static universes:

  • \rho= 1\times 10^{-27} corresponds to 10.92 Gpc
  • \rho= 5\times 10^{-27} corresponds to 4.88 Gpc
  • \rho= 18\times 10^{-27} corresponds to 2.57 Gpc
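The combined equation simplifies to k c^2/R^2 = 4 pi G (rho + p), so R = c / sqrt(4 pi G (rho + p)).  The same kind of Python check as before (my sketch, standard SI constants; the results land close to the quoted radii, with percent-level differences depending on the constants used):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
GPC = 3.0857e25  # metres per gigaparsec

def static_radius_with_lambda(rho, p=1e-31):
    """Solve k c^2/R^2 = (4 pi G/3)(3 rho + 3 p) = 4 pi G (rho + p)
    for R (with k = 1), in Gpc."""
    return c / math.sqrt(4 * math.pi * G * (rho + p)) / GPC

for rho in (1e-27, 5e-27, 18e-27):
    print(rho, static_radius_with_lambda(rho), "Gpc")
```

Note that with the cosmological constant included the radii come out smaller than in the pure-matter calculation above, since the effective source term grows from (8/3) pi G rho to 4 pi G (rho + p).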

This is a purely CLASSICAL picture that does not include quantum field theory effects.  When the pressure p increases, the radii increase as well.  For static Einstein universes, the vacuum energy of, for example, massless neutrinos is reasonable, so it could explain slightly larger radii.