
Ladies and Gentlemen,

James Simons, the mathematician co-responsible for the Chern-Simons invariants (the integral of tr(A dA + (2/3) A^3) for a connection A on a compact three-manifold, which Witten related to the Jones polynomials of knots), suggests that the discoveries of market anomalies by researchers at Renaissance Technologies are not worth publishing because they lack universality.  Of course, universality is actually pretty rare in science as well.  Big Bang theory is hardly based on universal features and shares much more with the messy quantitative analysis of financial markets than its proponents would admit.

Anyway, let's dig into an anomaly of a type that is extremely well known, and see how even such a well-known type of anomaly does not disappear.  The efficient markets hypothesis is obviously a VERY rough approximation of the truth.  The laws of the markets may remain far more mysterious than the laws of the cosmos (whose mysteries can in principle be vastly reduced by removing Big Bang cosmology and adopting the Einstein static universe).

The issue at hand is pairs trading on gold companies.  The first idea that occurs is that there is a gold mining index ETF (GDX), and maybe trading components against the index could be useful.  Here is the funny thing: THIS idea does not do very well.  For this idea, ARBITRAGE traders (at D. E. Shaw, Renaissance, etc.) have cleaned up.  On the other hand, a minor variation, which I present here, is enormously profitable.  The variation is to use the index not for trading but for picking pairs.  Use 30 days of data to calculate CORRELATIONS between the components and the index.  Pick two components: the MAXIMUM correlated and the MINIMUM correlated with GDX.  Then take a position on the difference of the two stocks for a single day, every day.  More precisely, run the R code below, which gets the data and runs the strategy.  The Sharpe ratio versus the S&P 500 is 15.4, so this strategy is worth investing in by any respectable quant fund (funds are reasonably happy to run Sharpe 4+).

But what this tells us is that ARBITRAGEURS are UNABLE to close gaps of this type, even though gaps of this type are ALREADY the focus of a great deal of attention.  In other words, the EFFICIENT MARKETS HYPOTHESIS is much like the HUMAN RATIONALITY HYPOTHESIS: extremely misleading.  Just as the Enlightenment hope that human rationality would become transcendent in some way led to not much, since the rationality of human beings is limited and ultimately not worth centralizing at all, so the EFFICIENT MARKETS HYPOTHESIS is pure dogma.

So without further ado, the code and the graph of the backtest.  The function cstrat2 with parameters thresh=0.0, holding days=1, and a 30-day correlation window produces the results.  There is no cheating (in terms of hidden look-ahead bias), although it is not trivial to catch such bugs.  This is a real (backtest) result which could be implemented live.  The interesting issue is precisely the efficient markets hypothesis.  How long have people been doing arbitrage with PAIRS?  Say, since the late 1970s.  So in principle, PAIRS should not produce ANY profits, especially in a sector (gold mining stocks) that has been of interest for a long time.  And yet a simple variation of basic pairs trading STILL produces, without any great sophistication, a sizeable arbitrage opportunity.  I am tempted to conclude that the human capacity for ARBITRAGE is much more limited than the efficient markets hypothesis assumes.

library(quantmod)
library(tseries)
 
symbols<-c("ABX","NEM","NCM","GG","FNV","AEM","SLW","GOLD","RGLD","AU")
indexName<-"GDX"
 
 
# Get historical data and construct table
 
constructComponentRets<-function(syms,StartDate)
{
Stocks = lapply(syms, function(sym) {
  dailyReturn(na.omit(getSymbols(sym, from=StartDate, auto.assign=FALSE)))
})
Stocks<-do.call(merge, Stocks)
Stocks
}
 
constructIndexRets<-function(indexName,StartDate)
{
    dailyReturn(getSymbols(indexName,from=StartDate,auto.assign=FALSE))
}
 
 
componentRets<-constructComponentRets(symbols,"2007-01-03")
indexRets<-constructIndexRets(indexName,"2007-01-03")
# S&P 500 daily returns, used as the benchmark in the Sharpe calculations below
spyRets<-constructIndexRets("SPY","2007-01-03")
 
 
cstrat<-function(thresh,days,corrdays){
    # some best parameters 0.0,4,100
    rets<-componentRets
    rets[is.na(rets)]<-0
    Z<-merge(componentRets[,5],indexRets)
    meanport<-Z[,2]
    meanport[is.na(meanport)]<-0
    N<-nrow(rets)
    v<-rep(0,N)
    dirs<-rep(0,N)
    bprev<-2
 
for (a in (corrdays+1):(N-days-1)){
m<-mean(meanport[(a-10):a])
if (is.na(m)){m=0}
 
# choose the component least correlated with the index
# over the past corrdays days
 
CC<-rep(0,ncol(rets))
for (bb in 1:ncol(rets)){
CC[bb]<-cor( meanport[(a-corrdays):(a-1)],rets[(a-corrdays):(a-1),bb])
}
 
b<-bprev
if (!is.na(sum(CC))){
    b<-which.min(CC)[1]
    bprev<-b
}
else {
    print(CC)
}
R<-mean(rets[(a-10):a,b])
if (is.na(R)){R<-0}
print(paste(R,m))
if (abs(R-m)>thresh){
 
direction<- sign(R-m)
DinWords<-"Long Port Short Stock"
if (direction<0){
DinWords<-"Short Portfolio Long Stock"
}
 
print(paste(symbols[[b]],b,CC[b],DinWords))
 
#tryCatch({
#if (v[a]< 0 && v[a-1]<0 && v[a-2]<0 && dirs[a] == dirs[a-1] && dirs[a-2]==dirs[a-1]){
#direction<- -dirs[a]
#}
#},error=function(e){}
#)
    dirs[a+1]<-direction
    val<-mean(meanport[(a+1):(a+days)]-rets[(a+1):(a+days),b])
    if (is.na(val)){
        val<-0
    }
    v[a+1]<- direction*val
}
}
print(paste('SD=',sd(v),'mean=',mean(v),'av sd=',
sd(meanport),'av mean=',mean(meanport)))
# Sharpe versus the S&P 500 benchmark, annualized over 254 trading days
sharpe<-(sum(v)-sum(spyRets))/max(0.0001,sd(v)*sqrt(254))
print(paste('Sharpe=',sharpe))
stratChoices<-xts(order.by=index(rets))
stratChoices<-merge(stratChoices,v)
    stratChoices<-merge(stratChoices,dirs)
    plot(cumsum(stratChoices[,1]))
stratChoices
 
}
 
 
cointegration<-function(x,y)
{
vals<-data.frame(x,y)
beta<-coef(lm(vals[,2]~vals[,1]+0,data=vals))[1]
(adf.test(vals[,2]-beta*vals[,1], alternative="stationary", k=0))$p.value
}
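The cointegration helper above is only invoked in a commented-out block of cstrat2 below, but a standalone call would look something like this (columns 1 and 2 of componentRets are arbitrary choices, purely for illustration):

# Illustrative call: Engle-Granger style ADF p-value for a candidate pair
Z<-na.omit(merge(componentRets[,1],componentRets[,2]))
pv<-cointegration(as.numeric(Z[,1]),as.numeric(Z[,2]))
print(pv)  # a small p-value suggests the spread is stationary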
 
cstrat2<-function(thresh,days,corrdays){
    # parameters used for the reported result: 0.0,1,30
    rets<-componentRets
    rets[is.na(rets)]<-0
    Z<-merge(componentRets[,5],indexRets)
    meanport<-Z[,2]
    meanport[is.na(meanport)]<-0
    N<-nrow(rets)
    v<-rep(0,N)
    dirs<-rep(0,N)
    bMinPrev<-2
    bMaxPrev<-2
for (a in (corrdays+1):(N-days-1)){
m<-mean(meanport[(a-10):a])
if (is.na(m)){m=0}
 
 
# choose the components most and least correlated with the index
# over the past corrdays days
 
CC<-rep(0,ncol(rets))
for (bb in 1:ncol(rets)){
CC[bb]<-cor( meanport[(a-corrdays):(a-1)],rets[(a-corrdays):(a-1),bb])
}
 
bmin<-bMinPrev
bmax<-bMaxPrev
if (!is.na(sum(CC))){
    bmin<-which.min(CC)[1]
    bMinPrev<-bmin
    bmax<-which.max(CC)[1]
    bMaxPrev<-bmax
}
else {
    print(CC)
}
 
# cointegration p-value
#x<-rets[1:a,bmin]
#y<-rets[1:a,bmax]
#ci.pv<-cointegration(x,y)
#print(paste(‘CIPV=’,ci.pv))
 
Rmin<-mean(rets[(a-2):a,bmin])
Rmax<-mean(rets[(a-2):a,bmax])
if (abs(Rmax-Rmin)>thresh){
 
direction<- sign(Rmax-Rmin)
DinWords<-"Long Port Short Stock"
if (direction<0){
DinWords<-"Short Portfolio Long Stock"
}
 
 
print(paste('Max:',symbols[[bmax]],bmax,CC[bmax],DinWords))
print(paste('Min:',symbols[[bmin]],bmin,CC[bmin],DinWords))
 
#tryCatch({
#if (v[a]< 0 && v[a-1]<0 && v[a-2]<0 && dirs[a] == dirs[a-1] && dirs[a-2]==dirs[#a-1]){
#direction<- -dirs[a]
#}
#},error=function(e){}
#)
    dirs[a+1]<-direction
    val<-mean(rets[(a+1):(a+days),bmax]-rets[(a+1):(a+days),bmin])
    if (is.na(val)){
        val<-0
    }
    v[a+1]<- direction*val
}
}
print(paste('SD=',sd(v),'mean=',mean(v),'av sd=',
sd(meanport),'av mean=',mean(meanport)))
# Sharpe versus the S&P 500 benchmark, annualized over 254 trading days
sharpe<-(sum(v)-sum(spyRets))/max(0.0001,sd(v)*sqrt(254))
print(paste('Sharpe=',sharpe))
stratChoices<-xts(order.by=index(rets))
stratChoices<-merge(stratChoices,v)
    stratChoices<-merge(stratChoices,dirs)
    plot(cumsum(stratChoices[,1]),main="Backtest of Gold Pairs Strategy")
stratChoices
 
}
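To reproduce the run described above, call cstrat2 with thresh=0.0, a one-day holding period, and a 30-day correlation window:

res<-cstrat2(0.0,1,30)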
In case you want to run it, the output from this code should end as follows.

[1] "Min: NCM 3 NA Short Portfolio Long Stock"
 [1] 0.7847300 0.8726714        NA 0.7700649 0.8316100 0.8164236 0.8714387 0.8393124 0.8181947 0.8939798
[1] "Max: SLW 7 0.87143867038477 Short Portfolio Long Stock"
[1] "Min: NCM 3 NA Short Portfolio Long Stock"
 [1] 0.7842892 0.8768577        NA 0.7765014 0.8296686 0.8118670 0.8751698 0.8298539 0.8248970 0.8925556
[1] "Max: SLW 7 0.875169814860507 Short Portfolio Long Stock"
[1] "Min: NCM 3 NA Short Portfolio Long Stock"
 [1] 0.7861991 0.8728876        NA 0.7659750 0.8205137 0.8068286 0.8693557 0.8199464 0.8164895 0.8846133
[1] "Max: SLW 7 0.869355689699331 Long Port Short Stock"
[1] "Min: NCM 3 NA Long Port Short Stock"
 [1] 0.8183631 0.8665196        NA 0.7701502 0.8430734 0.8178325 0.8775521 0.8059414 0.8265138 0.8878411
[1] "Max: SLW 7 0.877552110562935 Long Port Short Stock"
[1] "Min: NCM 3 NA Long Port Short Stock"
 [1] 0.8202531 0.8781273        NA 0.7692949 0.8435192 0.8186440 0.8794814 0.7999895 0.8267055 0.8876754
[1] "Max: SLW 7 0.879481365179698 Long Port Short Stock"
[1] "Min: NCM 3 NA Long Port Short Stock"
 [1] 0.8235891 0.8627736        NA 0.7591603 0.8357791 0.8093991 0.8701011 0.7784181 0.8089477 0.8688979
[1] "Max: SLW 7 0.870101141833533 Long Port Short Stock"
[1] "Min: NCM 3 NA Long Port Short Stock"
 [1] 0.8759150 0.8951543        NA 0.8318822 0.8822690 0.8754342 0.9111954 0.8150941 0.8711518 0.9181651
[1] "Max: SLW 7 0.911195353619005 Long Port Short Stock"
[1] "Min: NCM 3 NA Long Port Short Stock"
 [1] 0.8720164 0.8664675        NA 0.8263342 0.8736644 0.8710797 0.9157766 0.8174924 0.8958259 0.9101778
[1] "Max: SLW 7 0.915776562559851 Long Port Short Stock"
[1] "Min: NCM 3 NA Long Port Short Stock"
 [1] 0.8736416 0.8647396        NA 0.8306787 0.8769669 0.8696473 0.9157012 0.8185065 0.9055496 0.9093271
[1] "Max: SLW 7 0.915701227312764 Short Portfolio Long Stock"
[1] "Min: NCM 3 NA Short Portfolio Long Stock"
 [1] 0.8715562 0.8476112        NA 0.8156205 0.8880961 0.8670091 0.9140808 0.8098111 0.8994516 0.9082622
[1] "Max: SLW 7 0.914080812730942 Short Portfolio Long Stock"
[1] "Min: NCM 3 NA Short Portfolio Long Stock"
 [1] 0.8748459 0.8551964        NA 0.8194074 0.8895541 0.8697001 0.9038042 0.8324408 0.9000305 0.9113899
[1] "Max: SLW 7 0.903804164280884 Long Port Short Stock"
[1] "Min: NCM 3 NA Long Port Short Stock"
 [1] 0.8734272 0.8527967        NA 0.8157374 0.8725731 0.8668520 0.7595406 0.8312755 0.8945350 0.9084764
[1] "Max: SLW 7 0.759540622320781 Long Port Short Stock"
[1] "Min: NCM 3 NA Long Port Short Stock"
 [1] 0.8694111 0.8511296        NA 0.8224220 0.8710239 0.8688679 0.7376991 0.8208379 0.8914138 0.9042604
[1] "Max: SLW 7 0.737699051475897 Long Port Short Stock"
[1] "Min: NCM 3 NA Long Port Short Stock"
 [1] 0.8657232 0.8480939        NA 0.8205625 0.8694782 0.8663341 0.7343544 0.8207944 0.8896814 0.8984146
[1] "Max: SLW 7 0.73435444368399 Short Portfolio Long Stock"
[1] "Min: NCM 3 NA Short Portfolio Long Stock"
 [1] 0.8704843 0.8519156        NA 0.8214306 0.8751230 0.8691931 0.7453998 0.8235559 0.8977702 0.9020496
[1] "Max: SLW 7 0.745399845253296 Long Port Short Stock"
[1] "Min: NCM 3 NA Long Port Short Stock"
 [1] 0.8777785 0.8586096        NA 0.7957486 0.8798905 0.8746327 0.7461774 0.8248994 0.8915092 0.9045272
[1] "Max: SLW 7 0.746177356267977 Short Portfolio Long Stock"
[1] "Min: NCM 3 NA Short Portfolio Long Stock"
[1] "SD= 0.0173418135298884 mean= 0.00189100848160369 av sd= 0.0281268089045118 av mean= 0.000189184692073368"
[1] "Sharpe= 15.3981976285237"

Stocks: “ABX” “NEM” “GG” “AEM” “RGLD” “KGC” “GFI” “PVG” “HMY” “RIC”
“GORO”

Before we get to the actual code, the idea is simple: calculate the past 100-day correlations of the stocks with the portfolio, and do pairs trading between the portfolio and the stock with the maximum correlation, with a holding period of 4 days.  This gives a Sharpe versus the S&P 500 of > 18.0, estimated roughly as (sum(strategy return) - sum(SP500 return)) / (sigma(strategy return)*sqrt(254)), with a helper function for this sketched below.  More useful is the graph.  These types of strategies (with good returns) are actually not extremely hard to construct in backtests, although they are unstable and finicky.  In this case, using a correlation window much longer or shorter than 100 days makes the performance much worse, almost unusable.  So there is the question of whether there are fundamental laws governing such things.  If there are, they are not connected to concepts of daily correlations.
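As a sketch, the rough Sharpe estimate just described can be written as a small helper (my naming; v holds the daily strategy returns and spy the S&P 500 daily returns over the same dates):

# rough Sharpe: difference of cumulative simple returns over
# annualized strategy volatility (254 trading days)
roughSharpe<-function(v,spy){
  (sum(v)-sum(spy))/max(0.0001,sd(v)*sqrt(254))
}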

[Figure: April-2017-gold-strategy — cumulative backtest returns]

symbols = c("ABX" "NEM" "GG" "AEM" "RGLD" "KGC" "GFI" "PVG" "HMY" "RIC" 
"GORO","SPY")
getSymbols(symbols)
rets = merge(lapply(symbols, function(s){ dailyReturn(eval(parse(text=s))) })

cstrat<-function(thresh,days,corrdays){
meanport<-rowMeans(rets)
meanport[is.na(meanport)]<-0
N<-nrow(rets)
v<-rep(0,N)
dirs<-rep(0,N)
for (a in (corrdays+1):(N-days-1)){
m<-mean(meanport[(a-10):a])
if (is.nan(m)){m=0}

# choose the stock with the highest correlation
# with the portfolio over the past corrdays days

#C<-cor(rets[(a-21):a,])
#C[is.na(C)]<-0
#diag(C)<-0
#idx<-which(C==max(C),arr.ind=TRUE)[1]

CC<-rep(0,ncol(rets))
for (bb in 1:ncol(rets)){
CC[bb]<-cor( meanport[(a-corrdays):a],rets[(a-corrdays):a,bb])
}
#mb<-median(CC)
#CD<-abs(CC-mb)
b<-which.max(CC)
print(paste(b,CC[b]))
R<-mean(rets[(a-10):a,b])
if (abs(R-m)>thresh){
direction<- -sign(R-m)
if (v[a]< 0 && v[a-1]<0 && v[a-2]<0 && dirs[a] == dirs[a-1] && dirs[a-2]==dirs[a-1]){
direction<- -dirs[a]
}
dirs[a+1]<-direction
v[a+1]<- direction*mean(meanport[(a+1):(a+days)]-rets[(a+1):(a+days),b])
}
}
print(paste('SD=',sd(v),'mean=',mean(v),'av sd=',
sd(meanport),'av mean=',mean(meanport)))
plot(cumsum(v))
# column 12 of rets is SPY, the S&P 500 benchmark
(sum(v)-sum(rets[,12]))/max(0.0001,sd(v)*sqrt(254))
}

 

It seems clearer now that the Einstein static universe can remove dark energy and dark matter, solve the cosmological constant problem, and explain the CMB anisotropy as well.  Euclidean quantum field theory via the Osterwalder-Schrader axioms can be formulated on it, following Jaffe and Ritter.  This leaves only the hierarchy problem, which is definitely easier to address in this situation: the hierarchy between gravity and electromagnetism should be a simple function of the radius of the static universe.

A more interesting issue is whether my initial (and naive) excitement, that quantization itself is a CONSEQUENCE of a static Einstein universe and perhaps determines quantum field theory, is justified.  This is a great problem.  It may seem as naive to say that a static spherical geometry can describe quantum field theory as the pre-Keplerian idea that the orbits of planets must be perfect circles.  But since Planck's constant is a universal quantization of ALL energy in the universe, and not only the energy that emanates from a particular atom, this is actually a NATURAL CONJECTURE if we rewind the clock back to 1900.  If we returned to 1900 armed with the knowledge that the universe IS an Einstein static universe, and showed PLANCK the cosmic background radiation, he might have decided that the entire universe must be a compact blackbody, and that the quantum hypothesis must then apply to the entire universe.  In other words, a static Einstein (or S4) universe then justifies Planck's constant itself naturally.
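To indicate the kind of relation I have in mind, recall the classic large-number coincidence (standard textbook values; an illustration, not a derivation from the static model):

\frac{e^2}{4\pi\varepsilon_0\, G\, m_e m_p} \approx 2 \times 10^{39},
\qquad
\frac{R}{r_e} \approx \frac{10^{26}\,\mathrm{m}}{2.8 \times 10^{-15}\,\mathrm{m}} \approx 10^{40},

where R is a cosmological radius (here, the radius of the static universe) and r_e is the classical electron radius.  A hierarchy of this size tied to the radius is exactly what the static picture would need to produce.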

The main hypothesis is that the matter distribution in a static universe, in a static pattern, is sufficient to produce the observed CMB anisotropy patterns (which have been used to justify the entire Inflation/Big Bang model).  The exercise here is not to fit the data exactly, but to produce a simple simulation model of galaxy distributions on a rectangle of latitudes/longitudes, graph the frequency distribution in the same way as the CMB anisotropy graph (using spherical harmonic coefficients), and note the pattern of peaks.  To get a computationally tractable problem, we consider a 100-by-100 grid.  We use a Cox process to distribute galaxies: choose points randomly according to a Poisson distribution, then replace each point by a line segment with length drawn from an exponential distribution and a rotation angle drawn uniformly on the circle.

pkg load statistics
pkg load gsl
pkg load image
global q0
q0=100;

function pts=fillPoints( A, B)
 # rasterize the segment from A to B into integer grid points
 pts=[];
 if isempty(B),
 return
 endif
 dx = B(1)-A(1);
 dy = B(2)-A(2);
 for delta = 0:0.001:1,
 pts = [pts;round(A(1)+dx*delta),round(A(2)+dy*delta)];
 endfor
 pts=unique(pts,"rows");
endfunction

g1 = poissrnd(100,q0,q0);
g2 = 0;
g = g1+g2;
h=g;
# for each occupied cell, draw a segment with exponentially distributed
# length and a uniform random angle, then add the cell's mass along it
alpha = 1.7  # power-law exponent (defined but not used below)
for k=1:q0,
 for m=1:q0,
 if h(k,m) > 0,
 mass = exprnd(1)*2;
 theta = unifrnd(0,1)*2*pi;
 ptx = mass*cos(theta);
 pty = mass*sin(theta);
 fillPts = fillPoints( [-ptx,-pty],[ptx,pty] );
 if length(fillPts)>0,
 for r=1:rows(fillPts),
 a=fillPts(r,1);
 b=fillPts(r,2);
 if abs(a)>0 && abs(b)>0 && k+a>1 && m+b>1 && k+a<q0 && m+b<q0,
 g(k+a,m+b) = g(k+a,m+b)+h(k,m);
 endif
 endfor
 disp([k,m,g(k,m)])
 fflush(stdout)
 endif
 endif
 endfor
endfor
 
g=imsmooth(g,"Gaussian",2,2);
sum(sum(g))
#g = (g-min(min(g)));
g=g/sum(sum(g));
g=g*100/max(max(g));
g=g-mean(mean(g));
save g.mat g;
# value at a spherical harmonic

function v=sphCoeff( l, g)
 global q0;
 v = 0;
 for j=1:q0,
 for k=1:q0,
 w1 = cos(2*pi*j/q0);
 w2 = cos(pi*k/q0);
 x = gsl_sf_legendre_array( 1,l, w1, -1 );
 fnval = mean(x*w2);
 if ~isnan(fnval*g(j,k)),
 v = v + fnval*g(j,k);
 endif
 endfor
 endfor
 disp([l,v])
 fflush(stdout)
endfunction

function v=sumSphCoeff( l, g)
 v=sphCoeff(l,g);
 v=abs(v)^2
endfunction


valsByL = zeros(1,100);

for k=2:30,
 p=k
 valsByL(1,k)=sqrt(p*(p+1))*sumSphCoeff(p,g);
endfor

plot( 2:30, valsByL(2:30));

The graph has a sequence of peaks that is qualitatively quite similar to the observed anisotropy of the CMB.

[Figure: simCMBanisotropy — spherical-harmonic power spectrum of the simulated map]

Actual CMB anisotropies have peaks at much higher frequencies, which in principle can be matched by a larger-scale simulation.  What is clear is that the qualitative pattern of relative peak heights can arise from the mass/galaxy distribution alone in a static universe.  The mechanisms for the CMB anisotropy here are gravitational scattering and the gravitational redshifts due to matter.
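For reference, the weak-field formulas behind the redshift part of this mechanism (standard physics, not specific to this simulation): a photon traversing a potential perturbation \Delta\Phi is shifted by

\left|\frac{\Delta\nu}{\nu}\right| \approx \frac{|\Delta\Phi|}{c^2},
\qquad
\frac{\Delta T}{T} \approx \frac{\Delta\nu}{\nu},

since a uniform fractional frequency shift of a Planck spectrum is equivalent to a temperature shift.  Mass concentrations along the line of sight therefore imprint temperature fluctuations of order \Delta\Phi/c^2 on an otherwise uniform background.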

[Figure: CMB-LCDM-w-WMAP — observed CMB anisotropy power spectrum with the ΛCDM fit]
So the picture that makes the most sense is a static universe with a static distribution of matter.  What separates our simulation from the actual data is that we have not used the actual distribution of matter to determine the anisotropy.

The major issue is that the Standard Model of Cosmology makes the wrong deduction: that the matter in the universe was CAUSED by anisotropy in the CMB, which is granted some primordial existence.  Much more sensible is the opposite conclusion: the CMB is nothing other than thermal equilibrium in a static universe, and the matter distribution perturbs it.  This explanation is far more conservative and parsimonious.  There are no deep secrets of the origin of the universe in the CMB.  The universe is a static steady-state system which produces the CMB through starlight.  In fact, there need have been no difference from the 3K temperature in the past at all.
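As a back-of-the-envelope check (my arithmetic, following Eddington's 1926 estimate of the "temperature of interstellar space"): take the energy density of starlight to be u \approx 7.7 \times 10^{-13} erg/cm^3 and the radiation constant a = 7.57 \times 10^{-15} erg cm^{-3} K^{-4}; the equivalent blackbody temperature is

T = \left(\frac{u}{a}\right)^{1/4} \approx \left(\frac{7.7 \times 10^{-13}}{7.57 \times 10^{-15}}\right)^{1/4} \mathrm{K} \approx 3.2\,\mathrm{K},

remarkably close to the measured 2.73 K.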

 

SUCH A CHILDISH PLAY

Sometimes it does seem like my entire life is a childish play, because I simply lacked the genius that led Shakespeare's characters to be immortalized as the names of Uranus's moons.  On the other hand, the great Titan Prometheus only got a little moon of Saturn, and the universe is very large.  I am definitely closer to salvaging Eternity from Oblivion by the overthrow of the Big Bang, even though I never seriously entered the arena of science in a proper way, except in brief positions at Biospect/Predicant, where I had reached the rank of Scientist II before being ingloriously ejected by the ambitions of some punk kid, and returned to New York as a quant who failed to produce anything better than a prediction model for commodities based on inflation, which was a result of note.  It is clear to me now that EVERY true result is of a great deal of interest.  My life is a disaster in too many ways, but such a childish play indeed.

My first job was at Lehman Brothers Fixed Income Research, in Andy Morton's group.  I was hard-working and naive about what was required then for vertical ascent.  I came with a mathematical background but untrained in science, and I spent my time coding.  My perspective on science has matured since, so now I appreciate the hard work of getting nature to follow quantitative models, as I attempt to get a static Einstein universe to produce the CMB anisotropy.  What is clear is that the hard part of numerical work remains as much drudgery as it was on the trading floor of Lehman: thirteen hours in the neon light, punctuated by dark skies.  What has not changed when making Octave models for CMB anisotropy (as opposed to derivative pricing models) is that the annoyance of the coding problems is dwarfed by the difficulty of producing a model that can fit the observed anisotropy.  Tonight I can't solve the problem: find a distribution of point masses on a three-sphere such that gravitational redshifts alone reproduce the anisotropy.  So much for the coding problem.

The sampled frequency-concentrated distribution looks quite good, but not on a realistic FREQUENCY SCALE.

pkg load statistics
pkg load gsl
pkg load image

global n=10000
global q0=100
global points=zeros(n,2)
global X = mvnrnd( [0,0,0,0],diag([1,1,1,1]),n);

for i=1:n,
 X(i,:)=X(i,:)/norm(X(i,:),2);
endfor

# stereographic projection + normalize

# power-law sampler via inverse transform (defined but not used below)
function z=pwrrnd(k,n)
 u=unifrnd(0,1,n,1);
 z=(1-u).^(1/(1-k))*0.001;
endfunction

global masses = lognrnd(0,4,n,1)*50;

function d=direction(x1,x2)
 d=x2-x1;
 d=d/norm(d,2);
endfunction

function d=distance(x1,x2)
 d=acos(dot(x1,x2));
endfunction
 
function f=gforce( x1,x2, m1,m2)
 f=m1*m2*direction(x1,x2)/distance(x1,x2)^2;
 #f=0
endfunction

timestep=10

# (disabled) one explicit Euler step of gravitational clustering on the 3-sphere
if 0,
for a=1:n,
 force_on_x = 0;

 x1 = X(a,:);
 m1 = masses(a);
 for b=1:n,
 x2 = X(b,:);
 d=distance(x1,x2);
 m2=masses(b);
 if d>0 && d<pi,
 force_on_x = force_on_x + gforce(x1,x2,m1,m2);
 endif
 endfor
 x = x1+0.5*force_on_x*timestep^2/m1;
 x = x/norm(x,2);
 X(a,:)=x;
endfor
endif
 
global z = zeros(n,3);
for i=1:n,
 y=X(i,:);
 y1=y(1);
 y2=y(2);
 y3=y(3);
 y4=y(4);
 zp=[ y1/(1-y4), y2/(1-y4),y3/(1-y4) ];
 z(i,:) = zp/norm(zp,2);
 
 # select from log-normal
 #u = lognrnd(0,1);
 #if u>5,
 #z(i,:) = [0,0,0];
 #endif
endfor

function t=fitangle(x)
 t=x;
 while t>2*pi,
 t=t-2*pi;
 endwhile
 while t<0,
 t=t+2*pi;
 endwhile
endfunction
 
 

function b=bin_s2point( z )
 if abs(z)<0.001,
 b=[0,0];
 return
 endif
 global q0;
 theta = atan( z(2)/z(1));
 theta=fitangle(theta);
 phi = atan( sqrt(z(1)^2+z(2)^2)/z(3));
 phi = fitangle(phi);
 thetan = floor(theta*q0/(2*pi));
 phin = floor(phi*q0/(2*pi));
 b=[thetan,phin];
endfunction
# create a spherical grid in longitudes/latitudes
global g = zeros(q0,q0);

global refpoint = [1,0,0,0]

function dox()
 global n;
 global g;
 global X;
 global masses;
 for i = 1:n,
 global refpoint;
 global z;
 global q0;
 global points;
 dg = zeros(q0,q0);

 cds = bin_s2point(z(i,:));
 cds(1)=mod(cds(1),q0);
 cds(2)=mod(cds(2),q0);
 disp([cds(1),cds(2)])
 points(i,:)=cds;
 fflush(stdout)
 #sdir = sign(direction(refpoint,X(i,:));
 dist = distance(refpoint,X(i,:));
 #g(cds(1)+1,cds(2)+1) = g(cds(1)+1,cds(2)+1)+1;
 m1=masses(i);
 dg(cds(1)+1,cds(2)+1) = m1/dist;
 dg = imsmooth( dg, "Gaussian", ceil(2),ceil(2));
 #disp(sum(sum(dg)))
 #fflush(stdout)
 dg(1,1)=0;
 g = g+dg;
 endfor
endfunction

dox()
sum(sum(g))
#g = (g-min(min(g)));
g=g/sum(sum(g));


# value at a spherical harmonic

function v=sphCoeff( l,m, g)
 v = 0;
 q0=20; # note: only the first q0-by-q0 corner of g is summed below
 for j=1:q0,
 for k=1:q0,
 w1 = cos(2*pi*j/q0);
 w2 = exp(1i*2*pi*k/q0);
 fnval = gsl_sf_legendre_sphPlm( l, m, w1 )*w2;
 if ~isnan(fnval*g(j,k)),
 v = v + fnval*g(j,k);
 endif
 endfor
 endfor
 disp([l,m,v])
 fflush(stdout)
endfunction

function v=sumSphCoeff( l, g)
 v=0;
 for m=0:1
 v = v+sphCoeff(l,m,g);
 endfor
 v=abs(v)^2
endfunction


valsByL = zeros(1,1000);

for k=1:1000,
 valsByL(1,k)=sumSphCoeff(200*k,g);
endfor

plot( 1:1000, valsByL);

The simplest explanation of the uniformity of the cosmic background radiation is thermal equilibrium in a STATIC universe; in fact, the Einstein static universe model is infinitely more plausible than inflation theories in which things happened during the beginning of the universe (whatever that means).  The Standard Model of Cosmology is completely incredible, in the sense that it makes very little sense.  I bet that a relatively simple model, where the anisotropy of the CMB is explained by things like gravitational redshift due to mass clusters, could fit the anisotropies with less work than what went into fitting inflation models.  Unlike the Standard Model of particle physics, the cosmological models are simply not based on experiments.