]]>

The code is in Octave. The first example of anisotropy for the model is this little graph:

artificial-simple-anisotropy-static-universe-gravitational-redshiftonly

pkg load statistics
pkg load gsl
pkg load image

n = 10000;
q0 = 200;

# sample n points uniformly on the unit three-sphere in R^4
X = mvnrnd([0,0,0,0], diag([1,1,1,1]), n);
for i = 1:n
  X(i,:) = X(i,:)/norm(X(i,:),2);
endfor

# power-law distributed sample (defined for completeness; not used below)
function z = pwrrnd(k, n)
  u = unifrnd(0, 1, n, 1);
  z = (1-u).^(1/(1-k))*0.001;
endfunction

masses = exprnd(1, n, 1);

function d = direction(x1, x2)
  d = x2 - x1;
  d = d/norm(d,2);
endfunction

function d = distance(x1, x2)
  # great-circle distance between unit vectors
  d = acos(dot(x1, x2));
endfunction

function f = gforce(x1, x2, m1, m2)
  f = m1*m2*direction(x1,x2)/distance(x1,x2)^2;
  #f = 0
endfunction

# crude single-step gravitational evolution on the sphere
timestep = 0.1;
for a = 1:n
  force_on_x = 0;
  x1 = X(a,:);
  m1 = masses(a);
  for b = 1:n
    x2 = X(b,:);
    d = distance(x1, x2);
    m2 = masses(b);
    if d > 0 && d < pi
      force_on_x = force_on_x + gforce(x1, x2, m1, m2);
    endif
  endfor
  # displace the point and project back onto the sphere
  X(a,:) = x1 + 0.5*force_on_x*timestep^2/m1;
  X(a,:) = X(a,:)/norm(X(a,:),2);
endfor

# stereographic projection + normalize: S^3 -> S^2
z = zeros(n,3);
for i = 1:n
  y = X(i,:);
  zp = [y(1)/(1-y(4)), y(2)/(1-y(4)), y(3)/(1-y(4))];
  z(i,:) = zp/norm(zp,2);
endfor

function t = fitangle(x)
  # reduce an angle to [0, 2*pi)
  t = mod(x, 2*pi);
endfunction

function b = bin_s2point(z)
  theta = fitangle(atan2(z(2), z(1)));
  phi = fitangle(atan2(sqrt(z(1)^2 + z(2)^2), z(3)));
  thetan = floor(theta*100/(2*pi));
  phin = floor(phi*100/(2*pi));
  b = [thetan, phin];
endfunction

# create a spherical grid in longitudes/latitudes
# (only the first 100x100 cells of the q0 x q0 grid are populated)
g = zeros(q0, q0);
refpoint = [1,0,0,0];
for i = 1:n
  cds = bin_s2point(z(i,:));
  cds(1) = mod(cds(1), 100);
  cds(2) = mod(cds(2), 100);
  sdir = sign(direction(refpoint, X(i,:)));  # computed but not used below
  dist = distance(refpoint, X(i,:));         # computed but not used below
  dg = zeros(q0, q0);
  dg(cds(1)+1, cds(2)+1) = 1;
  dg = imsmooth(dg, "Gaussian", ceil(masses(i)), ceil(masses(i)));
  g = g + dg;
endfor

#g = (g - min(min(g)));
#g = g/sum(sum(g));
g = imsmooth(g, "Gaussian", 10, 10);

# value of the grid against a spherical harmonic
function v = sphCoeff(l, m, g)
  v = 0;
  q0 = 100;
  for j = 1:q0
    for k = 1:q0
      w1 = cos(2*pi*j/100);
      w2 = exp(1i*2*pi*k/100);
      fnval = gsl_sf_legendre_sphPlm(l, m, w1)*w2;
      v = v + fnval*g(j,k);
    endfor
  endfor
endfunction

function v = sumSphCoeff(l, g)
  v = 0;
  for m = 0:l
    v = v + sphCoeff(l, m, g);
  endfor
  v = abs(v)^2;
endfunction

valsByL = zeros(1, 100);
for k = 1:100
  valsByL(1,k) = sumSphCoeff(20*k, g);
endfor
plot(1:100, valsByL)

]]>

which satisfy . The Christoffel symbols of the first kind are computed from $\Gamma_{abc} = \tfrac{1}{2}\left(\partial_b g_{ac} + \partial_c g_{ab} - \partial_a g_{bc}\right)$.

There are too many of these to compute exhaustively so we note first that

unless

Therefore the only nonzero Christoffel symbols are those with two 2's and one 1 among the indices. For example,

and

$\Gamma_{122}=\Gamma_{212}=\Gamma_{221} = -H e^{-2Ht}$

Then we have to raise the first index which has the solution

Now the geodesic equation is $\ddot x^{a} + \Gamma^{a}_{\;bc}\,\dot x^{b}\dot x^{c} = 0$.

For coordinates we have

I don’t know how to solve this system of equations in general. But the light cone condition in this case is

which we plug into the first equation . This is solved by integrating , so and .

The second equation now simplifies when we plug in .

Again and and .

]]>

]]>

- Doppler effect (universe is expanding)
- Tired light theory (light traveling from distant sources getting absorbed and re-emitted by dust)

My new explanation, now several years old, is that the universe is static with fixed curvature. My preferred model is a membrane in a four-sphere, but the explanation works equally well in the original Einstein static three-sphere. The point is that spherical waves do not have an inverse relation between frequency and wavelength. It is easiest to see this by comparing a circle of radius $R$ with a three-sphere of the same radius. The standing waves on the circle have Laplacian eigenvalues $C n^2$, while on the three-sphere they are $C\,n(n+2)$. For light traveling a large distance on a sphere, the resulting wavelength will seem to grow linearly with distance even though no physical feature of the universe has changed. Thus my explanation of the redshift is that it is a MATHEMATICAL ARTIFACT of expecting an inverse wavelength-frequency relationship, while in fact it is just an effect of curvature.
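The two spectra can be compared numerically. The following is a minimal sketch (the constant $C$ and the radius are set to 1, and the reading of the mismatch as an apparent redshift is the interpretation argued above, not derived here):

```python
import math

# Flat-space intuition pairs mode number n with a frequency proportional to n;
# on the three-sphere the Laplacian eigenvalues are n(n+2), so the frequency
# goes as sqrt(n(n+2)) instead.
def circle_freq(n):
    return float(n)

def three_sphere_freq(n):
    return math.sqrt(n * (n + 2))

# The fractional mismatch between the two spectra decays like 1/n, so the
# discrepancy is concentrated in the long-wavelength (small-n) modes.
for n in (1, 10, 100):
    mismatch = three_sphere_freq(n) / circle_freq(n) - 1
    print(n, mismatch)
```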

One could then bring up the evidence for acceleration from measurements of supernovae. Well, this is actually a real redshift, but it is gravitational redshift. Heavier objects have a higher redshift due to time dilation according to Einstein, and in fact there is work addressing this issue: gravitationalredshift. So once you correct for this, there is no acceleration of any expansion.

DARK ENERGY does not exist. It is simply the curvature of a static, spherical, eternal universe. And the Big Bang singularity does not exist either. By the way, it is well known that steady-state theorists predicted the temperature of the cosmic background radiation more accurately, and before Gamow:

]]>

where $v$ is the orbital velocity and $M$ is the central mass. It is NOT the total mass $M$. It is the enclosed mass $M(r)$. So the issue is what is a reasonable model of the mass distribution of a galaxy in terms of radius $r$. The most reasonable hypothesis is that $M(r) \propto r^2$, assuming the galaxy is approximately two-dimensional. Then the expected velocity rotation curve is $v \propto r^{1/2}$.
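The expected power law follows from a one-line computation, assuming a constant surface density $\sigma$ (a constant introduced here only for illustration): $v(r) = \sqrt{G M(r)/r}$ with $M(r) = \pi \sigma r^2$ gives $v(r) = \sqrt{\pi G \sigma}\; r^{1/2}$, so the rotation curve rises as the square root of the radius rather than falling off in Keplerian fashion.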

So all the graphs people present showing anomalous rotational velocities are severely misleading, because the observed galaxy rotations are actually extremely close to the above relation. A discrepancy between the observed velocities and the above relation can easily be explained by GRAVITATIONAL REDSHIFT. The gravitational redshift formula is approximately $z \approx GM/(rc^2)$ (see Gravitational redshift).

So the approximate velocity shift due to gravitational redshift is just this multiplied by the speed of light. I will illustrate by a simple procedure in R how to fit a correction of this form to actual data, so that the relation $v \propto r^{1/2}$ has statistical significance on actual data, which can be found here: Galaxy rotation data. So take for example the data for DDO 64:

      r    v verr
1   1.5  6.3  4.9
2   4.5  7.6  3.0
3   7.5  9.9  2.6
4  10.5 13.8  1.4
5  13.5  9.1  2.5
6  16.5 16.6  3.2
7  19.5 14.2  2.5
8  25.5 13.3  1.2
9  28.5 22.4  2.4
10 31.5 25.2  5.0
11 34.5 32.0  4.0
12 40.5 30.8  1.7
13 43.5 41.6  2.9
14 46.5 44.6  2.3
15 49.5 36.7  4.0
16 52.5 51.2  3.9
17 55.5 44.9  1.8
18 58.5 53.6 13.0
19 61.5 48.1  9.8
20 64.5 59.8  9.0

The columns are radius, velocity, and velocity error. We consider a power-law relationship, so in R we summarize a linear regression of log-velocity versus log-radius:

> summary(lm(log(v)~log(r)))

Call:
lm(formula = log(v) ~ log(r))

Residuals:
     Min       1Q   Median       3Q      Max
-0.59982 -0.09126  0.01336  0.19051  0.54301

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.02706    0.21837   4.703 0.000177 ***
log(r)       0.66710    0.06551  10.183 6.75e-09 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.2843 on 18 degrees of freedom
Multiple R-squared:  0.8521, Adjusted R-squared:  0.8439
F-statistic: 103.7 on 1 and 18 DF,  p-value: 6.752e-09

So we see that the relationship that fits (with a very significant p-value) is a power law $v \propto r^{0.667}$. This is a bit higher than the simple classical model suggests, i.e. $v \propto r^{0.5}$. So let's assume that this deviation is due to gravitational redshift, which by the same model will be based on the mass-by-radius formula for a two-dimensional disc, $M(r) \propto r^2$; the constant differs in its various occurrences above, since we are only interested in the power of $r$ in this exercise. Well, a little experimentation gives us:

> summary(lm(log(v-0.0059*r^2)~log(r)))

Call:
lm(formula = log(v - 0.0059 * r^2) ~ log(r))

Residuals:
     Min       1Q   Median       3Q      Max
-0.64957 -0.08524 -0.00853  0.19738  0.35968

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.27580    0.20315   6.280 6.38e-06 ***
log(r)       0.50058    0.06095   8.214 1.68e-07 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.2645 on 18 degrees of freedom
Multiple R-squared:  0.7894, Adjusted R-squared:  0.7777
F-statistic: 67.46 on 1 and 18 DF,  p-value: 1.681e-07

So if we consider the action of gravitational redshift to lower the velocities, we recover approximately $v \propto r^{1/2}$ quite easily. It is not hard to check that this is true of the other examples as well. The significance of this exercise is that there is in fact no missing mass at all if we assume the mass is approximately distributed as $M(r) \propto r^2$, which can be obtained from the luminous matter without the need for any mysterious dark matter. We do not need to modify the laws of gravity (which is what many people have done to explain the anomalous rotation curves). We don't need to introduce dark matter either. The problem is that the 'expected Keplerian velocities' are a red herring, since they assume that objects distributed inside the galaxy can treat the center as carrying the whole mass. This assumption is silly. A point at a given radius from the center should obviously experience the enclosed mass as a function of radius. So this tells us that standard Newtonian gravity, with Einstein's gravitational redshift as a correction and WITHOUT dark matter, explains the galaxy rotations without any problems. There is no evidence of dark matter from galaxy rotation velocities.
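The two fits quoted above can be re-checked outside R. The following is a hypothetical Python re-implementation of the same log-log least-squares slope, using the DDO 64 table printed earlier (the helper `loglog_slope` is introduced here for illustration and is not part of the post's R workflow):

```python
import math

# DDO 64 data from the table above: radius and velocity columns
r = [1.5, 4.5, 7.5, 10.5, 13.5, 16.5, 19.5, 25.5, 28.5, 31.5,
     34.5, 40.5, 43.5, 46.5, 49.5, 52.5, 55.5, 58.5, 61.5, 64.5]
v = [6.3, 7.6, 9.9, 13.8, 9.1, 16.6, 14.2, 13.3, 22.4, 25.2,
     32.0, 30.8, 41.6, 44.6, 36.7, 51.2, 44.9, 53.6, 48.1, 59.8]

def loglog_slope(r, v, corr=0.0):
    """Least-squares slope of log(v - corr*r^2) against log(r)."""
    xs = [math.log(ri) for ri in r]
    ys = [math.log(vi - corr * ri**2) for ri, vi in zip(r, v)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx)**2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

print(loglog_slope(r, v))          # ~0.667, matching the first R fit
print(loglog_slope(r, v, 0.0059))  # ~0.50 after the redshift correction
```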

Another example to ensure that we are not making an unjustified extrapolation. Data for UGC 1281:

#UGC,1281
5.0,2.5,2.2
7.0,11.2,2.8
9.0,7.9,3.1
13.0,15.6,1.2
15.0,12.9,3.3
17.0,8.8,2.3
23.0,9.6,1.8
25.0,12.1,4.8
27.0,8.9,1.3
29.0,7.0,3.5
31.0,14.1,5.0
33.0,18.3,1.4
35.0,16.2,4.0
37.0,14.1,4.5
39.0,12.1,2.9
41.0,20.4,2.3
43.0,19.3,1.8
45.0,24.3,2.1
47.0,25.4,5.8
49.0,17.8,6.3
51.5,33.5,3.7
54.5,30.1,2.7
60.5,26.5,1.2
63.5,33.6,3.1
66.5,45.8,1.9
69.5,37.8,5.5

Without any gravitational redshift adjustment, we get a law $v \propto r^{0.74}$:

> summary(lm(log(v-0.00*r^2)~log(r)))

Call:
lm(formula = log(v - 0 * r^2) ~ log(r))

Residuals:
     Min       1Q   Median       3Q      Max
-0.79011 -0.22565  0.00927  0.20012  0.73392

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.2390     0.3605   0.663    0.514
log(r)        0.7416     0.1040   7.129 2.28e-07 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.3665 on 24 degrees of freedom
Multiple R-squared:  0.6793, Adjusted R-squared:  0.6659
F-statistic: 50.83 on 1 and 24 DF,  p-value: 2.279e-07

With some loss of goodness-of-fit, but without losing significance of the power-law fit, we can obtain $v \propto r^{0.5}$:

> summary(lm(log(v-0.0035*r^2)~log(r)))

Call:
lm(formula = log(v - 0.0035 * r^2) ~ log(r))

Residuals:
     Min       1Q   Median       3Q      Max
-1.01665 -0.23224  0.02452  0.20987  0.70148

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.7161     0.4291   1.669 0.108115
log(r)        0.5051     0.1238   4.080 0.000431 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.4363 on 24 degrees of freedom
Multiple R-squared:  0.4095, Adjusted R-squared:  0.3849
F-statistic: 16.65 on 1 and 24 DF,  p-value: 0.0004306

We are treating gravitational redshift with the mass distribution $M(r) \propto r^2$ for simplicity. Our central point here is that galaxy rotation curves in the classical model with this mass distribution should follow $v \propto r^{1/2}$ rather than $v \propto r^{-1/2}$, so the anomaly in these rotation curves is a small perturbation of the power law away from the expected $r^{1/2}$. Gravitational redshift can account for the discrepancy. There should be no need to modify gravitational laws and no need to hypothesize dark matter to explain the rotation curves.

]]>

The trouble with quantum gravity is that it is an inaccessibly hard problem only because it is the wrong question on arbitrary Lorentzian spacetimes. The problem is a bad cosmology: there is no expansion and no early universe. The problem of quantum gravity is that it attempts to solve an irrelevant and overly general problem on an expanding spacetime, while in fact the problem can be solved when the spacetime is static and globally hyperbolic with a compact Cauchy surface. In this setting we quickly discover that no dark energy is necessary, in the sense that the vacuum energy (which is well-defined for static spacetimes, not suffering the ambiguities of more general Lorentzian situations) is approximately equal to the measured cosmological constant. Dark energy is just the curvature of a static universe. If one proceeds by a Wheeler-DeWitt equation for quantum gravity, this can also be solved explicitly via Heun functions and the like. Thus specializing the quantum gravity problem to a specific static spacetime resolves its most pressing issues. It seems quite dangerous for thousands of physicists to get lost in speculative scenarios that will most likely share the fate of the thousands of papers on solutions of the Einstein equations without physical relevance. The quantum gravity problem is a GLOBAL problem in the sense that it must necessarily be affected by the geometry of the entire cosmos. A local solution is very unlikely to have merit. The hardness of the problem seems to come from trying to force local considerations to lead to gravity.

]]>

where the curved-space distance quantities replace the distance quantities in the flat-spacetime formula. Specifically, for curvature one considers the Robertson-Walker metric.

All this is known material that I took from S_Hilbert_Gravitational_Lensing. The interesting question is whether this can be used to fit the actual data. The data can be found here: gravitational lensing data

The dataset has three columns of interest to us: (i) the redshift of the object deflecting the light, (ii) the redshift of the source, and (iii) the 'size' of the deflection, which is essentially the Einstein radius. We can show that a static universe with a finite curvature radius produces smaller estimates for the mass of the deflecting object than flat space does. The significance of this is that the 'dark matter' inferred from gravitational lensing may possibly be explained by the curvature of the universe.

Here is the way to test this: we compare the inferred mass from the flat space model:

with the corresponding inferred mass formula in curved space:

Now it is easy enough to produce a reduction of inferred mass that is proportional to the inferred mass from a flat universe by changing the parameter in the Friedmann-Robertson-Walker metric. For example, if we set and consider the metric

where the curvature is set by , then we can take more or less any radius we want, including , which is a generic estimate of the radius of the universe, and find that the curved-space estimate of the mass is a fixed fraction of the flat-space mass with the same Einstein radius. That is, if dark matter were universally proportional to the mass of the object, then we could remove any dark matter halo needed to explain the gravitational lensing in a STATIC Robertson-Walker metric. This is very simple to check on an actual dataset. Read the lensing dataset into R and extract the columns for zs, zl, and size (), then just set and compare with . The ratio will obviously be determined by . So this is a trivial way to get rid of the necessity of a halo of dark matter to explain the gravitational lensing. What is more important is that a static constant (spatial) curvature universe automatically removes DARK ENERGY as well and explains redshift. This is the right sort of idea because you do NOT need modified gravity, at least for the case of lensing. I suspect that if we stop thinking of the 'cosmological redshift' as expansion, a revisiting of the galactic rotation curves will probably clarify whether there really is a serious discrepancy with the more classical predictions.
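A minimal numerical sketch of the comparison, under two assumptions made only for illustration: the lens mass inferred from a fixed Einstein radius scales as $D_l D_s / D_{ls}$, and the closed static model simply replaces each flat distance $d$ by $R\sin(d/R)$. The function names are hypothetical; the post's own R analysis of the dataset is not reproduced here.

```python
import math

def curved(d, R):
    """Closed-universe analogue of a flat comoving distance d: R*sin(d/R)."""
    return R * math.sin(d / R)

def mass_ratio(d_l, d_s, R):
    """Ratio of curved-space to flat-space inferred lens mass for the same
    Einstein radius, taking the mass proportional to D_l * D_s / D_ls."""
    d_ls = d_s - d_l
    flat = d_l * d_s / d_ls
    curv = curved(d_l, R) * curved(d_s, R) / curved(d_ls, R)
    return curv / flat

# With distances a noticeable fraction of the curvature radius,
# the curved-space mass estimate comes out below the flat one.
print(mass_ratio(0.5, 1.0, 1.0))  # < 1
```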

]]>

The Euler-Lagrange equation reduces to , since the Lagrangian has no terms containing $A_\mu$ itself. The trick is to rewrite

first and then apply the product rule

This is

Therefore $\partial_\mu F^{\mu\nu} = 0$, which are Maxwell's equations (the source-free half). The nice thing about this trick is that we can use the standard product rule without problems once we lower the indices of $F^{\mu\nu}$. This is pretty useful because deriving Euler-Lagrange equations for Lagrangians is basic to all quantum field theory models.
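For reference, the computation sketched above can be summarized in one line, assuming the standard Maxwell Lagrangian $\mathcal{L} = -\tfrac14 F_{\mu\nu}F^{\mu\nu}$ with $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$: $\frac{\partial \mathcal{L}}{\partial(\partial_\mu A_\nu)} = -F^{\mu\nu}$ and $\frac{\partial \mathcal{L}}{\partial A_\nu} = 0$, so the Euler-Lagrange equation gives $\partial_\mu F^{\mu\nu} = 0$.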

]]>

For the actual universe we could try and the constants , $\gamma = 7.6\times 10^{81}$, which are too large to handle directly. We can do something simpler: first, implement a simple version of the triconfluent Heun function with the exponential factor, without normalization. Then we can use some simple numbers for the parameters to get a sensible shape for the wavefunction; we are interested in the case of a radiation-filled universe, for which the wavefunction is as in the graphic above.

huenwave <- function(xi, a, b, c, z) {
  # Truncated series for a triconfluent-Heun-type solution, multiplied by
  # the exponential factor exp(-ef); exploratory, with no normalization.
  N <- 5
  ef <- 0.5*(xi^3*z^3 - c*xi*z)
  prx <- function(A, B) {
    # multiply A and B through their log-magnitudes, folding in exp(-ef)
    sA <- sign(A)
    sB <- sign(B)
    xA <- log(abs(A))
    xB <- log(abs(B))
    return(sA*sB*exp(xA + xB - ef))
  }
  e <- rep(0, N + 1)
  s <- rep(0, N + 1)
  e[1] <- 1
  e[2] <- 0
  e[3] <- -a/2
  s[1] <- 1
  s[2] <- c/2
  s[3] <- (c^2 - a)/6
  # initial partial sums of the two independent series solutions
  T1 <- prx(e[1], z) + prx(e[2], z^2) + prx(e[3], z^3)
  T2 <- prx(s[1], z^2) + prx(s[2], z^3) + prx(s[3], z^4)
  print(T1)
  for (i in 3:N) {
    ip <- i + 1
    e[ip] <- i*c*e[i] - a*e[i-1] - (b + 6 - 3*i)*e[i-2]
    e[ip] <- e[ip]/(ip*i)
    s[ip] <- i*c*s[i] - a*s[i-1] - (b + 3 - 3*i)*s[i-2]
    s[ip] <- s[ip]/(ip*(ip+1))
    print(paste('ip=', ip, 'T2=', T2, 's=', s[ip], 'zp=', z^(ip+1)))
    T1 <- T1 + prx(e[ip], z^ip)
    print(T1)
    T2 <- T2 + prx(s[ip], z^(ip+1))
  }
  T1
}

For $a = 10^{30}$, $b = 0$, and $c = 100$ with $\xi = 1$, which is not realistic but gives a clear shape, we have

plot(t, sapply(t, function(x) huenwave(1, 1e30, 0, 100, x)), type='l')

So this is a sanity check. There is no obvious rigorous scaling argument I can see that allows me to take the full width at half maximum for these parameters and scale it to the actual radius of the universe, but it is still comforting to see that, for unrealistic parameters, the Wheeler-DeWitt equation does have some localization in the radius.

]]>