
The classical orbital velocity formula is

v = \sqrt{GM/r}

where v is the orbital velocity and M is the central mass.  Note the scaling: it is NOT v = C r^{-3/2}; it is v = C r^{-1/2} for a fixed central mass M.  So the issue is what a reasonable model of the mass distribution of a galaxy is, as a function of the radius r.  The most reasonable hypothesis is M(r) = C r^2, assuming the galaxy is essentially two-dimensional.  Then the expected velocity rotation curve is

v= C r^{1/2}
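This follows directly from the two assumptions:

```latex
v(r) = \sqrt{\frac{G\,M(r)}{r}} = \sqrt{\frac{G\,C\,r^{2}}{r}} = \sqrt{GC}\; r^{1/2}
```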

So all the graphs that people present showing anomalous rotational velocities are severely misleading, because the observed galaxy rotations are actually extremely close to the above relation.  A discrepancy between the observed velocities and the above relation can be easily explained by GRAVITATIONAL REDSHIFT, which is approximately

z_{\rm approx} = \frac{1}{2}\frac{GM}{c^2 r}

The approximate velocity offset due to gravitational redshift is just this multiplied by the speed of light.  I will illustrate with a simple procedure in R how to get v = C\sqrt{r} from actual data, with statistical significance and R^2 \approx 0.8-0.9 (the data can be found here: Galaxy rotation data).  So take for example the data for DDO 64:

r v verr
1 1.5 6.3 4.9
2 4.5 7.6 3.0
3 7.5 9.9 2.6
4 10.5 13.8 1.4
5 13.5 9.1 2.5
6 16.5 16.6 3.2
7 19.5 14.2 2.5
8 25.5 13.3 1.2
9 28.5 22.4 2.4
10 31.5 25.2 5.0
11 34.5 32.0 4.0
12 40.5 30.8 1.7
13 43.5 41.6 2.9
14 46.5 44.6 2.3
15 49.5 36.7 4.0
16 52.5 51.2 3.9
17 55.5 44.9 1.8
18 58.5 53.6 13.0
19 61.5 48.1 9.8
20 64.5 59.8 9.0

The columns are radius, velocity, and velocity error.  We consider a power-law relationship, so in R we run a linear regression of log-velocity on log-radius:
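First enter the two columns used in the fit (a direct transcription of the table above; the errors are omitted since the log-log fit is unweighted):

```r
# DDO 64 rotation-curve data: radius r and velocity v
r <- c(1.5, 4.5, 7.5, 10.5, 13.5, 16.5, 19.5, 25.5, 28.5, 31.5,
       34.5, 40.5, 43.5, 46.5, 49.5, 52.5, 55.5, 58.5, 61.5, 64.5)
v <- c(6.3, 7.6, 9.9, 13.8, 9.1, 16.6, 14.2, 13.3, 22.4, 25.2,
       32.0, 30.8, 41.6, 44.6, 36.7, 51.2, 44.9, 53.6, 48.1, 59.8)

# unweighted power-law fit on the log-log scale
fit <- lm(log(v) ~ log(r))
coef(fit)   # the slope is the fitted power of r
```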

> summary(lm(log(v)~log(r)))

Call:
lm(formula = log(v) ~ log(r))

Residuals:
     Min       1Q   Median       3Q      Max 
-0.59982 -0.09126  0.01336  0.19051  0.54301 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.02706    0.21837   4.703 0.000177 ***
log(r)       0.66710    0.06551  10.183 6.75e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.2843 on 18 degrees of freedom
Multiple R-squared:  0.8521,    Adjusted R-squared:  0.8439 
F-statistic: 103.7 on 1 and 18 DF,  p-value: 6.752e-09
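In other words, the fitted relationship (in the units of the dataset) is approximately

```latex
v \;\approx\; e^{1.0271}\, r^{0.6671} \;\approx\; 2.79\, r^{0.667}
```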

So we see that the relationship that fits (with a very significant p-value) is a power law v = C r^{0.667}. This is a bit higher than the simple classical model suggests, i.e. v = C r^{0.5}. So let's assume that this deviation is due to gravitational redshift, which by the same reasoning contributes a velocity offset proportional to M(r)/r; taking M(r) = C r^3 for the redshift term, the offset scales as r^2. (The constant C is of course different in its various occurrences, since we are only interested in the power of r in this exercise.) A little experimentation gives us:

> summary(lm(log(v-0.0059*r^2)~log(r)))

Call:
lm(formula = log(v - 0.0059 * r^2) ~ log(r))

Residuals:
     Min       1Q   Median       3Q      Max 
-0.64957 -0.08524 -0.00853  0.19738  0.35968 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.27580    0.20315   6.280 6.38e-06 ***
log(r)       0.50058    0.06095   8.214 1.68e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.2645 on 18 degrees of freedom
Multiple R-squared:  0.7894,    Adjusted R-squared:  0.7777 
F-statistic: 67.46 on 1 and 18 DF,  p-value: 1.681e-07
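The "little experimentation" above can be automated. A minimal sketch of a grid search for the subtraction constant k (the term k r^2 standing in for the gravitational-redshift velocity) that drives the fitted exponent to 1/2, using the DDO 64 columns:

```r
# DDO 64 data as above
r <- c(1.5, 4.5, 7.5, 10.5, 13.5, 16.5, 19.5, 25.5, 28.5, 31.5,
       34.5, 40.5, 43.5, 46.5, 49.5, 52.5, 55.5, 58.5, 61.5, 64.5)
v <- c(6.3, 7.6, 9.9, 13.8, 9.1, 16.6, 14.2, 13.3, 22.4, 25.2,
       32.0, 30.8, 41.6, 44.6, 36.7, 51.2, 44.9, 53.6, 48.1, 59.8)

# fitted log-log slope after subtracting k*r^2 (NA if any adjusted v <= 0)
slope_for <- function(k) {
  w <- v - k * r^2
  if (any(w <= 0)) return(NA_real_)
  unname(coef(lm(log(w) ~ log(r)))[2])
}

ks <- seq(0, 0.008, by = 1e-4)
slopes <- sapply(ks, slope_for)
kbest <- ks[which.min(abs(slopes - 0.5))]
kbest   # should land near the hand-picked 0.0059
```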

So if we consider the action of gravitational redshift to lower the observed velocities, we recover approximately v = C r^{1/2} quite easily. It is not hard to check that this is true of the other examples as well. The significance of this exercise is that there is in fact no missing mass at all if we assume the mass is approximately distributed as M(r) = C r^2, which can be obtained from the luminous matter without any mysterious dark matter. We do not need to modify the laws of gravity (which is what many people have done to explain the anomalous rotation curves), and we do not need to introduce dark matter either. The problem is that the 'expected Keplerian velocities' are a red herring, since they treat the mass seen by an object inside the galaxy as though it were all concentrated at the center. That assumption is wrong: a point at a given radius should obviously experience the enclosed mass as a function of radius. So standard Newtonian gravity, with Einstein's gravitational redshift as a correction and WITHOUT dark matter, explains the galaxy rotations without any problems. There is no evidence of dark matter from galaxy rotation velocities.

Here is another example, to ensure that we are not making an unjustified extrapolation: the data for UGC 1281 (radius, velocity, velocity error):

#UGC,1281
5.0,2.5,2.2
7.0,11.2,2.8
9.0,7.9,3.1
13.0,15.6,1.2
15.0,12.9,3.3
17.0,8.8,2.3
23.0,9.6,1.8
25.0,12.1,4.8
27.0,8.9,1.3
29.0,7.0,3.5
31.0,14.1,5.0
33.0,18.3,1.4
35.0,16.2,4.0
37.0,14.1,4.5
39.0,12.1,2.9
41.0,20.4,2.3
43.0,19.3,1.8
45.0,24.3,2.1
47.0,25.4,5.8
49.0,17.8,6.3
51.5,33.5,3.7
54.5,30.1,2.7
60.5,26.5,1.2
63.5,33.6,3.1
66.5,45.8,1.9
69.5,37.8,5.5

Without any gravitational redshift adjustment, we get a law v \sim C r^{0.7416}:

> summary(lm(log(v-0.00*r^2)~log(r)))

Call:
lm(formula = log(v - 0 * r^2) ~ log(r))

Residuals:
     Min       1Q   Median       3Q      Max 
-0.79011 -0.22565  0.00927  0.20012  0.73392 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   0.2390     0.3605   0.663    0.514    
log(r)        0.7416     0.1040   7.129 2.28e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.3665 on 24 degrees of freedom
Multiple R-squared:  0.6793,    Adjusted R-squared:  0.6659 
F-statistic: 50.83 on 1 and 24 DF,  p-value: 2.279e-07

With a loss of goodness-of-fit but without losing significance of the power law fit, we can obtain v_{adj} \sim C r^{1/2}:

> summary(lm(log(v-0.0035*r^2)~log(r)))

Call:
lm(formula = log(v - 0.0035 * r^2) ~ log(r))

Residuals:
     Min       1Q   Median       3Q      Max 
-1.01665 -0.23224  0.02452  0.20987  0.70148 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   0.7161     0.4291   1.669 0.108115    
log(r)        0.5051     0.1238   4.080 0.000431 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.4363 on 24 degrees of freedom
Multiple R-squared:  0.4095,    Adjusted R-squared:  0.3849 
F-statistic: 16.65 on 1 and 24 DF,  p-value: 0.0004306

We are treating the gravitational redshift term with mass distribution M(r) = C r^3 for simplicity. Our central point here is that galaxy rotation curves in the classical model with mass distribution M(r) = C r^2 should follow v \sim C r^{1/2} rather than v \sim C r^{-1/2}, so the anomaly in these rotation curves is a small perturbation of the power law around the expected \beta = 1/2. Gravitational redshift can account for the discrepancy. There should be no need to modify the laws of gravity, and no need to hypothesize dark matter, to explain the rotation curves.

 

The problem of quantum gravity is impossibly hard because it is too general.  The quantum gravity problem is not trivial, but it is not nearly as hard if we consider a static spacetime such as the Einstein static model \mathbf{R}\times S^3(a).  Quantum field theory on this specific spacetime is essentially a solved problem.  In fact, even Euclidean quantum field theory following Osterwalder-Schrader quantization and the Wightman axioms has been worked out in this case (see the papers of Jaffe-Ritter and of Osterwalder-Schrader).  In presentations of quantum field theory on curved spacetimes, the Einstein static spacetime often appears as a relatively simple example (see Fewster's lecture notes).

The trouble with quantum gravity is that it is an inaccessibly hard problem only because it is posed as the wrong question, on arbitrary Lorentzian spacetimes.  The underlying issue is a bad cosmology: in a static universe there is no expansion and no early universe.  The quantum gravity program is attempting to solve an irrelevant and overly general problem on an expanding spacetime, while in fact the problem can be solved when the spacetime is static and globally hyperbolic with a compact Cauchy surface.  It is in this setting that we quickly discover that no dark energy is necessary, in the sense that the vacuum energy (which is well-defined for static spacetimes and does not suffer the ambiguities of more general Lorentzian situations) is approximately equal to the measured cosmological constant.  The dark energy is just the curvature of a static universe.  If one proceeds via a Wheeler-DeWitt equation for quantum gravity, then this can also be solved explicitly, via triconfluent Heun functions.  Thus specializing the quantum gravity problem to a specific static spacetime resolves its most pressing issues.  It seems quite dangerous for thousands of physicists to get lost in speculative scenarios which will most likely share the fate of the thousands of papers on solutions of the Einstein equations without physical relevance.  The quantum gravity problem is a GLOBAL problem, in the sense that it must necessarily be affected by the geometry of the entire cosmos.  A purely local solution is very unlikely to have merit.  The hardness of the problem seems to be due to trying to force local considerations to produce gravity.

I suspect that the gravitational lensing data used to support dark matter can be explained by gravitational lensing in a closed static (Einstein) spacetime.  To make progress, we begin with a formula for the Einstein radius in a curved spacetime.  It is

\theta_E = \sqrt{ \frac{4GM}{c^2\, a(z^L)} \cdot \frac{f^{SL}}{f^S f^L} }

where the quantities f^S,f^L,f^{SL} replace the distance quantities for the formula in the flat spacetime.  Specifically for curvature K>0 one considers the Robertson-Walker metric

g = -c^2 dt^2 + a^2( d\chi^2 + f^2_K(\chi)[ d\beta_1^2 + \cos^2(\beta_1) d\beta_2^2])

f_K(\chi) = \frac{1}{\sqrt{K}} \sin(\sqrt{K} \chi)

f^{S} = f_K(\chi^S), f^L = f_K(\chi^L), f^{SL} = f_K(\chi^S-\chi^L).

All this is known material that I took from S. Hilbert's Gravitational Lensing.  The interesting question is whether this can be used to fit the actual data, which can be found here: gravitational lensing data

The dataset has three columns of interest to us: (i) the redshift of the object deflecting the light, (ii) the redshift of the source, and (iii) the 'size' of the deflection, which is essentially the Einstein radius.  We can show that a static universe with a finite radius produces smaller estimates for the mass of the deflecting object than flat space does.  The significance of this is that the 'dark matter' inferred from gravitational lensing may possibly be explained by the curvature of the universe.

Here is the way to test this: we compare the inferred mass from the flat space model:

\theta_E \cdot \left(10^{11.09}\, M_{\odot}\right)^{1/2} \left( \frac{D^L D^S}{D^{LS}\,\mathrm{Gpc}} \right)^{1/2} = \sqrt{M}

with the corresponding inferred mass formula in curved space:

\theta_E \cdot \left(10^{11.09}\, M_{\odot}\right)^{1/2} \left( \frac{1}{a}\, \frac{f^L f^S}{f^{LS}\,\mathrm{Gpc}} \right)^{1/2} = \sqrt{M}

Now it is easy enough to produce a reduction of the inferred mass that is proportional to the inferred mass from a flat universe by changing the parameter a in the Friedmann-Robertson-Walker metric.  For example, set a = 1.25 and consider the metric

g = -c^2 dt^2 + a^2 ( d\chi^2 + f_K^2(\chi)[d\beta_1^2 + \cos^2(\beta_1) d\beta_2^2])

where the curvature is set by K = 1/R^2.  Then we can take more or less any radius we want, including R = 28.5 Gpc = 28.5 \times 3 \times 10^{25} m, which is a generic estimate of the radius of the universe, and find that the curved-space estimate of the mass is 1/4 of the flat-space mass for the same Einstein radius.  That is, if dark matter were universally proportional to the mass of the object, then we could remove any dark matter halo needed to explain the gravitational lensing in a STATIC Robertson-Walker metric.  This is very simple to check on the actual dataset: read the lensing dataset into R, extract the columns for zs, zl, and size (\theta_E), set a = 1.25, and compare \theta_E \cdot (D^L D^S/D^{LS})^{1/2} with \theta_E \cdot (\frac{1}{a} \frac{f^L f^S}{f^{LS}})^{1/2}.  The ratio will obviously be determined by a.  So this is a trivial way to get rid of the necessity of a halo of dark matter to explain the gravitational lensing.  What is more important is that a static constant-curvature (spatial) universe automatically removes DARK ENERGY as well and explains the redshift.  This is the right sort of idea because you do NOT need modified gravity, at least for the case of lensing.  I suspect that if we stop thinking of the 'cosmological redshift' as expansion, a revisiting of the galactic rotation curves will probably clarify whether there really is a serious discrepancy with the more classical predictions.
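A minimal sketch of the comparison in R. The function f_K and the two distance factors are straight from the formulas above; the conversion of the redshift columns to comoving coordinates \chi is model-dependent, so the \chi values below are illustrative placeholders, NOT values derived from the dataset:

```r
# f_K for positive curvature K = 1/R^2; for chi << R this reduces to chi
fK <- function(chi, R) R * sin(chi / R)

# distance factor entering the inferred mass: flat vs curved static model
flat_factor   <- function(DL, DS, DLS) DL * DS / DLS
curved_factor <- function(chiL, chiS, a, R) {
  fL  <- fK(chiL, R)
  fS  <- fK(chiS, R)
  fLS <- fK(chiS - chiL, R)
  (1 / a) * fL * fS / fLS
}

# illustrative placeholder geometry (Gpc)
R <- 28.5; a <- 1.25
chiL <- 1.0; chiS <- 2.0
ratio <- curved_factor(chiL, chiS, a, R) / flat_factor(chiL, chiS, chiS - chiL)
ratio   # close to 1/a = 0.8 here, since chi << R
```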

 

 

 

L = -\frac{1}{4} F^{ab} F_{ab}

F_{ab} = \partial_a A_b - \partial_b A_a

The Euler-Lagrange equations reduce to \partial_a\, \partial L/\partial(\partial_a A_b) = 0, since the Lagrangian has no terms containing A_a itself.  The trick is to first rewrite

F^{ab}F_{ab} = \eta^{ac} \eta^{bd} F_{ab} F_{cd}

with all indices down, and then apply the product rule, using fresh indices \mu,\nu for the differentiation:

\partial L/\partial(\partial_\mu A_\nu) = -\frac{1}{4} \eta^{ac} \eta^{bd} \left( \frac{\partial F_{ab}}{\partial (\partial_\mu A_\nu)} F_{cd} + F_{ab} \frac{\partial F_{cd}}{\partial (\partial_\mu A_\nu)} \right)

Since \partial F_{ab}/\partial(\partial_\mu A_\nu) = \delta_a^\mu \delta_b^\nu - \delta_a^\nu \delta_b^\mu, each of the two terms contracts to F^{\mu\nu} - F^{\nu\mu} = 2F^{\mu\nu} by the antisymmetry of F, so

\partial L/\partial(\partial_\mu A_\nu) = -\frac{1}{4}\left( 2F^{\mu\nu} + 2F^{\mu\nu} \right) = -F^{\mu\nu}

Therefore \partial_\mu F^{\mu\nu} = 0, which are the vacuum Maxwell equations.  The nice thing about this trick is that we can use the standard product rule without problems once we lower the indices of F^{ab}. This is pretty useful because deriving the Euler-Lagrange equations of a Lagrangian is basic to all quantum field theory models.
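As a numerical sanity check (a sketch, not part of the derivation), one can treat \partial_a A_b as a 4x4 matrix of independent entries and verify by finite differences that the gradient of L equals -F^{\mu\nu}, so the Euler-Lagrange equations are \partial_\mu F^{\mu\nu} = 0 regardless of the overall constant:

```r
eta <- diag(c(-1, 1, 1, 1))     # Minkowski metric, signature (-,+,+,+)
set.seed(1)
dA <- matrix(rnorm(16), 4, 4)   # dA[a, b] stands for partial_a A_b

# the Maxwell Lagrangian density as a function of dA
Lag <- function(dA) {
  Fl <- dA - t(dA)              # F_{ab}, lower indices
  Fu <- eta %*% Fl %*% eta      # F^{ab} = eta^{ac} eta^{bd} F_{cd}
  -0.25 * sum(Fu * Fl)
}
Fup <- eta %*% (dA - t(dA)) %*% eta

# central finite-difference gradient of Lag in each entry dA[i, j]
h <- 1e-5
g <- matrix(0, 4, 4)
for (i in 1:4) for (j in 1:4) {
  dAp <- dA; dAp[i, j] <- dAp[i, j] + h
  dAm <- dA; dAm[i, j] <- dAm[i, j] - h
  g[i, j] <- (Lag(dAp) - Lag(dAm)) / (2 * h)
}
max(abs(g + Fup))   # numerically ~ 0, i.e. the gradient is -F^{ab}
```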

In theory there is an exact solution of the Wheeler-DeWitt equation on a static Einstein universe, a function of the radius a, given in terms of the triconfluent Heun function; this is the solution due to Vieira and Bezerra (ExactSolutionsWheelerDeWittClosedFRWMetrics).

For the actual universe we could try a = 10^{26} with the constants \alpha = 1.45\times 10^{163} and \gamma = 7.6\times 10^{81}, which are too large to handle directly.  We can do something simpler: first, implement a simple version of the triconfluent Heun function with the exponential factor but without normalization.  Then we can use some moderate values of \alpha and \gamma to get a sensible shape for the wavefunction; we are interested in the case of a radiation-filled universe, for which the wavefunction is as in the graphic above.

huenwave <- function(xi, a, b, c, z) {
  N <- 5  # number of recurrence steps; increase for better accuracy
  # asymptotic exponential factor, folded into each term to limit overflow
  ef <- 0.5 * (xi^3 * z^3 - c * xi * z)
  # multiply A by B in log space and apply exp(-ef)
  prx <- function(A, B) {
    if (A == 0 || B == 0) return(0)
    sign(A) * sign(B) * exp(log(abs(A)) + log(abs(B)) - ef)
  }
  # series coefficients of the two independent solutions
  e <- rep(0, N + 1)
  s <- rep(0, N + 1)
  e[1] <- 1; e[2] <- 0;     e[3] <- -a / 2
  s[1] <- 1; s[2] <- c / 2; s[3] <- (c^2 - a) / 6
  # partial sums: e[k] multiplies z^(k-1), s[k] multiplies z^k
  T1 <- prx(e[1], 1) + prx(e[2], z) + prx(e[3], z^2)
  T2 <- prx(s[1], z) + prx(s[2], z^2) + prx(s[3], z^3)
  for (i in 3:N) {
    ip <- i + 1
    e[ip] <- (i * c * e[i] - a * e[i - 1] - (b + 6 - 3 * i) * e[i - 2]) / (ip * i)
    s[ip] <- (i * c * s[i] - a * s[i - 1] - (b + 3 - 3 * i) * s[i - 2]) / (ip * (ip + 1))
    T1 <- T1 + prx(e[ip], z^(ip - 1))
    T2 <- T2 + prx(s[ip], z^ip)
  }
  T1
}

For \alpha = 10^{30} and \gamma = 100 with \xi = 1, which is not realistic but gives a clear shape, we have

t <- seq(0, 3, by = 0.01)  # plotting grid; range chosen just for illustration
plot(t, sapply(t, function(x) huenwave(1, 1e30, 0, 100, x)), type = 'l')

[Figure: example shape of the Wheeler-DeWitt wavefunction]

So this is a sanity check.  There is no obvious rigorous scaling argument I can see that would let me take the full width at half maximum for these parameters and scale it to the actual radius of the universe, but it is still comforting to see that, for these unrealistic parameters, the Wheeler-DeWitt wavefunction does show some localization in the radius.

I am quite convinced that the actual universe is a static Einstein universe and that the entire edifice of Big Bang cosmology is based on extrapolations of an imaginary 'early universe'.  So I want to know: if we look at an EXACT solution of the Wheeler-DeWitt quantum gravity equation, which gives a wavefunction of the universe as a function of radius, does this wavefunction look sharply peaked or diffuse?  Given that there exists an exact solution (ExactSolutionsWheelerDeWittClosedFRWMetrics) in terms of TRICONFLUENT HEUN functions (which are totally new to me), I thought it would not take too long to plot the wavefunction and gauge whether it is sharply peaked near some fixed value, say 10^{26} meters (the standard radius that seems to be accepted), or whether it is diffuse.  Well, this plotting issue is not completely trivial, because despite the importance of these Heun functions there do not seem to be public implementations, so, just as with the Mittag-Leffler function, I may have to implement them myself; this will be a project of a few days perhaps.  A static Einstein universe would be a great solution to many fundamental problems in physics, so this is definitely a great step: the explanation of redshift and the solution of the cosmological constant problem are trivial in a static Einstein universe, and as you can see there are EXACT EXPLICIT solutions of the fundamental quantum gravity equations that can be examined in quantitative detail.  It would not be completely crazy to think that this is the solution to the problem of quantum gravity, and that the missing piece of the puzzle was a fixed geometry of the universe that is not expanding and not complicated.  This is actually the conservative idea, unlike supersymmetry, dark energy, inflation, and the rest of the epicycles that are the rage.

Reference: Some Special Solutions of Biconfluent and Triconfluent Heun Equations in Elementary Functions by Extended Nikiforov-Uvarov Method

The Union compilation of Type Ia supernovae (you can get the dataset here: supernova dataset) that was used to declare that the expansion of the universe is accelerating is explained very simply by a static Einstein universe.  The raw dataset comes with redshift and distance modulus \mu, so the distance in parsecs is d = 10^{\mu/5 + 1}.  The plot of distance versus redshift looks like this graphic:

[Figure: supernova distance versus redshift]

The x-axis is distance d in parsecs and the y-axis is the redshifted wavelength (1+z)\, c/\nu_{emit}, where \nu_{emit} is the frequency of the H-alpha line.

The adjustment is made as follows.  We assume a universe of radius 10^{26} m and consider the adjusted redshifted wavelength (1+z)c(1/\nu_{emit} - C d/\sqrt{n(n+2)}), where n = \sqrt{\nu_{emit} R_{univ}/c}.  This is not necessarily the optimal choice theoretically, but it is a rough translation of the comparison between the eigenvalues of the Laplacian on a circle of radius 2\pi R_{univ}, which scale as A n^2, and the eigenvalues of the Laplacian on a three-sphere, which scale as A n(n+2).  Then I chose C to minimize the variance of the corrected redshifts.  The variance of the redshifted wavelengths in the dataset (in meters) is 4.157 \times 10^{-14}, and the procedure leads to C = 7.76\times 10^{-4}, a value which is not fundamental.  What is important is that the variance of the corrected redshift is \sigma_{corr}^2 = 7.233 \times 10^{-6}.  The variance reduction for the observed redshifted wavelengths is 0.9826, and the resulting distance versus corrected-redshift graph is the following:

[Figure: distance versus corrected redshift]

This graph for the supernova dataset would not be indicative of any accelerated expansion of the universe.  The important point in this exercise is that the correction to the observed redshift is made with an extremely simple assumption: the 'cosmological redshift' is due ONLY to the static spherical geometry of the spatial section, say in a comoving frame, which is the static Einstein spacetime with a fixed radius.  The explanation I have for the redshift is NOT tired light.  It is simply that if space is curved, then waves on that space will not have the usual wavelength-distance relation, and therefore we would expect a redshift that is not a physical phenomenon of interest beyond the curvature itself.  Photons are assumed to follow the classical Maxwell equations, which can be modified with a quantum field theoretic model, etc.  But the main issue is that the redshift is a purely geometric phenomenon and not due to any special interactions that photons experience; all classical waves will have this feature.  So there is no expansion of the universe that needs to be explained by the redshift, and no accelerated expansion in the supernova data.  There need be no dark energy either.
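The variance-minimization step described above can be sketched in R. The data here are a synthetic stand-in (the real Union dataset supplies z and \mu); the point is only the shape of the procedure. Since the corrected wavelength is linear in C, the variance-minimizing C has a closed form, namely the regression coefficient:

```r
cl <- 3e8                                  # speed of light, m/s
nu_emit <- cl / 656.28e-9                  # H-alpha frequency, Hz
R_univ <- 1e26                             # assumed radius of the universe, m
n <- sqrt(nu_emit * R_univ / cl)

# synthetic stand-in for the dataset: redshift z and distance modulus mu
z  <- seq(0.01, 1.5, length.out = 200)
mu <- 43.2 + 5 * log10(z)                  # toy distance moduli
d  <- 10^(mu / 5 + 1)                      # distance in parsecs

# corrected wavelength is A - C*B, linear in C, so the variance-minimizing
# C is the regression coefficient of A on B
A <- (1 + z) * cl / nu_emit                # observed redshifted wavelength, m
B <- (1 + z) * cl * d / sqrt(n * (n + 2))
Copt <- cov(A, B) / var(B)

reduction <- 1 - var(A - Copt * B) / var(A)   # fraction of variance removed
```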