
Archive for September, 2012

Every repetition of this line of argument improves its rigor, so here goes.  The empirical data required to refute the Big Bang are extremely well known and thoroughly checked by cosmologists: the distance-redshift data used to fit the cosmological redshift, and the CBR temperature distribution collected by the COBE satellite.  These data are widely available.  The major issues in this refutation are mathematical and conceptual; the numerical fit to the data is not difficult.

(a) Assume the background space of the universe is static (this is the parsimonious basic assumption): a complete Riemannian manifold, but with four macroscopic space dimensions.  The evidence for four dimensions is the 5-, 8-, 10-, and 12-fold rotational symmetry observed in crystals since the early 1980s, rotational symmetries that are impossible in three space dimensions by the crystallographic restriction theorem.  These materials have been called 'quasicrystals', but I can give you a simple parsimony-based argument that it is better to assume crystal and 4D rather than quasicrystal and 3D.

(b) Take as the first observation the uniform lower bound of the cosmic background radiation, which has a thermal Planck form. Now use the Li-Yau Gaussian upper bound on the heat kernel. This produces a contradiction with the uniform lower bound for any distribution of heat concentrated in a compact region of the universe.  Keep the static universe as the assumption, and ask: given a static universe, can it be noncompact?  Consider the possibilities: if we start with any temperature distribution in a noncompact space concentrated in a compact region D, then after any finite time T the heat will have spread essentially to distance \sqrt{T} beyond the boundary of D. Sufficiently far away, the exponential fall-off cannot produce a uniform lower bound on temperature, and in a noncompact space the infinite-time heat distribution would be zero temperature.  But we observe the CBR lower bound out to a range of 300 light years in every direction. The lower bound on temperature, about 2.7 K, is essentially uniform, and extrapolating it everywhere is reasonable.
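
To make step (b) concrete, here is the inequality it rests on, written as a minimal sketch that assumes the Li-Yau Gaussian upper bound in the form quoted later in this archive, p_t(x,y) \le C_1 \exp(-C_2 d(x,y)^2) with C_1, C_2 > 0. If the initial temperature f is concentrated in a compact region D, then

u(x,t) = \int_D p_t(x,y) f(y) dy \le \Big( \sup_{y \in D} p_t(x,y) \Big) \int_D f(y) dy \le C_1 \exp(-C_2 d(x,D)^2) \int_D f(y) dy \to 0 as d(x,D) \to \infty,

so on a noncompact space u(\cdot, t) cannot stay above any fixed positive level such as 2.7 K at all points.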

(c) You have now concluded that the universe must be compact and four dimensional. Is this sensible? What should the gravitational field equations be? The gravitational field equations will be the Ricci curvature equation for hypersurfaces with covariantly constant stress-energy tensor. For a 4-sphere of radius 1/h, you will match the measured cosmological constant (and also produce quantization in the right units). So let us consider what happens in this case.

(d) Solve the wave equation with propagation speed c on the 4-sphere of radius 1/h. The solutions will be superpositions of \Phi_k(x) \exp(i \omega_k t) where \omega_k = \sqrt{k(k+3)} \cdot ch. Compare this to waves on a flat 3D space with the same speed, and note that the discrepancy can be used to explain the observed redshift.

(e) Will this procedure also explain some of the other facts that were used to establish the Big Bang over the Steady State theory, which was thrown out in 1961? Yes. The frequency discrepancy between waves on a sphere and waves on flat space produced in (d), d(k,D) = D(1/k - 1/\sqrt{k(k+3)}) where D is the distance, depends on frequency, so one expects radio galaxies to show a skew toward 'higher redshift' even if there is no actual skew in the distribution of radio and optical galaxies in reality.

This is sufficient to refute the basic Big Bang picture. Note also that one can quantify the parsimony as follows: a model of a universe changing shape requires more parameters than a static universe. Therefore one can apply, for example, Akaike's information criterion, which provides a quantitative way of balancing complexity and accuracy in quantitative models. The Akaike function is of the form (1/n)(log-likelihood - number of parameters), where n is the sample size. Assuming that the cosmological redshift, which is tightly linear in distance, can be explained by the discrepancy (which is also linear in distance), the likelihoods of the Big Bang fit and of the discrepancy fit are of the same order, but the number of parameters is higher for expanding space.
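
As a small numerical illustration of (d), (e), and the parsimony point, here is a Python sketch. It evaluates the frequency relation \omega_k = \sqrt{k(k+3)} \cdot ch and the discrepancy d(k,D) exactly as written above, and compares two hypothetical fits with an AIC-style score; the constants, parameter counts, and the common log-likelihood are placeholders, not fits to real survey data.

import math

c, h = 1.0, 1.0   # placeholder units; only the trends matter in this sketch

def omega_sphere(k):
    """Wave frequency on the 4-sphere of radius 1/h: omega_k = sqrt(k(k+3)) * c * h."""
    return math.sqrt(k * (k + 3)) * c * h

def omega_flat(k):
    """Flat-space comparison frequency for the same mode number."""
    return k * c * h

def discrepancy(k, D):
    """d(k, D) = D * (1/k - 1/sqrt(k(k+3))), as defined in (e)."""
    return D * (1.0 / k - 1.0 / math.sqrt(k * (k + 3)))

# The discrepancy is larger at low k (low frequency), which is the basis of the
# expected skew of radio sources toward 'higher redshift'.
for k in (2, 5, 10, 100, 1000):
    print(k, omega_flat(k), omega_sphere(k), discrepancy(k, D=1.0))

# AIC-style comparison: with log-likelihoods of the same order, the model with
# fewer parameters scores better under (1/n)(log-likelihood - number of parameters).
def aic_score(log_likelihood, n_params, n_samples):
    return (log_likelihood - n_params) / n_samples

log_likelihood = -1000.0   # hypothetical common value for both fits
print("static fit   :", aic_score(log_likelihood, n_params=2, n_samples=500))
print("expanding fit:", aic_score(log_likelihood, n_params=6, n_samples=500))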

Read Full Post »

The universe must be compact. This follows from a simple application of the Gaussian upper bound on the heat kernel of noncompact manifolds to the uniform lower bound on the cosmic background radiation, which is known to be close to thermal equilibrium at around 2.7 K out to 300 light years in every direction. The argument is as simple as the fact that a Gaussian cannot have a uniform positive lower bound. One then has evidence for four macroscopic spatial dimensions from the 'impossible symmetries' of 5-, 8-, 10-, and 12-fold rotational symmetry in crystals observed since the early 1980s.

Supposing that the universe is a 4-sphere, which matches the observed cosmological constant O(h^2) when the radius is set to 1/h in appropriate length units, let us consider how to argue rigorously that classical physics can account for quantization.

Let me point out that this can be done extremely rigorously using the work of J. V. Ralston and others, as well as the geometric quantization work of A. Weinstein and others. A careful reading of their results gives us rigorous justification for the intuition that 'quantization is a consequence of the shape of the universe'.

The papers of J. V. Ralston et al. that you need are:

http://arxiv.org/abs/math-ph/9807005 (Gutzwiller trace formula)

http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.cmp%2F1103900389 (Construction of quasimodes concentrated on periodic geodesics)

The major point is that on a space where every geodesic is closed with the same length, such as a 4-sphere of radius 1/h (h the Planck constant, in appropriate length units), the classical Hamiltonian H(p,q) = g^{ij}(q) p_i p_j has the geodesics as its closed orbits. The Gutzwiller trace formula then relates the eigenvalues of the Schroedinger operator to these closed orbits. The quasimodes that Ralston constructs are concentrated in a neighborhood of radius O(n^{-1/2}) around the closed geodesic.
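
As an elementary numerical illustration of the relationship between eigenvalues and closed geodesics that the trace formula makes precise (this is not Ralston's quasimode construction, only a sketch using the known Laplace spectrum k(k+3) of the unit 4-sphere): the square roots of the eigenvalues become evenly spaced, with spacing 2\pi / L where L = 2\pi is the common length of the closed geodesics.

import math

L = 2 * math.pi   # common length of the closed geodesics on the unit 4-sphere

def sqrt_eigenvalue(k):
    """Square root of the k-th Laplace eigenvalue on the unit 4-sphere, lambda_k = k(k+3)."""
    return math.sqrt(k * (k + 3))

# The gaps sqrt(lambda_{k+1}) - sqrt(lambda_k) approach 2*pi/L = 1, i.e. the wave
# frequencies are asymptotically spaced by the inverse of the geodesic length.
for k in (1, 2, 5, 10, 100, 1000):
    print(k, sqrt_eigenvalue(k + 1) - sqrt_eigenvalue(k), 2 * math.pi / L)

On the sphere of radius 1/h these quantities simply rescale by h, which is the sense in which the quantization comes out 'in the right units'.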

This is a rigorous basis for the S4 physics claim that classical physics ACTUALLY governs the universe, and quantum mechanics is the linear approximation. Once you reach this point, you have essentially gotten rid of quantum mechanics as the description of nature, because it can be treated as a linear approximation of classical physics. With this you remove problems such as the 'measurement problem'.

Read Full Post »

I do not believe any individual or political organization, whether a militant organization or a nation state, has any legitimate right to take any human life.  As a corollary, I believe that capital punishment is untenable uniformly across the globe under all circumstances, even in cases of 'self-defence', whether of an individual or against a threat to a political organization.  In other words, although there are questions to be answered when a death occurs in self-defense, there is no a priori legitimacy to such a killing; rather, in such a case punitive measures against the agent might not be justified.

One of the favorite justifications for capital punishment in America is that it is a useful measure to prevent crime.  Here there is an empirical fact that is usually unknown to proponents of capital punishment: in America, the rate of homicide, the usual crime held to justify capital punishment, is higher than in Europe, where capital punishment has been abolished.  The homicide rate in America is 4.2 per 100,000 per annum, while that in Europe as a whole is 3.5 per 100,000 per annum (the data are from a UNODC study of 2012).  These statistics by themselves provide an empirical refutation of various ideas about how capital punishment could induce lower homicide rates, and they can be thought of as the outcome of a large-scale experiment with large populations over a long period:  roughly from 1950 onward, with the UK retaining capital punishment until 1998.

Read Full Post »

The tool one needs to show that the universe is compact is the heat equation; I first formulated this argument in July 2008.  The heat equation is not hard to describe intuitively: it is the equation governing how a distribution of temperature in a given space evolves in time.  It is a classical equation, first formulated by Jean Baptiste Joseph Fourier in 1807.  The equation is \Delta u = \partial_t u for a function u(x,t) of space and time variables.  We accept this as the equation describing the heat distribution over time in the actual world.

The solution of the heat equation can be written in terms of an integral operator.  If the time zero temperature distribution is f(x) then

u(x,t) = \int p_t(x,y) f(y) dy

where p_t(x,y) is the fundamental solution, or the so-called ‘heat kernel’.
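
As a minimal numerical sketch of this representation (in one dimension on the real line, with the standard Gaussian kernel; the setting and numbers are only to illustrate the integral-operator form):

import math

def heat_kernel(x, y, t):
    """Flat 1D heat kernel p_t(x,y) = (4*pi*t)^(-1/2) * exp(-(x-y)^2 / (4t))."""
    return math.exp(-(x - y) ** 2 / (4 * t)) / math.sqrt(4 * math.pi * t)

def initial_temperature(y):
    """Time-zero distribution f: a unit 'hot' block supported on [-1, 1]."""
    return 1.0 if abs(y) <= 1.0 else 0.0

def temperature(x, t, y_min=-10.0, y_max=10.0, n=4000):
    """u(x,t) = integral of p_t(x,y) f(y) dy, approximated by a Riemann sum."""
    dy = (y_max - y_min) / n
    return sum(heat_kernel(x, y_min + (i + 0.5) * dy, t)
               * initial_temperature(y_min + (i + 0.5) * dy) for i in range(n)) * dy

# The hot block diffuses outward: u at the centre drops while u farther out rises
# as t grows, which is the spreading behaviour used in the argument below.
for t in (0.1, 1.0, 4.0):
    print(t, [round(temperature(x, t), 4) for x in (0.0, 2.0, 5.0)])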

We can study the heat equation abstractly in curved spaces.  The Laplacian of functions on a Riemannian manifold is described intrinsically via the metric.  We can then consider heat distributions on manifolds of different types, closed or open; this has been a topic of active mathematical investigation for many decades.  Take a particularly modern result, such as Li and Yau's 1986 analysis of the heat equation on 'infinite', i.e. non-compact, manifolds with some control on the Ricci curvature.  They show that the heat kernels of Schroedinger operators of type \Delta + V have upper and lower bounds of Gaussian form, C_1 \exp(-C_2 d(x,y)^2), where d(x,y) is the Riemannian distance and C_1, C_2 > 0.

We now assume that our actual universe is noncompact and use actual data to reach a contradiction.  The data will be the cosmic background radiation discovered in 1964.  In every direction, out to around 300 light years, the universe is not cold but carries a radiation distribution peaking in the microwave range, with temperature approximately 2.72 K.  There is thus a uniform lower bound on temperature in the CBR.  This uniform lower bound contradicts the assumption of a noncompact universe, because a Gaussian upper bound on the heat kernel implies that any initial distribution concentrated in a compact region of the universe falls below any fixed temperature level sufficiently far from that region.
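
The following sketch tabulates the Gaussian envelope C_1 \exp(-C_2 d^2) against a fixed positive threshold such as 2.7. The constants C_1 and C_2 are placeholders (the true constants depend on the geometry and on the elapsed time); the point is only that for any positive C_1 and C_2 the envelope eventually drops below any fixed positive level.

import math

C1, C2 = 1000.0, 0.01   # placeholder constants in the bound C1 * exp(-C2 * d^2)
THRESHOLD = 2.7         # a fixed positive lower bound, in arbitrary units

def gaussian_envelope(d):
    """Upper bound on the temperature at distance d from the compact hot region."""
    return C1 * math.exp(-C2 * d ** 2)

# However large C1 and however small C2, the envelope falls below the threshold
# once d exceeds sqrt(log(C1 / THRESHOLD) / C2).
d_star = math.sqrt(math.log(C1 / THRESHOLD) / C2)
print("envelope drops below the threshold beyond d =", round(d_star, 2))
for d in (0, 10, 20, 30, 40, 60):
    print(d, round(gaussian_envelope(d), 6), gaussian_envelope(d) >= THRESHOLD)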

Now you might object that the standard picture is of an 'expanding universe'.  We can argue that this standard picture is less parsimonious than assuming there is no change in the 'fabric of space', because describing such change requires extra parameters, and the redshift that has been taken to justify the 'expansion' of space can then be explained by other means in a static space.

Read Full Post »

Path to S4 physics

There is absolutely no experimental or other evidence for theories that claim more than 4 microscopic dimensions, so by this criterion we should dismiss string theories and other theories with large numbers of microscopic dimensions as 'hypothetical' by your stringent requirements for empirical proof. On the other hand, there is observational evidence for 4 macroscopic spatial dimensions from the 5-, 8-, 10-, and 12-fold rotational symmetry observed in 'quasicrystals', for which 4 macroscopic space dimensions is the parsimonious conclusion. The major question is then how to address a force law proportional to 1/r^3 if all four dimensions behave in the same way as the three we are used to. For this, there are proposals by Randall and Sundrum of matter constrained to lie on a 3-brane, which would keep the force law at 1/r^2.
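
The 1/r^3 worry is just the standard Gauss-law scaling: in d spatial dimensions the flux of a point source spreads over a sphere whose area grows like r^{d-1}, so the force falls off as 1/r^{d-1}. Here is a minimal sketch of that scaling (purely illustrative; it is not a model of the Randall-Sundrum brane setup):

import math

def unit_sphere_area(d):
    """Surface area of the unit sphere bounding a ball in d spatial dimensions."""
    return 2 * math.pi ** (d / 2) / math.gamma(d / 2)

def force(r, d, source=1.0):
    """Gauss-law force of a point source: flux spread over the sphere of radius r."""
    return source / (unit_sphere_area(d) * r ** (d - 1))

# Doubling the distance weakens the force by a factor of 4 in three dimensions
# but by a factor of 8 in four, which is the discrepancy the brane proposals address.
for d in (3, 4):
    print(d, force(1.0, d) / force(2.0, d))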

If we assume that the universe has four macroscopic dimensions and is compact (which is provable), then one can proceed to construct a minimally parsimonious theory: the S4 theory unifies gravity, interpreted as the Ricci curvature equation for hypersurfaces, with Standard Model matter, which enters through the extrinsic curvatures of physical matter in the 4-sphere. I thought this idea was new when I first sketched out the theory in 2008, but I have since learned that it was used by Wesson and Ponce de Leon in 1992 in Kaluza-Klein theories.

Now on an S4 scaled to radius 1/h, we have quantization by h without a quantum mechanics, and we have a cosmological constant of O(h^2), naturally interpreted as the curvature of the ambient 4-sphere. We thus have a geometric unification of gravity and the gauge forces. Furthermore, the gauge forces can be described by classical Yang-Mills theory, so quantization can be eliminated altogether: the Gutzwiller trace formula, for example, relates classical periodic orbits to quantum energy levels, and there is further theory producing an equivalence between quantum and classical mechanics on a sphere. The essential property here is that on a sphere every geodesic is closed and of the same length.
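
The O(h^2) statement is just the elementary scaling of curvature with radius; for the round S^4 of radius 1/h one has (a standard computation, recorded here only to make the order in h explicit)

\mathrm{sec}(S^4_{1/h}) = h^2, \qquad \mathrm{Ric}(S^4_{1/h}) = 3h^2 g, \qquad \mathrm{Scal}(S^4_{1/h}) = 12 h^2,

so any constant read off from the ambient curvature is of order h^2.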

Thus we have a classical grand unification theory without some of the major problems of previous approaches, such as the cosmological constant problem. By the criterion of parsimony such a theory is clearly superior to other candidates. Since quantum mechanics remains an approximation, previous measurements are not invalidated.

What about the Big Bang and the expansion of the universe? I show that the redshift itself arises as an artifact of measuring frequencies on a sphere as though space were flat.

The major topic of contention is still how four dimensions are experienced. Here I personally have treated my metaphysical experiences as empirical observations; there have been reputable studies showing that the pineal gland is a visual organ (there is a Science article from 1999, among others). Before answering questions about the evidence of beings I have seen, we have to clarify how an objective fourth dimension behaves in a physical theory. I could instead say 'in a dream I saw beings, etc.', but I believe that many 'subjective' experiences are perfectly objectively real, except that our current paradigm of science does not allow us to understand the precise manner in which they are objective.

Read Full Post »

For several years I have made the intuitive claim that quantization is automatic on a sphere because all geodesics are closed and of the same length.  This statement can be given precise meaning through the following consideration.  On the cotangent bundle T^*S^n consider the kinetic energy k(p,q) = 1/2 \sum_{i,j} g^{ij}(q) p_i p_j.  The preimage of an energy level, C = k^{-1}(E), is foliated by circles that project to great circles.  If \gamma parametrizes such a geodesic, then its lift to C satisfies \dot{\gamma} = (2 E)^{-1/2} X_k.  Now let \alpha be the 1-form whose exterior derivative is the symplectic form.  Then \alpha(X_k) = 2E.  Therefore integrating \alpha over one of these circles gives 2\pi (2E)^{1/2}.  One then considers whether this quantity satisfies an integrality condition.  This analysis is reproduced from Alan Weinstein's geometric quantization notes.

For prequantization, one considers the \hbar-integrality of 2\pi (2E)^{1/2}.  This example gives precision to the claim that quantization is automatic on a sphere because of the closed-geodesic property.

This construction can be modified by rescaling the sphere to radius 1/h, where it becomes clear that integrality rather than \hbar-integrality becomes the criterion for prequantization.
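
As a numerical check of the action computation, the sketch below integrates the canonical 1-form p dq around one equatorial geodesic of a sphere of radius R at kinetic energy E and compares the result with 2\pi R (2E)^{1/2}; for R = 1 this reproduces the 2\pi (2E)^{1/2} above, and R = 1/h shows how the factor rescales. The parametrization and the sample values are of course only for illustration.

import math

def action_integral(R, E, n_steps=20000):
    """Integrate p . dq around one equatorial geodesic of the sphere of radius R,
    traversed at kinetic energy E (speed v = sqrt(2E)), using the flat embedding."""
    v = math.sqrt(2 * E)
    period = 2 * math.pi * R / v        # time to traverse the closed geodesic once
    dt = period / n_steps
    total = 0.0
    for i in range(n_steps):
        t = i * dt
        # velocity along the equatorial great circle in the embedding coordinates
        qdot = (-v * math.sin(v * t / R), v * math.cos(v * t / R), 0.0)
        p = qdot                        # momentum identified with velocity via the round metric
        total += sum(pc * qc * dt for pc, qc in zip(p, qdot))
    return total

for R, E in ((1.0, 0.5), (1.0, 2.0), (10.0, 0.5)):
    print(R, E, round(action_integral(R, E), 4), round(2 * math.pi * R * math.sqrt(2 * E), 4))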

Read Full Post »

Arkani-Hamed, Dimopoulos, and Dvali proposed macroscopic higher dimensions to solve the hierarchy problem, that is, the gigantic gap between the Planck mass ~10^19 GeV and the Higgs mass ~10^3 GeV.  They consider adding n compactified dimensions of radius R and solve for R when the fundamental higher-dimensional Planck scale is set near the electroweak scale.  They then rule out n=1 because the solution gives R ~ 10^13 cm, which in their picture would alter the force law at solar-system scales.
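
A rough numerical version of that estimate, assuming the usual ADD relation M_Pl^2 = M_*^{n+2} (2\pi R)^n with the reduced Planck mass and a fundamental scale M_* of about 1 TeV (the 2\pi conventions and the choice of reduced versus ordinary Planck mass shift the answer by an order of magnitude or so, so read the outputs as orders of magnitude only):

import math

HBARC_CM = 1.97e-14        # hbar * c in GeV * cm, to convert GeV^-1 to cm
M_PLANCK_REDUCED = 2.4e18  # reduced Planck mass in GeV
M_STAR = 1.0e3             # assumed fundamental scale, about 1 TeV, in GeV

def add_radius_cm(n):
    """Compactification radius R in cm from M_Pl^2 = M_*^(n+2) * (2*pi*R)^n."""
    radius_natural = (M_PLANCK_REDUCED ** 2 / M_STAR ** (n + 2)) ** (1.0 / n) / (2 * math.pi)  # in GeV^-1
    return radius_natural * HBARC_CM

# n = 1 lands near 10^13 cm (solar-system scale, hence excluded);
# n = 2 lands near a tenth of a millimetre.
for n in (1, 2, 3, 6):
    print(n, "%.2e cm" % add_radius_cm(n))
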
If nature has four macroscopic dimensions, which is the parsimonious conclusion from the observation of four-dimensional crystal symmetries in the so-called quasicrystals, then we first have to ask why four dimensions are not simply observed by us.  Do we have any ability that could probe more than three dimensions?  My own answer has been that the third eye, which is a visual organ, is our natural vision into a four-dimensional universe.  Given this, we can conclude that there is a large qualitative difference between our experience of a four-dimensional reality and the mundane three-dimensional reality.  In particular, although we have sharpened our intuition for dimension in three dimensions, we must treat dimension purely mathematically when we speak of four macroscopic dimensions.

We can interpret the Einstein equations with a cosmological constant as the Ricci curvature equation for a hypersurface of a four-dimensional sphere, where the cosmological constant appears as the curvature of the ambient sphere.  If for a moment we take the Einstein equations as defining equations for a hypersurface of the sphere evolving in time, then they produce the restriction to three dimensions that is equivalent to the gravitational force.  In other words, we can abandon the idea of a 'force law' in four dimensions altogether and treat the Einstein equations as producing a rigid restriction to a three-dimensional hypersurface, regardless of whether one could measure four-dimensional distances in principle.
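
The split between an ambient-curvature term and extrinsic-curvature terms can be made explicit with the Gauss equation. For a hypersurface \Sigma^3 of the round S^4 of radius 1/h, with shape operator A and mean curvature H = \mathrm{tr} A, the traced Gauss equation reads (a standard formula, stated here only as a sketch; sign conventions for A vary)

\mathrm{Ric}_\Sigma(X,Y) = 2h^2 g(X,Y) + H\, g(AX,Y) - g(AX,AY),

so the constant 2h^2 plays the role of a cosmological term while the extrinsic curvature terms carry the remaining content, in line with the interpretation above.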

Read Full Post »

Older Posts »