Let Cl(TS^4) be the Clifford algebra bundle on the round four-sphere, and let spinors be sections of a bundle \Sigma on which it acts by endomorphisms.  Let M\subset S^4(\Lambda) be a hypersurface and let f\in \Gamma(\Sigma^{+}|_{M}) be a section of the spinor bundle restricted to M.  This is slightly annoying, but O. Hijazi, Richard Melrose, and others work out the relationship between restrictions of spinor bundles and the intrinsic spinor bundle of a hypersurface.  We are interested in the Dirichlet problem: take a spinor field on M and extend it uniquely to a harmonic spinor on S^{+}, the upper hemisphere.  We know there are no global harmonic spinors on S^4(\Lambda) because the Lichnerowicz formula, or the Schroedinger-Lichnerowicz-Weitzenbock formula, says that D_{S^4}^2 = \Delta_{S^4} + \frac{1}{4}Scal, and since the scalar curvature is positive there are no harmonic spinors other than zero, by positivity.  But we want to show that unique harmonic extensions do exist for any spinor field on M, to the upper and lower hemispheres separately.  They will necessarily fail to match nicely in some precise manner to be understood.  We won't worry about that complication and will just go through the standard machinery of solving the Dirichlet problem.  This is a technique I learned when I was a junior at Princeton from Peter Sarnak's functional analysis course.  I remember quite well that he emphasized that Weierstrass was not happy with the argument that minimizing the Dirichlet integral \int_{\Omega} |Du|^2  dx solves the Dirichlet problem, because he worried that there may be no actual minimizers.
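Spelled out, with D = D_{S^4}, reading \Delta_{S^4} as the connection Laplacian so that (\Delta\phi,\phi) = \|\nabla\phi\|^2, the vanishing argument is one line:

```latex
0 = (D^2\phi,\phi) = (\Delta_{S^4}\phi,\phi) + \tfrac{1}{4}(\mathrm{Scal}\,\phi,\phi)
  = \|\nabla\phi\|_{L^2}^2 + \tfrac{1}{4}\int_{S^4}\mathrm{Scal}\,|\phi|^2\,dV
  \;\ge\; \tfrac{1}{4}\,\mathrm{Scal}_{\min}\,\|\phi\|_{L^2}^2 ,
```

so Scal > 0 forces \phi = 0.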


Let A = \{ \phi \in \Gamma(\Sigma): \phi=f on \partial S^{+} = M\}.  We have to pick sections that are not just smooth but, say, Sobolev L^2_2, to ensure we can do analysis without worries and get a Hilbert space, and then use elliptic regularity to end with smooth solutions.  So let's just assume that's what \Gamma means.

Let's see how this works.  We have A sitting as a closed affine subspace of a Hilbert space, so that's good.  Now we have a functional on it, I(\phi) = \int_{S^{+}} | D_{S^4} \phi |^2 dx.  This functional is obviously non-negative.  We claim that \phi_m = \argmin_{\phi\in A} I(\phi) will solve the Dirichlet problem with the specified boundary value.  Let us defy Weierstrass and assume first that the minimum is achieved, by \phi_m.  Let e_k be an orthonormal basis formed by eigenspinors of the Dirac operator (hence of its square); this is why it's nice for the sections to be Sobolev rather than smooth.  Then we use the self-adjointness of D_{S^{+}}.

Take any \phi \in A.  Then compute ( D^2 \phi, \phi) = ( D\phi, D \phi ) = I(\phi).

Write \phi_m = \sum_k a_k e_k, where the e_k form an eigenbasis with De_k = \lambda_k e_k.  Then

I(\phi_m) = \sum_k \lambda_k^2 a_k^2

For any k, use the fact that I(\phi_m) \le I( a_k e_k) = \lambda_k^2 a_k^2 to eliminate the coefficients one by one: since every term in the sum is non-negative, this forces I(\phi_m) \le 0.

So, if we ignore Weierstrass's worries, this trivial argument shows that if a minimizer is achieved it will solve D^2\phi_m=0.  Now let's worry about Weierstrass and try to see why we are guaranteed that a solution must exist.  This is actually a solved problem, so I am doing this just as an exercise.  The exercise has some value because one could worry that, since we are dealing with spherical geometry and a geometrically nontrivial boundary, something strange could happen, but the argument is the same as for a standard Euclidean domain with smooth boundary.
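As a sanity check on the variational argument, here is a minimal discrete sketch (my own toy model, not the spinor problem): minimize the Dirichlet energy over grid functions on an interval with fixed endpoint values, and verify that the minimizer is exactly the discrete-harmonic function.

```python
import numpy as np

# Toy model: minimize I(u) = sum (u_{i+1} - u_i)^2 over grid functions on
# {0,...,N} with u_0, u_N fixed, and check the minimizer is discrete-harmonic.
N = 50
f0, f1 = 1.0, 3.0                      # boundary values (made up)

# Interior Euler-Lagrange equations: u_{i-1} - 2u_i + u_{i+1} = 0, i = 1..N-1.
A = np.diag(-2.0 * np.ones(N - 1)) + np.diag(np.ones(N - 2), 1) + np.diag(np.ones(N - 2), -1)
b = np.zeros(N - 1)
b[0] -= f0
b[-1] -= f1
u = np.concatenate(([f0], np.linalg.solve(A, b), [f1]))

def energy(v):
    # discrete Dirichlet energy
    return np.sum(np.diff(v) ** 2)

# Any perturbation vanishing on the boundary can only increase the energy.
rng = np.random.default_rng(0)
for _ in range(5):
    p = np.zeros(N + 1)
    p[1:-1] = rng.standard_normal(N - 1)
    assert energy(u + p) >= energy(u) - 1e-12

# The minimizer is discrete-harmonic: all second differences vanish.
assert np.max(np.abs(u[:-2] - 2 * u[1:-1] + u[2:])) < 1e-10
```

In one dimension the minimizer is, of course, just the linear interpolation of the boundary values; the point is that the energy comparison against perturbations supported away from the boundary is exactly the mechanism in the argument above.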

The standard discussion of conditions that guarantee existence of minimizers in these situations (a good discussion is found here) provides the simple example of a real-valued function with zero infimum which is never reached: f(x) = e^{-x^2}, which is positive on the real line.  One condition used to ensure that a minimum is reached is to restrict to functions with \lim_{|x|\rightarrow \infty} f(x) =\infty.  The two sorts of conditions put on the integrand in these Dirichlet integrals are convexity and coercivity.  Coercivity conditions are the uniform ellipticity conditions on the integrand, such as L(p,x)\ge a|p|^q-b for a,b>0.  In the specific case of interest, because D_{S^4}^2 = \Delta_{S^4} + \frac{1}{4} Scal and the scalar curvature is positive, we have coercivity, and the rest of the machine gives us the existence of a minimizer.
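The coercivity claim can be illustrated in the same toy discretization (grid, spacing, and the value of Scal are all made up): adding a positive zeroth-order term (1/4)Scal to minus the Laplacian makes the quadratic form bounded below by (Scal/4)|u|^2, which is exactly the coercivity that feeds the existence machinery.

```python
import numpy as np

# Toy coercivity check: with h = grid spacing, the matrix for -Δ + (1/4)Scal
# on an interval with Dirichlet conditions is positive definite, and its
# smallest eigenvalue is bounded below by (1/4)Scal, the coercivity constant.
N, h = 100, 0.01
scal = 2.0                              # stand-in for a positive scalar curvature
lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / h**2
L = -lap + 0.25 * scal * np.eye(N)

eigs = np.linalg.eigvalsh(L)
assert eigs.min() >= 0.25 * scal        # q(u) = u^T L u >= (Scal/4)|u|^2
```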

This is a standard enough argument that I won't worry about writing it up carefully until some deeper result comes along, but it is a result nonetheless: every spinor field at a frozen time in the physical universe can be extended to unique harmonic spinor fields in the upper and lower hemispheres of the absolute space S4 by solving the Dirichlet problem.

Now we return to the issue of what is really happening in the actual universe.  I have already provided the true law of electromagnetism in the universe on July 4, 2018: the potential A is a spinor field on S4 that satisfies the wave equation

(D_{S^4}^2 - \frac{1}{c^2}(\frac{\partial}{\partial t})^2)A = 0
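Separating variables against a Dirac eigenbasis, each coefficient of A obeys an oscillator equation: writing A = a(t) e_k with D e_k = \lambda e_k gives a'' = -c^2 \lambda^2 a.  A quick numeric sketch (with made-up values of \lambda and c) confirms the mode just oscillates at frequency c\lambda.

```python
import numpy as np

# Separating variables in (D^2 - c^{-2} d^2/dt^2) A = 0: a mode a(t) e_k with
# D e_k = lam * e_k satisfies a'' = -(c*lam)^2 a.  Toy check: integrate the
# oscillator and compare with the exact cosine solution.
lam, c = 3.0, 2.0                        # made-up eigenvalue and wave speed
dt, T = 1e-4, 1.0
a, v = 1.0, 0.0                          # a(0) = 1, a'(0) = 0

for _ in range(int(T / dt)):             # velocity-Verlet integration
    v -= 0.5 * dt * (c * lam) ** 2 * a
    a += dt * v
    v -= 0.5 * dt * (c * lam) ** 2 * a

assert abs(a - np.cos(c * lam * T)) < 1e-3   # a(T) = cos(c*lam*T)
```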

Now the physical universe M(t) is evolving in S4 according to deterministic electromagnetism, and the complexity of all phenomena in the universe is contained in this deterministic evolution: senses, smells, atoms, molecules, all things that we think of as the physical world and all that it contains, moving along deterministically with a single clock for the entire system, with no time dilation depending on reference frame.  But every moment the hypersurface shape of the universe is changing.  Putting aside nuclear forces for a moment, all change in the physical universe is tied to changes in the physical hypersurface, nanosecond by nanosecond, from one complex configuration of the entire physical universe to another.  In particular, the Dirac eigenspinors of the physical universe at time t=t_1 are not going to be the Dirac eigenspinors of the physical universe at time t=t_2.

The Dirichlet problem at a fixed time t tells us that we can extend the eigenspinor basis of M(t) to harmonic spinors on S^{+}(t) and on S^{-}(t) separately.  We still worry about how the two extensions in the two directions must be related.  We still worry about what happens to them when a nanosecond passes.  Of course, physics has not dealt in absolute simultaneity for more than a century, so these things are not yet clear to us.

So we ask: consider the data at time t, namely the eigenspinor basis of M(t) and the two extensions of that basis to S^{\pm}(t).  Now let a moment pass, so that M(t) deforms, we assume smoothly, to M(t'), and all the data changes.  Can we be assured that there is a nice relation between the eigenspinor bases?  And if so, can we be assured that the harmonic extensions also stand in some stable relationship?

How can we quantify the deformations of these analytic objects?  What is the relation between the eigenspinors of D_{M(t)} and D_{M(t')}?  At this point I go back to see what people have done before trying to solve this alone, especially because this is the sort of thing that geometers, as well as analysts, have been studying for a while.

Let's consider a simpler case to visualize.  You have the moving boundary of a closed region.  The boundary is B(t) \subset \mathbf{R}^n, enclosing the region \Omega(t).  At two different times t_1,t_2 you select functions f_1,f_2 on the boundaries B(t_1),B(t_2).  You solve the Dirichlet problem in these two situations and obtain solutions u_1(x),u_2(x), both satisfying \Delta u_i=0 with the two boundary conditions.  Suppose that B(t_2) is a slight deformation of B(t_1) by some smooth operation.  Then we have the principle of not feeling the boundary for diffusion processes far from the boundary, which says that the transition density of the Brownian motion is very close for both regions.  Since harmonic functions are the equilibrium states of the heat kernel, since e^{t\Delta}u_i = u_i, we might conclude that in the interior u_1=u_2 or at least that they are quite close to each other.  We don't want to jump to proving quantitative statements at the moment.  So we expect the solutions to be very close to each other far from the boundary, while near the boundaries everything depends on f_1,f_2 and the geometry of the boundaries B(t_1),B(t_2).
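The equilibrium statement e^{t\Delta}u = u is easy to check in a discrete model (my own grid and time step): an explicit heat step with fixed endpoint values leaves a discrete-harmonic profile unchanged.

```python
import numpy as np

# Discrete check that harmonic functions are equilibria of the heat flow:
# on a 1-D grid with fixed endpoints, the explicit heat step leaves the
# discrete-harmonic (linear) profile unchanged.
N, dt = 50, 0.2                          # dt < 0.5 for stability of the scheme
u = np.linspace(-1.0, 2.0, N + 1)        # discrete harmonic: second differences vanish
v = u.copy()
for _ in range(1000):                    # v -> v + dt * (discrete Laplacian of v)
    v[1:-1] += dt * (v[2:] - 2 * v[1:-1] + v[:-2])
assert np.allclose(v, u)                 # e^{t*Laplacian} u = u for harmonic u
```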

Now let's take a small step.  Say B(t_1) is oriented, with an outer normal n(t_1) at every point, and there is a function of time a(t,x) on B(t_1) so that B(t_2) = \{ x + a(t_2,x)n(t_1) : x \in B(t_1)\}, where we start counting time from t_1, so a(t_1,x)=0 for all x\in B(t_1).  The point of this construction is to encode the deformation from B(t_1) to B(t_2) in a function.

Now let's make things even.  Choose the boundary value functions f_1,f_2 to be the 'same' in the sense that we set f_2(x+a(t_2,x)n(t_1) ) = f_1(x).  This guarantees that 'corresponding' points of the two boundaries have the same function values.  Then only the geometric difference between B(t_1) and B(t_2) should make a difference in the solutions of the Dirichlet problems.

Ok, now we ask: what will the difference of the two solutions u_1,u_2 be?

Let’s make this problem even simpler by assuming a(t,x)>0 so that one of the regions is completely contained in the other.  We need some insight here and then we can go back to being sophisticated.
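Here is a small finite-difference sketch of exactly this nested situation (my own discretization: the inner boundary sits one grid layer inside the outer one, and boundary values are matched along the normal as above).  We solve both discrete Dirichlet problems and compare the solutions at the center.

```python
import numpy as np

# Nested squares: B1 = boundary of [h, 1-h]^2 sits one layer inside
# B2 = boundary of [0,1]^2, boundary data matched along the normal
# (f2(x + h n) = f1(x)), both problems solved by Jacobi iteration.
n = 41                                   # outer grid is n x n, spacing h
h = 1.0 / (n - 1)
g = lambda x, y: np.sin(3 * x) + y**2    # arbitrary smooth boundary data

def solve_dirichlet(u, interior):
    # Jacobi iteration for the 5-point discrete Laplace equation.
    for _ in range(10000):
        avg = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
        u[1:-1, 1:-1] = np.where(interior, avg, u[1:-1, 1:-1])
    return u

xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")

# Inner problem: f1 = g on B1 (grid layer at index 1 and n-2).
u1 = g(X, Y).copy()
inner1 = np.zeros((n - 2, n - 2), dtype=bool)
inner1[1:-1, 1:-1] = True                # interior of the inner square
u1 = solve_dirichlet(u1, inner1)

# Outer problem: f2 at an outer boundary node is g at the node one layer
# inward along the normal, i.e. f2(x + h n) = f1(x).
u2 = np.zeros((n, n))
u2[0, :] = g(X[1, :], Y[1, :])
u2[-1, :] = g(X[-2, :], Y[-2, :])
u2[:, 0] = g(X[:, 1], Y[:, 1])
u2[:, -1] = g(X[:, -2], Y[:, -2])
u2 = solve_dirichlet(u2, np.ones((n - 2, n - 2), dtype=bool))

d = np.abs(u1 - u2)
c = n // 2
diff_center = d[c, c]
bdry_diff = max(d[1, 1:-1].max(), d[-2, 1:-1].max(), d[1:-1, 1].max(), d[1:-1, -2].max())
assert diff_center <= bdry_diff + 1e-6   # maximum principle for u1 - u2
```

The difference u_1 - u_2 is discrete-harmonic on the inner domain, so by the maximum principle it is controlled at the center by its values on the inner boundary; that is the weak quantitative form of not feeling the boundary in the interior.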

The first thought that comes to my mind is to try to understand the integral:

\int_{\Omega(t_1)} (\Delta u_2)\, w \, dx

So this is the 'more harmonic' function of the bigger region \Omega(t_2) being integrated over the smaller region.  Not only is \Delta u_2 zero in the interior of the region \Omega(t_1), it is zero even at the boundary and a bit beyond.  This integral is therefore zero.

Now let's integrate \Delta(u_2 w) = w\, \Delta u_2 + 2\, \nabla u_2 \cdot \nabla w + u_2\, \Delta w.  Applying the divergence theorem turns the left side into a boundary integral.  This is integration by parts in slow motion.

\int_{B(t_1)} \frac{\partial (u_2 w)}{\partial n}\, dS = 2 \int_{\Omega(t_1)} \nabla u_2 \cdot \nabla w\, dx + \int_{\Omega(t_1)} u_2\, \Delta w\, dx

Fabulous since we have this freedom to do something with w.  I want to try a couple of things w=1, w=u_1 to see what’s happening.

If we set w=1 then everything on the right-hand side involving derivatives of w vanishes, so we find that \int_{B(t_1)} \frac{\partial u_2}{\partial n}\, dS = 0, which is useful.  If we set w=u_2 then the second term on the right vanishes.
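The identity and these specializations hold exactly in a discrete model.  The following summation-by-parts computation (my own 1-D finite-difference convention) is the grid version of \int w\,\Delta u = \oint w\,\partial_n u - \int \nabla w \cdot \nabla u, with the boundary 'integral' reduced to the flux through the two endpoints.

```python
import numpy as np

# Discrete Green identity on {0,...,N}: sum of w * (second difference of u)
# over interior nodes equals an endpoint flux term minus the sum of products
# of first differences.  Checked on random grid functions.
rng = np.random.default_rng(1)
N = 200
u = rng.standard_normal(N + 1)
w = rng.standard_normal(N + 1)

du = np.diff(u)                          # (Du)_i = u_{i+1} - u_i, i = 0..N-1
dw = np.diff(w)
lap_u = u[2:] - 2 * u[1:-1] + u[:-2]     # second differences at nodes 1..N-1

lhs = np.sum(w[1:-1] * lap_u)
boundary = w[-2] * du[-1] - w[1] * du[0] # discrete flux through the two endpoints
rhs = boundary - np.sum(dw[1:-1] * du[1:-1])

assert np.isclose(lhs, rhs)
```

Setting w constant here reduces the identity to the telescoping statement that the second differences sum to the endpoint fluxes, the discrete analogue of the zero-total-flux fact above.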

Now I look at the boundary integral, and I want to just say 'Stokes' theorem' implies it is zero, but I want to be absolutely sure about things like this because it's one of those things where, without a lot of practice, it's easy to make an error.  I am better off being careful, actually, because the boundary is what we are most interested in.  The Elie Cartan version of Stokes' theorem says \int_{\Omega} d\omega = \int_{\partial \Omega} \omega, and since \partial B(t_1) = \varnothing one might formally conclude that the left side is zero.  But the boundary integrand here is the normal derivative times the area element, not the pullback of an exact form to B(t_1), so the formal argument does not simply apply.

I see the issue is analytically a bit subtle.  Formal adjoints for the Dirichlet problem depend on the Hilbert space chosen, and here the spaces of functions are different, so we need to find a way to put the Hilbert spaces on the same footing.  That is why I am hovering over this simple situation.  I want to see how the comparison can be done concretely here.

I'm going to keep this problem open.  It's definitely solvable, I mean as an exercise, and it will give me some insight into how deformations of the physical universe are tied to the harmonic spinor extensions to the upper and lower hemispheres of S4.  Recall that any connected oriented hypersurface divides S4 into exactly two pieces, in analogy with the Jordan curve theorem in the plane.  When the boundary moves a little, we want to know how the harmonic extensions change close to the boundary, because far in the interior the spinors will be very close to each other, not feeling the boundary.

Let's home in on a point x\in B(t_1).  Choose coordinates (x_1,\dots, x_{n-1},\tau), where the last variable is normal.  Choose the coordinates orthonormal, and let \Delta_T=(\frac{\partial}{\partial x_1})^2 + \cdots + (\frac{\partial}{\partial x_{n-1}})^2.  This is the sort of thing Elias Stein, among others, has done many times in his work.  Now \Delta_T f_1 is fixed and we don't have any control over this quantity, but \Delta_T u_2 = -(\frac{\partial}{\partial \tau})^2 u_2, since u_2 is harmonic across B(t_1).  So that's useful, but now we have to do a little dance to make sense of the one-sided \tau derivatives of u_1, which is defined up to the boundary from the interior and not defined beyond the boundary or on it.

Intuitively we should have something like: for the one-sided, inward-pointing variable -\tau, the difference of \Delta u_2 and \Delta u_1 on the boundary B(t_1) is -\Delta_{T} f_1 -(\frac{\partial}{\partial \tau})^2 u_1.  This is an estimate of how much u_1 differs from actually solving \Delta u_1=0 on the boundary.
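This intuition can be tested in the flat model: for a harmonic function of two variables with boundary \{\tau = 0\}, the tangential Laplacian of the boundary trace and the one-sided second normal derivative cancel exactly.  A small finite-difference sketch, with a made-up harmonic polynomial:

```python
import numpy as np

# For harmonic u with flat boundary {y = 0} and normal variable tau = y,
# check that Delta_T f + (d/d tau)^2 u = 0 on the boundary, where f is the
# boundary trace of u and the tau-derivative is one-sided from inside.
u = lambda x, y: x**3 - 3 * x * y**2     # harmonic: u_xx + u_yy = 0
x0, h = 0.7, 1e-3

# Tangential second derivative of the boundary trace f(x) = u(x, 0).
f = lambda x: u(x, 0.0)
lap_T = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2

# One-sided second normal derivative from inside (y > 0), second-order stencil.
d2_tau = (2 * u(x0, 0.0) - 5 * u(x0, h) + 4 * u(x0, 2 * h) - u(x0, 3 * h)) / h**2

assert abs(lap_T + d2_tau) < 1e-4        # the two pieces cancel: Delta u = 0 up to the boundary
```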

Now we could say the same about u_2, but at the boundary B(t_2).  So we focus on the coordinate changes needed to get the tangential Laplacian after translating by a(t,x) along the normal direction.  This is nice: we can keep the same (x_1,\dots, x_{n-1}) coordinates, since we are just parallel translating the tangent space T_xB(t_1) by displacing it along the normal.

There is a little more complication, however.  We cannot avoid Riemannian geometry even here, since the Laplacian on \mathbf{R}^n cannot always be written as a nice quadratic form in orthonormal coordinates of B(t_1) after a translation and rotation.  The curvature of B(t_1) will come in regardless.

Ok, that's good.  We know what we are trying to do, and even in the flat-space situation we are forced to confront Riemannian geometry.  Excellent.  We have learned that the metric geometry of the boundaries must play a crucial role no matter how we slice the problem.

I was working on a similar issue in June 2015, when the Laplacian of a hypersurface of a sphere popped up; I recorded it then and we will need it again soon, but right now we would like the Laplacian of a hypersurface of \mathbf{R}^n, which should have a similar formula.  I thought this would be extremely standard fare, but it is not, although it is worked out in some places.  I took a version from these notes, which are helpful.


So one has to take the formula with some interpretive care.  Here h is a defining function for the hypersurface \Sigma in the notation of the author.  He sets things up by choosing coordinates so that the origin is the point of concern; at the origin the tangent space of \Sigma is \mathbf{R}^{n-1}\times\{ 0 \}, the metric at the point is the identity matrix, and the Christoffel symbols at the point vanish.  His \Delta_x is our \Delta_{\mathbf{R}^n}; in his coordinates \beta=1, and his \nabla_x^2 is the usual Hessian matrix of mixed partial derivatives.  So this is the Laplacian of the hypersurface at a point, and that is progress: we have a concrete formula.  Ah, I see, the gradient \nabla_x h(0)=0.  That's good, because that will kill off most of the mess at the point zero, and this will leave

\Delta_{\Sigma} \rho(0) = \Delta_{\mathbf{R}^n} \rho(0) - H(0)

where H(0)=\sum_i \kappa_i = \Delta_{\mathbf{R}^n} h(0) is the mean curvature of \Sigma.  This is what I really want.  After an appropriate change of coordinates, the difference between \Delta_{\mathbf{R}^n} and \Delta_{\Sigma} at the point, but not in a neighborhood of the point, is zero, because the mean curvature term from \Delta_{\mathbf{R}^n}h(0) gets multiplied by (\nabla_x h|\nabla_x \rho).  Ok, so we were right all along, but we can only do this at a point.  So if we pick coordinates for B(t_1) at a point we can carry out this coordinate choice, but if we translate the coordinates by moving parallel along the normal by a(t,x), we will have to worry about the defining function for B(t_2), which might behave differently.  The normal direction stays the same, but the tangent directions will be affected by a linear transformation preserving the origin.  If h_2 is the defining function of B(t_2) at (0,\dots,0, a(t,0)), then we have to worry that in these coordinates we will not have the nice relations.  We'll come back to the points of B(t_2) later.  We can get very good coordinates for our hypersurface B(t_1) at a point, so our previous analysis is fine there.  The problem is that it holds only at a point and not in a neighborhood.
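As a sanity check of the formula, keeping the (\nabla_x h|\nabla_x \rho) factor explicit (this is my reading of the notes), take the unit sphere with a distance-type defining function:

```latex
\begin{aligned}
&\Sigma = S^{n-1}\subset\mathbf{R}^n,\qquad h(x)=|x|-1,\qquad \rho(x)=x_n,\qquad p=e_n:\\
&\nabla_x h(p)=e_n,\qquad H(p)=\Delta_{\mathbf{R}^n}h(p)=\frac{n-1}{|x|}\Big|_{x=p}=n-1,\qquad (\nabla_x h\,|\,\nabla_x \rho)(p)=1,\\
&\Delta_{\Sigma}\rho(p)=\Delta_{\mathbf{R}^n}\rho(p)-H(p)\,(\nabla_x h\,|\,\nabla_x \rho)(p)=0-(n-1),
\end{aligned}
```

which matches the classical fact \Delta_{S^{n-1}} x_n = -(n-1)\,x_n evaluated at x = e_n.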

We need a defining function for B(t_1), and in terms of the defining function h we want to compare \Delta_{\mathbf{R}^n} u_1 with \Delta_{\mathbf{R}^n} u_2 at the origin.  We had decided that at the chosen point \Delta_{\mathbf{R}^n} u_2 - \Delta_{\mathbf{R}^n} u_1 = -\Delta_{B(t_1)} f_1 - (\frac{\partial}{\partial \tau})^2 u_1.  What does this do for us?  It tells us something about how non-harmonic u_1 is at the boundary.

Ok, so now we repeat this process with a third, larger region \Omega(t_3), bounded by B(t_3), with a boundary function f_3 and a solution u_3 of the Dirichlet problem, and we use that to measure our discrepancies.  But now we have to care about the geometric differences between the boundaries.

If we ask how non-harmonic u_1 is at its boundary, we find that u_1 has a deficiency -\Delta_{B(t_1)} f_1 - \partial_{\tau}^2 u_1.  If we ask how non-harmonic u_2 is at its boundary, keeping the same coordinates as before, with origin on B(t_1), we can't just repeat the formula: we are now married to the coordinates of B(t_1), and the non-harmonicity of u_2 at its boundary will not enjoy all the nice cancellations that we had the freedom to produce at B(t_1).  So what is the difference at B(t_2), and why do we care?  We want to understand how the change in geometry from B(t_1) to B(t_2) changes the harmonicity of u_1 and u_2, now with a reference u_3 which is harmonic at the points of interest: the origin, and the point where the line drawn from the origin along the normal intersects B(t_2), with the boundary values set so that these two points have the same value.

I see now what this setup was good for.  We want to compare something like F(1,2) = \Delta_{B(t_2)} f_2 - \Delta_{B(t_1)} f_1, because these are the terms that determine how far a 'more harmonic' function u_3 differs from u_1,u_2.  Ok, now this makes sense.  We should have a theorem saying that if u_3 is a harmonic function in a region containing both \Omega(t_1),\Omega(t_2), then at the boundary points F(1,2) is some measure of how u_1,u_2 compare to u_3 in harmonicity at their respective boundaries.  This seems like a long, roundabout way of saying something quite obvious.  But there is something we are circling in on here: for example, we now know that the changes in harmonicity will be determined by the geometric difference of the defining functions, which can be described by a single linear transformation of one tangent space to the other.

Let's take a step back again.  We have the physical universe evolving by something very similar to this Euclidean moving-boundary problem.  We know that we can extend the spinor fields to harmonic fields on the upper hemisphere of S4.  The physical universe moves a little bit as time passes.  Here we are talking about defining equations.  Let's say the defining equation of the physical universe goes from h_1 to h_2 in the time elapsed.  We can see that the Laplacians on the hypersurfaces change as a function of the difference between the gradients and Laplacians of h_1,h_2.  From this we want to get some sense of how big the difference is between the functions u_1,u_2.

I see we need something like a Harnack inequality here.  Ok, so I was right that we need geometric analysis to understand these things.  Recall that Harnack's inequality says that if u(x) is harmonic and non-negative on the ball |x-x_0| < R in \mathbf{R}^n, then for |x-x_0| = r < R we have \frac{1-(r/R)}{(1+(r/R))^{n-1}} u(x_0) \le u(x) \le \frac{1+(r/R)}{(1-(r/R))^{n-1}} u(x_0).  Harnack's inequality tells us about the limited variability of a given harmonic function in a neighborhood.
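A quick numeric sanity check of the stated constants (toy example: u(x,y) = 3 + x is harmonic and positive on the ball of radius R = 2 about the origin in the plane):

```python
import numpy as np

# Check Harnack's inequality for a positive harmonic function on a ball:
# u(x, y) = 3 + x is harmonic and positive on |p| < R = 2 in R^2 (n = 2).
n, R, x0 = 2, 2.0, np.zeros(2)
u = lambda p: 3.0 + p[0]

rng = np.random.default_rng(2)
for _ in range(1000):
    p = rng.uniform(-1.0, 1.0, size=2)
    r = np.linalg.norm(p - x0)
    if r >= R:
        continue                          # only points inside the ball
    t = r / R
    lower = (1 - t) / (1 + t) ** (n - 1) * u(x0)
    upper = (1 + t) / (1 - t) ** (n - 1) * u(x0)
    assert lower <= u(p) <= upper         # Harnack bounds hold at every sample
```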

Let’s take a look to see if this can be useful in the context of our problem.