Let us discuss the main properties of
each of these approaches with respect to their treatment of
infinity. In this connection it is useful to look at conformal
diagrams. In all three cases the physical situation is the same. A
source, indicated by a shaded brown region, evolves between the
time-like past and future infinities and thereby emits
gravitational radiation, which escapes towards future null infinity $\mathscr{I}^+$. The three methods differ in how they introduce an
evolution process in order to simulate the physical system. The
goal is to obtain the radiation data that are received by an
observer on $\mathscr{I}^+$ as accurately as possible (see [49] for a discussion of the issues involved in the reception process).
In Figure 9 we sketch the
geometry implied by solving the Cauchy problem for the standard
Einstein equations in any of the various forms that exist today:
the ADM formulation, in its original [6] or improved [17] form, or any one of the hyperbolic formulations such as those in [1, 74]. In this approach
the physical space-time is decomposed into space and time by the
introduction of a time coordinate. The hypersurfaces of constant
physical time are Cauchy surfaces and, hence, asymptotically
Euclidean. They are indicated by green lines. The blue line
corresponds to a (normally time-like) hypersurface which has to be
introduced because for the standard Einstein equations, formulated
in the physical space-time, null-infinity is infinitely far away.
The numerical grids are distributed along the constant time
hypersurfaces, and with each time-step they evolve from one
hypersurface to the next. They have outermost grid-points which, in
the course of the evolution, move along that time-like
hypersurface.
Thus, inevitably, we are facing the task of solving an initial boundary value problem (IBVP) for the Einstein equations. Such problems are notoriously difficult to treat numerically. This, in fact, is one of the big problems in scientific computing as witnessed by the following quotation [159]:
The difficulties caused by boundary conditions in scientific computing would be hard to overemphasize. Boundary conditions can easily make the difference between a successful and an unsuccessful computation, or between a fast and a slow one. Yet in many important cases, there is little agreement about what the proper treatment of the boundary should be.
One of the sources of difficulty is that although some numerical boundary conditions come from discretizing the boundary conditions for the continuous problem, other ‘artificial’ or ‘numerical boundary conditions’ do not. The reason is that the number of boundary conditions required by a finite difference formula depends on its stencil, not on the equation being modeled. Thus even a complete mathematical understanding of the initial boundary value problem to be solved (which is often lacking) is in general not enough to ensure a successful choice of numerical boundary conditions. This situation reaches an extreme in the design of what are variously known as ‘open’, ‘radiation’, ‘absorbing’, ‘non-reflecting’, or ‘far-field’ boundary conditions, which are numerical artifacts designed to limit a computational domain in a place where the mathematical problem has no boundary at all.
Despite these remarks, perhaps the most basic and useful advice one can offer concerning numerical boundary conditions is this: Before worrying about discretization, make sure to understand the mathematical problem. If the IBVP is ill-posed, no amount of numerical cleverness will lead to a successful computation. This principle may seem too obvious to deserve mention, but it is surprising how often it is forgotten.
While these remarks apply to any numerical problem, they are particularly relevant to numerical relativity.
Until quite recently, no result on the IBVP
for the Einstein equations was available. This situation has
changed with the work in [72] where it is proved
that the IBVP for the Einstein equations in the physical space-time
is well-posed for a certain class of boundary conditions. In rough
terms, these boundary conditions can be understood as follows: For
an arbitrarily specified time-like unit vector tangent to the
boundary, we can define a null-tetrad by taking the two real
null-vectors as lying in the two-dimensional space spanned by that
vector and the unit vector normal to the boundary. This fixes the
null-tetrad up to boosts in that plane and rotations in the plane
orthogonal to it. With respect to that tetrad we can define the
(complex) Weyl tensor components $\Psi_0$ and $\Psi_4$. Then, the boundary conditions have the form
$$ \Psi_0 - q\,\bar{\Psi}_4 = H, \qquad (38) $$
where $q$ is a complex number with $|q| \le 1$ and $H$ is a complex-valued function on the boundary. Consider the boundary condition with $q = 0$ and $H = 0$, viz. $\Psi_0 = 0$. This looks like a completely absorbing boundary
condition or, in other words, like the condition of no incoming
radiation. However, this is deceptive. Recall that the Weyl tensor components are defined with respect to a time-like unit vector. But there is no distinguished time-like unit vector available, so all choices are equally valid. This means that the boundary conditions,
and hence the presence or absence of incoming radiation, are observer dependent. Clearly, the condition
of no incoming radiation is not compatible with the Einstein
equations anyway, because outgoing waves will always be accompanied
by backscattered waves generated by the non-linear nature of the
equations. The impossibility of a local, completely absorbing boundary condition for the scalar wave equation in flat space has long been emphasized in the numerical analysis literature [43]. The deep
reason behind this fact seems to be the Lorentz invariance of the
equations. While the conditions in (38) give rise to a
well-posed IBVP, they have not found their way into the numerical
work yet.
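For concreteness, here is a minimal sketch of that tetrad construction in standard Newman-Penrose conventions (the explicit normalizations are an illustrative choice, not taken from [72]). Let $t^a$ be the chosen time-like unit vector tangent to the boundary, $n^a$ the outward unit normal of the boundary, and $m^a$ a complex vector spanning the plane orthogonal to both. Then set
$$ l^a = \tfrac{1}{\sqrt{2}}\,(t^a + n^a), \qquad k^a = \tfrac{1}{\sqrt{2}}\,(t^a - n^a), $$
and define, with $C_{abcd}$ the Weyl tensor,
$$ \Psi_0 = C_{abcd}\, l^a m^b l^c m^d, \qquad \Psi_4 = C_{abcd}\, k^a \bar{m}^b k^c \bar{m}^d. $$
Boosts in the $(t^a, n^a)$-plane rescale $l^a$ and $k^a$, while rotations in the orthogonal plane change the phase of $m^a$; this is precisely the residual freedom mentioned above.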
Returning to Figure 9 we see that there are
several fundamental difficulties with this (currently standard)
approach. Recall that the goal is to obtain radiative information
near $\mathscr{I}^+$. To achieve this goal we need to arrange for two
things: We need to make sure that we simulate an asymptotically
flat space-time, and we need to have a stable code in order to get
meaningful results. In this approach, both these requirements can
only be realized by an appropriate boundary condition. The orange
line in the diagram indicates the domain of influence of the
boundary. The parts of space-time above that line are influenced by
the boundary condition. It is obvious from the causal dependencies
that the radiative information that can be accessed using this
approach is to a large extent affected by the boundary condition. A
bad choice of the boundary condition, in particular one which is
not compatible with the Einstein evolution, or one which is not
adequate for asymptotic flatness, can ruin any information gained
about the radiation.
So we see that a boundary condition has to have several important properties. It must be:

- physically reasonable, i.e. adequate for asymptotic flatness;
- compatible with the Einstein evolution, so that the resulting IBVP is well-posed;
- numerically practicable, i.e. it must lead to stable codes.

Devising a boundary condition that has all these properties has been found to be very hard, if not impossible. There have been several proposals, ranging from specifying Minkowski or Schwarzschild data to matching with linearized solutions of the Einstein equations. Although these conditions are physically reasonable to some extent, they are not compatible with the Einstein evolution and therefore do not possess the other two properties. Such inappropriate boundary conditions can be expected to perform well for small amplitudes and for short times, when the non-linear effects can be neglected. However, they will necessarily become unstable in the long run.
We can also see from the diagram that the choice
of asymptotically Euclidean hypersurfaces is unfortunate for the
following reason. A natural thing to do in order to increase the
accuracy of the results is to push the boundary further outward
towards infinity. However, pushing along surfaces of constant time brings us closer to space-like infinity $i^0$ and not to the interesting parts in the neighbourhood of $\mathscr{I}^+$. This implies that the numerical codes
not only increase in size but also have to run longer (in the sense of elapsed physical time) until the
interesting regions are covered. The constant time hypersurfaces do
not “track the radiation”.
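A rough causality estimate makes this quantitative (the scaling argument is ours and purely illustrative): to read off radiation at an extraction radius $r_e$ for a duration $T$ of physical time without contamination from the boundary, the boundary must stay outside the past light cones of the extraction events, i.e. (in units with $c = 1$) at a radius $R \gtrsim r_e + T$. With fixed grid spacing $h$, the number of grid points then grows like $(r_e + T)^3 / h^3$, so extending the observation time eventually multiplies the memory by the cube of that factor.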
One idea [23] to
overcome the problems encountered in the standard Cauchy approach
is the so-called Cauchy-characteristic matching procedure [24]. In this
approach one introduces a time-like hypersurface, the interface, along which one tries to match a
Cauchy problem with a characteristic initial value problem. The
Cauchy part provides the interior boundary condition for the
characteristic part while the latter provides the outer boundary
condition for the former. Thus, the space-time is foliated by
hypersurfaces that are partly space-like and partly null, where the
causal character changes in a continuous but non-differentiable way
across a two-dimensional surface (see Figure 10).
The characteristic part of the
hypersurfaces reaches out all the way to $\mathscr{I}^+$. The advantage of
this procedure is that the problem of finding the correct boundary
condition has been eliminated by the introduction of the
characteristic part of the scheme.
The characteristic part has been implemented
numerically in a very successful way (see the Living Reviews
article [164] for a recent review
on that topic). It is based on the Bondi-Sachs equations for the
gravitational field on null-hypersurfaces. Using a “compactified
coordinate” along the generators of the null-hypersurfaces it is
even possible to “bring null-infinity to finite places”. However,
this is different from the conformal compactification because the
metric is not altered. This necessarily leads to equations that
“notice where infinity is” because they degenerate there. Because
of this degeneracy the solutions can be singular unless very
specific boundary values are prescribed. This is essentially the
peeling property of the fields in disguise.
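A simple flat-space model shows this degeneracy explicitly (the example is ours, not taken from [164]). For the spherically symmetric wave equation, write $\psi = r\phi$ and use retarded time $u = t - r$, so that $2\,\partial_u\partial_r\psi = \partial_r^2\psi$. Compactifying along the generators with $x = r/(R + r)$, where $R$ is an arbitrary scale and $x = 1$ corresponds to null infinity, one finds
$$ 2\,\partial_u\partial_x\psi = \frac{(1-x)^2}{R}\,\partial_x^2\psi - \frac{2(1-x)}{R}\,\partial_x\psi. $$
The radial principal part carries the factor $(1-x)^2$, which vanishes at $x = 1$: the equation "notices" where infinity is and degenerates there, even though the metric has not been altered.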
The critical place is, of course, the interface where two completely different codes have to be matched. This is not merely a change in the numerical procedures but the entire setup changes across the interface: the geometry, the independent and dependent variables, and so on. Naturally, this requires a very sophisticated implementation of the interface. Test calculations have been performed successfully for space-times with symmetries (cylindrical, spherical) and/or model equations such as non-linear wave equations. Currently, a combined code for doing the pure gravitational problem in three dimensions is being developed. This is also described in more detail in [164].
Finally, we want to discuss the approach based on
the conformal compactification using the conformal field equations
(see Figure 11). In this case, the
arena is not the physical space-time but some other unphysical
manifold, which is conformally related to it via some conformal factor $\Omega$. The physical space-time is the part of the conformal manifold on which $\Omega$ is positive. Introducing an
appropriate time-coordinate in the conformal manifold leads to a
foliation by space-like hypersurfaces that also cover the physical
manifold. Those hypersurfaces that intersect $\mathscr{I}^+$ transversely are hyperboloidal hypersurfaces in the physical space-time. It is important to note that they are submanifolds of the conformal manifold, so that they do not stop at $\mathscr{I}^+$ but continue smoothly across $\mathscr{I}^+$, which is just another submanifold of the conformal manifold, albeit a special one.
Hyperboloidal initial data (see Section 3.4) are given on one such
hypersurface and are subsequently evolved with the symmetric
hyperbolic (reduced) system of evolution equations. In contrast to
the behaviour discussed above in connection with the characteristic
evolution, the conformal field equations do not know where $\mathscr{I}^+$ is. They know that there might be a $\mathscr{I}^+$ (because the conformal factor is one of the variables in the system), but they have to be told its location on the initial data surface, and from there on this information is carried along by the evolution.
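Concretely, on the initial surface $S$, null infinity is located as the zero set of the conformal factor with non-vanishing gradient,
$$ \mathscr{I}^+ \cap S = \{\, p \in S : \Omega(p) = 0, \ \mathrm{d}\Omega(p) \neq 0 \,\}, $$
and the evolution then transports this set along with all the other data.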
What are the advantages of this kind of
formulation? First of all, one might have thought that the question
of the correct boundary condition for the IBVP for the Einstein
equations has now merely been shifted to the even more complicated
problem of finding a boundary condition for the conformal field
equations. This is true but irrelevant for the following reason:
Suppose we give hyperboloidal initial data on that part of the
initial surface that lies inside the physical region. We may now
smoothly extend these data into the unphysical region. The
structure of the characteristics of the conformal field equations
is such that (provided the gauges are chosen appropriately) the
outermost characteristic is the light cone. Hence $\mathscr{I}^+$, being a null hypersurface, is a characteristic for the evolution equations, and
this implies that the extension of the initial data into the
unphysical region cannot influence the interior. Similarly, it is
true that the conformal field equations also need a boundary condition. But since this boundary has been shifted into the
unphysical region there is no need for it to be physically
reasonable. It is enough to have a boundary condition that is
numerically reasonable, i.e. which leads to stable codes. The
information generated at the boundary travels at most with the
velocity of light and so it cannot spill over into the physical region: $\mathscr{I}^+$ acts as a “one-way membrane”. It should be remarked
here that this is true for the exact analytical case. Numerically,
things are somewhat different but one can expect that any influence
from the outside will die away with the order of accuracy of the
discretization scheme used (see below).
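The statement about characteristics can be made precise; the following is the standard criterion for symmetric hyperbolic systems, written in our notation. For a system $A^\mu(u)\,\partial_\mu u = b(u)$ with symmetric matrices $A^\mu$, a hypersurface with conormal $\nu_\mu$ is characteristic if and only if
$$ \det\left(A^\mu \nu_\mu\right) = 0. $$
If the gauges are chosen so that the outermost characteristic cone coincides with the light cone, the null hypersurface $\mathscr{I}^+$ satisfies this condition, and the domain of dependence of the physical region $\{\Omega > 0\}$ is bounded by $\mathscr{I}^+$ itself: data in the unphysical extension cannot affect it.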
A further advantage of the conformal approach is the possibility of studying global
properties of the space-times under consideration. Not only do the
hyperboloidal hypersurfaces extend up to and beyond null-infinity,
but it is also possible to study the long-time behaviour of
gravitational fields. If the initial data are small enough so that
future time-like infinity $i^+$ is a regular point (see
Theorem 6), then one can
determine in a finite (conformal and computational) time the entire
future of the initial surface and therefore a (semi-)global
space-time up to (and even beyond) $i^+$. This has been successfully demonstrated by Hübner [90] in the case of
spherically symmetric calculations and more recently also in higher
dimensions.
The calculation of a global space-time up to $i^+$ has the effect of shrinking the region of the computational domain that corresponds to the physical space-time, because $\mathscr{I}^+$ seems to move through the grid. This
means that more and more grid-points are lost for the computation
of the physical part. Sometimes it might be more useful to have
more resolution there. Then the available gauge freedom can be used
to gain complete control over the movement of $\mathscr{I}^+$ through the grid [46].
The conformal field equations lead to a first order system of approximately sixty equations, depending on the specific formulation and gauge conditions. In the version derived in Appendix 6 there are 65 equations. This is a very large number of equations. However, on the one hand, one has to compare this with current proposals for hyperbolic systems for the Einstein equations, which advocate a number of equations of the same order (even though these do not contain equations for a conformal factor). On the other hand, the variables appearing in the conformal field equations are of a very geometric nature, and for further investigations of the space-times (finding the geodesics, computing forces, etc.), these variables would have to be computed anyway.
But we can be somewhat more specific: The
conformal field equations have, roughly, a total number of 60
equations. The conventional codes use roughly 10 equations if they
use the standard Cauchy approach. So there is a factor of 6 between
the conformal field equations and the others. But that is not
really the point. The complexity (in particular, the memory) scales
roughly with the number of grid points, and for a 3D code that is $N^3$, where $N$ is the number of grid points per dimension. So already doubling $N$ gives a factor of 8. The
upshot of this is that the memory requirements are not dictated by the number of equations,
because this is a fixed factor, but by the dimensionality of the
code, and this affects both codes in the same way.
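To make the comparison concrete, here is a small illustrative calculation (our own back-of-the-envelope sketch: double precision assumed, one time level, and the rough variable counts quoted above):

def grid_memory_gb(n_vars, n):
    # Memory in GB for one time level of a 3D grid:
    # n_vars variables on n**3 points, 8 bytes per double.
    return n_vars * n**3 * 8 / 1e9

# ~60 variables (conformal field equations) vs. ~10 (standard Cauchy approach)
for n in (100, 200, 400):
    print(f"N={n}: {grid_memory_gb(60, n):8.2f} GB vs {grid_memory_gb(10, n):7.2f} GB")

The ratio between the two columns is fixed at 6, while each doubling of $N$ multiplies both columns by 8: it is the resolution, not the number of variables, that dominates.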
A final word concerning the inclusion of matter into the conformal approach: It is clear that this is more complicated than it is for the standard Einstein equations. The reason is that the conformal field equations also contain the (rescaled) Weyl curvature, which couples to the energy-momentum tensor via its derivatives. This means that one needs equations not only for those matter fields that appear in the energy-momentum tensor but also for their derivatives. Furthermore, the fact that under conformal rescalings the trace-free part of the energy-momentum tensor behaves differently compared to its trace, causes additional difficulties. However, these can be overcome under certain circumstances [63, 91].
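The different behaviour of the trace-free part can be illustrated by a standard conformal identity (textbook material, realized e.g. by the Maxwell field; this is not a description of the specific procedure of [63, 91]). With physical metric $\tilde g_{ab}$ and conformal metric $g_{ab} = \Omega^2 \tilde g_{ab}$, one has in four dimensions
$$ \tilde g^{ab}\,\tilde T_{ab} = 0 \quad\text{and}\quad \tilde\nabla^a \tilde T_{ab} = 0 \qquad\Longrightarrow\qquad \nabla^a\!\left(\Omega^{-2}\,\tilde T_{ab}\right) = 0, $$
so a trace-free, divergence-free energy-momentum tensor can simply be rescaled; once the trace is non-zero, no such simple rescaling is available.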
It was mentioned in Section 3.1 that for $\Omega = 1$ the conformal field equations reduce to the standard
Einstein equations together with the vacuum Bianchi identity. This
suggests that one should specify the conformal factor initially to be equal to unity in a region which contains the sources (assuming the sources have spatially compact support) and to let it decrease until it vanishes on $\mathscr{I}^+$. Furthermore, the time-derivative of $\Omega$ should be adjusted such that it vanishes in a neighbourhood of the sources. In fact, it is possible to do so, but
it is not yet known whether, and if so for how long, one can keep up this condition of constant conformal factor around the sources.
The proposed procedure does not eliminate the problem that
additional equations for the matter variables are needed, but it
might reduce the complexity somewhat.
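As an illustration (the profile is our own ad hoc choice, not one from the literature), in terms of a radial coordinate $r$ on the initial slice one could take
$$ \Omega(r) = \begin{cases} 1, & r \le r_0, \\[4pt] 1 - \left(\dfrac{r - r_0}{r_1 - r_0}\right)^{2}, & r_0 \le r \le r_1, \end{cases} $$
which equals unity in the region $r \le r_0$ containing the sources, vanishes at $r = r_1$ where the slice meets $\mathscr{I}^+$, and has non-vanishing gradient there, as required; a practical implementation would use a smoother matching at $r = r_0$.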
We will now discuss some issues related to the
construction of hyperboloidal initial data and the implementation
of the evolution equations.