The two main codes for performing $N$-body simulations are Kira and NBODYx. The Kira integrator is part of the Starlab environment, which also includes stellar evolution codes and other modules for doing $N$-body simulations [120]. The NBODYx codes have been developed and improved by Aarseth since the early 1960s. He has written two excellent summaries of the general properties and development of the NBODYx codes [1, 2]. For further details, see Aarseth's book on $N$-body simulations [3]. A good summary of general $N$-body applications can also be found at the NEMO website [226]. NBODY6++ is a parallelization of the NBODY6 code for use on large computer clusters [220], and a parallel version of Kira is under development [151]. Most large $N$-body calculations are done with a special-purpose computer called the GRAPE (GRAvity PipE) invented by Makino [150]. The most recent incarnation of the GRAPE is the GRAPE-6, which has a theoretical peak speed of 100 Tflops [228]. There is also a PCI card version (GRAPE-6A), which is designed for use in PC clusters [74]. The GRAPE calculates the accelerations and jerks for the interactions between each pair of stars in the cluster. The next-generation GRAPE-DR, which could reach petaflop speeds, should be operational in about three years.
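To make this concrete, here is a minimal Python sketch (not from the review; the function name, the choice of units with $G = 1$, and the optional softening length are illustrative assumptions) of the direct-summation loop whose accelerations and jerks GRAPE-class hardware pipelines:

```python
import numpy as np

def acc_and_jerk(pos, vel, mass, eps=0.0):
    """Direct-summation accelerations and jerks for N bodies (G = 1).

    a_i    = sum_{j != i} m_j r_ij / |r_ij|^3
    jerk_i = sum_{j != i} m_j [ v_ij / |r_ij|^3
                                - 3 (r_ij . v_ij) r_ij / |r_ij|^5 ]
    eps is an optional Plummer softening length (illustrative knob).
    """
    n = len(mass)
    acc = np.zeros((n, 3))
    jerk = np.zeros((n, 3))
    for i in range(n):            # O(N^2) pair loop: the part GRAPE pipelines
        for j in range(n):
            if i == j:
                continue
            dr = pos[j] - pos[i]  # separation vector r_ij
            dv = vel[j] - vel[i]  # relative velocity v_ij
            r2 = dr @ dr + eps * eps
            r3 = r2 ** 1.5
            acc[i] += mass[j] * dr / r3
            jerk[i] += mass[j] * (dv / r3 - 3.0 * (dr @ dv) * dr / r2 ** 2.5)
    return acc, jerk
```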
The main advantage of $N$-body simulations is the small number of simplifying assumptions which must be made concerning the dynamical interactions within the cluster. The specific stars and trajectories involved in any interactions during the simulation are known. Therefore, the details for those specific interactions can be calculated during the simulation. Within the limits of the numerical errors that accumulate during the calculation [87], one can have great confidence in the results of $N$-body simulations.
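As a hedged illustration of how directly those trajectories are followed, the sketch below builds on the `acc_and_jerk` routine above to advance all bodies with a shared-timestep fourth-order Hermite step, the scheme used (with individual block timesteps and regularization of close encounters, both omitted in this toy version) by codes like NBODY6 and Kira:

```python
def hermite_step(pos, vel, mass, dt, eps=0.0):
    """One shared-timestep 4th-order Hermite step (toy sketch)."""
    a0, j0 = acc_and_jerk(pos, vel, mass, eps)
    # Predict: Taylor expansion using current acceleration and jerk.
    pos_p = pos + vel * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6
    vel_p = vel + a0 * dt + j0 * dt**2 / 2
    # Evaluate: recompute forces at the predicted state.
    a1, j1 = acc_and_jerk(pos_p, vel_p, mass, eps)
    # Correct: Hermite interpolation of the force across the step.
    vel_c = vel + (a0 + a1) * dt / 2 + (j0 - j1) * dt**2 / 12
    pos_c = pos + (vel + vel_c) * dt / 2 + (a0 - a1) * dt**2 / 12
    return pos_c, vel_c
```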
Obviously, one of the main computational difficulties is simply the CPU cost necessary to integrate the equations of motion for $N$ bodies. This scales roughly as $N^3$ [100]. The other computational difficulty of the direct $N$-body method is the wide range of precision required [117, 100].
Consider the range of distances, from the size of neutron stars ($\sim 10$ km) to the size of the globular cluster ($\sim 10$ pc), spanning 14 orders of magnitude. If the intent of the calculations is to determine the frequency of interactions with neutron stars, we have to know the relative position of every star to within 1 part in $10^{14}$. The range of time scales is worse yet. Considering that the time for a close passage of two neutron stars is on the order of milliseconds and that the age of a globular cluster is $\sim 10^{10}$ yr, we find that the time scales span 20 orders of magnitude. These computational requirements, coupled with hardware limitations, mean that the number of bodies which can be included in a reasonable simulation is no more than $\sim 10^5$. This is about an order of magnitude less than the number of stars in a typical globular cluster.
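These dynamic-range figures are easy to verify; the short check below (illustrative values only, matching the approximate scales quoted above) also shows what the $N^3$ cost scaling implies for reaching a realistic cluster size:

```python
import math

PC_IN_M = 3.086e16          # one parsec in meters
YR_IN_S = 3.156e7           # one year in seconds

# Spatial span: neutron-star radius (~10 km) vs. cluster size (~10 pc).
length_span = math.log10(10 * PC_IN_M / 1.0e4)
# Temporal span: NS close passage (~1 ms) vs. cluster age (~1e10 yr).
time_span = math.log10(1e10 * YR_IN_S / 1e-3)
print(f"length span: ~{length_span:.1f} orders of magnitude")  # ~13.5
print(f"time span:   ~{time_span:.1f} orders of magnitude")    # ~20.5
# N^3 cost: pushing N from 1e5 (feasible) to 1e6 (a full cluster)
# multiplies the CPU cost by roughly a factor of a thousand.
print(f"cost factor: ~{(1e6 / 1e5) ** 3:.0f}x")
```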
Although one has great confidence in the results of an $N$-body simulation, these simulations are generally for systems that are smaller than globular clusters. Consequently, applications of $N$-body simulations to globular cluster dynamics involve scaling lower-$N$ simulations up to the globular cluster regime. Although many processes scale with $N$, they do so in different ways. Thus, one scales the results of an $N$-body simulation based upon the assumption of a dominant process. However, one can never be certain that the extrapolation is smooth and that there are no critical points in the scaling with $N$. One can also scale other quantities in the model, so that the quantity of interest is correctly scaled [185]. An understanding of the nature of the scaling is crucial to understanding the applicability of $N$-body simulations to globular cluster dynamics (see Baumgardt [15] for an example). The scaling problem is one of the fundamental shortcomings of the $N$-body approach.
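As one concrete example of why no single scale factor suffices (an illustrative sketch, not a calculation from the review): processes driven by two-body relaxation stretch with the half-mass relaxation time, which in crossing-time units grows roughly as $N / (8 \ln 0.4N)$ (Spitzer's standard estimate, assumed here), while crossing-time-driven processes barely change with $N$:

```python
import math

def relax_per_cross(n):
    """Relaxation time in units of the crossing time,
    t_rlx / t_cross ~ N / (8 ln 0.4N)  (Spitzer's estimate)."""
    return n / (8 * math.log(0.4 * n))

n_sim, n_gc = 1e5, 1e6   # feasible simulation vs. a full cluster
factor = relax_per_cross(n_gc) / relax_per_cross(n_sim)
# Relaxation-driven processes (e.g., core collapse) stretch by ~8x
# relative to crossing-time-driven ones, so the two classes of
# process demand incompatible scale factors.
print(f"relaxation-time stretch from N=1e5 to N=1e6: ~{factor:.1f}x")
```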