The first and probably most important instrument was invented by Wulf in 1909. It consisted of an airtight
cylinder containing a two-fiber electrometer [227, 172]. Its greatest advantage was that it was easily
portable and therefore ideally suited to be taken to great heights for the observation of penetrating rays. As a further development of the conventional ionization chamber, the high-pressure ionization chamber increased the measured ionization signal about fifty-fold compared with the conventional apparatus.
Kohlhörster modified the apparatus, laying special emphasis on more reliable air-tightness. Another issue he addressed was temperature variation: since his balloon-carried experiments reached altitudes of up to 6300 m, the extreme cold endangered the reliability of the instruments. Finally, he also examined what effect different pressures would have on the results of the measurements. Thanks to these modifications, Kohlhörster was not only able to confirm Wulf’s and Hess’ findings of the rise in the number of ions per cubic centimeter per second, but also to discover, by sinking the apparatus into the Saale river, that under water the number of ions declined [123].
In 1928, Geiger and Müller invented the wire counter, which was able to count single events of an ionizing
ray hitting the counter’s tube [74]. The wire counter was the basis for another important development in
cosmic-ray studies, the coincidence method, first described by Bothe and Kohlhörster in 1929 [30]. This method makes use of the Geiger counter by connecting at least two of them. Observing the results of both counters, one finds many events that are unrelated, but some of them coincide in time, which strongly suggests that those hits were caused by penetrating rays able to pass through both tubes without losing all of their energy in between; a simple estimate of how rarely such coincidences occur by chance is sketched below. So, after the idea of cosmic rays had started to settle in scientists’ understanding of the natural world, these inventions helped to widen the range of research.
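To make the reasoning behind the coincidence method concrete, the rate of purely accidental coincidences between two counters can be estimated from their individual counting rates and the resolving time of the coincidence circuit. The relation is the standard textbook one; the numerical values are illustrative and not taken from Bothe and Kohlhörster's experiment.

```latex
% Chance-coincidence rate for two counters with individual counting rates
% $N_1$, $N_2$ and resolving time $\tau$ (illustrative values only):
\begin{equation*}
  R_{\mathrm{acc}} \approx 2\,\tau\,N_1 N_2 ,
  \qquad
  N_1 = N_2 = 10~\mathrm{s^{-1}},\;
  \tau = 10^{-3}~\mathrm{s}
  \;\Rightarrow\;
  R_{\mathrm{acc}} \approx 0.2~\mathrm{s^{-1}} .
\end{equation*}
```

A measured coincidence rate well above this level therefore indicates particles that actually traverse both tubes.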
The fundamental work that had been done up to around 1932 and the knowledge that had been gained through it were summed up by Geiger in 1940 [74]. What had been acknowledged by then as a fact was the cosmic origin of the radiation. In addition, it had become clear that the intensity of the rays was independent of the time of day at which they were measured as well as of the direction from which they appeared to come. At about one ion pair per cubic centimeter per second, ionization at sea level was barely measurable, even though the radiation, up to then simply called “ultra-radiation” for lack of a better term, was extremely penetrating, even in water or solid rock.
Another technical device that played a decisive role in the early research of cosmic radiation has not been
regarded here properly so far: the balloon. It had helped Hess to detect cosmic rays and through constant
improvement would finally mark the transition from earth-bound studies to the beginnings of modern space
travel. Hess had made a total of ten flights, all manned and equipped with Wulf’s electrometers and oxygen
supply. Problems with the balloons’ gas prevented Hess on several occasions from reaching the desired altitude [107]. But finally he succeeded, as did Kohlhörster one year later [123]. After World War I
the importance of manned flights lessened and work with unmanned balloons gained significance. Millikan
had used them in 1923 to gain greater height in his flights [146], using a Wulf apparatus that
weighed less than 200 g, thus being able to ascend higher than any manned balloon could have
done. In Germany, Regener made use of unmanned “pilot-balloons” in order to scrutinize the
characteristics of cosmic rays at ever greater heights. He had built a Wulf electrometer, the “balloon-electrometer”, that fit the special needs of high altitude, including being resistant to the extreme air pressure conditions and the intense cold. This instrument, and the use of several balloons to carry it, allowed ascents of up to 25 km in 1932, surpassing the most extreme manned balloon flights by almost 10 km. These high altitude ascents showed that the increase of ionization
seemed to reach a constant level in the highest part of the atmosphere. The balloons would be
put together in tandem, carrying a gondola of up to 12 kg with the electrometer. The landing was initiated by the bursting of a rubber membrane on one of the balloons. Here the main disadvantage of unmanned balloons became apparent: the instruments that had been on board had to be collected after landing, which meant a great deal of extra work, as they first had to be located in sometimes rather impassable terrain and then had to be brought back to the institute at which the
measurement data was to be analyzed. Regener is reported to have launched balloons that were found
more than 200 km away from the starting point [172]. The technical progress made in the field of data transmission and data archiving therefore did much to simplify the balloon flights of cosmic-ray physicists. Aware of these difficulties, Russian scientists tried to circumvent the problem by using radio signals to transmit their data. At first the data were sent to a telephone receiver, from which they had to be recorded by hand; later, oscilloscopes made the measured data visible so that they could be recorded on film [213]. Some [213, 159] consider these manned and unmanned balloon experiments to mark the beginning of space physics.
Already during World War II, Regener had devised a barrel-shaped container that would have been able to carry a great number of different experimental set-ups in a space rocket and protect these devices against the hostile conditions of such a flight, e.g. extreme temperatures and pressures. Unfortunately, Regener’s invention did not come into regular use: after some test flights, the A4 rockets, now better known as the V2, were used not for scientific but only for military purposes, and the prototype of the device was lost in the war. Still, it earned Regener the reputation of being the founder of extra-terrestrial physics [164]. Another basis for later satellite-borne experiments conducted by American
and Russian researchers was a combination of both known devices, balloons and rockets: balloon-borne rockets called rockoons. Invented in 1949, rockoons were used by Van Allen during the 1950s to
explore the magnetosphere of the Sun, as well as to learn about the properties of cosmic rays. He
came to the conclusion that the Earth’s magnetic field shields us from the majority of cosmic
radiation [47].
This interesting blending of different methods points to another important aspect of the historical analysis of early cosmic-ray studies. Because scientists at the beginning of the 20th century were rather uncertain about the nature and characteristics of the many different kinds of radiation, a vast range of interrelated methods was applied to the problem. On the one hand, different radiation phenomena were detected and analyzed with experimental gear that would all be carried at the same time in either a balloon or a plane [220]. On the other hand, the same methods and technical devices could be used for very different purposes [178, 179]. This makes it sometimes a bit difficult, but also very interesting, for historians of science to find out which scientists worked in the field of cosmic-ray studies, as they sometimes came from very diverse fields of physics.
But balloon flights were not the only means of high-altitude research; a number of observatories were erected and existing stations were extended for the special needs of cosmic-ray research. In Europe the Alps became the focus of interest for Earth-bound studies at great heights, in South America the Andes. There were also several observatories in North America and in Asia. Though they were later to be important for the investigation of the nature of the eruptions from the Sun in the International Geophysical Year of 1957/58, and some of them are still in use today, further work needs to be done in order to learn what role exactly they played in the early stage of cosmic-ray studies [124]. The striking thing is that they came into use for the purposes of cosmic-ray studies very soon after Hess’ “discovery”, at least as early as 1916 [108]. This means that a scientific, financial, administrative and human effort of enormous extent, which has not yet been analyzed historically, had been made in order to construct proper equipment for the experiments and take it to the stations on mountain tops, if these stations existed at all. In addition, a huge investment of man-power was required for the execution of the experiments, not to mention the necessary supplies for the experimenters. If one takes into account the time of these first experiments and the other epoch-making events of those days, like World War I, it is quite astonishing that there was still enough interest to mobilize such resources for a scientific purpose of this extent.
The photographic emulsion technique is based on the improvement of the usual photo-plates, like those used
by Röntgen and Becquerel. The emulsion is thicker and has a high concentration of silver bromide grains, which are activated along the tracks of particles passing through the emulsion and show up as dark grains after development. Microscopic examination of the
properties of the grains that form a particle track gives information about the mass of the particle.
Yet the standard products available in the 1930s could only detect slow protons and alpha particles. After the emulsion had been greatly improved in the mid-1940s, the pions, which had long been predicted theoretically, could finally be traced experimentally [136]. From then on the method was successfully used for investigating the properties of the muons and pions contained in cosmic rays and for the detection of further, as yet unknown particles, like the tauon, the kaon and others [170].
As the technique was rather cheap, it was an attractive means of cosmic-ray studies after World War II. Numerous emulsion plates would be stacked and placed on unmanned balloons. Since unmanned balloons had the main disadvantage of having to be traced over quite long distances, the emulsion technique was one of the first to require international collaborations larger than the rather small ones of the 1930s, benefiting from the fact that the physicists involved could cover a much greater terrain, literally but also in a figurative sense [136].
Particle detectors, as we have already seen in the case of the cloud chamber, are important in particle
physics and astroparticle physics alike. As the objects in these fields of physics cannot be directly
encountered by human sensory perception, they have to be made “visible”. Philosophically speaking, that makes them special objects when it comes to the “reality” of such entities; from a scientific point of view, however, there are various means to make them either visible or audible and then to analyze their properties with the help of detectors [185]. There are many different kinds of detectors, the following
being just a very short overview of the most important ones. Their development through time
mirrors the development of the relationship between early cosmic-ray studies and particle physics.
From the 1950s onwards, detectors were optimized for use in connection with accelerators,
while in principle they could have been used with cosmic rays as well. Detectors in modern
astroparticle physics are even more complex than the detectors briefly described below; they have
developed into huge experimental set-ups that make use of very different technical know-how (see
Section 4.5).
Invented by Glaser in 1952, for which he received the Nobel Prize in 1960 [84], the bubble chamber works on a principle similar to that of the cloud chamber in that a sensitive volume makes the tracks of charged particles visible. But the bubble chamber makes use of a different mechanism: a cylinder filled with a super-heated liquid, like diethyl ether in Glaser’s first set-up [84] or hydrogen, is placed in a magnetic field. Then – while the liquid is close to its boiling point – the pressure in the chamber is lowered. When particles now enter the chamber, the energy they deposit in the liquid produces trails of bubbles, of which a photograph is taken, showing characteristic particle tracks [185] (see Figure 4).
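The magnetic field mentioned above is what turns the photographed tracks into a momentum measurement: for a singly charged particle, the curvature of its track fixes the momentum. The relation below is the standard one; the numbers are purely illustrative and not tied to Glaser's apparatus.

```latex
% Momentum of a singly charged particle from the radius of curvature $r$
% of its track in a magnetic field $B$ (illustrative values only):
\begin{equation*}
  p\,[\mathrm{GeV}/c] \approx 0.3\, B\,[\mathrm{T}]\; r\,[\mathrm{m}],
  \qquad
  B = 1.5~\mathrm{T},\; r = 0.5~\mathrm{m}
  \;\Rightarrow\;
  p \approx 0.23~\mathrm{GeV}/c .
\end{equation*}
```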
Usually bubble chambers are used for taking pictures of particles from accelerators, but they
are also able to trace cosmic rays. When looking for the ideal liquid to build the detector he
had in mind, Glaser came across the problem some chemists encountered while working with
super-heated diethyl ether. At certain intervals the super-heated liquid started to erupt into
bubbles. Glaser’s calculations showed that the eruptions the chemists had witnessed were fully consistent with the rate expected from cosmic radiation and the general radioactive background at sea level [84].
Spark-chamber detectors are a good device for detecting weakly interacting particles. Several parallel plates of light metal are arranged in a gas-filled volume, separated so that they do not touch when a high voltage is applied to them. Incoming particles ionize the gas and produce a cascade of further electrons, which can be traced by the sparks they trigger between the plates.
Drift chambers became the more commonly used devices, as they deliver immediate electronic data rather than relying on photographs and can thus process much more information. Drift-chamber detectors, invented by Charpak around 1967, are a modified form of wire chambers [41]. The latter make use of the fact that incoming particles ionize the gas of the chamber and set electrons free. The freed electrons drift through the gas to the wire, where the strong electric field close to the wire gives them enough energy to produce an avalanche of secondary electrons, which can then be detected. This method makes it possible to reconstruct the location of the primary ionization from the measured drift time [185]; a minimal version of this relation is sketched below.
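The reconstruction just described rests on a simple relation: if the drift velocity of the electrons in the gas is known, the measured drift time gives the distance between the particle's trajectory and the sense wire. The relation is generic; the numbers are merely indicative of typical gas mixtures.

```latex
% Distance of the track from the sense wire, reconstructed from the
% measured drift time (generic relation; indicative values only):
\begin{equation*}
  x = v_{\mathrm{drift}}\,(t_{\mathrm{signal}} - t_0),
  \qquad
  v_{\mathrm{drift}} \approx 5~\mathrm{cm/\mu s},\;
  t_{\mathrm{signal}} - t_0 = 0.4~\mathrm{\mu s}
  \;\Rightarrow\;
  x \approx 2~\mathrm{cm} .
\end{equation*}
```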
Today the use of computers is so common among physicists that it might be taken for granted, but the application of ever-better computer programs to various scientific problems must have been quite a relief for researchers in the 1950s and 1960s, both for the mathematical analysis of collected data and for the calculation of theoretical predictions.
One good example might be the Monte Carlo simulation method. After being successfully applied to the
problem of predicting the development of particle showers in 1958 [37], it became a standard method for
calculating electromagnetic cascades [205]. One advantage of the method is the ability to track particles
individually, thus providing reliable results concerning the lateral spread of cosmic-ray and other particle
showers. Yet in the very early days of computers in cosmic-ray physics, their limited power forced researchers to engage intensively with the scientific processes behind the application of the technical equipment; the sampling methods, for example, had to be considerably improved in order to get good results out of the calculations [205].
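To illustrate the kind of calculation involved, the following is a minimal, one-dimensional toy Monte Carlo of an electromagnetic cascade in the spirit of the simple splitting picture: each particle travels a random distance before splitting into two particles of half its energy, and the number of particles crossing a given depth is averaged over many simulated showers. It is purely illustrative, not the sampling scheme of the historical calculations [37, 205], and all numerical values are assumed.

```python
# Toy Monte Carlo of an electromagnetic cascade (simple splitting model).
# Purely illustrative; all parameter values are assumed.
import random

E0 = 1000.0       # primary energy in units of the critical energy (assumed)
E_C = 1.0         # below this energy a particle stops multiplying
MAX_DEPTH = 40.0  # thickness of the absorber in radiation lengths (assumed)


def shower_size_at(depth_of_interest, rng):
    """Count the particles crossing a given depth in one simulated shower."""
    stack = [(0.0, E0)]  # (current depth, energy) of particles still alive
    crossings = 0
    while stack:
        depth, energy = stack.pop()
        # Distance to the next splitting, drawn from an exponential
        # distribution with a mean of one radiation length.
        new_depth = depth + rng.expovariate(1.0)
        if depth <= depth_of_interest < new_depth:
            crossings += 1
        if new_depth >= MAX_DEPTH or energy / 2.0 < E_C:
            continue  # particle leaves the absorber or is absorbed
        # Splitting: two secondaries share the parent's energy equally.
        stack.append((new_depth, energy / 2.0))
        stack.append((new_depth, energy / 2.0))
    return crossings


if __name__ == "__main__":
    rng = random.Random(42)
    n_showers = 200
    for depth in (2.0, 5.0, 8.0, 12.0):
        mean = sum(shower_size_at(depth, rng) for _ in range(n_showers)) / n_showers
        print(f"depth {depth:4.1f} radiation lengths: mean shower size = {mean:.1f}")
```

Even this toy version shows the characteristic rise and fall of the shower size with depth; the improved sampling methods mentioned above were what made such calculations precise enough to be quantitatively useful.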
But as the history of computer science has its own peculiarities [140], I will not go into any detail on that point.
Particle accelerators do not, of course, belong to cosmic-ray physics. Yet, as this whole section tries to show the influence that technical improvement may have on the development of science, a short overview of some of the most important kinds of accelerators seems appropriate at this point, as they are generally held responsible for the rise of particle physics after the 1950s. What is even more important, the discovery of numerous new particles enabled researchers to develop the Standard Model of particle physics [58], which was to become one important basis of modern astroparticle physics. Moreover, man-made accelerators taught researchers the basic mechanisms of particle acceleration and propagation, so that their findings could be applied to cosmic accelerators of particles, like the shockwaves of supernovae; this was especially relevant as the acceleration of particles by interstellar magnetic fields had been a topic of interest even before [64].
In 1919, the Norwegian electrical engineer Wideröe was the first person to speculate about the concept
of circular accelerators, calling them “ray transformers”. Because he was interested in Rutherford’s
scattering experiments and the problems of nuclear fission, he applied the theory of relativity to describe particles moving close to the speed of light [225]. This concept of a ray transformer, which used changing magnetic fields to accelerate electrons, was later to become the betatron, but as the first experiments Wideröe conducted with this set-up failed, he turned to linear accelerators. The linear accelerator, whose concept Ising had proposed in 1924, turned out to be a success in Wideröe’s hands, as it was able to accelerate ions of sodium and mercury. In the 1940s, Alvarez improved the linear accelerator so that it became possible to work with proton beams [185].
The next step towards larger and more effective particle accelerators was marked by the invention of the
cyclotron, usually ascribed to Lawrence (see Figure 5). He had the idea of combining two of Wideröe’s drift tubes with a magnetic field that would guide the particles on a spiral-like circular path, so that they passed the accelerating gap again and again. The principle worked out quite well, though early cyclotrons of smaller diameter faced some problems in surpassing an energy limit of about 30 MeV, at which the protons become noticeably relativistic and fall out of step with the fixed accelerating frequency. As cyclotrons turned out to be useful for a number of applications, like nuclear physics, they were used during World War II for producing fissile material. In the meantime Wideröe’s idea was taken up again, this time to be turned into the betatron. This electron accelerator was able to reach energies of about 300 MeV. With the increase in energy came an analogous increase in weight: even rather early betatrons in the 1940s weighed up to a few hundred tons [225].
It was the desire to reach ever higher energies, which in the opinion of some researchers only cosmic accelerators could provide [80], that paved the way for the synchrotron. After the principle of phase stability had been discovered, cyclotrons were first developed into synchro-cyclotrons, until their principle was finally merged with that of the betatron to form the synchrotron. These machines were not only able to accelerate electrons and protons alike, but also reached energies in the range of a few GeV soon after their invention [185].
Finally, the concept of storage rings was born out of the realization that a head-on collision of two particles makes the full acceleration energy available for the collision, instead of wasting a great deal of it on the recoil of a fixed target. The first storage rings reached collision energies of a few GeV, and future colliders are heading for the TeV mark [185]. In comparison, cosmic rays are able to exceed even the TeV range and reach up to a few PeV [163].
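The advantage of head-on collisions can be quantified with the usual invariant-mass argument, comparing the centre-of-mass energy available in a collider with that of a fixed-target arrangement. The relations are standard; the numerical example is illustrative only.

```latex
% Available centre-of-mass energy for two equal beams of energy $E$
% colliding head-on, versus a beam of energy $E$ on a fixed target of
% mass $m$ (standard relations; illustrative numbers):
\begin{equation*}
  \sqrt{s}_{\mathrm{collider}} \approx 2E,
  \qquad
  \sqrt{s}_{\mathrm{fixed\ target}} \approx \sqrt{2\,E\,m c^2}
  \quad (E \gg m c^2).
\end{equation*}
% For protons with $E = 100~\mathrm{GeV}$: collider
% $\sqrt{s} \approx 200~\mathrm{GeV}$, fixed target
% $\sqrt{s} \approx \sqrt{2 \times 100 \times 0.938}~\mathrm{GeV}
% \approx 14~\mathrm{GeV}$.
```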