A History of Astrophysics - Part 3

The process of combining light elements into heavier ones – nuclear fusion – happens in the central regions of stars. In their extremely hot cores, matter exists not as individual atoms but as a mix of bare nuclei and free electrons, what we call plasma. The term “plasma” was first applied to ionized gas by Irving Langmuir (1881-1957), a physical chemist from the USA, in the 1920s. It is the fourth state of matter, alongside the three we are familiar with from everyday life on Earth – solid, liquid and gas – and by far the most common one in the universe. Extreme temperatures and pressures are needed to overcome the mutual electrostatic repulsion of positively charged atomic nuclei (ions), often called the Coulomb barrier after the French natural philosopher Charles-Augustin de Coulomb, who formulated the laws of electrostatic attraction and repulsion.
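To get a feel for why such extreme conditions are required, the short Python sketch below compares the electrostatic potential energy of two protons brought within about one femtometre of each other – roughly the range of the nuclear force – with their typical thermal energy in the Sun’s core. The approach distance and the core temperature of about 15 million K are assumed round values for illustration, not figures from the text.

    # Rough sketch: Coulomb barrier for two protons vs. thermal energy in the
    # Sun's core. The ~1 fm approach distance and ~15 million K core temperature
    # are assumed round numbers for illustration.
    k = 8.988e9        # Coulomb constant, N m^2 / C^2
    e = 1.602e-19      # proton charge, C
    k_B = 1.381e-23    # Boltzmann constant, J / K

    r = 1.0e-15        # ~1 femtometre, roughly the range of the nuclear force (m)
    T = 1.5e7          # assumed solar core temperature, K

    barrier = k * e * e / r          # electrostatic potential energy at separation r
    thermal = 1.5 * k_B * T          # average thermal kinetic energy, (3/2) k_B T

    eV = 1.602e-19
    print(f"Coulomb barrier : {barrier / eV / 1e6:.2f} MeV")   # ~1.4 MeV
    print(f"Thermal energy  : {thermal / eV / 1e3:.2f} keV")   # ~1.9 keV
    print(f"Ratio           : {barrier / thermal:.0f}")        # several hundred

Classically the typical proton falls short of the barrier by a factor of several hundred, which is why the quantum tunnelling mentioned below is essential for fusion to proceed at stellar temperatures.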

 

While their work represented a huge conceptual breakthrough, the initial theories of Weizsäcker and Bethe did not explain the creation of elements heavier than helium. Edwin Ernest Salpeter (1924-2008) was an astrophysicist who emigrated from Austria to Australia, studied at the University of Sydney and finally ended up at Cornell University in the USA, where he worked in the fields of quantum electrodynamics and nuclear physics with Hans Bethe. In 1951 he showed how, through the “triple-alpha” reaction, carbon nuclei could be produced from helium nuclei in certain large and hot stars.

The fusion of hydrogen to helium by the proton-proton chain or the CNO cycle requires temperatures on the order of 10 million kelvin. Only at such temperatures will enough hydrogen ions in the plasma have velocities high enough to tunnel through the Coulomb barrier at sufficient rates. There are no stable isotopes of any element with mass number 5 or 8; beryllium-8 (4 protons and 4 neutrons) is highly unstable and short-lived. Only at extremely high temperatures of around 100 million K can the sequence called the triple-alpha process take place. It is so called because its net effect is to combine three alpha particles – ordinary helium-4 nuclei of two protons and two neutrons each – to form a carbon-12 nucleus (6 protons and 6 neutrons). In main sequence stars the central temperatures are too low for this process to take place, but in stars that have entered the red giant phase it can proceed.
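As a small numerical check of this bookkeeping, the sketch below estimates the energy released when three helium-4 nuclei end up as one carbon-12 nucleus. The atomic masses are standard tabulated values, used here as assumptions of the illustration rather than figures from the article.

    # Sketch: energy released by the net triple-alpha reaction 3 He-4 -> C-12,
    # estimated from the mass difference (atomic mass units from standard tables).
    m_he4 = 4.002602      # atomic mass of helium-4, u
    m_c12 = 12.000000     # atomic mass of carbon-12, u (defines the unit)
    u_to_MeV = 931.494    # energy equivalent of 1 u, MeV

    delta_m = 3 * m_he4 - m_c12          # mass destroyed in the net reaction, u
    q_value = delta_m * u_to_MeV         # energy released, MeV

    print(f"Mass difference : {delta_m:.6f} u")     # ~0.0078 u
    print(f"Energy released : {q_value:.2f} MeV")   # ~7.3 MeV per carbon-12 nucleus

Only about 0.07 percent of the participating mass is converted into energy, roughly a tenth of the fraction released in hydrogen-to-helium fusion, which is one reason the helium-burning phase of a star’s life is comparatively brief.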

More advances were made by the English astrophysicist Fred Hoyle (1915-2001). He was born in Yorkshire in northern England and educated in mathematics and theoretical physics at the University of Cambridge by some of the leading scientists of his day, among them Arthur Eddington and Paul Dirac. During World War II he contributed to the development of radar. With the German American astronomer Martin Schwarzschild (1912-1997), son of the astrophysicist Karl Schwarzschild and a pioneer in the use of electronic computers and high-altitude balloons to carry scientific instruments, he developed a theory of the evolution of red giant stars. Hoyle stayed at Cambridge from 1945 to 1973. In addition to his career in physics he is known for his popular science works and wrote novels, plays and short stories. He attributed life on Earth to an infall of organic matter from space. He remained controversial throughout his life for his support of many highly unorthodox ideas, yet he made indisputable contributions to our understanding of stellar nucleosynthesis and together with a few others convincingly showed how heavy elements are created during supernova explosions.

The English astrophysicist Margaret Burbidge (born 1919) was educated at the University of London. She worked in the USA for a long time, but also served as director of the Royal Greenwich Observatory in her native Britain. She studied the spectra of galaxies, determining their masses and chemical composition, and married fellow Englishman Geoffrey Burbidge (1925-2010), who was educated at the University of Bristol and at University College London, where he earned a Ph.D. in theoretical physics. The American astrophysicist William Alfred Fowler (1911-1995) earned his B.S. in engineering physics at Ohio State University and his Ph.D. in nuclear physics at the California Institute of Technology. He and his colleagues at Caltech measured the rates of nuclear reactions of astrophysical interest. After 1964, Fowler worked on problems involving supernovae and the formation of light elements.

Building on the work of Hans Bethe, Hoyle in 1957 co-authored with Fowler and the husband-and-wife team of Geoffrey and Margaret Burbidge the paper Synthesis of the Elements in Stars. They demonstrated how the cosmic abundances of all heavier elements from carbon to uranium could be explained as the result of nuclear reactions in stars. Out of the four, William Fowler alone shared the Nobel Prize in Physics in 1983 with Subrahmanyan Chandrasekhar for work on the evolution of the stars. By then Fred Hoyle was known for, among other things, attributing influenza epidemics to viruses carried in meteor streams.

The Canadian scientist Alastair G. W. Cameron (1925-2005) further aided our understanding of these stellar processes. Astrophysicists spent the 1960s and 70s establishing detailed descriptions of the internal workings of stars. Chushiro Hayashi (1920-2010), educated at the University of Tokyo, together with his students made valuable contributions to stellar models. He found that pre-main-sequence stars follow what are now called “Hayashi tracks” downward on the Hertzsprung-Russell diagram until they reach the main sequence. He was a leader in building astrophysics as a discipline in Japan. The Armenian scientist Victor Ambartsumian (1908-1996) was a pioneer in astrophysics in the Soviet Union, studied stellar evolution and hosted international conferences on the search for extraterrestrial civilizations.

The Austrian physicist Wolfgang Pauli, trusting the principle of energy conservation, proposed in 1930 that an unknown particle carries off the energy that appears to go missing in radioactive beta decay. If it existed it had to be electrically neutral, possess virtually zero mass and move at nearly the speed of light. Enrico Fermi named it the neutrino, meaning “little neutral one” in Italian. Because of their weak interactions with matter, neutrinos are extremely difficult to detect, but their existence was confirmed through experiments with tanks containing hundreds of liters of water by the scientists Frederick Reines (1918-1998) and Clyde Cowan (1919-1974) in the USA in 1956. This achievement was rewarded decades later with a well-deserved Nobel Prize in Physics.

Physicists realized that the nuclear reactions in stars should produce enormous numbers of neutrinos. In 1967, the physicist Raymond Davis, Jr. (1914-2006) installed a large tank of cleaning fluid in a deep gold mine in South Dakota in the United States to detect them. In the 1990s, Japanese and American scientists obtained experimental evidence indicating that neutrinos have a non-zero mass, though one that is extremely small even compared to that of the electron. The Kamiokande detector in the Japanese Alps was of pivotal importance. Davis and the Japanese physicist Masatoshi Koshiba (born 1926) shared the 2002 Nobel Prize in Physics for their work on neutrinos.

From the 1960s to about 2002, scientists struggled to explain why the number of neutrinos observed coming from the Sun was smaller than predicted – only about a third of the expected value in the early experiments. The mystery of the “missing solar neutrinos” was finally solved when it was understood that neutrinos can change from one type into another, and that certain types are more difficult to detect than others. Once these effects were taken into account, the number of observed solar neutrinos closely matched theoretical predictions, which indicates that our understanding of the nuclear processes in stars like the Sun is quite accurate. As the leading American neutrino physicist John N. Bahcall (1934-2005) writes:

“A 1% error in the [Sun’s central] temperature corresponds to about a 30% error in the predicted number of neutrinos; a 3% error in the temperature results in a factor of two error in the neutrinos. The physical reason for this great sensitivity is that the energy of the charged particles that must collide to produce the high-energy neutrinos is small compared to their mutual electrical repulsion. Only a small fraction of the nuclear collisions in the Sun succeed in overcoming this repulsion and causing fusion; this fraction is very sensitive to the temperature. Despite this great sensitivity to temperature, the theoretical model of the Sun is sufficiently accurate to predict correctly the number of neutrinos.”
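Bahcall’s figures correspond to an extremely steep power-law dependence of the detectable high-energy neutrino flux on the central temperature. The sketch below assumes an exponent of about 25 – a value chosen only because it reproduces the numbers in the quotation, not one stated in the article.

    # Sketch: how a steep power law, flux ~ T^n, reproduces Bahcall's quoted
    # sensitivity. The exponent n = 25 is an assumed round value consistent
    # with the figures in the quotation, not taken from the article itself.
    n = 25

    for temp_error in (0.01, 0.03):               # 1% and 3% errors in T
        flux_ratio = (1 + temp_error) ** n        # relative change in neutrino flux
        print(f"{temp_error:.0%} temperature error -> flux off by a factor of {flux_ratio:.2f}")

    # Output: 1% -> ~1.28 (about a 30% error), 3% -> ~2.09 (about a factor of two)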

Neutrinos have become an important tool for astrophysicists. 1987 was a landmark year in neutrino astronomy, with the first naked-eye supernova seen since 1604. That event, called SN1987A, took place in our galactic neighbor the Large Magellanic Cloud. The two most sensitive neutrino observatories in the world, one in Japan and another in the USA, detected a 12-second burst of neutrinos roughly three hours before the supernova became optically visible, which, again, seemed to match theoretical predictions for such events pretty well.

In 1911 the American astronomer Edward Pickering differentiated between low-energy novae, often seen in the Milky Way, and novae seen in other nebulae (galaxies) like Andromeda. By 1919, the Swedish astronomer Knut Lundmark (1889-1958) had realized that low-energy novae occur commonly whereas the brighter novae, which are vastly more luminous, occur rarely. The challenge was to explain the difference between them. In 1981, Gustav A. Tammann from Switzerland estimated that three supernovae occur every century in the Milky Way, yet most of them go undetected owing to obscuring interstellar material.

A nova (pl. novae) is a nuclear explosion caused by the accretion of hydrogen from a nearby companion onto the surface of a white dwarf star; the accumulated hydrogen briefly ignites in a burst of fusion until it is used up. From the Earth we see what appears to be a new star – nova means “new” in Latin – but in reality it is an old star undergoing an eruption. A star can become a nova repeatedly, since this process does not destroy it, unlike a supernova, which obliterates a massive star in a cataclysmic explosion. A supernova explosion can release extraordinary amounts of energy and for a limited period outshine an entire galaxy.

If a white dwarf gains enough additional mass to exceed the Chandrasekhar limit of about 1.44 solar masses, electron degeneracy pressure can no longer sustain it. The star then collapses and explodes in a so-called Type Ia supernova. Since this limit is essentially constant, supernovae of this type have been used as standard candles to measure cosmic distances. Observations of Type Ia supernovae were used in 1998 to demonstrate that the expansion of the universe is accelerating. However, some observations indicate that such events can also be triggered by the merger of two white dwarfs, which might make them slightly less reliable as uniform standard candles, since the mass at explosion could be less constant than once believed.
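The way a standard candle yields a distance can be shown with the textbook distance-modulus relation m - M = 5 log10(d / 10 parsecs). The peak absolute magnitude of about -19.3 for a Type Ia supernova is a commonly quoted figure adopted here purely for illustration.

    # Sketch: using a Type Ia supernova as a standard candle via the
    # distance modulus m - M = 5 log10(d / 10 pc). The peak absolute
    # magnitude M = -19.3 is an assumed typical value for illustration.
    M_PEAK = -19.3

    def distance_parsecs(apparent_magnitude: float) -> float:
        """Distance in parsecs from the observed peak apparent magnitude."""
        return 10 ** ((apparent_magnitude - M_PEAK + 5) / 5)

    # Example: a supernova observed to peak at apparent magnitude 19
    d_pc = distance_parsecs(19.0)
    print(f"Distance: {d_pc:.3e} pc (about {d_pc * 3.262e-9:.1f} billion light-years)")

The sketch ignores interstellar extinction and cosmological corrections, which real supernova surveys must model carefully.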

The neutron was discovered in 1932. Shortly afterwards, the German-born Walter Baade (1893-1960) and the Swiss astronomer Fritz Zwicky (1898-1974), both eventually based in the United States, proposed the existence of neutron stars. Zwicky had a number of brilliant teachers at the ETH in Zürich, including Hermann Weyl, Auguste Piccard and Peter Debye, but left Switzerland for the United States and the California Institute of Technology in 1925 to work with Robert Millikan. Another notable Swiss-born astronomer, Robert Trumpler (1886-1956) from Zürich, had immigrated to the USA in 1915. Trumpler studied galactic open star clusters and interstellar dust, and discovered interstellar extinction – the dimming of starlight by intervening dust.

Zwicky was not as systematic a thinker as Baade, but he could have excellent intuitive ideas. He was a bold and visionary scientist, but also eccentric and not always easy to work with. He stated that “Astronomers are spherical bastards. No matter how you look at them they are just bastards.” His colleagues did not appreciate his often aggressive attitude, but he was friendly toward students and administrative staff. In the words of the English-born physicist Freeman Dyson, “Zwicky’s radical ideas and pugnacious personality brought him into frequent conflict with his colleagues at Caltech. They considered him crazy and he considered them stupid.”

Educated at Göttingen, Walter Baade worked at the Hamburg Observatory in Germany from 1919 to 1931 and at the Mount Wilson Observatory outside of Los Angeles, California, from 1931 to 1958. During the World War II blackouts, Baade used the large Hooker telescope to resolve stars in the central region of the Andromeda Galaxy for the first time. This led to the realization that there were two kinds of Cepheid variable stars and from there to a doubling of the assumed scale of the universe. The German American astronomer Rudolph Minkowski (1895-1976) joined with him in studying supernovae. He was a nephew of the German Jewish mathematician Hermann Minkowski, who did important work on four-dimensional spacetime.

The optician Bernhard Schmidt (1879-1935) was born on an island off the coast of Tallinn, Estonia, in the Baltic Sea, then a part of the Russian Empire. He spoke Swedish and German and spent most of his adult life in Germany. During a journey to Hamburg in 1929 he discussed with Walter Baade the possibility of making a special camera for wide-angle sky photography. In 1930 he developed the Schmidt camera and telescope, which permitted wide-angle views with little distortion and opened up new possibilities for astronomical research. Yrjö Väisälä (1891-1971), a meteorologist, astronomer and instrument maker from Finland, had been working on a related design before Schmidt but left the invention unpublished at the time.

Zwicky and Baade introduced the term “supernova” and suggested that supernovae are completely different from ordinary novae. They proposed that a supernova marks the violent collapse of a massive star into an extremely compact neutron star, a transition that still releases an enormous amount of energy. According to the book Cosmic Horizons:

“Baade knew of several historical accounts of ‘new stars’ that had appeared as bright naked eye objects for several months before fading from view. The Danish astronomer Tycho Brahe, for example, had made careful observations of one in 1572. Zwicky and Baade thought that such events must be supernova explosions in our own Galaxy. At a scientific conference in 1933, they advanced three bold new ideas: (1) massive stars end their lives in stupendous explosions which blow them apart, (2) such explosions produce cosmic rays, and (3) they leave behind a collapsed star made of densely-packed neutrons. Zwicky reasoned that the violent collapse and explosion of a massive star would leave a dense ball of neutrons, formed by the crushing together of protons and electrons. Such an object, which he called a ‘neutron star,’ would be only several kilometers across but as dense as an atomic nucleus. This bizarre idea was met with great skepticism. Neutrons had only been discovered the year before. The notion that an entire star could be made of such an exotic form of matter was startling, to say the least.”

Astronomers readily accepted supernovas, but remained doubtful about neutron stars for many years, believing that such strange objects were unlikely to exist in real life. To transform protons and electrons into neutrons, the density would have to approach the incredible density of an atomic nucleus, about 10¹⁷ kg/m³. A neutron star of twice the mass of our Sun would have a diameter of only 20 kilometers and would therefore fit inside any major city on Earth. Despite the name, a neutron star is probably not composed solely of neutrons. As Neil F. Comins and William J. Kaufmann III state in their book Discovering the Universe:

“Its interior has a radius of about 10 km, with a core of superconducting protons and superfluid neutrons. A superconductor is a material in which electricity and heat flow without the system losing energy, whereas a superfluid has the strange property that it flows without any friction. Both superconductors and superfluids have been created in the laboratory. Surrounding a neutron star’s core is a layer of superfluid neutrons. The surface of the neutron star is a solid, brittle crust of dense nuclei and electrons about ⅓-km thick. The gravitational force of the neutron star is so great at its surface that climbing a bump there just 1-mm high would take more energy than it takes to climb Mount Everest. Neutron stars may also have atmospheres, as indicated by absorption lines in the spectrum of at least one of them.”
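A quick back-of-the-envelope check of the size and density figures given just before the quotation: packing two solar masses into a sphere 20 kilometres across does indeed give a density of the same order as that of an atomic nucleus.

    import math

    # Sketch: check that two solar masses inside a sphere 20 km in diameter
    # gives a density comparable to that of an atomic nucleus (~10^17 kg/m^3).
    # The mass and diameter are the round figures quoted in the text.
    SOLAR_MASS = 1.989e30           # kg
    mass = 2 * SOLAR_MASS           # kg
    radius = 10_000.0               # m (half of the 20 km diameter)

    volume = (4.0 / 3.0) * math.pi * radius ** 3
    density = mass / volume

    print(f"Density: {density:.1e} kg/m^3")   # ~1e18 kg/m^3, within a factor of a few of nuclear density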

Neutron stars were first observed in the 1960s with the rapid development of non-optical astronomy. In 1967 the astrophysicist Jocelyn Bell (born 1943) and the radio astronomer Antony Hewish (born 1924) at Cambridge University in England discovered the first pulsar. They were looking for variations in the radio brightness of quasars and discovered a rapidly pulsating radio source. The radiation had to come from a source no larger than a planet. The Austrian-born, USA-based Jewish astrophysicist Thomas Gold (1920-2004) identified these objects as rotating neutron stars – pulsars – with extremely powerful magnetic fields that sweep beams of radiation around as the stars rotate, in some cases many times per second, making them appear as cosmic lighthouses.
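The claim that the source could be no larger than a planet follows from a light-travel-time argument: a source cannot brighten and fade coherently on a timescale shorter than the time light needs to cross it. The pulse duration of a few hundredths of a second used below is an assumed representative figure for the early pulsar observations, not a number taken from the text.

    # Sketch: light-travel-time limit on the size of a pulsating source.
    # A source cannot switch on and off coherently faster than light can
    # cross it, so size <~ c * (pulse duration). The 0.04 s pulse width is
    # an assumed representative value for early pulsar observations.
    c = 2.998e8            # speed of light, m/s
    pulse_width = 0.04     # s, assumed

    max_size_km = c * pulse_width / 1000.0
    print(f"Maximum source size: about {max_size_km:.0f} km")   # ~12,000 km, roughly the Earth's diameter

That is comparable to the diameter of the Earth and far smaller than any ordinary star, which is why compact objects such as white dwarfs and neutron stars were the only plausible candidates.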

Antony Hewish won the Nobel Prize for Physics in 1974, the first one awarded for astronomical research, although his graduate student Bell made the initial discovery. He shared the Prize with the prominent English radio astronomer Martin Ryle (1918-1984), who helped develop radar countermeasures for British defense during World War II and after the war became the first professor of radio astronomy in Britain. Ryle became a leading opponent of the steady state cosmological model proposed by the English astrophysicist Fred Hoyle.

The process of converting lower-mass chemical elements into higher-mass ones is called nucleosynthesis. One or more stars can form from a large cloud of gas and dust. As the cloud slowly contracts under gravity, the contraction releases energy which in turn heats up its central region. The protostar continues to contract until the core temperature reaches about 10 million K, the minimum temperature required for ordinary hydrogen-to-helium fusion to begin. A main sequence star is then born. When a star exhausts its hydrogen supply the pressure in its core falls and the core begins to shrink, releasing energy and heating up further. The next step is core helium-to-carbon fusion, the triple-alpha process, which requires a central temperature of about 100 million K. Helium fusion also produces nuclei of oxygen-16 (8 protons and 8 neutrons) and neon-20 (10 protons and 10 neutrons).

At core temperatures of 600 million K, carbon-12 can fuse to form sodium-23 (11 protons, 12 neutrons) and magnesium-24 (12 protons, 12 neutrons), but not all stars can reach such temperatures. Stars with higher masses fuse more elements than stars with lower masses. High-mass stars have more than 8-9 solar masses, intermediate-mass stars between 0.5 and 8 solar masses, and low-mass stars between 0.1 and 0.5 solar masses. After exhausting its central supply of hydrogen and helium, the core of a high-mass star undergoes a sequence of further thermonuclear reactions at an ever faster pace, reaching higher and higher temperatures.

When helium fusion ends in the core of a star with more than 8 solar masses, gravitational compression squeezes the carbon-oxygen core and drives the temperature above 600 million K. Helium fusion continues in a shell outside the core, and this shell is itself surrounded by a hydrogen-fusing shell. At 1 billion K, oxygen nuclei can fuse, producing silicon-28 (14 protons, 14 neutrons), phosphorus-31 (15 protons, 16 neutrons) and sulfur-32 (16 protons, 16 neutrons). Each stage proceeds faster than the one before. At 2.7 billion K, silicon fusion begins. Every stage of fusion adds a new shell of matter outside the core, creating something resembling the layers of a massive onion. The outer layers are pushed further and further out.
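The successive burning stages described above can be summarised compactly. The little table printed by the sketch below simply restates the ignition temperatures and main products given in the text; the silicon-burning product, iron-group nuclei, anticipates the next paragraph.

    # Summary of the burning stages described above. Temperatures and products
    # restate the figures given in the text; the output is just a small table.
    burning_stages = [
        # (fuel,     ignition temperature (K), main products)
        ("hydrogen", 1e7,    "helium-4"),
        ("helium",   1e8,    "carbon-12, oxygen-16, neon-20"),
        ("carbon",   6e8,    "sodium-23, magnesium-24"),
        ("oxygen",   1e9,    "silicon-28, phosphorus-31, sulfur-32"),
        ("silicon",  2.7e9,  "iron-group nuclei such as iron-56"),
    ]

    for fuel, temperature, products in burning_stages:
        print(f"{fuel:<9} ignites near {temperature:>8.1e} K -> {products}")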

Energy production in big stars can continue until the various fusion processes have reached nuclei of iron-56 (26 protons, 30 neutrons), which has one of the lowest masses per nucleon (nuclear particle, proton or neutron) of any nucleus. The mass of an atomic nucleus is less than the sum of the individual masses of the protons and neutrons which constitute it. The difference is a measure of the nuclear binding energy which holds the nucleus together. Iron-56 has one of the most tightly bound nuclei, next only to nickel-62, an isotope of nickel with 28 protons and 34 neutrons, and consequently has no excess binding energy available to release through fusion.
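The mass-defect argument can be made concrete by computing the binding energy per nucleon of iron-56. The atomic masses below are standard tabulated values, assumed here for the purpose of illustration.

    # Sketch: binding energy per nucleon of iron-56 from its mass defect.
    # Atomic masses (in u) are standard tabulated values; using the mass of a
    # hydrogen atom rather than a bare proton cancels the 26 electron masses
    # included in the atomic mass of iron-56.
    m_hydrogen = 1.007825    # u
    m_neutron  = 1.008665    # u
    m_fe56     = 55.934936   # u
    u_to_MeV   = 931.494     # MeV per u

    Z, N = 26, 30
    mass_defect = Z * m_hydrogen + N * m_neutron - m_fe56     # u
    binding_energy = mass_defect * u_to_MeV                   # MeV

    print(f"Total binding energy : {binding_energy:.1f} MeV")            # ~492 MeV
    print(f"Per nucleon          : {binding_energy / (Z + N):.2f} MeV")  # ~8.8 MeV

Because roughly 8.8 MeV per nucleon sits near the very top of the binding-energy curve, neither fusing iron-56 with lighter nuclei nor splitting it yields a net release of energy, which is the point made above.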

No star, regardless of how hot it is, can generate energy by fusing elements heavier than iron; iron nuclei represent a very stable form of matter. Fusion of elements lighter than this or splitting of heavier ones leads to a slight loss in mass and a net release of nuclear binding energy. The latter principle, nuclear fission, is employed in nuclear fission weapons (“atom bombs”) by splitting large, massive atomic nuclei such as those of uranium or plutonium, while nuclear fusion of lighter nuclei takes place in hydrogen bombs and in the stars.

When a star much more massive than our Sun has exhausted its fuel supplies it collapses, releasing enormous amounts of gravitational energy that is converted into heat. It then becomes a (Type II) supernova. When the outer layers are thrown back into interstellar space, the material can be incorporated into clouds of gas and dust (nebulae) that form new stars and planets. The remaining core of the exploded star will become a neutron star or a black hole, depending upon how massive it is. It is believed that the heavy elements we find on Earth, for instance gold with atomic number 79, are the result of ancient supernova explosions and were once a part of the Solar Nebula that formed our Solar System almost 4.6 billion years ago.

Without any nuclear fusion reactions to create the temperatures and pressures needed to support the star, gravity takes over and the star collapses in a matter of seconds. Fowler and colleagues calculated that the energy generated within the collapsing star is so great that it provides the conditions needed to create all the elements heavier than iron. As the outer layers of such a star collapse and fall inwards they are met by a blast wave rebounding from the collapsing core. The meeting of these two intense pulses of energy creates a shock wave so extreme that iron nuclei absorb successive neutrons, building up all the heavier elements from iron to uranium. The blast wave continues to spread outwards, and in its final and perhaps finest flourish it creates a supernova explosion that blows the star apart.

The Ukraine-born astrophysicist Iosif Shklovsky (1916-1985), who became a professor at Moscow University and a leading Soviet authority in radio astronomy and astrophysics, proposed that cosmic rays from supernovae might have caused mass extinctions on Earth. The hypothesis is difficult to verify even if true, but such explosions are among the most violent events in the universe, and a nearby (in astronomical terms) supernova could theoretically cause such a disaster. Shklovsky also made theoretical and radio studies of supernovas.

Since a dying star passes its heavier elements on to the clouds from which new stars form, each successive generation of stars contains a higher percentage of heavy elements than the previous one. The Sun is a member of a relatively young generation of stars known as Population I. An older generation is called Population II. A hypothetical Population III of extremely massive, short-lived stars is thought to have existed in the early universe, but as of 2010 no such stars had been directly observed in distant galaxies. This remains an area of active astronomical research, and if such objects are not found then our theoretical models will have to be adjusted. Astrophysicists currently believe that the young universe consisted entirely of hydrogen and helium with trace amounts of lithium and beryllium, all created through Big Bang or primordial nucleosynthesis. All other chemical elements were created later through stellar nucleosynthesis and supernova explosions.

Although it took only about a decade for nuclear fission to go from weapons to peaceful use in civilian power plants, the corresponding transition has been much slower for nuclear fusion. The American physicist Lyman Spitzer Jr., a graduate of Yale and Princeton, in 1951 founded the fusion research program at Princeton that grew into the Princeton Plasma Physics Laboratory, a pioneering effort to harness nuclear fusion as a clean source of energy. In Britain, the English Nobel laureate George Paget Thomson and his team began researching fusion. In the Soviet Union, similar efforts were led by the Russian physicists Andrei Sakharov and Igor Tamm. In 1968 a team there under the leadership of the Russian Lev Artsimovich (1909-1973) achieved temperatures of ten million degrees in a tokamak magnetic confinement device, which subsequently became the preferred configuration for experiments with controlled nuclear fusion.
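The reaction most fusion-power experiments aim to exploit is deuterium-tritium fusion. The article does not name a specific reaction, so the choice of D + T and the tabulated masses below are assumptions of this illustration; the energy yield follows from the same mass-defect bookkeeping used for the stars.

    # Sketch: energy released by deuterium-tritium fusion, D + T -> He-4 + n,
    # the reaction targeted by most fusion-power experiments (an assumption of
    # this illustration; the article does not name a specific reaction).
    m_deuterium = 2.014102   # u
    m_tritium   = 3.016049   # u
    m_helium4   = 4.002602   # u
    m_neutron   = 1.008665   # u
    u_to_MeV    = 931.494    # MeV per u

    q_value = (m_deuterium + m_tritium - m_helium4 - m_neutron) * u_to_MeV
    print(f"Energy per D-T reaction: {q_value:.1f} MeV")   # ~17.6 MeV

Gram for gram this is millions of times more energy than any chemical fuel can release, which explains the appeal of fusion power despite the formidable engineering difficulties.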

Although progress has been made at sites in the USA, Europe and Japan, no fusion reactor has so far managed to generate more energy than has been put into it. ITER (International Thermonuclear Experimental Reactor), an expensive international tokamak fusion research project with European, North American, Russian, Indian, Chinese, Japanese and Korean participation, is scheduled to be completed in France around 2018. There is substantial disagreement over how close we are to achieving commercially viable energy production based on nuclear fusion. Pessimists say we are still a century away, while optimists point out that promising advances have been made in recent years using high-energy laser systems.

 
