Daniel Clery, Piece of the Sun: The Quest for Fusion Energy (ISBN 9781468310412), page 18
The breakthrough came in 1951 when Stanisław Ulam devised some changes that allowed Teller to move forward and, a year later, Elugelab atoll was blown out of the water. The design of the Sausage and the more compact H-bombs that came after it remains a military secret, but a rough picture of what came to be known as the Teller-Ulam design has been pieced together from a number of sources, including declassified documents and unintentional leaks by former weapons designers. The key innovations of the Teller-Ulam design are that the A-bomb ‘primary’ should be separated from the ‘secondary,’ which is a combined fission and fusion device; and that it is the x-rays generated by the primary, not the heat, that cause the secondary to ignite.
In the Teller-Ulam design, x-rays from a fission A-bomb primary are used to compress the H-bomb secondary, sparking fusion.
(Courtesy of Wikimedia Commons)
The Sausage was that shape because it had a conventional A-bomb at one end and the hybrid secondary at the other. The multi-stage process of detonation must happen extremely quickly before the blast of the A-bomb blows everything apart. It goes something like this: The A-bomb is detonated and within 1 millionth of a second it is more than three times the temperature of the Sun’s core and emitting most of its energy as x-rays. The tubular container of the device is a radiation case, known as a hohlraum, made of a material that is opaque to x-rays, such as uranium. This briefly traps the x-rays and channels them up the tube towards the secondary. The hohlraum walls don’t actually reflect the x-rays but absorb them and quickly become so hot that they re-emit more x-rays.
The secondary is cylindrical in shape and multi-layered. The outermost layer is known as the pusher-tamper and is made of another x-ray absorber such as uranium-238 or lead. Inside that is a layer of fusion fuel. In the Sausage they used liquid deuterium and tritium, hence all the bulky cryogenic apparatus required. (Such a cumbersome device could never be deployed as a weapon so in later bombs lithium deuteride was used. This solid compound contains the deuterium needed for fusion and the lithium, when bombarded by neutrons produced during detonation, is transformed into the necessary tritium.) The innermost part of the secondary is known as the ‘sparkplug’ and is a hollow cylinder of fissile material such as plutonium-239 or uranium-235.
When the x-rays from the primary hit the pusher-tamper, the outer layers are blasted off the surface at high speed, causing the rest to recoil in towards the centre in what is called a radiation implosion. The imploding pusher-tamper compresses the fusion fuel to high pressure and that in turn squeezes the sparkplug. As a hollow cylinder the sparkplug is below critical mass, but once the implosion squashes all the plutonium or uranium into the centre it reaches criticality and a second fission explosion starts. It’s this second explosion, pushing outwards, that further compresses and heats the already highly compressed fusion fuel and gets the fusion burn started. The whole process takes a tiny fraction of a second and the resulting blast reduces the whole apparatus to atoms.
Although Teller had contributed to the breakthrough that made the H-bomb possible and was its greatest advocate, he was not chosen to lead the development effort that led to Ivy Mike, perhaps because of his reputation as a prickly personality. Feeling spurned by the Los Alamos lab, Teller moved in 1952 to the University of California, Berkeley, where with the help of Ernest O. Lawrence, director of the university’s Radiation Laboratory, he founded a new weapons design lab in the nearby town of Livermore. The intention was to provide competition for Los Alamos and also to investigate some of the more way-out physics concepts that Teller was attracted to.
From the outset the Livermore lab focused on more innovative weapons design and, perhaps as a result, its first three nuclear tests were duds. But the lab went on to design many of the warheads that were manufactured in their thousands during the Cold War. In pursuit of these weapons, Teller and Lawrence pioneered the use of computers and computer simulation as a way of predicting how a bomb design will behave. The lab often owned the most powerful computer in the world and its designers became expert in devising simulations, which they called ‘codes,’ of nuclear explosions.
In the summer of 1955 a 24-year-old physicist named John Nuckolls left Columbia University in New York to join the thermonuclear explosives design division at Livermore where he was initiated into the secrets of the Teller-Ulam design and the use of weapon design codes. A couple of years later his boss asked him to look into an unusual scenario: If you could excavate a cavity inside a mountain 300m across – probably using a nuclear explosive – would it be economically viable to fill the cavity with steam and then set off a half-megaton H-bomb to drive the steam out and through a turbine to generate electricity? (Teller was a great enthusiast for devising peaceful uses for nuclear bombs.) Nuckolls estimated that the value of the electricity generated would cover the cost of creating the cavity, building the bombs and operating the facility, but he couldn’t be sure how long the cavity would survive the repeated explosions. In any case, he couldn’t see what advantage such a scheme would have over a fission power plant or even a magnetically confined fusion reactor.
But Nuckolls was intrigued by the idea and kept working on it. What if, he wondered, you reduced the size of the explosion so that it could take place in a smaller, man-made cavity? To do that you would need to find something other than an A-bomb to act as the detonator. An A-bomb requires, at the very minimum, a critical mass of fissile material to detonate; any less and it simply doesn’t go off. ‘Little Boy,’ the bomb dropped on Hiroshima in 1945, contained 64 kilograms of uranium-235, not much more than its critical mass. So the smallest possible A-bomb detonator is still going to create a sizable blast. If some other way to set off a fusion explosive could be found, it would be possible to create much smaller explosions that could be contained in a controllable way. There is no critical mass for a fusion explosion; they can be as small as you want.
But how to produce the huge temperatures and pressures needed for fusion without the intense x-rays from a fission explosion? John Foster, head of another Livermore division dealing with fission weapon design, heard about Nuckolls’ investigations and invited him to meetings of a special group he had set up to deal with that precise problem. One of the group, Ray Kidder, had already estimated the sort of conditions that would be required to ignite a small amount of deuterium-tritium fuel confined inside a metal capsule, or pusher – this is similar to the pusher in the Teller-Ulam design but spherical rather than cylindrical.
Nuckolls took away what he learned from this non-nuclear primary group and began to devise a scheme that could be used to explode tiny spherical capsules of D-T. He imagined many potential candidates for the energy source, or ‘driver,’ to set off the implosion, including a plasma jet, a hypervelocity pellet gun, and a pulsed beam of charged particles. Using weapons designers’ codes he simulated a scenario where some driver caused the radiation implosion of a thin pusher capsule containing a tiny quantity – a millionth of a gram – of D-T fusion fuel.
The driver in his scheme would pump 6 million joules (6 MJ) of energy into the capsule in a pulse lasting just 10 billionths of a second (10 nanoseconds). The implosion squeezes the D-T fuel, raising its temperature to around 3 million °C. Nuckolls calculated that this would cause a burning fusion reaction in the fuel producing 50 MJ of energy, hence a gain of almost 10. Nuckolls had realised that compression was the key to getting the fusion to work. It would be possible to use the driver just to heat up the capsule, but imploding the fuel heats it up as a by-product. Nuckolls calculated that it was more energy efficient to heat it by compression, and by ending up with fuel that is hundreds of times as dense as lead you get many more collisions between ions that could result in fusion.
But Nuckolls knew that this first attempt wasn’t good enough to be an energy source. For that he would need to achieve a gain of at least 100, because producing the energy pulse of the driver is likely to be an inefficient process. You may have to put 60 MJ into the driver to get a 6 MJ pulse, and converting the heat from the fusion reactions into electricity will involve more losses. A gain of 10 would certainly not be enough to come out with a profit. For a fusion power plant he would need a better design of fuel capsule, or target, able to produce more energy per explosion and cheap to manufacture – a commercial power plant would need lots of them. There were also exacting demands on the driver: it needed to produce high-energy pulses of only a few nanoseconds’ duration; it would have to focus its energy down to a tiny spot – millimetres or less across – from a distance of some metres away so it wouldn’t be damaged by the explosion; it would need to be efficient and low maintenance so it wasn’t too expensive to run; and over the typical thirty-year life of a power plant it would need to ignite billions of explosions to produce economic quantities of power.
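The energy accounting behind this argument can be sketched in a few lines of Python. The driver efficiency below comes from the 60 MJ-in, 6 MJ-out figure in the text; the heat-to-electricity conversion efficiency is an assumed illustrative value, not something the text specifies.

```python
# Back-of-the-envelope energy accounting for a laser-fusion power plant,
# using the figures quoted in the text. The 40% heat-to-electricity
# conversion efficiency is an assumption for illustration only.

def target_gain(fusion_out_mj, driver_pulse_mj):
    """Gain of the target itself: fusion energy out per unit of driver energy in."""
    return fusion_out_mj / driver_pulse_mj

def net_gain(fusion_out_mj, driver_pulse_mj,
             driver_efficiency, conversion_efficiency):
    """Electricity out per unit of electricity in, once losses are included."""
    electricity_in = driver_pulse_mj / driver_efficiency   # e.g. 6 / 0.1 = 60 MJ
    electricity_out = fusion_out_mj * conversion_efficiency
    return electricity_out / electricity_in

# Nuckolls' first simulation: 6 MJ pulse in, 50 MJ of fusion energy out.
print(target_gain(50, 6))            # ~8.3, 'almost 10'

# With a 10% efficient driver and an assumed 40% conversion efficiency,
# a target gain of ~8 is still a heavy net loss...
print(net_gain(50, 6, 0.1, 0.4))     # ~0.33

# ...whereas a target gain of 100 (600 MJ out) clears break-even.
print(net_gain(600, 6, 0.1, 0.4))    # 4.0
```

This is why the gain-of-100 threshold in the text is not arbitrary: it is roughly what is needed to overcome the combined inefficiency of the driver and the steam cycle.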
Nuckolls set about devising a more finessed scheme that would produce higher gain. His original target had a very thin metal shell acting as the pusher, coated on the outside with a layer of beryllium as an ablator – the material that absorbs the energy of the driver radiation and flies off, pushing the pusher inwards. One problem was instabilities: as the pusher moves inwards during the implosion, any slight irregularity in the pusher’s thickness or in the force applied by the ablator gets amplified, and can end with the pusher breaking up and letting the pressurised fuel inside escape. Another problem was the pusher itself which, being metal, weighed around a hundred times as much as the D-T gas it contained, so much of the energy of the driver was expended in accelerating the pusher rather than compressing the fuel.
So Nuckolls started doing simulations of targets made simply of a hollow sphere of frozen D-T fuel, dispensing with pushers and ablators entirely. Here the driver radiation falls directly on the surface of the D-T sphere, and the material blown off its surface acts as its own ablator. With this sort of target it was no longer enough to deliver one strong kick and then rely on momentum to compress the fuel: with no pusher he had to tailor an extended pulse from the driver so that it kept on pushing. The pulse would start off at low power and ramp up as the pressure within the imploding target increased. Nuckolls also manipulated the implosion so that the very centre of the compressed fuel got the hottest; the fusion burn would ignite there and then propagate outwards, consuming the rest of the D-T fuel.
From the amount of energy you could produce with such a fusion explosion, Nuckolls calculated that the targets would have to be very cheap, no more than a few cents each. A frozen D-T sphere might cost too much, so he simulated a scenario of using the equivalent of an eye-dropper to create droplets of liquid D-T. By further fine-tuning the length and shape of the driver pulse, Nuckolls was able to compress the droplet to a density of 1,000 grams/cm³ – 100 times as dense as lead – and a central temperature of tens of millions of °C. It was a tour-de-force, but it remained only a simulation.
Nuckolls’ fellow weapons designers didn’t take his work very seriously. They referred to the numerous internal memos documenting his progress as ‘Nuckolls’ Nickel Novels.’ In the strange world of the weapons labs, your designs aren’t considered worth much unless they are turned into physical form, taken to Nevada or a South Pacific island and exploded. Nuckolls had no way to test his designs, so his colleagues thought of them as science fiction.
One of the key things that Nuckolls’ schemes lacked was a driver, but the perfect thing was about to fall into his lap: the laser. Over the previous few decades physicists had been studying the phenomenon of stimulated emission of electromagnetic radiation, such as light and microwaves. Stimulated emission occurs when one of the electrons around an atom is raised to a higher energy level and, rather than spontaneously jumping back down to a lower level, it hovers there briefly. If radiation of a particular wavelength then comes along, its presence stimulates the electron to make the jump back down and emit its energy as more radiation. But this isn’t just any old radiation; it’s a perfectly minted copy of the wave that stimulated its creation – same direction, same wavelength and perfectly in step. Researchers realised that if you could somehow produce a large quantity of atoms with electrons in an elevated energy state, a small amount of radiation passing among them would quickly be joined by much more, all identical and in step – such radiation is said to be ‘coherent’ and is extremely useful.
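The runaway copying just described is what makes an amplifier: each traversal of the energised medium multiplies the intensity of the coherent light exponentially. A toy sketch in Python, where the gain coefficient, medium length and number of passes are all made-up illustrative numbers rather than figures from the text:

```python
import math

def one_pass(intensity, gain_per_cm, length_cm):
    """Intensity after one pass through an energised ('inverted') medium:
    exponential absorption run in reverse, I -> I * exp(g * L)."""
    return intensity * math.exp(gain_per_cm * length_cm)

# Seed radiation making repeated passes through the medium, as it would
# while bouncing between two mirrors; every pass multiplies it again.
intensity = 1.0                      # arbitrary seed intensity
for bounce in range(10):
    intensity = one_pass(intensity, gain_per_cm=0.05, length_cm=10.0)
print(intensity)                     # e^5, roughly 148x the seed
```

The exponential growth is the reason a small seed of radiation is ‘quickly joined by much more’ – and why the mirror arrangement described next, which multiplies the number of passes, was the key practical idea.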
In 1954 Charles Townes of Columbia University in New York with two graduate students succeeded in making a microwave amplifier using the stimulated emission principle. The excited material they used was ammonia gas and they named their device the ‘maser,’ an acronym for microwave amplification by stimulated emission of radiation. In Russia, Nikolay Basov and Alexander Prokhorov of the Lebedev Physical Institute in Moscow independently achieved the same feat. From then the race was on to do the same thing with visible light. At Columbia University, graduate student Gordon Gould jotted down some ideas for stimulated emission of light in November 1957, including the key idea of an open resonator – having the energised material sandwiched between two mirrors so that light bounces back and forth, stimulating many emissions before eventually escaping through an aperture as a narrow beam of coherent light. Gould had the great forethought of getting a notary to officially verify the date of his ideas. Townes meanwhile had teamed up with Arthur Schawlow of Bell Telephone Laboratories and a few months later in 1958 they filed a patent on a similar device and published a paper describing their ideas. Basov and Prokhorov were also closing in on an optical device and Prokhorov published a paper independently describing the open resonator concept the same year.
Gould presented his ideas at a conference in 1959 and coined the name ‘laser,’ using the same formula as for maser but amplifying light rather than microwaves. He filed a patent in April but it was rejected by the US Patent Office in favour of the rival patent from Bell Labs. This led to a bitter twenty-eight-year patent battle that was eventually won by Gould.
But in 1959, none of the competing groups was having much luck in building a working device. On 16th May, 1960, they were all beaten to the prize by Theodore Maiman, a physicist and electrical engineer whose doctorate concerned optical and microwave measurements of excited helium atoms. Inspired by Townes and Schawlow’s 1958 paper, he began to look for a suitable material for a laser medium. Working at Hughes Research Laboratories in Malibu, Maiman settled on synthetic ruby. He acquired a rod of ruby, put mirrors at either end and pumped it with light from flashlamps to create the necessary population of energised atoms. Because of the short-lived nature of the flashlamps, the ruby only produced short pulses, but the thin coherent beam of red light (wavelength 694 nanometres) had all the hallmarks of a laser.
Maiman announced his breakthrough in July and researchers at Livermore were instantly fascinated. The problem with existing sources of light was that the beams tended to diverge and the light contained a range of wavelengths, so when you tried to focus it with a lens the different wavelengths came to a focus at slightly different places and the spot was smeared out. A laser beam was as straight as an arrow and, because it contained only a single wavelength, all its light behaved in the same way and its energy could be focused onto a tiny spot.
Developments in laser research came thick and fast as scientists across the globe tried new laser materials and new pumping schemes to produce different wavelengths, demonstrated extremely short pulses (of the sort that would be necessary for fusion) and, most importantly, reached higher powers. By the spring of 1961 it was becoming clear that giant lasers might one day have enough power to drive radiation implosions. Laser light was ideal because the laser and all its focusing optics could be sited some metres away out of range of the explosion.
In September Nuckolls presented the idea of a ‘thermonuclear engine’ to John Foster, now the director of the Livermore lab. He described it as ‘the fusion analog of the cyclic internal combustion engine,’ with fusion capsules ‘burned in a series of tiny contained explosions.’ Nuckolls explained how lasers made all of this possible and rounded off his memo by suggesting ‘possible applications for this engine are power production … or a thermonuclear rocket.’ Whatever the merits of Nuckolls’ plan, it came at just the wrong time: global events soon meant that he had to put aside dreams of a fusion engine.
During the 1950s there was growing international concern about the rapidly accelerating nuclear arms race and the effect of radioactive fallout from atmospheric testing. In 1957 and early 1958 the United States and Soviet Union began to call for a moratorium on testing. A Conference of Experts was convened by the United Nations to investigate whether, if a treaty banned all nuclear tests, it would be possible to detect states cheating by carrying out clandestine explosions. In August 1958 the conference reported that compliance could be verified with a network of 160 seismic monitoring stations spread around the globe. In October the United States, the Soviet Union and the United Kingdom – the three nuclear powers at the time – began negotiations in Geneva for a comprehensive test ban and agreed to stick to a one-year moratorium. The talks continued during 1959 but a major sticking point was the extent to which nations could inspect each other’s territory for evidence of secret tests. International tensions were increasing and at the end of the year, when the testing moratorium expired, it was not renewed, although none of the powers resumed testing immediately.
France’s first nuclear test, in the Sahara Desert in February 1960, further complicated the situation and the downing of an American U-2 spy plane over Russian territory in May meant there was no more progress in the negotiations that year. The new US administration of John F. Kennedy got the ball rolling again in March 1961 but when the US and UK put forward a draft treaty the Soviets again rejected the verification provisions. On 1st September, 1961, citing increased tensions and the French tests, Russia restarted nuclear testing. But it may have had another reason for abandoning the moratorium because two months later, on 30th October, Russia shocked the world by testing Tsar Bomba, the most powerful nuclear bomb ever detonated.