Nuclear Nemesis: will climate change revitalize atomic energy?
Despite the polarization of research on its virtues and vices, there is an increasing realization among climate change mitigation researchers that the energy density of nuclear power should not be neglected.
The quest for a holy grail of global energy supply remains elusive, but much research continues to be cultivated and curated according to preferences and assumptions about a desired outcome. A recent paper in Nature Energy reflects such proclivities in favor of renewable energy, with a clear objective of marginalizing nuclear power. Despite a very elegant hypothesis-driven conceptual framework, the authors have designed a study that diminishes the carbon benefits of nuclear by using a regression analysis that is not well suited to the core societal question at hand: is the future of nuclear power likely to assist with carbon mitigation? Instead of addressing this question, the authors use aggregate carbon emissions data for countries and compare nuclear versus renewable energy dominance over two historic periods ending in 2014. The correlations are based on asymmetric units of comparison, given that only 31 countries in their data set are nuclear power producers while the full sample of countries with renewable portfolios is 123. What the analysis does usefully show is that a switch to renewable energy technologies has led to reduced carbon emissions, and that there can be some competition between the energy sources in terms of investment prioritization. Yet this is where geography becomes a determining factor in considering energy pathways. Nuclear may still be significant in many contexts where the low energy density of renewables cannot deliver economical baseload options or storage (and storage is essential for a 100% renewables transition).
The findings give the paper a highly citable abstract proposition: that a proportionate rise in nuclear energy does not correlate as strongly with carbon emissions reduction as renewable energy dominance does. Yet all this analysis can credibly claim is that past first-generation nuclear dominance has not been a sufficient condition for carbon reduction, though in particular contexts it may still very well be a necessary condition -- if other renewable sources are not available due to geography. Thus the analysis is not particularly illuminating as guidance for nuclear policy discussions, particularly since future nuclear power plants will have radically different technologies from those in the past. The authors have posted a detailed "behind the paper" blog which transparently lays out their motivation for writing this article, partly to redeem themselves from a retracted 2016 paper which had inadvertent errors in its data analysis. Having published with the lead author myself, I have immense respect for the intellectual integrity of his research group, but in this case I feel that their analytical investment could have been better channeled towards evaluating prospective nuclear technologies rather than a reflexive and asymmetric regression.
The history of carbon comparisons research on nuclear is highly contentious, as the range of life cycle analysis (LCA) and environmental product declaration (EPD) methods used to compare carbon footprints from mines to markets makes outputs vastly different. Indeed, composite literature reviews conducted earlier reveal widely divergent assessments, from 4 to 220 gCO2/kWh, giving ample space for activist anti-nuclear scholars to pounce upon. As further analysis by the OECD's Nuclear Energy Agency has shown, much of the inflated carbon range for nuclear stems from assumptions about concentrations of uranium ore and the construction materials (especially concrete) of conventional plants. However, much less of this will likely be relevant to future nuclear development, and that is where industrial ecology research investment should be made. The high capacity factors of nuclear, as well as the clearly demonstrable reduced carbon footprint of future nuclear power, have now been firmly acknowledged by the International Energy Agency. Even the safety risks of nuclear in terms of human mortality, considering the technology's record of more than 70 years, are lower than those of wind power and far lower than those of fossil fuels. This may improve further with future technologies, and costs may well go down once risk analyses are presented on terms equivalent to those used for other energy sources. Therefore, even if we do not consider nuclear power to be a potential panacea, we should certainly rethink its characterization as a pariah.
The Chemical Ecology of Nuclear Power
The polarized contemporary discussion about the opportunities and obstructions for the continuing uptake of nuclear energy deserves attention. To harness the vast energy bound in the atomic nucleus, an element has to be “fissile” – meaning its nucleus should be large and unstable enough to be easily split by bombardment with relatively low-energy neutrons. Often the instability results from an odd number of neutrons in the atom. Thus Uranium 235 and 233 are both fissile whereas Uranium 238 (the more commonly occurring isotope of the element) is not. However, Uranium 238 is fertile, meaning that it can absorb a neutron and convert to fissionable plutonium, which can be used for energy generation -- and, more ominously, for nuclear bombs too! As current uranium-based nuclear power plants in the United States become economically unattractive due to high safety compliance costs, it may be time to consider the next generation of nuclear energy by going back to a basic understanding of how nuclear power can be most efficiently harnessed through fission.
First, an orderly analysis of nuclear energy has to harken back to the source of the raw material needed. Minerals containing fissionable or fertile elements have to be mined from the earth. Given the natural occurrence of fissionable uranium 235, albeit in very small quantities, uranium has been the preferred pathway for sourcing nuclear energy. However, had a more structured approach to the full systems inputs and outputs of using a fertile but not fissile material, such as thorium, been pursued and up-scaled, we might have ended up with a more promising future for nuclear power. This non-thorium trajectory was partly due to the Cold War nuclear arms race, which made uranium mining preferred as a complementary source for weapons-grade material. The unsettling decisions by nuclear-armed states linking nuclear energy and nuclear weapons continued, even after the Cold War, in the production of tritium from nuclear power plants for weapons-related work.
Many of the safety challenges of current uranium-fueled nuclear power plants stem from the need to maintain high pressures to keep the water coolant liquid at operating temperatures; when fuel overheats, its zirconium cladding can react with steam to generate highly explosive hydrogen gas as a byproduct. Furthermore, there is a constant need to control temperature rise with a series of mitigation measures. Thus we are trying to force order on a system which is hell-bent on releasing its entropic power. The explosion observed at the Fukushima Daiichi power plant following the tsunami of 2011 was in fact hydrogen gas igniting after the safety pumps failed and led to an uncontrollable set of reactions. Yet misrepresentations of accidents, and the public's bias towards exaggerating episodic rather than chronic risks, lead to excessive safety investments that make conventional nuclear uneconomical, while also preventing future technologies from being considered with analytical composure.
Back to the Future of Thorium Reactors and Nuclear Wastes
The next generation of nuclear reactor designs takes these matters into account from a more orderly systems perspective than did the earlier nuclear power plants. Although the new generation of power plants could continue to use uranium as a fuel, there is good reason to reconsider thorium, which was neglected earlier because it is not directly fissile. Reactors with thorium as the primary fuel would also generate byproducts with much shorter-lived harmful radiation, owing to the shorter half-lives of the decay isotopes. Thorium is also three times more abundant than uranium in the earth’s crust and found far more widely in economically extractable deposits. The element is a byproduct of existing mines, and hence for the foreseeable future no new mining would be required to fuel thorium reactors. Many rare earth mining and processing operations produce thorium as a byproduct, and it is labeled a “waste,” whereas it has much potential as a feedstock for the next generation of reactors. This is where contemporary environmental activism needs to apply the same metrics of sustainability that it does to any material recycling effort.
The stigma of nuclear “waste” as an imponderable issue has led to a range of errant policies that continue to haunt the future of sustainable energy supply from this vital resource. Applying our understanding of entropic systems, we can see that nuclear “waste” materials are actually fission byproducts. The great value of how these byproducts are generated lies in the low entropy state in which we can contain them. This is the low entropy "deep time" approach the government of Finland has taken in its risk analysis for constructing a long-term nuclear waste repository. Thus even the most radioactively toxic byproducts of nuclear power are highly contained, whereas the byproducts of other energy sources, such as natural gas burning, are dispersed widely through emissions. Even with nascent carbon capture and storage devices on emissions stacks, we are unable to contain many of the byproducts of such combustion. For other sources -- hydropower, wind and solar energy -- the entropic effects come not in terms of direct waste generation but in the extent of land or sea area needed for their production, and of course the material needs of producing and maintaining the infrastructure. Let me be clear: we cannot be glib about nuclear energy. There are still many technical hurdles and risk assurance issues to be addressed, but a dispassionate review of the next generation of reactors is in order.
Power Density Imperative
The concept of “power density” – the rate of energy production per unit volume of fuel (or per unit area of land) – is crucial for understanding the value of nuclear order in providing sustainable energy as compared with other sources. This point has been admirably advocated by the Canadian-Czech energy analyst Vaclav Smil in his numerous books on the topic. Building on his work, Jesse Ausubel of Rockefeller University has calculated that uranium in a light water reactor is at least four orders of magnitude -- ten thousand times -- denser than coals, oils, and hydrocarbon gases. A fast breeder reactor involving thorium would multiply the ratio another hundred times or more. One may wonder, then, why there is continuing controversy about the economic viability of nuclear power and even over its carbon emissions. There should be no doubt that the closing of nuclear power plants in the US is going to make our emissions targets harder to achieve -- a point acknowledged most recently by Leah Stokes in her notable book Short Circuiting Policy about special interest groups in U.S. energy policy, using the heuristic of "narwhal curves".
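The "four orders of magnitude" claim can be checked with a rough back-of-envelope calculation. The sketch below uses approximate, illustrative figures (coal at ~24 MJ/kg of thermal energy, and roughly 500 GJ of thermal energy per kilogram of natural uranium in a once-through light water reactor cycle); these are assumptions for illustration, not figures from Ausubel's own calculation.

```python
# Back-of-envelope check of the "four orders of magnitude" comparison.
# All figures are approximate and assumed for illustration only.

MJ = 1e6  # joules per megajoule

energy_density = {           # thermal energy per kg of fuel, in J/kg
    "coal": 24 * MJ,         # ~24 MJ/kg for bituminous coal
    "crude_oil": 42 * MJ,    # ~42 MJ/kg
    "natural_gas": 54 * MJ,  # ~54 MJ/kg
    # Once-through light water reactor, per kg of *natural* uranium
    # (only ~0.7% is the fissile U-235, and not all of it is fissioned):
    "uranium_lwr": 500_000 * MJ,
}

for fuel, density in energy_density.items():
    ratio = density / energy_density["coal"]
    print(f"{fuel:>12}: {ratio:>10,.0f}x coal")
```

Even with these conservative assumptions, the uranium-to-coal ratio lands around 20,000, comfortably above the ten-thousand-fold threshold; a breeder cycle that fissioned the fertile material as well would push the ratio higher still.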
Many of the economic calculations on nuclear power have focused on current fissile reactors and their safety needs, which require massive, expensive concrete structures. Molten salt reactors, which do not require high pressures and have built-in disengagement mechanisms to prevent accidents, could be far less expensive. Such reactors are tentatively being developed by a range of companies, including TerraPower, founded by Bill Gates, which is also developing “traveling wave reactors” that use depleted uranium as a fuel. “Depleted” uranium denotes a byproduct of enrichment processes, one often used to make armor-penetrating bullets and counterweights in aircraft. Given the challenging regulatory environment around nuclear energy in the United States, many of the new nuclear ventures have been developing their prototype projects in partnership with China. This is where particular notions of political order may come into conflict with a quest for sustainability through nuclear order. TerraPower had reached an agreement in 2017 to build a demonstration reactor south of Beijing with the China National Nuclear Corporation -- a key player in China's carbon neutrality target for mid-century. However, the U.S. government imposed additional restrictions on joint ventures with China in 2019, which have prevented the company from moving forward with the plan as it seeks new partners. This is potentially an area of bipartisanship as well as of science diplomacy which the incoming Biden Administration should consider pursuing.
Billionaire Energy Brawls
Bill Gates and Elon Musk are in a veritable power tussle over nuclear versus solar order for the future of energy. Musk has bet on solar with the argument that even if solar’s energy density is much lower than nuclear’s, it can be situated in areas with ample vacant land in deserts. The bigger issue with regard to solar power’s ascendance will be not the land area per se but the enormous material needs for building the infrastructure and the durability of the materials. Currently, solar panels typically carry a 30-year warranty before losing efficiency. The actual service life of solar and wind infrastructure is one of the cruxes of the controversy that surrounded the release of Michael Moore and Jeff Gibbs's infamous film Planet of the Humans in April 2020. It is also important to note the responses from the renewable energy industry to some of the accusations and data presented in this highly disputed film, as well as some of the core systems arguments it misses.
Energy density calculations can also be impacted by the need to convert one form of energy into another for ultimate delivery. For example, hydrogen as an aircraft fuel is, by mass, roughly three times as energy dense as conventional jet fuel (though far less dense by volume), but its production energy and carbon footprint are still highly variable. “Grey hydrogen” uses methane as a feedstock, and hence its production process has carbon emissions, which can be partially addressed through the nascent carbon capture and storage technologies of what is often termed “blue hydrogen.” The so-called “green hydrogen” which Saudi Arabia is claiming as the central fuel of Neom -- its half-trillion-dollar, one-million-population futuristic city on the Red Sea -- would be produced through solar- and wind-powered electrolysis of water. Thus the overall power density for this still-chimerical city would be very low, but perhaps if these solar and wind farms are constructed on vast barren land it is still ecologically efficient.
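The mass-versus-volume distinction for hydrogen is worth making concrete, since it drives much of the aviation debate. The sketch below uses approximate textbook figures (assumed here for illustration): jet fuel at ~43 MJ/kg and ~35 MJ/L, hydrogen at ~120 MJ/kg but only ~4.5 MJ/L as 700-bar compressed gas or ~8.5 MJ/L as cryogenic liquid.

```python
# Gravimetric vs volumetric energy density of hydrogen and jet fuel.
# Approximate figures, assumed for illustration only.

fuels = {
    # name: (MJ per kg, MJ per litre)
    "jet_fuel":        (43.0, 35.0),
    "hydrogen_700bar": (120.0, 4.5),   # compressed gas at 700 bar
    "hydrogen_liquid": (120.0, 8.5),   # cryogenic liquid at about -253 C
}

jet_kg, jet_litre = fuels["jet_fuel"]
for name, (per_kg, per_litre) in fuels.items():
    print(f"{name:>16}: {per_kg / jet_kg:.1f}x jet fuel by mass, "
          f"{per_litre / jet_litre:.2f}x by volume")
```

On these assumed figures, hydrogen carries nearly three times the energy of jet fuel per kilogram but roughly an eighth of it per litre, which is why tank volume, not weight, is the binding constraint for hydrogen aircraft.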
Nuclear Renewable Complementarity
Improving the durability of solar infrastructure would make the initial material investment more purposeful and sustainable and address some of the issues around low power density noted in the film. Solar and nuclear could also find complementarity with the new breed of reactors which would allow for greater valve control of energy production from nuclear fission. Even with excellent battery storage infrastructure, solar power will always need some secondary backup supply to ensure high quality delivery for particular uses. Natural gas or biofuels have the advantage of easy valve control (switching on and off) as compared with conventional nuclear fission power where it takes considerable time to switch a reactor on and off. However, the new generation of molten salt reactors and other innovations will allow for more flexibility in this regard thereby allowing for more effective backup for solar and wind power.
An ironic connection between solar and nuclear recently came through as America’s newest, and possibly last, conventional fission reactor opened at Watts Bar, Tennessee in 2016. One company that came to the area almost as a direct response to Watts Bar was Wacker Polysilicon. The company, which makes polysilicon for solar panels, started investing in Rhea County, Tennessee in 2009 after construction had started on the new unit at the power plant. Gary Farlow, CEO of the Bradley/Cleveland County Chamber of Commerce, noted that “the biggest consideration for Wacker when they looked at our area was not only the capacity of the power system but also [its] quality and reliability”. The Wacker facility needs so much power that it requires 20% to 25% of the full capacity of a nuclear plant. The plant provides jobs for approximately 650 full-time employees and has invested a total of $2.5 billion into the local area since 2009 -- an intriguing example of how the energetics of human technology can link even competing sources of power.
Don't give up on Nuclear Fusion
The most tangible connection between solar and nuclear lies within the sun itself. Our life-giving orb of plasma at the center of the solar system is a massive nuclear fusion reactor. Unlike nuclear fission reactors on earth, which release energy by splitting heavy nuclei, fusion reactors release the binding energy of the strong nuclear force by joining light nuclei together. The power density is dizzyingly enormous. The fusion of two hydrogen isotopes to form helium releases around four times as much energy per unit mass as uranium fission, and around 200 million times more than breaking hydrocarbon bonds. The rate of energy release increases with the square of the pressure.
The major challenge with fusion technology on Earth’s surface is to compensate for stellar gravitational and pressure conditions by increasing temperature in order to “ignite” the reaction. Thus the energy needed to start the reaction currently ends up being more than what is generated. Indeed, fusion scientists note with some measure of satisfaction that their reactors are the hottest places in the solar system, as the ignition temperature for a fusion reaction between hydrogen isotopes on earth is a staggering 100 million kelvin -- more than six times the temperature at the center of the sun! Human ingenuity has been able to concentrate energy in the highly ordered form of lasers to reach such mind-boggling temperatures.
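The temperature gap is a one-line ratio worth checking. The sketch below assumes the standard astrophysical estimate of roughly 15.7 million kelvin for the solar core against the oft-cited 100 million kelvin terrestrial ignition figure.

```python
# How much hotter must a terrestrial fusion reactor run than the sun's core?
# Figures are rough standard estimates, assumed for illustration.

ignition_temp_K = 100e6   # approximate D-T ignition temperature on earth
sun_core_temp_K = 15.7e6  # standard estimate of the solar core temperature

ratio = ignition_temp_K / sun_core_temp_K
print(f"Terrestrial ignition runs ~{ratio:.1f}x hotter than the solar core")
```

The ratio comes out between six and seven: terrestrial reactors must substitute extreme temperature for the crushing gravitational pressure the sun gets for free.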
Fusion’s promise to supply relatively clean energy at stellar scales, with relatively small inputs of hydrogen isotopes derived from water and perhaps small amounts of oceanic lithium, is enough to motivate major investment in further research. For the past decade, a $20 billion multi-country project to investigate thermonuclear fusion, the International Thermonuclear Experimental Reactor (ITER), has been under construction in Provence, southern France. This massive device will be a prototype for what scientists hope could lead to economically viable fusion power. We will revisit the international alliance of the United States, the European Union, China, India, Japan, Russia and South Korea, which brought forth this unusual collaboration, in our discussion of political order in the final part of this book.
The win-win dream of getting limitless nuclear fusion energy at room temperature from the infamous experiment by Fleischmann and Pons in the 1980s remains elusive. Fleischmann died in 2012, while Pons moved from Utah to France, not far from where the ITER facility is being constructed. Although ITER will only focus on very hot fusion prospects, the case of cold fusion is not entirely closed either. In 2019, a team of researchers supported by Google published a review article in Nature which laid out some specific chemical conditions under which hydride compounds of precious metals like palladium might be able to harness fusion power at lower temperatures. However, the most likely opportunity for extracting nuclear fusion energy remains linked to electromagnetic manipulation at high temperatures.
Sense and Uncertainty in Nuclear Policy Planning
In their landmark book Risk and Culture, Mary Douglas and Aaron Wildavsky alerted us to the deceptive objectification of risk. Ultimately, risk in a complex world of competing and intersecting phenomena is a social heuristic. Going back to the Bible, they note that the dietary laws of Leviticus may have stemmed from some degree of medical materialism but were ultimately about a cultural delineation of boundaries and the social construction of risk. By their analysis, excessive safety targeted at a particular technology like nuclear power could undermine systems safety, as alternatives can appear more attractive than they actually are when considering the full scale and scope of return on investment. The economic marginalization of nuclear power is an intriguing case in point. Massive safety upgrade requirements for existing nuclear power plants have rendered them uneconomical, thereby making climate mitigation targets more challenging to attain in the short term while other low carbon sources are up-scaled.
Sociologist Charles Perrow suggests two ways to mitigate the chance of accidents in such systems: a) looser coupling between various parts of the system, so that an initial fault is less likely to lead to a cascade of catastrophic failures; and b) reducing the complexity of the relationships between components. If either of these conditions is met, accidents become much less likely. For example, my employer – a university – is interactively complex but only loosely coupled. Thus decisions may be influenced by factors that cannot be predicted, which might otherwise cause “normal accidents”, but the effects are felt slowly. In contrast, modern factory production lines are often tightly coupled, with close and rapid transformations between various stages of a process, but the relationships between these stages are not complex (even though the machines may seem complicated). Neither of these systems is prone to accidents the way a coal mine or a nuclear power plant, which combine complexity and tight coupling, can be. Perrow suggested that nuclear plants could be made marginally less complex if the spent fuel storage pool were removed from the premises. Such pools often require constant cooling and attention, and a reactor accident could force a complete loss of power to the fuel cooling system, leading to a cascade of subsequent non-preventable events. Yet prevention of accidents through a reduction of complexity or looser coupling could also have other implications for design. The quest should be for systems that are able to harness complexity and use it for improving outcomes for society.
There are many new scientific developments happening daily in this arena, and we should sensibly consider them as well as the uncertainties they present. Science-based international policy primers continue to be published by the International Atomic Energy Agency, as well as by established research institutions such as MIT, which still maintains a nuclear engineering department. Startup companies are also blossoming in this sector, including Oklo, started by a doctoral graduate of the MIT program, which in June 2020 became the first company to have its application for an advanced nuclear reactor design accepted for review by the U.S. Nuclear Regulatory Commission. There are also growing synergies being found between the renewable energy sector and nuclear energy through the National Renewable Energy Laboratory (under the banner of Flexible Nuclear Energy for Clean Energy Systems). Even the fossil fuel industry's publications are recognizing that there is potential for a "green nuclear deal." The global energy transition will require us to resist positional temptations and be willing to embrace revisionism where needed, as some environmentalists and systems scientists, such as centenarian James Lovelock, have argued with regard to nuclear power. With trepidation and without intransigence, let us keep our eyes on the atom with astute anticipation.
Saleem H. Ali is Blue and Gold Distinguished Professor of Energy and the Environment at the University of Delaware (joint tenured appointments in Geography and the Joseph Biden School of Public Policy); Senior Fellow at Columbia University's Center on Sustainable Investment; and a Professorial Research Fellow at the University of Queensland (Australia). He is also a member of the United Nations International Resource Panel and the Scientific and Technical Advisory Panel of the Global Environment Facility. Twitter @saleem_ali