More rain leads to fewer trees in the African savanna (PNAS)

More rain on the African savanna leads to fewer trees, a Princeton study found. (Credit: PEI)

by Angela Page for the Princeton Environmental Institute

In 2011, an influx of remote sensing data from satellites scanning the African savannas revealed a mystery: these rolling grasslands, with their heavy rainfalls and spells of drought, were home to significantly fewer trees than researchers had previously expected given the biome’s high annual precipitation. In fact, the 2011 study found that the more instances of heavy rainfall a savanna received, the fewer trees it had.

This paradox may finally have a solution thanks to new work from Princeton University recently published in the Proceedings of the National Academy of Sciences. In the study, researchers use mathematical equations to show that physiological differences between trees and grasses are enough to explain the curious phenomenon.

“A simple way to view this is to think of rainfall as annual income,” said Xiangtao Xu, a doctoral candidate in David Medvigy’s lab and first author on the paper. “Trees and grasses are competing over the amount of money the savanna gets every year and it matters how they use their funds.” Xu explained that when the bank is full and there is a lot of rain, the grasses, which build relatively cheap structures, thrive. When there is a deficit, the trees suffer less than grasses and therefore win out.

To establish these findings, Xu and his Princeton collaborators Medvigy, assistant professor in geosciences, and Ignacio Rodriguez-Iturbe, professor of civil and environmental engineering, created a numerical model that mimics the actual mechanistic functions of the trees and grasses. “We put in equations for how they photosynthesize, how they absorb water, how they steal water from each other—and then we coupled it all with a stochastic rainfall generator,” said Xu.

Whereas earlier analyses considered only total annual or monthly rainfall, understanding how rainfall is distributed across days is critical here, Xu said, because it determines who will win the competition between grasses and trees for the finite supply of water.

The stochastic rainfall generator draws on rainfall parameters derived from station observations across the savanna. By coupling it with the mechanistic equations describing how the trees and grasses function, the team was able to observe how the plants would respond under different local climate conditions.
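
Stochastic rainfall generators of this kind (including those in Rodriguez-Iturbe's earlier hydrology work) often model storms as a marked Poisson process: storm arrivals at some mean frequency, each carrying a random depth. The sketch below is a minimal illustration of the idea, not the study's actual code, and its parameter values are invented rather than fitted to station observations.

```python
import numpy as np

def poisson_rainfall(days, storm_prob=0.3, mean_depth_mm=10.0, seed=0):
    """Marked-Poisson daily rainfall: on each day a storm arrives with
    probability `storm_prob` and carries an exponentially distributed
    depth in millimeters. Parameters are illustrative, not fitted."""
    rng = np.random.default_rng(seed)
    wet = rng.random(days) < storm_prob          # storm arrival on each day
    depths = rng.exponential(mean_depth_mm, days)  # storm depth if it rains
    return np.where(wet, depths, 0.0)

# Two climates with the same expected annual total (~1,095 mm) but very
# different intensity: frequent light storms vs. rare heavy ones.
gentle = poisson_rainfall(365, storm_prob=0.6, mean_depth_mm=5.0)
intense = poisson_rainfall(365, storm_prob=0.1, mean_depth_mm=30.0)
print(f"gentle:  {gentle.sum():.0f} mm over {np.count_nonzero(gentle)} wet days")
print(f"intense: {intense.sum():.0f} mm over {np.count_nonzero(intense)} wet days")
```

The comparison at the bottom is the crux of the study's argument: two rainfall regimes can deliver the same annual total while differing sharply in intensity, and it is the intensity, not the total, that tips the tree-grass competition.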

The research team found that under very wet conditions, grasses have an advantage because they can quickly absorb water and support high photosynthesis rates. Trees, with their tougher leaves and roots, are able to survive better in dry periods because of their ability to withstand water stress. But this amounts to a disadvantage for trees in periods of intense rainfall, as they are comparatively less effective at utilizing the newly abundant water.

“We put realistic rainfall schemes into the model, then generated corresponding grass or tree abundance, and compared the numerical results with real-world observations,” Xu said. If the model’s output matched the real-world data, the researchers could say it offered a viable explanation for the unexpected phenomenon, one that traditional models fail to reproduce, and that is exactly what they found. They tested the model against field measurements from a well-studied savanna in Nylsvley, South Africa, and from nine other sites along the Kalahari Transect, as well as against remote sensing data spanning the whole continent. At each site, the model accurately predicted the observed tree abundance.

The work rejects the long-held theory of root niche separation, which predicts that trees will outcompete grasses under intense rainfall, when the soil becomes saturated, because their roots penetrate deeper into the ground. “But this ignores the fact that grasses and trees have different abilities for absorbing and utilizing water,” Xu said. “And that’s one of the most important parts of what we found. Grasses are more efficient at absorbing water, so in a big rainfall event, grasses win.”

“Models are developed to understand and predict the past and present state — they offer a perspective on future states given the shift in climatic conditions,” said Gaby Katul, a Professor of Hydrology and Micrometeorology in the Nicholas School of the Environment at Duke University, who was not involved in the research. “This work offers evidence of how shifts in rainfall affect the tree-grass interaction because rainfall variations are large. The approach can be used not only to ‘diagnose’ the present state where rainfall pattern variations dominate but also offers a ‘prognosis’ as to what may happen in the future.”

Several high-profile papers over the last decade predict that periods of intense rainfall like those described in the paper will become more frequent around the globe, especially in tropical areas, Xu said. His work suggests that these global climate changes will eventually lead to diminished tree abundance on the savannas.

“Because the savanna takes up a large area, which is home to an abundance of both wild animals and livestock, this will influence many people who live in those areas,” Xu said. “It’s important to understand how the biome would change under global climate change.”

Furthermore, the study highlights the importance of understanding the structure and pattern of rainfall, not just the total annual precipitation—which is where most research in this area has traditionally focused. Fifty years from now, a region may still experience the same overall depth of precipitation, but if the intensity has changed, that will induce changes to the abundance of grasses and trees. This, in turn, will influence the herbivores that subsist on them, and other animals in the biome — essentially, affecting the entire complex ecosystem.

Xu said it would be difficult to predict whether such changes would have positive or negative impacts. More grasses, he noted, mean more forage for cows, horses and other herbivores. On the other hand, fewer trees mean less carbon dioxide captured from the atmosphere, as well as diminished habitat for birds and other animals that rely on trees for survival.

What the model does offer is an entry point for better policies and decisions to help communities adapt to future changes. “It’s just like with the weather,” Xu said. “If you don’t read the weather report, you have to take what nature gives you. But if you know in advance that it will rain tomorrow, you know to bring an umbrella.”

This work was supported by the Princeton Environmental Institute and the Andlinger Center for Energy and the Environment at Princeton University.

Read the abstract.

Xiangtao Xu, David Medvigy and Ignacio Rodriguez-Iturbe. Relation between rainfall intensity and savanna tree abundance explained by water use strategies. Proceedings of the National Academy of Sciences (2015). Published online September 29, 2015; in print October 5, 2015. DOI: 10.1073/pnas.1517382112.

Long-sought chiral anomaly detected in crystalline material (Science)

By Catherine Zandonella, Office of the Dean for Research

A study by Princeton researchers presents evidence for a long-sought phenomenon — first theorized in the 1960s and predicted to be found in crystals in 1983 — called the “chiral anomaly” in a metallic compound of sodium and bismuth. The additional finding of an increase in conductivity in the material may suggest ways to improve electrical conductance and minimize energy consumption in future electronic devices.

“Our research fulfills a famous prediction in physics for which confirmation seemed unattainable,” said N. Phuan Ong, Princeton’s Eugene Higgins Professor of Physics, who co-led the research with Robert Cava, Princeton’s Russell Wellman Moore Professor of Chemistry. “The increase in conductivity in the crystal and its dramatic appearance under the right conditions left little doubt that we had observed the long-sought chiral anomaly.”

The study was published online today in the journal Science.

This sketch illustrates the concept of handedness, or chirality, which is found throughout nature. Most chemical structures and many elementary particles come in right- and left-handed forms. Source: Princeton University

The chiral anomaly – which describes how elementary particles can switch their orientation in the presence of electric and magnetic fields – stems from the observation that right- and left-handedness (or “chirality” after the Greek word for hand) is ubiquitous in nature. For example, most chemical structures and many elementary particles come in right- and left-handed forms that are mirror images of each other.

Early research leading up to the discovery of the anomaly goes back to the 1940s, when Hermann Weyl at the Institute for Advanced Study in Princeton, New Jersey, and others discovered that all massless elementary particles strictly segregate into left- and right-handed populations that never intermix (neutrinos, though now known to have an extremely small mass, behave this way to an excellent approximation).

A few decades later, theorists discovered that the presence of electric and magnetic fields ruins the segregation of these particles, causing the two populations to transform into each other with observable consequences.

This field-induced mixing, which became known as the chiral anomaly, was first encountered in 1969 in work by Stephen Adler of the Institute for Advanced Study, John Bell of the European Organization for Nuclear Research (CERN) and Roman Jackiw of the Massachusetts Institute of Technology, who successfully explained why certain elementary particles, called neutral pions, decay much faster — by a factor of 300 million — than their charged cousins. Over the decades the anomaly has played an important if perplexing role in the grand quest to unify the four fundamental forces of nature.
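
For reference, the anomaly has a compact textbook statement. For a single Dirac fermion in natural units (ħ = c = 1), the chiral current is no longer conserved once electric and magnetic fields with a parallel component are applied; conventions for the prefactor vary with normalization, but the standard Adler-Bell-Jackiw form reads:

```latex
\partial_\mu j_5^\mu \;=\; \frac{e^2}{2\pi^2}\,\mathbf{E}\cdot\mathbf{B}
```

The right-hand side is largest when the fields are parallel, which is exactly the geometry the Princeton experiment described below exploited.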

The prediction that the chiral anomaly could also be observed in crystals came in 1983 from physicists Holger Bech Nielsen of the University of Copenhagen and Masao Ninomiya of the Okayama Institute for Quantum Physics. They suggested that it may be possible to detect the anomaly in a laboratory setting, which would enable researchers to apply intense magnetic fields to test predictions under conditions that would be impossible in high-energy particle colliders.

Recent progress in the development of certain kinds of crystals known as “topological” materials has paved the way toward realizing this prediction, Ong said. In the crystal of Na3Bi, which is a topological material known as a Dirac semimetal, electrons occupy quantum states that mimic massless particles that segregate into left- and right-handed populations.

To see whether they could observe the anomaly in Na3Bi, Jun Xiong, a graduate student in physics advised by Ong, worked with a crystal grown by Satya Kushwaha, a postdoctoral research associate in chemistry who works with Cava. Xiong cooled the crystal to cryogenic temperatures and applied a strong magnetic field that could be rotated relative to the direction of the electrical current in the crystal. When the magnetic field was aligned parallel to the current, the two chiral populations intermixed to produce a novel increase in conductivity, which the researchers call the “axial current plume.” The experiment confirmed the existence of the chiral anomaly in a crystal.

“One of the key findings in the experiment is that the intermixing leads to a charge current, or axial current, that resists depletion caused by scattering from impurities,” Ong said. “Understanding how to minimize the scattering of current-carrying electrons by impurities — which causes electronic devices to lose energy as heat — is important for realizing future electronic devices that are more energy-efficient. While these are early days, experiments on the long-lived axial current may help us to develop low-dissipation devices.”

The research was supported by the National Science Foundation, the Army Research Office and the Gordon and Betty Moore Foundation.

Read the abstract or paper.

The paper, “Evidence for the chiral anomaly in the Dirac semimetal Na3Bi,” was published online in the journal Science by Jun Xiong, Satya K. Kushwaha, Tian Liang, Jason W. Krizan, Max Hirschberger, Wudi Wang, Robert J. Cava and N. Phuan Ong.

Grey Swans: Rare but predictable storms could pose big hazards (Nature Climate Change)

By John Sullivan, School of Engineering and Applied Science

Toward the end of this century (projected here for the years 2068 to 2098), the probability of storm surges of eight to 11 meters (26 to 36 feet) increases significantly in cities not usually expected to be vulnerable to tropical storms, such as Tampa, Florida, according to recent research in the journal Nature Climate Change.

Researchers at Princeton and MIT have used computer models to show that severe tropical cyclones could hit a number of coastal cities worldwide that are widely seen as unthreatened by such powerful storms.

The researchers call these potentially devastating storms Grey Swans, a play on the term Black Swan, which has come to mean a truly unpredicted event that has a major impact. Grey Swans are highly unlikely, the researchers said, but they can be predicted with a degree of confidence.

“We are considering extreme cases,” said Ning Lin, an assistant professor of civil and environmental engineering at Princeton. “These are relevant for policy making and planning, especially for critical infrastructure and nuclear power plants.”

In an article published Aug. 31 in Nature Climate Change, Lin and her co-author Kerry Emanuel, a professor of atmospheric science at the Massachusetts Institute of Technology, examined potential storm hazards for three cities: Tampa, Fla.; Cairns, Australia; and Dubai, United Arab Emirates.

The researchers concluded that powerful storms could generate dangerous storm surges in all three cities. For each city, they estimated the surge level that would occur with odds of 1 in 10,000 in an average year under current climate conditions.

Tampa Bay, for example, has experienced very few extremely damaging hurricanes in its history, the researchers said. The city, which lies on the central-west coast of Florida, was hit by major hurricanes in 1848 and in 1921.

The researchers entered Tampa Bay-area climate data recorded between 1980 and 2005 into their model and ran 7,000 simulated hurricanes in the area. They concluded that, although unlikely, a Grey Swan storm could bring surges of up to roughly six meters (about 20 feet) to the Tampa Bay area. That level of storm surge could dwarf those of the storms of 1848 and 1921, which reached about 4.6 meters and 3.5 meters, respectively.

The researchers said their model also indicates that the probability of such storms will increase as the climate changes.

“With climate change, these probabilities can increase significantly over the 21st century,” the researchers said. In Tampa, the current storm surge likelihood of 1 in 10,000 is projected to increase to between 1 in 3,000 and 1 in 1,100 by mid-century and between 1 in 2,500 and 1 in 700 by the end of the century.
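
Those odds are easier to grasp as probabilities over a planning horizon. A quick illustrative conversion, assuming independent years with a constant annual exceedance probability (back-of-the-envelope arithmetic on the figures above, not a calculation from the paper):

```python
def prob_within(annual_prob: float, years: int) -> float:
    """Chance of at least one exceedance in `years` independent years."""
    return 1.0 - (1.0 - annual_prob) ** years

scenarios = [("current climate", 1 / 10_000),
             ("mid-century, upper bound", 1 / 1_100),
             ("end of century, upper bound", 1 / 700)]
for label, p in scenarios:
    print(f"{label}: 30-year chance of occurrence {prob_within(p, 30):.1%}")
```

Under these assumptions, a 1-in-10,000 annual risk amounts to roughly a 0.3 percent chance over 30 years, while 1 in 700 amounts to roughly 4 percent, a more than tenfold increase over a single generation.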

The work was supported in part by Princeton’s Project X Fund, the Andlinger Center for Energy and the Environment’s Innovation Fund, and the National Science Foundation.

Read the abstract.

Ning Lin & Kerry Emanuel, “Grey swan tropical cyclones,” Nature Climate Change (2015); doi:10.1038/nclimate2777

On warmer Earth, most of Arctic may remove, not add, methane (ISME Journal)

McGill Arctic Research Station in late spring at Expedition Fjord, Axel Heiberg Island, Nunavut, Canada. (Photo by Nadia Mykytczuk, Laurentian University)

By Morgan Kelly, Office of Communications

In addition to melting icecaps and imperiled wildlife, a significant concern among scientists is that higher Arctic temperatures brought about by climate change could release, as carbon dioxide and methane, massive amounts of the carbon locked in the region’s frozen soil. Arctic permafrost is estimated to contain about a trillion tons of carbon, and releasing even part of it would potentially accelerate global warming. Emissions in the form of methane are of particular concern because, on a 100-year timescale, methane is about 25 times more potent than carbon dioxide at trapping heat.

However, new research led by Princeton University researchers and published in The ISME Journal in August suggests that, thanks to methane-hungry bacteria, the majority of Arctic soil might actually be able to absorb methane from the atmosphere rather than release it. Furthermore, that ability seems to become greater as temperatures rise.

The researchers found that Arctic soils containing low carbon content — which make up 87 percent of the soil in permafrost regions globally — not only remove methane from the atmosphere, but also become more efficient as temperatures increase. During a three-year period, a carbon-poor site on Axel Heiberg Island in Canada’s Arctic region consistently took up more methane as the ground temperature rose from 0 to 18 degrees Celsius (32 to 64.4 degrees Fahrenheit). The researchers project that should Arctic temperatures rise by 5 to 15 degrees Celsius over the next 100 years, the methane-absorbing capacity of “carbon-poor” soil could increase by five to 30 times.

The researchers found that this ability stems from an as-yet-unknown species of bacteria in carbon-poor Arctic soil that consumes methane from the atmosphere. The bacteria are related to a group known as Upland Soil Cluster Alpha, the dominant methane-consuming bacteria in carbon-poor Arctic soil. The bacteria the researchers studied oxidize methane to methanol, a simple alcohol that they process immediately. The carbon is used for growth or respiration, meaning that it either remains in the bacterial cells or is released as carbon dioxide.
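
The chemistry described is standard aerobic methanotrophy: a methane monooxygenase enzyme first oxidizes methane to methanol, and downstream steps either assimilate the carbon into biomass or oxidize it fully to carbon dioxide. Schematically (textbook pathway, not a result specific to the new species):

```latex
\mathrm{CH_4} \;\xrightarrow{\text{methane monooxygenase}}\; \mathrm{CH_3OH}
\;\longrightarrow\; \mathrm{HCHO} \;\longrightarrow\;
\begin{cases}
\text{biomass (assimilation; carbon stays in the cell)}\\
\mathrm{HCOOH} \longrightarrow \mathrm{CO_2} \quad (\text{respiration})
\end{cases}
```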

First author Chui Yim “Maggie” Lau, an associate research scholar in Princeton’s Department of Geosciences, said that although it’s too early to claim that the entire Arctic will be a massive methane “sink” in a warmer world, the study’s results do suggest that the Arctic could help mitigate the warming effect that would be caused by a rising amount of methane in the atmosphere. In immediate terms, climate models that project conditions on a warmer Earth could use this study to more accurately calculate the future methane content of the atmosphere, Lau said.

“At our study sites, we are more confident that these soils will continue to be a sink under future warming. In the future, the Arctic may not have atmospheric methane increase as much as the rest of the world,” Lau said. “We don’t have a direct answer as to whether these Arctic soils will offset global atmospheric methane or not, but they will certainly help the situation.”

The researchers next want to study the bacteria’s physiology, as well as test the upper temperature threshold and the methane concentrations at which they can still efficiently process methane, Lau said. Field observations showed that the bacteria remain effective up to 18 degrees Celsius (64.4 degrees Fahrenheit) and can draw methane down to about one-quarter of its current atmospheric level, or around 0.5 parts per million.

“If these bacteria can still work in a future warmer climate and are widespread in other Arctic permafrost areas, maybe they could regulate methane for the whole globe,” Lau said. “These regions may seem isolated from the world, but they may have been doing things to help the world.”

From Princeton, Lau worked with geoscience graduate student and second author Brandon Stackhouse; Nicholas Burton, who received his bachelor’s degree in geosciences in 2013; David Medvigy, an assistant professor of geosciences; and senior author Tullis Onstott, a professor of geosciences. Co-authors on the paper were from the University of Tennessee-Knoxville; the Oak Ridge National Laboratory; McGill University; Laurentian University in Canada; and the University of Texas at Austin.

The research was supported by the U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research (DE-SC0004902); the National Science Foundation (grant no. ARC-0909482); the Canada Foundation for Innovation (grant no. 206704); the Natural Sciences and Engineering Research Council of Canada Discovery Grant Program (grant no. 298520-05); and the Northern Research Supplements Program (grant no. 305490-05).

Read the abstract.

M.C.Y. Lau, B.T. Stackhouse, A.C. Layton, A. Chauhan, T.A. Vishnivetskaya, K. Chourey, J. Ronholm, N.C.S. Mykytczuk, P.C. Bennett, G. Lamarche-Gagnon, N. Burton, W.H. Pollard, C.R. Omelon, D.M. Medvigy, R.L. Hettich, S.M. Pfiffner, L.G. Whyte and T.C. Onstott. 2015. An active atmospheric methane sink in high Arctic mineral cryosols. The ISME Journal. Published in print August 2015. DOI: 10.1038/ismej.2015.13.

New pathways for nickel chemistry (Nature)

By Tien Nguyen, Department of Chemistry

Using a light-activated catalyst, researchers have unlocked a new pathway in nickel chemistry to construct carbon-oxygen (C-O) bonds that would be highly valuable to pharmaceutical and agrochemical industries.

“It was extraordinary to see the reaction go from zero to 91 percent yield just by adding a photocatalyst and switching on a light,” said David MacMillan, the James S. McDonnell Distinguished University Professor of Chemistry and principal investigator of the work published on August 12 in the journal Nature.

C-O coupling reaction scheme

The article reported the first general C-O cross-coupling reaction, which connects ring-shaped molecules, called aromatics, to alcohol-containing molecules, using a dual nickel-photoredox catalyst system. Extending nickel’s reach to C-O coupling reactions has great potential given the tremendous impact nickel chemistry has had on analogous C-C coupling reactions.

For the most part, these C-O cross-coupling reactions have been unattainable by traditional nickel catalysis. That’s because the final bond-forming step, called reductive elimination, in which nickel excises itself to leave behind a C-O bond, is fundamentally unfavorable. By introducing a photocatalyst, the research team was able to remove a single electron from the key nickel intermediate, accessing an elusive oxidation state of nickel that readily forms the desired bond.
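
In outline, the strategy works like this (our schematic of the mechanism as described, with Ar standing for the aromatic partner and OR for the alkoxide): the photocatalyst removes one electron from the nickel(II) intermediate that holds both coupling partners, and the resulting nickel(III) species undergoes the otherwise unfavorable reductive elimination readily:

```latex
\mathrm{Ni^{II}(Ar)(OR)}
\;\xrightarrow{\;-e^-\ \text{(photoredox)}\;}\;
\mathrm{Ni^{III}(Ar)(OR)}
\;\xrightarrow{\;\text{reductive elimination}\;}\;
\mathrm{Ar\!-\!OR} \;+\; \mathrm{Ni^{I}}
```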

Using a photocatalyst to effectively expand the accessible oxidation states of nickel has significant implications beyond this specific transformation. “We assume that it’s not just nickel chemistry that you can dramatically change, but other metals as well,” MacMillan said. “That’s a very exciting position to be in.”

To confirm their understanding of how the catalysts worked together to promote the reaction, Valerie Shurtleff, a graduate student in the MacMillan lab and co-author on the paper, performed a series of mechanistic experiments.

Shurtleff synthesized a model nickel complex that mimicked the key bond-forming intermediate, a nickel compound bridging the two coupling partners. She found that without the presence of both light and photocatalyst, the complex was unable to form the product. Further electrochemical experiments confirmed that the model nickel complex was well within the range of molecules with which the photocatalyst could theoretically interact.

The new nickel-photocatalyst combination also offers a mild alternative to similar existing methods that employ palladium or copper catalysts and can access complementary coupling partners.

The MacMillan group has made many major contributions in the area of photoredox catalysis, but has only recently begun discovering the possibilities that arise from combining photoredox with other forms of catalysis, such as nickel. “There are so many different avenues to explore,” Shurtleff said. “We’re really just getting started.”

Read the abstract.

Terrett, J. A.; Cuthbertson, J. D.; Shurtleff, V. W.; MacMillan, D. W. C. “Switching on Elusive Organometallic Mechanisms with Photoredox Catalysis.” Nature, 2015.

This work was supported by the National Institute of General Medical Sciences (R01 GM093213-01).

Scientists propose an explanation for electron heat loss in fusion plasmas (Physical Review Letters)

By Raphael Rosen, Princeton Plasma Physics Laboratory

PPPL scientist Elena Belova. (Photo credit: Elle Starkman, PPPL)

Creating controlled fusion energy entails many challenges, but one of the most basic is heating plasma – hot gas composed of electrons and charged atoms – to extremely high temperatures and then maintaining those temperatures. Now scientist Elena Belova of the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and a team of collaborators have proposed an explanation for why the hot plasma within fusion facilities called tokamaks sometimes fails to reach the required temperature, even as researchers pump beams of fast-moving neutral atoms into the plasma in an effort to make it hotter.

The results, published in June in Physical Review Letters, could lead to improved control of temperature in future fusion devices, including ITER, the international fusion facility under construction in France to demonstrate the feasibility of fusion power. This work was supported by the DOE Office of Science (Office of Fusion Energy Sciences).

The researchers focused on the puzzling tendency of electron heat to leak from the core of the plasma to the plasma’s edge. “One of the largest remaining mysteries in plasma physics is how electron heat is transported out of plasma,” said Jon Menard, program director for PPPL’s major fusion experiment, the National Spherical Tokamak Experiment-Upgrade (NSTX-U), which is completing a $94 million upgrade.

Belova hit upon a possible answer while performing 3D simulations of past NSTX plasmas on computers at the National Energy Research Scientific Computing Center (NERSC), in Oakland, California. She saw that two kinds of waves found in fusion plasmas appear to form a chain that transfers the neutral-beam energy from the core of the plasma to the edge, where the heat dissipates. While physicists have long known that the coupling between the two kinds of waves – known as compressional Alfvén waves and kinetic Alfvén waves (KAWs) – can lead to energy dissipation in plasmas, Belova’s results were the first to demonstrate the process for beam-excited compressional Alfvén eigenmodes (CAEs) in tokamaks.
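
For orientation, the two branches are distinguished by how they propagate relative to the magnetic field. In the simplest (ideal magnetohydrodynamic) picture, the compressional wave travels at the Alfvén speed in any direction, while kinetic Alfvén waves are guided along the field, with a finite-gyroradius correction. The textbook dispersion relations, a simplification of the full kinetic treatment in the paper, are:

```latex
\omega_{\mathrm{CAE}} \simeq k\,v_A,
\qquad
\omega_{\mathrm{KAW}} \simeq k_{\parallel} v_A \sqrt{1 + k_{\perp}^{2}\rho^{2}},
\qquad
v_A = \frac{B}{\sqrt{\mu_0\, n_i m_i}}
```

Here ρ is an ion gyroradius scale. Energy channeling of the kind Belova simulated occurs where a beam-driven CAE frequency matches the local KAW frequency, setting up a resonance near the plasma edge.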

Her simulations showed that when researchers try to heat the plasma by injecting beams of energetic deuterium, a form of hydrogen, the beams excite CAEs in the plasma’s core. Those waves then resonate with KAWs, which occur primarily at the plasma’s edge. As a result, energy is transported from the injection site deep within the plasma to the plasma’s edge.

“Originally, when scientists found that the electron temperature wouldn’t go up with increased beam power, everybody assumed that the electrons were getting heated at the plasma’s center and then were somehow losing that heat,” Belova said. “Our explanation is different. We propose that part of the beam energy goes into CAEs and then to KAWs. The energy then dissipates at the plasma’s edge.”

The simulations provided a broad perspective. “In simulations you can look everywhere in a plasma,” Belova said. “In the experiments, on the other hand, you are very limited in what and where you can measure inside the hot plasma.”

Belova’s findings reflect the growing collaboration between theoretical and experimental research at the Laboratory. “Her results uncover a novel loss mechanism for electron energy that could be important for NSTX-U plasmas,” said Amitava Bhattacharjee, head of the Theory Department at PPPL.

Belova plans to run more simulations to determine whether the mechanism she identified is the primary process that modifies the electron heating profile. She will also look for ways in which physicists can avoid this wave-induced change in the profile. In the meantime, she is driven by her desire to learn more physics. “We want to understand how these waves are excited by the beam ions,” she said, “and how to avoid them in the experiments.”

PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.

Read the abstract.

Belova, E.V., N.N. Gorelenkov, E.D. Fredrickson, K. Tritz and N.A. Crocker. “Coupling of Neutral-Beam-Driven Compressional Alfvén Eigenmodes to Kinetic Alfvén Waves in NSTX Tokamak and Energy Channeling.” Physical Review Letters. Published June 29, 2015. DOI: 10.1103/PhysRevLett.115.015001.

Study calculates the speed of ice formation (PNAS)

By Catherine Zandonella, Office of the Dean for Research

Researchers at Princeton University have for the first time directly calculated the rate at which water crystallizes into ice in a realistic computer model of water molecules. The simulations, which were carried out on supercomputers, provide insight into the mechanism by which water transitions from a liquid to a crystalline solid.

Understanding ice formation adds to our knowledge of how cold temperatures affect both living and non-living systems, including how living cells respond to cold and how ice forms in clouds at high altitudes. A more precise knowledge of the initial steps of freezing could eventually help improve weather forecasts and climate models, as well as inform the development of better materials for seeding clouds to increase rainfall.

The researchers looked at the process by which, as the temperature drops, water molecules begin to cling to each other to form a blob of solid ice within the surrounding liquid. These blobs tend to disappear quickly after their formation. Occasionally, a large enough blob, known as a critical nucleus, emerges and is stable enough to grow rather than to melt. The process of forming such a critical nucleus is known as nucleation.

To study nucleation, the researchers used a computerized model of water that mimics the two atoms of hydrogen and one atom of oxygen found in real water. Through the computer simulations, the researchers calculated the average amount of time it takes for the first critical nucleus to form at a temperature of about 230 kelvins (minus 43 degrees Celsius), which is representative of conditions in high-altitude clouds.

They found that, for a cubic meter of pure water, the amount of time it will take for a nucleus to form is about one-millionth of a second. The study, conducted by Amir Haji-Akbari, a postdoctoral research associate, and Pablo Debenedetti, a professor of chemical and biological engineering, was published online this week in the journal Proceedings of the National Academy of Sciences.
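
Those two figures together pin down the quantity the team actually computed, the homogeneous nucleation rate J, the expected number of critical nuclei per unit volume per unit time. Restating the numbers quoted above (this is arithmetic on the press-release figures, not a value lifted from the paper):

```latex
J \;=\; \frac{1}{V\,\bar t}
\;=\; \frac{1}{(1\ \mathrm{m^3})\,(10^{-6}\ \mathrm{s})}
\;\approx\; 10^{6}\ \mathrm{m^{-3}\,s^{-1}}
\quad\text{at } T \approx 230\ \mathrm{K}
```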

“The main significance of this work is to show that it is possible to calculate the nucleation rate for relatively accurate models of water,” said Haji-Akbari.

Cubic ice is made of double-diamond cages, each of which contains 14 water molecules arranged into seven interconnected six-membered rings.
Hexagonal ice is made of hexagonal cages, each of which contains 12 water molecules arranged into two six-membered rings that sit on top of each other.

In addition to calculating the nucleation rate, the researchers explored the origin of the two different crystalline shapes that ice can take at ambient pressure. The ice that we encounter in daily life is known as hexagonal ice. A second form, cubic ice, is less stable and can be found in high-altitude clouds. Both ices are made up of hexagonal rings, with an oxygen atom on each vertex, but the relative arrangement of the rings differs in the two structures.

“When water nucleates to form ice there is usually a combination of the cubic and hexagonal forms, but it was not well-understood why this would be the case,” said Haji-Akbari. “We were able to look at how the shapes of ice blobs change during the nucleation process, and one of the main findings of our work is to explain how a less stable form of ice is favored over the more stable hexagonal ice during the initial stages of the nucleation process.” (See figure below.)

Debenedetti added, “What we found in our simulations is that before we go to hexagonal ice we tend to form cubic ice, and that was very satisfying because this has been reported in experiments.” One of the strengths of the study, Debenedetti said, was the innovative method developed by Haji-Akbari to identify cubic and hexagonal forms in the computer simulation.

Computer models come in handy for studies of nucleation because conducting experiments at the precise temperatures and atmospheric conditions when water molecules nucleate is very difficult, said Debenedetti, who is Princeton’s Class of 1950 Professor in Engineering and Applied Science and Dean for Research. But these calculations take huge amounts of computer time.

Haji-Akbari found a way to complete the calculation where previous attempts had failed. The technique for modeling ice formation involves following computer-simulated blobs of ice, known as crystallites, as they form. Normally the procedure examines the crystallites after every step of the simulation, but Haji-Akbari modified it so that crystallites were examined at longer time intervals, enabling the algorithm to converge and yielding a sequence of crystallites that eventually led to the formation of a critical nucleus.
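
The procedure described is, in essence, forward-flux sampling: the growth of the largest crystallite is tracked through a ladder of milestone sizes, and the overall rate is the flux through the first milestone times the product of the conditional probabilities of advancing from each milestone to the next. The sketch below is a generic toy version of that idea (a biased random walk stands in for the actual molecular dynamics, and all numbers are invented), not the authors' algorithm or code:

```python
import random

def propagate(x, steps):
    """Toy dynamics for the order parameter (size of the largest
    crystallite): a downward-biased random walk standing in for
    real molecular dynamics, where growth is uphill work."""
    for _ in range(steps):
        x = max(0, x + random.choice((-1, -1, 1)))
    return x

def forward_flux_rate(milestones, n_trials=2000, check_every=5):
    """Rate = (flux through first milestone) x product of conditional
    crossing probabilities. `check_every` mirrors the modification
    described above: the crystallite is examined at longer intervals
    rather than after every simulation step."""
    first = milestones[0]
    # Flux: how often a fresh trajectory from the liquid basin (x = 0)
    # has reached the first milestone when next examined.
    hits = sum(propagate(0, check_every) >= first for _ in range(n_trials))
    flux = hits / (n_trials * check_every)

    rate, starts = flux, [first] * n_trials
    for nxt in milestones[1:]:
        successes = []
        for x in starts:
            while 0 < x < nxt:          # run until melted back or advanced
                x = propagate(x, check_every)
            if x >= nxt:
                successes.append(x)
        if not successes:
            return 0.0                  # no pathway found at this stage
        rate *= len(successes) / len(starts)
        # Reseed the next stage from successful configurations.
        starts = [random.choice(successes) for _ in range(n_trials)]
    return rate

print(f"toy nucleation rate: {forward_flux_rate([3, 6, 9, 12]):.2e} per step")
```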

Model of ice nucleation
Using a computer model to explore how water molecules connect and nucleate into ice crystals, the researchers found that two types of ice compete for dominance during nucleation: cubic ice (blue) which is less stable, and hexagonal ice (red), which is stable and forms the majority of ice on Earth. Nucleation occurs when water molecules come together to form blobs (pictured above), which grow over time (left to right). Eventually hexagonal ice wins out (not shown). The researchers found that adding new cubic features onto an existing crystalline blob gives rise to nuclei that are more spherical, and hence more stable. In contrast, adding hexagonal features tends to give rise to chains of hexagonal cages that make the nucleus less spherical, and hence less stable.

Even with the modifications, the technique took roughly 21 million central processing unit (CPU) hours to track the behavior of 4,096 virtual water molecules in the model, which is known as TIP4P/Ice and is considered one of the most accurate molecular models of water. The calculations were carried out on several supercomputers: the Della and Tiger supercomputers at the Princeton Institute for Computational Science and Engineering; the Stampede supercomputer at the Texas Advanced Computing Center; the Gordon supercomputer at the San Diego Supercomputer Center; and the Blue Gene/Q supercomputer at Rensselaer Polytechnic Institute.

Debenedetti noted that the rate of ice formation obtained in their calculations is much lower than what had been found by experiment. However, the computer calculations are extremely sensitive, meaning that small changes in certain parameters of the water model have very large effects on the calculated rate. The researchers were able to trace the discrepancy, which is 10 orders of magnitude, to aspects of the water model rather than to their method. As the modeling of water molecules improves, the researchers may be able to refine their calculations of the rate.

The research was funded by the National Science Foundation (Grant CHE-1213343) and the Carbon Mitigation Initiative at Princeton University.

Read the abstract: Haji-Akbari, Amir and Pablo G. Debenedetti. 2015. Direct calculation of ice homogeneous nucleation rate for a molecular model of water. Proceedings of the National Academy of Sciences Early Edition. Published online August 3, 2015.

Images courtesy of Amir Haji-Akbari, Princeton University.

After extreme drought, forests take years to rebuild CO2 storage capacity (Science)

Drought image provided by William Anderegg

By Joe Rojas-Burke, University of Utah, and Morgan Kelly, Princeton University

In the virtual world of climate modeling, forests and other vegetation are assumed to quickly bounce back from extreme drought and resume their integral role in removing carbon dioxide from Earth’s atmosphere. Unfortunately, that assumption may be far off the mark, according to a new Princeton University-based study published in the journal Science.

An analysis of drought impacts at forest sites worldwide found that living trees took an average of two to four years to recover and resume normal growth rates — and thus carbon-dioxide absorption — after a drought ended, the researchers report. Forests help mitigate human-induced climate change by removing massive amounts of carbon-dioxide emissions from the atmosphere and incorporating the carbon into woody tissues.

The finding that drought stress sets back tree growth for years suggests that Earth’s forests are capable of storing less carbon than climate models have calculated, said lead author William Anderegg, a visiting associate research scholar in the Princeton Environmental Institute.

“This really matters because future droughts are expected to increase in frequency and severity due to climate change,” said Anderegg, who will start as an assistant professor of biology at the University of Utah in August 2016. “Some forests could be in a race to recover before the next drought strikes. If forests are not as good at taking up carbon dioxide, this means climate change could speed up.”

Anderegg and colleagues measured the recovery of tree-stem growth after severe droughts at more than 1,300 forest sites around the world using records kept since 1948 by the International Tree Ring Data Bank. Tree rings provide a history of wood growth as well as carbon uptake from the surrounding ecosystem. They found that a few forests exhibited growth that was higher than predicted after drought, most prominently in parts of California and the Mediterranean.

In the majority of the world’s forests, however, trunk growth took two to four years on average to return to normal. Growth was about 9 percent slower than expected during the first year of recovery, and remained 5 percent slower in the second year. Long-lasting effects of drought were most prevalent in dry ecosystems, and among pines and tree species with low hydraulic safety margins, meaning these trees tend to keep using water at a high rate even as drought progresses, Anderegg said.

How drought causes such long-lasting harm remains unknown, but the researchers offered three possible answers: Loss of foliage and carbohydrate reserves during drought may impair growth in subsequent years; pests and diseases may accumulate in drought-stressed trees; or lasting damage to vascular tissues could impair water transport.

The researchers calculated that if a forest experiences a delayed recovery from drought, the carbon-storage capacity in semi-arid ecosystems alone would drop by about 1.6 metric gigatons over a century — an amount equal to about 25 percent of the total energy-related carbon emissions produced by the United States in a year. Yet, current climate models do not account for this massive carbon remnant of drought, Anderegg said.

“In most of our current models of ecosystems and climate, drought effects on forests switch on and off like a light,” Anderegg said. “When drought conditions go away, the models assume a forest’s recovery is complete and close to immediate. That’s not how the real world works.”

Droughts that include high temperatures—as opposed to only low precipitation—are a documented scourge to tree growth and health, Anderegg said. During the 2000-2003 drought in the American Southwest, for instance, the decrease in precipitation was comparable to earlier droughts, but the temperature was hotter than the long-term average by 3 to 6 degrees Fahrenheit.

“The higher temperatures really seemed to make the drought lethal to vegetation where previous droughts with the same rainfall deficit weren’t,” Anderegg said.

“Drought, especially the type that matters to forests, is about the balance between precipitation and evaporation, and evaporation is very strongly linked to temperature,” he said. “The fact that temperatures are going up suggests quite strongly that the western regions of the United States are going to have more frequent and more severe droughts, which would substantially reduce forests’ ability to pull carbon from the atmosphere.”

Anderegg co-authored the study with Princeton colleagues Stephen Pacala, the Frederick D. Petrie Professor in Ecology and Evolutionary Biology; Adam Wolf, an associate research scholar in ecology and evolutionary biology; and Elena Shevliakova, a senior climate modeler in ecology and evolutionary biology and in the National Oceanic and Atmospheric Administration’s (NOAA) Geophysical Fluid Dynamics Laboratory (GFDL) located on Princeton’s Forrestal Campus.

The research also included collaborators from Northern Arizona University, University of Nevada–Reno, the Pyrenean Institute of Ecology, University of New Mexico, Arizona State University, the U.S. Forest Service Rocky Mountain Research Station, and the Lamont-Doherty Earth Observatory of Columbia University.

Read the abstract.

The research was funded by the National Science Foundation (grant number DEB EF-1340270) and the NOAA Climate and Global Change Postdoctoral Fellowship program.

New chemistry makes strong bonds weak (JACS)

By Tien Nguyen, Department of Chemistry

Researchers at Princeton have developed a new chemical reaction that breaks the strongest bond in a molecule instead of the weakest, completely reversing the norm for reactions in which bonds are evenly split to form reactive intermediates.

Published on July 13 in the Journal of the American Chemical Society, the unconventional reaction is a proof of concept that will allow chemists to access compounds that are normally off-limits to this pathway. The team used a two-component catalyst system whose parts work in tandem to selectively activate the strongest bond in the molecule, a nitrogen-hydrogen (N-H) bond, through a process known as proton-coupled electron transfer (PCET).

Catalytic alkene carboamination enabled by oxidative proton-coupled electron transfer

“This PCET chemistry was really interesting to us. In particular, the idea that you can use catalysts to modulate an intrinsic property of a molecule allows you to access chemical space that you couldn’t otherwise,” said Robert Knowles, an assistant professor of chemistry who led the research.

Using PCET as a way to break strong bonds is seen in many essential biological systems, including photosynthesis and respiration, he said. Though this phenomenon is known in biological and inorganic chemistry settings, it hasn’t been widely applied to making new molecules—something Knowles hopes to change.

Given the unexplored state of PCET catalysis, Knowles decided to turn to theory instead of the trial-and-error approach usually taken by synthetic chemists in the initial stages of reaction development. Using a simple mathematical formula, the researchers calculated, for any pair of catalysts, the pair’s combined “effective bond strength,” which is the strength of the strongest bond the pair could break. Because both molecules contribute independently to this value, the research team had a high degree of flexibility in designing the catalyst system.
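
The formula in play is very likely of the standard PCET thermodynamic-cycle form, often called the Bordwell equation, which converts a catalyst pair’s proton-transfer and electron-transfer abilities into a single effective bond dissociation free energy; we state it here as background, with the exact form used in the paper left to the original text. With the base’s pKa, the oxidant’s reduction potential E° in volts, and a tabulated solvent-dependent constant C_G, the effective strength in kcal/mol is:

```latex
\mathrm{BDFE_{eff}} \;=\; 1.37\,\mathrm{p}K_a \;+\; 23.06\,E^{\circ} \;+\; C_G
```

The two numerical factors are RT ln 10 and Faraday’s constant expressed in kcal/mol at room temperature; because the acid/base and redox contributions add independently, the two catalysts can be chosen separately, which is the flexibility the text notes.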

When they tested the catalyst pairs in the lab, the researchers observed a striking correlation between the “effective bond strength” and the reaction efficiency. While effective bond strengths that were lower or higher than the target N-H bond strength gave low reaction yields, the researchers found that matching the strengths promoted the reaction in very high yield.

“To see this formula actually working was really inspiring,” said Gilbert Choi, a graduate student in the Knowles lab and lead author on the work. Once he identified a successful catalyst system, he explored the scope of the reaction and its mechanism.

Proposed catalytic cycle

The researchers think that the reaction starts with one of the catalysts, a compound called dibutyl phosphate, tugging on a hydrogen atom, which lengthens and weakens the N-H bond. At the same time, the other catalyst, a light-activated iridium complex, targets the weakened bond and plucks one electron out of the two-electron bond, slicing it down the middle.

Once the bond is split, the reactive nitrogen intermediate goes on to form a new carbon-nitrogen bond, giving rise to structurally complex products. This finding builds on work the Knowles lab published earlier this year, also in the Journal of the American Chemical Society, on a similar reaction that used a more sensitive catalyst system.

Their research has laid a solid foundation for PCET catalysis as a platform for developing new reactions. “My sincere view is that ideas are a lot more valuable than reactions,” Knowles said. “I’m optimistic that people can use these ideas and do things that we hadn’t even considered.”

Read the abstract: Choi, G. J.; Knowles, R. R. “Catalytic Alkene Carboamination Enabled by Oxidative Proton-Coupled Electron Transfer.” J. Am. Chem. Soc., 2015, Article ASAP.

This work was supported by Princeton University and the National Institutes of Health (R01 GM113105).

X marks the spot: Researchers confirm novel method for controlling plasma rotation to improve fusion performance (Physical Review Letters)

Representative plasma geometries, with the X-point location circled in red. (Reprinted from T. Stoltzfus-Dueck et al., Phys. Rev. Lett. 114, 245001, 2015. Copyright 2015 by the American Physical Society.)

By Raphael Rosen, Princeton Plasma Physics Laboratory

Rotation is key to the performance of salad spinners, toy tops, and centrifuges, but recent research suggests a way to harness rotation for the future of mankind’s energy supply. In papers published in Physics of Plasmas in May and Physical Review Letters this month, Timothy Stoltzfus-Dueck, a physicist at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), demonstrated a novel method that scientists can use to manipulate the intrinsic – or self-generated – rotation of hot, charged plasma gas within fusion facilities called tokamaks. This work was supported by the DOE Office of Science.

Such a method could prove important for future facilities like ITER, the huge international tokamak under construction in France that will demonstrate the feasibility of fusion as a source of energy for generating electricity. ITER’s massive size will make it difficult for the facility to provide sufficient rotation through external means.

Rotation is essential to the performance of all tokamaks. Rotation can stabilize instabilities in plasma, and sheared rotation – the difference in velocities between two bands of rotating plasma – can suppress plasma turbulence, making it possible to maintain the gas’s high temperature with less power and reduced operating costs.

Today’s tokamaks produce rotation mainly by heating the plasma with neutral beams, which cause it to spin. In intrinsic rotation, however, rotating particles that leak from the edge of the plasma accelerate the plasma in the opposite direction, just as the expulsion of propellant drives a rocket forward.

Stoltzfus-Dueck and his team influenced intrinsic rotation by moving the so-called X-point – the dividing point between magnetically confined plasma and plasma that has leaked from confinement – on the Tokamak à Configuration Variable (TCV) in Lausanne, Switzerland. The experiments marked the first time that researchers had moved the X-point horizontally to study plasma rotation. The results confirmed calculations that Stoltzfus-Dueck had published in a 2012 paper showing that moving the X-point would cause the confined plasma to either halt its intrinsic rotation or begin rotating in the opposite direction. “The edge rotation behaved just as the theory predicted,” said Stoltzfus-Dueck.

A surprise also lay in store: Moving the X-point not only altered the edge rotation, but modified rotation within the superhot core of the plasma where fusion reactions occur. The results indicate that scientists can use the X-point as a “control knob” to adjust the inner workings of fusion plasmas, much like changing the settings on iTunes or a stereo lets one explore the behavior of music. This discovery gives fusion researchers a tool to access different intrinsic rotation profiles and learn more about intrinsic rotation itself and its effect on confinement.

The overall findings provided a “perfect example of a success story for theory-experiment collaboration,” said Olivier Sauter, senior scientist at École Polytechnique Fédérale de Lausanne and co-author of the paper.

Along with the practical applications of his research, Stoltzfus-Dueck enjoys the purely intellectual aspect of his work. “It’s just interesting,” he said. “Why do plasmas rotate in the way they do? It’s a puzzle.”


Read the abstract.

T. Stoltzfus-Dueck, A. N. Karpushov, O. Sauter, B. P. Duval, B. Labit, H. Reimerdes, W. A. J. Vijvers, the TCV Team, and Y. Camenen. “X-Point-Position-Dependent Intrinsic Toroidal Rotation in the Edge of the TCV Tokamak.” Physical Review Letters 114, 245001. Published June 17, 2015.