Movie caption: Researchers at Princeton studied the temperature dependence of the formation of the nucleolus, a cellular organelle. The movie shows the nuclei of intact fly cells as they are subjected to temperature changes in the surrounding fluid. As the temperature is shifted from low to high, the spontaneously assembled proteins dissolve, as can be seen in the disappearance of the bright spots.
By Catherine Zandonella, Office of the Dean for Research
Researchers at Princeton found that the nucleolus, a cellular organelle involved in RNA synthesis, assembles in part through the passive process of phase separation – the same type of process that causes oil to separate from water. The study, published in the journal Proceedings of the National Academy of Sciences, is the first to show that this happens in living, intact cells.
Understanding how cellular structures form could help explain how organelles change in response to diseases. For example, a hallmark of cancer cells is the swelling of the nucleolus.
To explore the role of passive processes – as opposed to active processes that involve energy consumption – in nucleolus formation, Hanieh Falahati, a graduate student in Princeton’s Lewis-Sigler Institute for Integrative Genomics, looked at the behavior of six nucleolus proteins under different temperature conditions. Phase separation is enhanced at lower temperatures, which is why salad dressing containing oil and vinegar separates when stored in the refrigerator. If phase separation were driving the assembly of the proteins, the effect should appear at low temperatures.
Falahati showed that four of the six proteins condensed and assembled into the nucleolus at low temperatures and reverted when the temperature rose, indicating that the passive process of phase separation was at work. However, the assembly of the other two proteins was irreversible, indicating that active processes were in play.
“It was kind of a surprising result, and it shows that cells can take advantage of spontaneous processes for some functions, but for other things, active processes may give the cell more control,” said Falahati, whose adviser is Eric Wieschaus, Princeton’s Squibb Professor in Molecular Biology and a professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics, and a Howard Hughes Medical Institute researcher.
The research was funded in part by grant 5R37HD15587 from the National Institute of Child Health and Human Development (NICHD), and by the Howard Hughes Medical Institute.
Researchers at Princeton, Yale, and the University of Zurich have proposed a theory-based approach for characterizing a class of metals with exotic electronic properties – an approach that could help scientists find other, similarly endowed materials.
Published in the journal Physical Review X, the study described a new class of metals based on their symmetry and a mathematical classification known as a topological number, which is predictive of special electronic properties. Topological materials have drawn intense research interest since the early 2000s, culminating in last year’s Nobel Prize in Physics awarded to three physicists, including F. Duncan Haldane, Princeton’s Eugene Higgins Professor of Physics, for theoretical discoveries in this area.
“Topological classification is a very general way of looking at the properties of materials,” said Lukas Muechler, a Princeton graduate student in the laboratory of Roberto Car, Princeton’s Ralph W. *31 Dornte Professor in Chemistry and lead author on the article.
A popular way of explaining this abstract mathematical classification involves breakfast items. In topological classification, donuts and coffee cups are equivalent because they both have one hole and can be smoothly deformed into one another. Donuts cannot be deformed into muffins, however, which makes the two inequivalent. The number of holes is an example of a topological invariant: it is equal for the donut and the coffee cup, but distinguishes the donut from the muffin.
“The idea is that you don’t really care about the details. As long as two materials have the same topological invariants, we can say they are topologically equivalent,” he said.
Muechler and his colleagues’ interest in the topological classification of this new class of metals was sparked by a peculiar discovery in the neighboring laboratory of Robert Cava, Princeton’s Russell Wellman Moore Professor of Chemistry. While searching for superconductivity in a crystal called tungsten telluride (WTe2), the Cava lab instead found that the material could continually increase its resistance in response to ever stronger magnetic fields – a property that might be used to build a sensor of magnetic fields.
The origin of this property was, however, mysterious. “This material has very interesting properties, but there had been no theory around it,” Muechler said.
The researchers first considered the arrangement of the atoms in the WTe2 crystal. Patterns in the arrangement of atoms are known as symmetries, and they fall into two fundamentally different classes – symmorphic and nonsymmorphic – which lead to profound differences in electronic properties, such as the transport of current in an electromagnetic field.
While WTe2 is composed of many layers of atoms stacked upon each other, Car’s team found that a single layer of atoms has a particular nonsymmorphic symmetry, where the atomic arrangement is unchanged overall if it is first rotated and then translated by a fraction of the lattice period (see figure).
Having established the symmetry, the researchers mathematically characterized all possible electronic states having this symmetry, and classified as topologically equivalent those states that can be smoothly deformed into each other, just as a donut can be deformed into a cup. From this classification, they found that WTe2 belongs to a new class of metals, which they named nonsymmorphic topological metals. These metals are characterized by a different electron number than the nonsymmorphic metals that have previously been studied.
In nonsymmorphic topological metals, the current-carrying electrons behave like relativistic particles – that is, like particles traveling at nearly the speed of light. This behavior is less susceptible to impurities and defects than electron transport in ordinary metals, making these materials attractive candidates for electronic devices.
The abstract topological classification also led the researchers to suggest explanations for some of the outstanding electronic properties of bulk WTe2, most importantly its perfect compensation, meaning that it has an equal number of holes and electrons. Through theoretical simulations, the researchers found that this property could be achieved in the three-dimensional crystalline stacking of the WTe2 monolayers, which was a surprising result, Muechler said.
“Usually in theory research there isn’t much that’s unexpected, but this just popped out,” he said. “This abstract classification directly led us to explaining this property. In this sense, it’s a very elegant way of looking at this compound and now you can actually understand or design new compounds with similar properties.”
Recent photoemission experiments have also shown that the electrons in WTe2 absorb right-handed photons differently than they would left-handed photons. The theory formulated by the researchers showed that these photoemission experiments on WTe2 can be understood based on the topological properties of this new class of metals.
In future studies, the theorists want to test whether these topological properties are also present in atomically thin layers of these metals, which could be exfoliated from a larger crystal to make electronic devices. “The study of these phenomena has big implications for the electronics industry, but it’s still in its infant years,” Muechler said.
This work was supported by the U.S. Department of Energy (DE-FG02-05ER46201), the Yale Postdoctoral Prize Fellowship, the National Science Foundation (NSF CAREER DMR-095242 and NSF-MRSEC DMR-0819860), the Office of Naval Research (ONR-N00014-11-1-0635), the U.S. Department of Defense (MURI-130-6082), the David and Lucile Packard Foundation, the W. M. Keck Foundation, and the Eric and Wendy Schmidt Transformative Technology Fund.
Photosynthetic algae have been refining their technique for capturing light for millions of years. As a result, these algae boast powerful light harvesting systems — proteins that absorb light to be turned into energy for the plants — that scientists have long aspired to understand and mimic for renewable energy applications.
Now, researchers at Princeton University have revealed a mechanism that enhances the light harvesting rates of the cryptophyte algae Chroomonas mesostigmatica. Published in the journal Chem on December 8, these findings provide valuable insights for the design of artificial light-harvesting systems such as molecular sensors and solar energy collectors.
Cryptophyte algae often live below other organisms that absorb most of the sun’s rays. In response, the algae have evolved to thrive on wavelengths of light that aren’t captured by their neighbors above, mainly the yellow-green colors. The algae collect this yellow-green light energy and pass it through a network of molecules that converts it into red light, which chlorophyll molecules need to perform important photosynthetic chemistry.
The speed of the energy transfer through the system has both impressed and perplexed the scientists who study it. In Gregory Scholes’ lab at Princeton University, the predicted rates were always about three times slower than the observed rates. “The timescales that the energy is moved through the protein — we could never understand why the process was so fast,” said Scholes, the William S. Tod Professor of Chemistry.
In 2010, Scholes’ team found evidence that the culprit behind these fast rates was a strange phenomenon called quantum coherence, in which molecules could share electronic excitation and transfer energy according to quantum mechanical probability laws instead of classical physics. But the research team couldn’t explain exactly how coherence worked to speed up the rates until now.
Using a sophisticated method enabled by ultrafast lasers, the researchers were able to measure the molecules’ light absorption and essentially track the energy flow through the system. Normally the absorption signals would overlap, making them impossible to assign to specific molecules within the protein complex, but the team was able to sharpen the signals by cooling the proteins down to very low temperatures, said Jacob Dean, lead author and postdoctoral researcher in the Scholes lab.
The researchers observed the system as energy was transferred from molecule to molecule, from high-energy green light to lower energy red light, with excess energy lost as vibrational energy. These experiments revealed a particular spectral pattern that was a ‘smoking gun’ for vibrational resonance, or vibrational matching, between the donor and acceptor molecules, Dean said.
This vibrational matching allowed energy to be transferred much faster than it otherwise would be by distributing the excitation between molecules. This effect provided a mechanism for the previously reported quantum coherence. Taking this redistribution into account, the researchers recalculated their prediction and landed on a rate that was about three times faster.
“Finally the prediction is in the right ballpark,” Scholes said. “Turns out that it required this quite different, surprising mechanism.”
The Scholes lab plans to study related proteins to investigate if this mechanism is operative in other photosynthetic organisms. Ultimately, scientists hope to create light-harvesting systems with perfect energy transfer by taking inspiration and design principles from these finely tuned yet extremely robust light-harvesting proteins. “This mechanism is one more powerful statement of the optimality of these proteins,” Scholes said.
By John Greenwald, Princeton Plasma Physics Laboratory Communications
Scientists have proposed a groundbreaking solution to a mystery that has puzzled physicists for decades. At issue is how magnetic reconnection, a universal process that sets off solar flares, northern lights and cosmic gamma-ray bursts, occurs so much faster than theory says should be possible. The answer, proposed by researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University, could aid forecasts of space storms, explain several high-energy astrophysical phenomena, and improve plasma confinement in doughnut-shaped magnetic devices called tokamaks designed to obtain energy from nuclear fusion.
Magnetic reconnection takes place when the magnetic field lines embedded in a plasma — the hot, charged gas that makes up 99 percent of the visible universe — converge, break apart and explosively reconnect. This process takes place in thin sheets in which electric current is strongly concentrated.
According to conventional theory, these sheets can be highly elongated and severely constrain the velocity of the magnetic field lines that join and split apart, making fast reconnection impossible. However, observation shows that rapid reconnection does exist, directly contradicting theoretical predictions.
Detailed theory for rapid reconnection
Now, physicists at PPPL and Princeton University have presented a detailed theory for the mechanism that leads to fast reconnection. Their paper, published in the journal Physics of Plasmas in October, focuses on a phenomenon called “plasmoid instability” to explain the onset of the rapid reconnection process. Support for this research comes from the National Science Foundation and the DOE Office of Science.
Plasmoid instability, which breaks up plasma current sheets into small magnetic islands called plasmoids, has generated considerable interest in recent years as a possible mechanism for fast reconnection. However, correct identification of the properties of the instability has been elusive.
The Physics of Plasmas paper addresses this crucial issue. It presents “a quantitative theory for the development of the plasmoid instability in plasma current sheets that can evolve in time,” said Luca Comisso, lead author of the study. Co-authors are Manasvi Lingam and Yi-Min Huang of PPPL and Princeton, and Amitava Bhattacharjee, head of the Theory Department at PPPL and Princeton professor of astrophysical sciences.
Pierre de Fermat’s principle
The paper describes how the plasmoid instability begins in a slow linear phase that goes through a period of quiescence before accelerating into an explosive phase that triggers a dramatic increase in the speed of magnetic reconnection. To determine the most important features of this instability, the researchers adapted a variant of the 17th-century “principle of least time” originated by the mathematician Pierre de Fermat.
Use of this principle enabled the researchers to derive equations for the duration of the linear phase, and for computing the growth rate and number of plasmoids created. Hence, this least-time approach led to a quantitative formula for the onset time of fast magnetic reconnection and the physics behind it.
The paper also produced a surprise. The authors found that such relationships do not reflect traditional power laws, in which one quantity varies as a power of another. “It is common in all realms of science to seek the existence of power laws,” the researchers wrote. “In contrast, we find that the scaling relations of the plasmoid instability are not true power laws – a result that has never been derived or predicted before.”
PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. The Laboratory is managed by Princeton University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
Terrestrial rainfall in the subtropics — including the southeastern United States — may not decline in response to increased greenhouse gases as much as it could over oceans, according to a study from Princeton University and the University of Miami (UM). The study challenges previous projections of how dry subtropical regions could become in the future, and it suggests that the impact of decreased rainfall on people living in these regions could be less severe than initially thought.
“The lack of rainfall decline over subtropical land is caused by the fact that land will warm much faster than the ocean in the future — a mechanism that has been overlooked in previous studies about subtropical precipitation change,” said first author Jie He, a postdoctoral research associate in Princeton’s Program in Atmospheric and Oceanic Sciences who works at the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory located on Princeton’s Forrestal Campus.
In the new study, published in the journal Nature Climate Change, He and co-author Brian Soden, a UM professor of atmospheric sciences, used an ensemble of climate models to show that rainfall decreases occur faster than global warming, and therefore another mechanism must be at play. They found that direct heating from increasing greenhouse gases is causing the land to warm faster than the ocean. The associated changes in atmospheric circulation are thus driving rainfall decline over the oceans rather than land.
Subtropical rainfall changes have been previously attributed to two mechanisms related to global warming: greater moisture content in air that is transported away from the subtropics, and a poleward shift in air circulation. While both mechanisms are present, this study shows that neither one is responsible for a decline in rainfall over land.
“It has been long accepted that climate models project a large-scale rainfall decline in the future over the subtropics. Since most of the subtropical regions are already suffering from rainfall scarcity, the possibility of future rainfall decline is of great concern,” Soden said. “However, most of this decline occurs over subtropical oceans, not land, due to changes in the atmospheric circulation induced by the more rapid warming of land than ocean.”
Most of the reduction in subtropical rainfall occurs instantaneously with an increase of greenhouse gases, independent of the warming of the Earth’s surface, which occurs much more slowly. According to the authors, this indicates that emission reductions would immediately mitigate subtropical rainfall decline, even though the surface will continue to warm for a long time.
Jie He is supported by the Visiting Scientist Program of Princeton University’s Program in Atmospheric and Oceanic Sciences.
Researchers at Princeton and Harvard Universities have developed a way to produce the tools for figuring out gene function faster and cheaper than current methods, according to new research in the journal Nature Communications.
The function of sizable chunks of many organisms’ genomes is a mystery, and figuring out how to fill these information gaps is one of the central questions in genetics research, said study author Buz Barstow, a Burroughs-Wellcome Fund Research Fellow in Princeton’s Department of Chemistry. “We have no idea what a large fraction of genes do,” he said.
One of the best strategies scientists have for determining what a particular gene does is to remove it from the genome and then evaluate what the organism can no longer do. The end result, known as a whole-genome knockout collection, is a full set of mutants in which single genes have been deleted, or “knocked out,” one at a time. Researchers then test the entire knockout collection against a specific chemical reaction. If a mutant fails to perform the reaction, it must be missing the particular gene responsible for that task.
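The elimination logic of such a screen can be sketched in a few lines of code (a toy model with hypothetical gene names and functions, not the actual assay):

```python
# Toy model of screening a knockout collection (hypothetical genes/functions).
# Each mutant lacks exactly one gene; if a mutant cannot perform a reaction,
# its deleted gene is implicated in that function.

# Hypothetical genome: gene -> functions that gene supports
GENOME = {
    "geneA": {"iron_reduction"},
    "geneB": {"motility"},
    "geneC": {"biofilm_formation"},
}

def mutant_can_perform(knocked_out_gene, function):
    """A mutant performs a function if any remaining gene supports it."""
    return any(function in funcs
               for gene, funcs in GENOME.items()
               if gene != knocked_out_gene)

# Screen the whole collection against one reaction; failures reveal the gene.
implicated = [g for g in GENOME if not mutant_can_perform(g, "motility")]
print(implicated)  # ['geneB']
```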
It can take several years and millions of dollars to build a whole-genome knockout collection through targeted gene deletion. Because it’s so costly, whole-genome knockout collections only exist for a handful of organisms such as yeast and the bacterium Escherichia coli. Yet, these collections have proven to be incredibly useful as thousands of studies have been conducted on the yeast gene-deletion collection since its release.
The Princeton and Harvard researchers are the first to create a collection quickly and affordably, doing so in less than a month for several thousand dollars. Their strategy, called “Knockout Sudoku,” relies on a combination of randomized gene deletion and a powerful reconstruction algorithm. Though other research groups have attempted this randomized approach, none have come close to matching the speed and cost of Knockout Sudoku.
“We sort of see it as democratizing these powerful tools of genetics,” said Michael Baym, a co-author on the work and a Harvard Medical School postdoctoral researcher. “Hopefully it will allow the exploration of genetics outside of model organisms,” he said.
Their approach began with steep pizza bills and a technique called transposon mutagenesis, which “knocks out” genes by randomly inserting a single disruptive DNA sequence into the genome. The technique is applied to a large colony of microbes to make it likely that every single gene is disrupted in at least one mutant. For example, the team started with a colony of about 40,000 microbes of the bacterium Shewanella oneidensis, which has approximately 3,600 genes in its genome.
Barstow recruited undergraduates and graduate students to manually transfer 40,000 mutants out of laboratory Petri dishes into separate wells using toothpicks. He offered pizza as an incentive, but after a full day of labor, they only managed to move a couple thousand mutants. “I thought to myself, ‘Wait a second, this pizza is going to ruin me,’” Barstow said.
Instead, they decided to rent a colony-picking robot. In just two days, the robot was able to transfer each mutant microbe to individual homes in 96-well plates, 417 plates in total.
But the true challenge and opportunity for innovation was in identifying and cataloging the mutants that could comprise a whole-genome knockout collection in a fast and practical way.
DNA amplification and sequencing is a straightforward way to identify each mutant, but doing it one mutant at a time quickly becomes expensive and slow. So the researchers proposed a pooling scheme in which mutants are combined into groups, requiring only 61 amplification reactions and a single sequencing run.
But even after sequencing each of the pools, the researchers had an enormous amount of data. They knew the identities of all the mutants, but they still had to figure out exactly where each mutant sat in the grid of plates. This is where the Sudoku aspect of the method came in. The researchers built an algorithm that deduces the location of an individual mutant from its repeated appearances in particular row, column, plate-row and plate-column pools.
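The positional deduction step can be illustrated with a toy intersection of pool memberships (hypothetical pool names and grid sizes; the real collection spanned 417 plates):

```python
from itertools import product

# Toy layout: plates arranged in a 2x2 grid, each plate holding 3x3 wells.
PLATE_ROWS, PLATE_COLS, ROWS, COLS = 2, 2, 3, 3

def locate(pools_seen):
    """Intersect the four pool types a mutant's sequence appeared in
    to recover its candidate (plate, well) locations."""
    candidates = []
    for pr, pc, r, c in product(range(PLATE_ROWS), range(PLATE_COLS),
                                range(ROWS), range(COLS)):
        location_pools = {f"plate-row{pr}", f"plate-col{pc}",
                          f"row{r}", f"col{c}"}
        if location_pools <= pools_seen:  # all four pools must match
            candidates.append(((pr, pc), (r, c)))
    return candidates

# A mutant seen in exactly one pool of each type pins down a unique well.
print(locate({"plate-row1", "plate-col0", "row2", "col1"}))
# [((1, 0), (2, 1))]
```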
But there was a problem. Because the initial gene-disruption process is random, the same mutant can be formed more than once, which means that playing Sudoku wouldn’t be simple. For a way around this, Barstow recalled the movie “The Imitation Game,” about Alan Turing’s work on breaking the Enigma code.
“I felt like the problem in some ways was very similar to code breaking,” he said. There are simple codes that substitute one letter for another that can be easily solved by looking at the frequency of the letter, Barstow said. “For instance, in English the letter A is used 8.2 percent of the time. So, if you find that the letter X appears in the message about 8.2 percent of the time, you can tell this is supposed to be decoded as an A. This is a very simple example of Bayesian inference.”
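Barstow’s frequency example can be written out directly (a minimal sketch using a partial, approximate frequency table):

```python
from collections import Counter

# Approximate English letter frequencies, in percent (partial table).
ENGLISH_FREQ = {"E": 12.7, "T": 9.1, "A": 8.2, "O": 7.5}

def best_guess(symbol, ciphertext):
    """Guess the plaintext letter behind a cipher symbol by matching
    its observed frequency against known English letter frequencies."""
    observed = 100.0 * Counter(ciphertext)[symbol] / len(ciphertext)
    return min(ENGLISH_FREQ,
               key=lambda letter: abs(ENGLISH_FREQ[letter] - observed))

# A symbol appearing about 8 percent of the time decodes to 'A'.
ciphertext = "X" * 8 + "Q" * 92
print(best_guess("X", ciphertext))  # A
```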
With that same logic, Barstow and colleagues developed a statistical picture of what a real location assignment should look like, based on mutants that appeared only once, and used it to rate the likelihood that each possible location is real.
“One of the things I really like about this technique is that it’s a prime example of designing a technique with the mathematics in mind at the outset, which lets you do much more powerful things than you could do otherwise,” Baym said. “Because it was designed with the mathematics built in, it allows us to get much, much more data out of many fewer experiments,” he said.
Using their expedient strategy, the researchers created a collection for the microbe Shewanella oneidensis. These microbes are especially good at transferring electrons, and understanding their powers could prove highly valuable for developing sustainable energy sources, such as artificial photosynthesis, and for environmental remediation, such as the neutralization of radioactive waste.
Using the resultant collection, the team was able to recapitulate 15 years of research, Barstow said, bolstering their confidence in their method. In an early validation test, they noticed a startlingly poor accuracy rate. After finding no fault with the math, they looked at the original plates to realize that one of the researchers had grabbed the wrong sample. “The least reliable part of this is the human,” Barstow said.
The work was supported by a Career Award at the Scientific Interface from the Burroughs Wellcome Fund and Princeton University startup funds and Fred Fox Class of 1939 funds.
By Catherine Zandonella, Office of the Dean for Research
Genomic sequencing has provided an enormous amount of new information, but researchers haven’t always been able to use that data to understand living systems.
Now a group of researchers has used mathematical analysis to figure out whether two proteins interact with each other, just by looking at their sequences and without having to train their computer model using any known examples. The research, which was published online today in the journal Proceedings of the National Academy of Sciences, is a significant step forward because protein-protein interactions underlie a multitude of biological processes, from how bacteria sense their surroundings to how enzymes turn our food into cellular energy.
Although researchers have been able to use genomic analysis to obtain the sequences of amino acids that make up proteins, until now there has been no way to use those sequences to accurately predict protein-protein interactions. The main roadblock was that each cell can contain many similar copies of the same protein, called paralogs, and it wasn’t possible to predict which paralog from one protein family would interact with which paralog from another protein family. Instead, scientists have had to conduct extensive laboratory experiments involving sorting through protein paralogs one by one to see which ones stick.
In the current paper, the researchers use a mathematical procedure, or algorithm, to examine the possible interactions among paralogs and identify pairs of proteins that interact. The method correctly predicted 93 percent of the protein-protein paralog pairs present in a dataset of more than 20,000 known paired protein sequences, without first being given any examples of correct pairs.
Interactions between proteins happen when two proteins come into physical contact and stick together via weak bonds. They may do this to form part of a larger piece of machinery used in cellular metabolism. Or two proteins might interact to pass a signal from the exterior of the cell to the DNA, to enable a bacterial organism to react to its environment.
When two proteins come together, some amino acids on one chain stick to the amino acids on the other chain. Each site on the chain contains one of 20 possible amino acids, yielding a very large number of possible amino-acid pairings. But not all such pairings are equally probable, because proteins that interact tend to evolve together over time, causing their sequences to be correlated.
The algorithm takes advantage of this correlation. It starts with two protein families, each with multiple paralogs in any given organism. The algorithm then pairs protein paralogs randomly within each organism and asks, do particular pairs of amino acids, one on each of the proteins, occur much more or less frequently than chance? Then using this information it asks, given an amino acid in a particular location on the first protein, which amino acids are especially favored at a particular location on the second protein, a technique known as direct coupling analysis. The algorithm in turn uses this information to calculate the strengths of interactions, or “interaction energies,” for all possible protein paralog pairs, and ranks them. It eliminates the unlikely pairings and then runs again using only the top most likely protein pairs.
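One round of this rank-and-eliminate loop can be sketched as follows (toy interaction energies stand in for the real direct-coupling scores; lower energy means a more favorable pairing, and all names here are illustrative):

```python
import numpy as np

def one_iteration(energies, keep_fraction=0.5):
    """Greedily pair paralogs within one organism by interaction energy,
    then keep only the top-ranked fraction for the next training round.

    energies[i, j] = scored energy for pairing paralog i of family 1
    with paralog j of family 2 (toy values, not real DCA output).
    """
    pairs, used_i, used_j = [], set(), set()
    # Rank every candidate pair, most favorable (lowest energy) first.
    for i, j in sorted(np.ndindex(energies.shape),
                       key=lambda ij: energies[ij]):
        if i not in used_i and j not in used_j:  # each paralog pairs once
            pairs.append((i, j))
            used_i.add(i)
            used_j.add(j)
    n_keep = max(1, int(keep_fraction * len(pairs)))
    return pairs[:n_keep]

# Two paralogs per family; the diagonal pairings are clearly favored.
energies = np.array([[0.1, 5.0],
                     [5.0, 0.2]])
print(one_iteration(energies, keep_fraction=1.0))  # [(0, 0), (1, 1)]
```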
The most challenging part of identifying protein-protein pairs arises from the fact that proteins fold and kink into complicated shapes, bringing amino acids into proximity with others that are not close by in sequence, and that amino acids may be correlated with each other via chains of interactions, not just when they are neighbors in three dimensions. The direct coupling analysis works surprisingly well at finding the true underlying couplings that occur between neighbors.
The work on the algorithm was initiated by Ned Wingreen and Robert Dwyer, who earned his Ph.D. in the Department of Molecular Biology at Princeton in 2014, and was continued by first author Anne-Florence Bitbol, who was a postdoctoral researcher in the Lewis-Sigler Institute for Integrative Genomics and the Department of Physics at Princeton and is now a CNRS researcher at Université Pierre et Marie Curie – Paris 6. Bitbol was advised by Wingreen and by Lucy Colwell, an expert in this kind of analysis who joined the collaboration while a member of the Institute for Advanced Study in Princeton, N.J., and is now a lecturer in chemistry at the University of Cambridge.
The researchers thought that the algorithm would only work accurately if it first “learned” what makes a good protein-protein pair by studying ones discovered in experiments. This required that the researchers give the algorithm some known protein pairs, or “gold standards,” against which to compare new sequences. The team used two well-studied families of proteins, histidine kinases and response regulators, which interact as part of a signaling system in bacteria.
But known examples are often scarce, and there are tens of millions of undiscovered protein-protein interactions in cells. So the team decided to see if they could reduce the amount of training they gave the algorithm. They gradually lowered the number of known histidine kinase-response regulator pairs that they fed into the algorithm, and were surprised to find that the algorithm continued to work. Finally, they ran the algorithm without giving it any such training pairs, and it still predicted new pairs with 93 percent accuracy.
“The fact that we didn’t need a gold standard was a big surprise,” Wingreen said.
Upon further exploration, Wingreen and colleagues figured out that the algorithm’s good performance was due to the fact that true protein-protein interactions are relatively rare. There are many pairings that simply don’t work, and the algorithm quickly learned not to include them in future attempts. In other words, there is only a small number of distinctive ways that protein-protein interactions can happen, and a vast number of ways that they cannot. Moreover, the few successful pairings were found to repeat with little variation across many organisms. This, it turns out, makes it relatively easy for the algorithm to reliably sort interactions from non-interactions.
Wingreen compared this observation – that correct pairs are more similar to one another than incorrect pairs are to each other – to the opening line of Leo Tolstoy’s Anna Karenina, which states, “All happy families are alike; each unhappy family is unhappy in its own way.”
The work was done using protein sequences from bacteria, and the researchers are now extending the technique to other organisms.
The approach has the potential to enhance the systematic study of biology, Wingreen said. “We know that living organisms are based on networks of interacting proteins,” he said. “Finally we can begin to use sequence data to explore these networks.”
The research was supported in part by the National Institutes of Health (Grant R01-GM082938) and the National Science Foundation (Grant PHY–1305525).
The paper, “Inferring interaction partners from protein sequences,” by Anne-Florence Bitbol, Robert S. Dwyer, Lucy J. Colwell and Ned S. Wingreen, was published in the Early Edition of the journal Proceedings of the National Academy of Sciences on September 23, 2016.
Princeton University researchers have compiled 30 years of data to construct the first ice core-based record of atmospheric oxygen concentrations spanning the past 800,000 years, according to a paper published today in the journal Science.
The record shows that atmospheric oxygen has declined by 0.7 percent relative to current atmospheric-oxygen concentrations over that period, a gradual pace by geological standards, the researchers said. During the past 100 years, however, atmospheric oxygen has declined by a comparatively speedy 0.1 percent because of the burning of fossil fuels, which consumes oxygen and produces carbon dioxide.
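A quick back-of-envelope comparison puts the two rates quoted above on a common footing, using only the figures in the text:

```python
# Compare the geological and fossil-fuel-era rates of oxygen decline.
geological = 0.7 / 800_000  # percent decline per year over the ice-core record
modern = 0.1 / 100          # percent decline per year over the past century
print(round(modern / geological))  # the modern decline is over 1,000x faster
```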
Curiously, the decline in atmospheric oxygen over the past 800,000 years was not accompanied by any significant increase in the average amount of carbon dioxide in the atmosphere, though carbon dioxide concentrations do vary over individual ice age cycles. To explain this apparent paradox, the researchers called upon a theory for how the global carbon cycle, atmospheric carbon dioxide and Earth’s temperature are linked on geologic timescales.
“The planet has various processes that can keep carbon dioxide levels in check,” said first author Daniel Stolper, a postdoctoral research associate in Princeton’s Department of Geosciences. The researchers discuss a process known as silicate weathering in particular, wherein carbon dioxide reacts with exposed rock to produce, eventually, calcium carbonate minerals, which trap carbon dioxide in a solid form. As temperatures rise due to higher carbon dioxide in the atmosphere, silicate-weathering rates are hypothesized to increase and remove carbon dioxide from the atmosphere faster.
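The stabilizing loop described above can be caricatured in a one-equation box model in which CO2 removal by weathering speeds up as CO2 (and hence temperature) rises. All numbers here are invented for illustration and are not taken from the study:

```python
# Toy silicate-weathering feedback: a constant CO2 source balanced by removal
# proportional to the CO2 level itself (a stand-in for weathering accelerating
# as the planet warms). Units and values are arbitrary, purely illustrative.
E = 2.0      # CO2 source per thousand years
k = 0.01     # weathering rate constant per thousand years
co2 = 100.0  # starting atmospheric CO2

for _ in range(1000):    # integrate over ~1 million years
    co2 += E - k * co2   # Euler step, dt = 1 kyr
print(round(co2, 1))     # settles at the steady state E / k
```

However the source term is perturbed, the model relaxes back toward E / k on a timescale of 1 / k — the sense in which, given enough time, weathering “keeps carbon dioxide levels in check.”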
Stolper and his co-authors suggest that the extra carbon dioxide emitted due to declining oxygen concentrations in the atmosphere stimulated silicate weathering, which stabilized carbon dioxide but allowed oxygen to continue to decline.
“The oxygen record is telling us there’s also a change in the amount of carbon dioxide [that was created when oxygen was removed] entering the atmosphere and ocean,” said co-author John Higgins, Princeton assistant professor of geosciences. “However, atmospheric carbon dioxide levels aren’t changing because the Earth has had time to respond via increased silicate-weathering rates.
“The Earth can take care of extra carbon dioxide when it has hundreds of thousands or millions of years to get its act together. In contrast, humankind is releasing carbon dioxide today so quickly that silicate weathering can’t possibly respond fast enough,” Higgins continued. “The Earth has these long processes that humankind has short-circuited.”
The researchers built their history of atmospheric oxygen using measured ratios of oxygen-to-nitrogen found in air trapped in Antarctic ice. This method was established by co-author Michael Bender, professor of geosciences, emeritus, at Princeton.
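The logic of the measurement can be sketched in a few lines: if the nitrogen in trapped air is assumed constant, a fractional change in the O2/N2 ratio translates directly into a fractional change in O2. The numbers below are the modern atmospheric composition and the 0.7 percent figure from the text, combined for illustration only — not the paper’s actual data reduction:

```python
# Sketch: converting the quoted 0.7% O2 decline into the equivalent change in
# the O2/N2 ratio of trapped air, assuming N2 is constant. Illustrative only.
o2_n2_now = 20.95 / 78.08            # modern atmospheric O2/N2 (mole percent)
decline = 0.007                      # 0.7% O2 decline over 800,000 years
o2_n2_then = o2_n2_now / (1 - decline)
delta_permil = (o2_n2_then / o2_n2_now - 1) * 1000
print(round(delta_permil, 1))        # O2/N2 ~7 per mil higher 800,000 years ago
```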
Because oxygen is critical to many forms of life and geochemical processes, numerous models and indirect proxies for the oxygen content in the atmosphere have been developed over the years, but there was no consensus on whether oxygen concentrations were rising, falling or flat during the past million years (and before fossil fuel burning). The Princeton team analyzed the ice-core data to create a single account of how atmospheric oxygen has changed during the past 800,000 years.
“This record represents an important benchmark for the study of the history of atmospheric oxygen,” Higgins said. “Understanding the history of oxygen in Earth’s atmosphere is intimately connected to understanding the evolution of complex life. It’s one of these big, fundamental ongoing questions in Earth science.”
Daniel A. Stolper, Michael L. Bender, Gabrielle B. Dreyfus, Yuzhen Yan, and John A. Higgins. 2016. A Pleistocene ice core record of atmospheric oxygen concentrations. Science. Article published Sept. 22, 2016. DOI: 10.1126/science.aaf5445
The work was supported by a National Oceanic and Atmospheric Administration Climate and Global Change postdoctoral fellowship, and the National Science Foundation (grant no. ANT-1443263).
By John Greenwald, Princeton Plasma Physics Laboratory
Among the top puzzles in the development of fusion energy is the best shape for the magnetic facility — or “bottle” — that will provide the next steps in the development of fusion reactors. Leading candidates include spherical tokamaks, compact machines that are shaped like cored apples, compared with the doughnut-like shape of conventional tokamaks. The spherical design produces high-pressure plasmas — essential ingredients for fusion reactions — with relatively low and cost-effective magnetic fields.
A possible next step is a device called a Fusion Nuclear Science Facility (FNSF) that could develop the materials and components for a fusion reactor. Such a device could precede a pilot plant that would demonstrate the ability to produce net energy.
Spherical tokamaks as excellent models
Spherical tokamaks could be excellent models for an FNSF, according to a paper published online in the journal Nuclear Fusion on August 16. The two most advanced spherical tokamaks in the world today are the recently completed National Spherical Torus Experiment-Upgrade (NSTX-U) at the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL), which is managed by Princeton University, and the Mega Ampere Spherical Tokamak (MAST), which is being upgraded at the Culham Centre for Fusion Energy in the United Kingdom.
“We are opening up new options for future plants,” said Jonathan Menard, program director for the NSTX-U and lead author of the paper, which discusses the fitness of both spherical tokamaks as possible models. Support for this work comes from the DOE Office of Science.
The 43-page paper considers the spherical design for a combined next-step bottle: an FNSF that could become a pilot plant and serve as a forerunner for a commercial fusion reactor. Such a facility could provide a pathway leading from ITER, the international tokamak under construction in France to demonstrate the feasibility of fusion power, to a commercial fusion power plant.
A key issue for this bottle is the size of the hole in the center of the tokamak that holds and shapes the plasma. In spherical tokamaks, this hole can be half the size of the hole in conventional tokamaks. These differences, reflected in the shape of the magnetic field that confines the superhot plasma, have a profound effect on how the plasma behaves.
Designs for the Fusion Nuclear Science Facility
First up for a next-step device would be the FNSF. It would test the materials that must face and withstand the neutron bombardment that fusion reactions produce, while also generating a sufficient amount of its own fusion fuel. According to the paper, recent studies have for the first time identified integrated designs that would be up to the task.
These integrated capabilities include:
A blanket system able to breed tritium, a rare isotope — or form — of hydrogen that fuses with deuterium, another hydrogen isotope, to generate the fusion reactions. The spherical design could breed approximately one tritium atom for each atom consumed in the reactions, achieving tritium self-sufficiency.
A lengthy configuration of the magnetic field that vents exhaust heat from the tokamak. This configuration, called a “divertor,” would reduce the amount of heat that strikes and could damage the interior wall of the tokamak.
A vertical maintenance scheme in which the central magnet and the blanket structures that breed tritium can be removed independently from the tokamak for installation, maintenance, and repair. Maintenance of these complex nuclear facilities represents a significant design challenge. Once a tokamak operates with fusion fuel, this maintenance must be done with remote-handling robots; the new paper describes how this can be accomplished.
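The tritium self-sufficiency requirement in the first capability above reduces to simple bookkeeping: the tritium breeding ratio (TBR) — atoms bred per atom burned — must be at or above one for the inventory not to shrink. The figures below are invented for illustration, not design values from the paper:

```python
# Toy tritium inventory bookkeeping under a given breeding ratio (TBR).
# All quantities are hypothetical, purely to show the accounting.
burn_rate = 1.0   # kg of tritium consumed per full-power year
tbr = 1.05        # tritium atoms bred per atom burned
inventory = 5.0   # startup tritium inventory, kg

for year in range(10):
    inventory += burn_rate * (tbr - 1.0)  # net change: bred minus burned
print(round(inventory, 2))  # grows slowly: the plant is self-sufficient
```

With tbr below 1.0 the same loop drains the inventory, which is why breeding blankets are central to the FNSF mission.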
For pilot plant use, superconducting coils that operate at high temperature would replace the copper coils in the FNSF to reduce power loss. The plant would generate a small amount of net electricity in a facility that would be as compact as possible and could more easily scale to a commercial fusion power station.
High-temperature superconductors could have both positive and negative effects. While they would reduce power loss, they would require additional shielding to protect the magnets from heating and radiation damage. This would make the machine larger and less compact.
Recent advances in high-temperature superconductors could help overcome this problem. The advances enable higher magnetic fields with much thinner magnets than are presently achievable, reducing the refrigeration power needed to cool the magnets. Such superconducting magnets open the possibility that an FNSF and associated pilot plants based on the spherical tokamak design could minimize the mass and cost of the main confinement magnets.
For now, the increased power of the NSTX-U and the soon-to-be-completed MAST upgrade moves them closer to the capability of a commercial plant that will create safe, clean and virtually limitless energy. “NSTX-U and MAST-U will push the physics frontier, expand our knowledge of high temperature plasmas, and, if successful, lay the scientific foundation for fusion development paths based on more compact designs,” said PPPL Director Stewart Prager.
Twice the power and five times the pulse length
The NSTX-U has twice the power and five times the pulse length of its predecessor and will explore how plasma confinement and sustainment are influenced by higher plasma pressure in the spherical geometry. The MAST upgrade will have comparable prowess and will explore a new, state-of-the-art method for exhausting plasmas that are hotter than the core of the sun without damaging the machine.
“The main reason we research spherical tokamaks is to find a way to produce fusion at much less cost than conventional tokamaks require,” said Ian Chapman, the newly appointed chief executive of the United Kingdom Atomic Energy Authority and leader of the UK’s magnetic confinement fusion research program at the Culham Science Centre.
The ability of these machines to create high plasma performance within their compact geometries demonstrates their fitness as possible models for next-step fusion facilities. The wide range of considerations, calculations and figures detailed in this study strongly support the concept of a combined FNSF and pilot plant based on the spherical design. The NSTX-U and MAST-U devices must now successfully prototype the necessary high-performance scenarios.
J.E. Menard, T. Brown, L. El-Guebaly, M. Boyer, J. Canik, B. Colling, R. Raman, Z. Wang, Y. Zhai, P. Buxton, B. Covele, C. D’Angelo, A. Davis, S. Gerhardt, M. Gryaznevich, M. Harb, T.C. Hender, S. Kaye, D. Kingham, M. Kotschenreuther, S. Mahajan, R. Maingi, E. Marriott, E.T. Meier, L. Mynsberge, C. Neumeyer, M. Ono, J.-K. Park, S.A. Sabbagh, V. Soukhanovskii, P. Valanju and R. Woolley. Fusion nuclear science facilities and pilot plants based on the spherical tokamak. Nucl. Fusion 56 (2016) — Published 16 August 2016.
PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
By John Greenwald, Princeton Plasma Physics Laboratory Communications
Among the intriguing issues in plasma physics are those surrounding X-ray pulsars — collapsed stars that orbit around a cosmic companion and beam light at regular intervals, like lighthouses in the sky. Physicists want to know the strength of the magnetic field and density of the plasma that surrounds these pulsars, which can be millions of times greater than the density of plasma in stars like the sun.
Researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) have developed a theory of plasma waves that can infer these properties in greater detail than in standard approaches. The new research analyzes the plasma surrounding the pulsar by coupling Einstein’s theory of relativity with quantum mechanics, which describes the motion of subatomic particles such as the atomic nuclei — or ions — and electrons in plasma. Supporting this work is the DOE Office of Science.
Quantum field theory
The key insight comes from quantum field theory, which describes charged particles that are relativistic, meaning that they travel at near the speed of light. “Quantum theory can describe certain details of the propagation of waves in plasma,” said Yuan Shi, a graduate student in the Princeton Program in Plasma Physics in Princeton University’s Department of Astrophysical Sciences and lead author of a paper published July 29 in the journal Physical Review A. Understanding the interactions behind the propagation can then reveal the composition of the plasma.
In pulsars, relativistic particles in the magnetosphere, which is the magnetized atmosphere surrounding the pulsar, absorb light waves, and this absorption displays peaks. “The question is, what do these peaks mean?” Shi said. Analysis of the peaks with equations from special relativity and quantum field theory, he found, can determine the density and field strength of the magnetosphere.
Combining physics techniques
The process combines the techniques of high-energy physics, condensed matter physics, and plasma physics. In high-energy physics, researchers use quantum field theory to describe the interaction of a handful of particles. In condensed matter physics, people use quantum mechanics to describe the states of a large collection of particles. Plasma physics uses model equations to explain the collective movement of millions of particles. The new method utilizes aspects of all three techniques to analyze the plasma waves in pulsars.
The same technique can be used to infer the density of the plasma and the strength of the magnetic field created by inertial confinement fusion experiments. Such experiments use lasers to ablate — or vaporize — a target that contains plasma fuel. The ablation then causes an implosion that compresses the fuel into plasma and produces fusion reactions.
Standard formulas give inconsistent answers
Researchers want to know the precise density, temperature and field strength of the plasma that this process creates. Standard mathematical formulas give inconsistent answers when lasers of different color are used to measure the plasma parameters. This is because the extreme density of the plasma gives rise to quantum effects, while the high energy density of the magnetic field gives rise to relativistic effects, says Shi. So formulations that draw upon both fields are needed to reconcile the results.
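The classical baseline against which such inconsistencies show up is the textbook relation between a laser’s frequency and the critical plasma density above which it cannot propagate — the quantum and relativistic corrections developed in the paper shift this baseline. The sketch below evaluates only the standard classical formula, with two common laser wavelengths chosen for illustration:

```python
import math

# Classical critical density for laser propagation in a plasma:
#   n_c = eps0 * m_e * omega^2 / e^2
# The quantum/relativistic corrections studied in the paper modify this;
# here we compute only the standard classical value.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
M_E = 9.109e-31    # electron mass, kg
E_CH = 1.602e-19   # elementary charge, C
C = 2.998e8        # speed of light, m/s

def critical_density(wavelength_m):
    """Electrons per cubic meter at which a laser of this wavelength reflects."""
    omega = 2 * math.pi * C / wavelength_m
    return EPS0 * M_E * omega**2 / E_CH**2

print(f"{critical_density(351e-9):.2e}")   # UV laser (351 nm), ~9e27 m^-3
print(f"{critical_density(1053e-9):.2e}")  # IR laser (1053 nm)
```

Because the critical density scales as one over wavelength squared, lasers of different color probe different density layers of the same target — which is why disagreements between them signal physics missing from the classical formula.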
For Shi, the new technique shows the benefits of combining physics disciplines that don’t often interact. “Putting fields together gives tremendous power to explain things that we couldn’t understand before,” he said.
Yuan Shi, Nathaniel J. Fisch, and Hong Qin. Effective-action approach to wave propagation in scalar QED plasmas. Phys. Rev. A 94, 012124 – Published 29 July 2016.