Ultrafast lasers reveal light-harvesting secrets of photosynthetic algae (Chem)

Credit: Scholes’ lab

By Tien Nguyen

Photosynthetic algae have been refining their technique for capturing light for millions of years. As a result, these algae boast powerful light-harvesting systems — proteins that absorb light and convert it into energy for the organism — that scientists have long aspired to understand and mimic for renewable energy applications.

Microscopy image of cryptophyte algae. Credit: Desmond Toa

Now, researchers at Princeton University have revealed a mechanism that enhances the light harvesting rates of the cryptophyte algae Chroomonas mesostigmatica. Published in the journal Chem on December 8, these findings provide valuable insights for the design of artificial light-harvesting systems such as molecular sensors and solar energy collectors.

Cryptophyte algae often live below other organisms that absorb most of the sun’s rays. In response, the algae have evolved to thrive on wavelengths of light that aren’t captured by their neighbors above, mainly the yellow-green colors. The algae collect this yellow-green light energy and pass it through a network of molecules that converts it into red light, which chlorophyll molecules need to perform important photosynthetic chemistry.

Graduate student Desmond Toa (left) and Jacob Dean, a postdoctoral research associate and lecturer in chemistry, with the laser set-up, Credit: C. Todd Reichart

The speed of the energy transfer through the system has both impressed and perplexed the scientists who study it. In Gregory Scholes’ lab at Princeton University, predicted rates were always about three times slower than those observed. “The timescales that the energy is moved through the protein — we could never understand why the process is so fast,” said Scholes, the William S. Tod Professor of Chemistry.

In 2010, Scholes’ team found evidence that the culprit behind these fast rates was a strange phenomenon called quantum coherence, in which molecules could share electronic excitation and transfer energy according to quantum mechanical probability laws instead of classical physics. But the research team couldn’t explain exactly how coherence worked to speed up the rates until now.

Using a sophisticated method enabled by ultrafast lasers, the researchers were able to measure the molecules’ light absorption and essentially track the energy flow through the system. Normally the absorption signals would overlap, making them impossible to assign to specific molecules within the protein complex, but the team was able to sharpen the signals by cooling the proteins down to very low temperatures, said Jacob Dean, lead author and postdoctoral researcher in the Scholes lab.

The researchers observed the system as energy was transferred from molecule to molecule, from high-energy green light to lower energy red light, with excess energy lost as vibrational energy. These experiments revealed a particular spectral pattern that was a ‘smoking gun’ for vibrational resonance, or vibrational matching, between the donor and acceptor molecules, Dean said.

This vibrational matching allowed energy to be transferred much faster than it otherwise would be by distributing the excitation between molecules. This effect provided a mechanism for the previously reported quantum coherence. Taking this redistribution into account, the researchers recalculated their prediction and landed on a rate that was about three times faster.

“Finally the prediction is in the right ballpark,” Scholes said. “Turns out that it required this quite different, surprising mechanism.”
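
To get a rough feel for why matching a vibrational quantum to the donor-acceptor energy gap speeds up transfer, the toy calculation below compares Förster-type spectral overlaps with and without a resonant vibronic sideband. The Gaussian line shapes, energies, linewidths and sideband weight are all invented, and they are tuned only so the enhancement lands near the threefold factor mentioned above; the paper's actual mechanism involves coherent vibronic mixing, which this simple overlap picture does not capture.

```python
# Toy spectral-overlap comparison: a vibrational sideband that matches the
# donor-acceptor energy gap boosts a Forster-type transfer rate. All numbers
# are invented for illustration; this is not the paper's vibronic model.
import numpy as np

E = np.linspace(1.5, 2.5, 4000)       # energy axis in eV (toy values throughout)
SIGMA = 0.06                          # linewidth in eV (made up)

def gaussian(center):
    """Normalized Gaussian line shape on the energy axis."""
    return np.exp(-(E - center) ** 2 / (2 * SIGMA ** 2)) / (SIGMA * np.sqrt(2 * np.pi))

def overlap(donor, acceptor):
    """Spectral overlap integral; a Forster-type rate is proportional to this."""
    return np.sum(donor * acceptor) * (E[1] - E[0])

donor_emission = gaussian(2.10)       # donor emits around 2.10 eV (invented)
gap = 0.16                            # donor-acceptor electronic energy gap (invented)
acceptor_main = gaussian(2.10 - gap)  # acceptor 0-0 absorption band, red-shifted by the gap

# Vibrational "matching": a mode whose quantum roughly equals the gap places a
# weak vibronic sideband of the acceptor directly under the donor emission.
vibration = gap                       # vibrational quantum chosen to match the gap
sideband_weight = 0.3                 # Franck-Condon weight of the sideband (made up)
acceptor_vibronic = acceptor_main + sideband_weight * gaussian(2.10 - gap + vibration)

enhancement = overlap(donor_emission, acceptor_vibronic) / overlap(donor_emission, acceptor_main)
print(f"overlap (and rate) enhancement from the resonant sideband: ~{enhancement:.1f}x")
```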

The Scholes lab plans to study related proteins to investigate if this mechanism is operative in other photosynthetic organisms. Ultimately, scientists hope to create light-harvesting systems with perfect energy transfer by taking inspiration and design principles from these finely tuned yet extremely robust light-harvesting proteins. “This mechanism is one more powerful statement of the optimality of these proteins,” Scholes said.

Read the full article here:

Dean, J. C.; Mirkovic, T.; Toa, Z. S. D.; Oblinsky, D. G.; Scholes, G. D. “Vibronic Enhancement of Algae Light Harvesting.” Chem 2016, 1, 858.

 

An explanation for the mysterious onset of a universal process (Physics of Plasmas)

Magnetic reconnection happens in solar flares on the surface of the sun, as well as in experimental fusion energy reactors here on Earth. Image credit: NASA.

By John Greenwald, Princeton Plasma Physics Laboratory Communications

Scientists have proposed a groundbreaking solution to a mystery that has puzzled physicists for decades. At issue is how magnetic reconnection, a universal process that sets off solar flares, northern lights and cosmic gamma-ray bursts, occurs so much faster than theory says should be possible. The answer, proposed by researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University, could aid forecasts of space storms, explain several high-energy astrophysical phenomena, and improve plasma confinement in doughnut-shaped magnetic devices called tokamaks designed to obtain energy from nuclear fusion.

Magnetic reconnection takes place when the magnetic field lines embedded in a plasma — the hot, charged gas that makes up 99 percent of the visible universe — converge, break apart and explosively reconnect. This process takes place in thin sheets in which electric current is strongly concentrated.

According to conventional theory, these sheets can be highly elongated and severely constrain the velocity of the magnetic field lines that join and split apart, making fast reconnection impossible. However, observation shows that rapid reconnection does exist, directly contradicting theoretical predictions.
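
For context, the bottleneck in the conventional picture can be stated compactly with the Sweet-Parker scaling; this is standard textbook material included here for orientation, not a result taken from the new paper.

```latex
% Standard Sweet--Parker estimate (textbook context, not from Comisso et al.):
\[
  \frac{v_{\mathrm{in}}}{v_A} \sim S^{-1/2},
  \qquad
  S \equiv \frac{L\, v_A}{\eta},
\]
% v_A: Alfven speed, L: current-sheet length, \eta: magnetic diffusivity.
```

Because the Lundquist number S reaches 10^12 or more in the solar corona, this estimate gives reconnection times on the order of months, whereas flares release their energy in minutes to hours; simulations of plasmoid-dominated reconnection instead find rates of roughly one percent of the Alfvénic value, nearly independent of S.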

Detailed theory for rapid reconnection

Now, physicists at PPPL and Princeton University have presented a detailed theory for the mechanism that leads to fast reconnection. Their paper, published in the journal Physics of Plasmas in October, focuses on a phenomenon called “plasmoid instability” to explain the onset of the rapid reconnection process. Support for this research comes from the National Science Foundation and the DOE Office of Science.

Plasmoid instability, which breaks up plasma current sheets into small magnetic islands called plasmoids, has generated considerable interest in recent years as a possible mechanism for fast reconnection. However, correct identification of the properties of the instability has been elusive.

Luca Comisso, lead author of the study. Photo courtesy of PPPL.

The Physics of Plasmas paper addresses this crucial issue. It presents “a quantitative theory for the development of the plasmoid instability in plasma current sheets that can evolve in time,” said Luca Comisso, lead author of the study. Co-authors are Manasvi Lingam and Yi-Min Huang of PPPL and Princeton, and Amitava Bhattacharjee, head of the Theory Department at PPPL and Princeton professor of astrophysical sciences.

Pierre de Fermat’s principle

The paper describes how the plasmoid instability begins in a slow linear phase that goes through a period of quiescence before accelerating into an explosive phase that triggers a dramatic increase in the speed of magnetic reconnection. To determine the most important features of this instability, the researchers adapted a variant of the 17th century “principle of least time” originated by the mathematician Pierre de Fermat.

Use of this principle enabled the researchers to derive equations for the duration of the linear phase and for the growth rate and number of plasmoids created. Hence, this least-time approach led to a quantitative formula for the onset time of fast magnetic reconnection and the physics behind it.

The paper also produced a surprise. The authors found that such relationships do not reflect traditional power laws, in which one quantity varies as a power of another. “It is common in all realms of science to seek the existence of power laws,” the researchers wrote. “In contrast, we find that the scaling relations of the plasmoid instability are not true power laws – a result that has never been derived or predicted before.”

PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. The Laboratory is managed by Princeton University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Read the abstract here: Comisso, L.; Lingam, M.; Huang, Y.-M.; Bhattacharjee, A. General theory of the plasmoid instability. Physics of Plasmas 23, 2016. DOI: 10.1063/1.4964481

Outlook for subtropical rainfall under climate change not so gloomy (Nature Climate Change)

Researchers found a clear difference in the rate of global surface warming (left panel) and the rate of subtropical rainfall decline (indicated by the brown shading in the right panel) when forced with an instantaneous increase of CO2. This is the main evidence to show that the subtropical rainfall decline is unrelated to the global surface warming. Credit: Jie He, Ph.D., Princeton University and Brian J. Soden, Ph.D., University of Miami Rosenstiel School of Marine and Atmospheric Science

By Diana Udel, University of Miami

Terrestrial rainfall in the subtropics — including the southeastern United States — may not decline in response to increased greenhouse gases as much as it could over oceans, according to a study from Princeton University and the University of Miami (UM). The study challenges previous projections of how dry subtropical regions could become in the future, and it suggests that the impact of decreased rainfall on people living in these regions could be less severe than initially thought.

“The lack of rainfall decline over subtropical land is caused by the fact that land will warm much faster than the ocean in the future — a mechanism that has been overlooked in previous studies about subtropical precipitation change,” said first author Jie He, a postdoctoral research associate in Princeton’s Program in Atmospheric and Oceanic Sciences who works at the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory located on Princeton’s Forrestal Campus.

In the new study, published in the journal Nature Climate Change, He and co-author Brian Soden, a UM professor of atmospheric sciences, used an ensemble of climate models to show that rainfall decreases occur faster than global warming, and therefore another mechanism must be at play. They found that direct heating from increasing greenhouse gases is causing the land to warm faster than the ocean. The associated changes in atmospheric circulation are thus driving rainfall decline over the oceans rather than land.

Subtropical rainfall changes have been previously attributed to two mechanisms related to global warming: greater moisture content in air that is transported away from the subtropics, and a pole-ward shift in air circulation. While both mechanisms are present, this study shows that neither one is responsible for a decline in rainfall.

“It has been long accepted that climate models project a large-scale rainfall decline in the future over the subtropics. Since most of the subtropical regions are already suffering from rainfall scarcity, the possibility of future rainfall decline is of great concern,” Soden said. “However, most of this decline occurs over subtropical oceans, not land, due to changes in the atmospheric circulation induced by the more rapid warming of land than ocean.”

Most of the reduction in subtropical rainfall occurs instantaneously with an increase of greenhouse gases, independent of the warming of the Earth’s surface, which occurs much more slowly. According to the authors, this indicates that emission reductions would immediately mitigate subtropical rainfall decline, even though the surface will continue to warm for a long time.

Jie He is supported by the Visiting Scientist Program of the Program in Atmospheric and Oceanic Sciences at Princeton University.

Read the abstract:

The study, “A re-examination of the projected subtropical precipitation decline,” was published in the Nov. 14 issue of the journal Nature Climate Change.

Researchers’ Sudoku strategy democratizes powerful tool for genetics research (Nature Communications)

Princeton University researchers Buz Barstow (left), graduate student Kemi Adesina and undergraduate researcher Isao Anzai, Class of 2017, with colleagues at Harvard University, have developed a strategy called “Knockout Sudoku” for figuring out gene function.

By Tien Nguyen, Department of Chemistry

Researchers at Princeton and Harvard Universities have developed a way to produce the tools for figuring out gene function faster and cheaper than current methods, according to new research in the journal Nature Communications.

The function of sizable chunks of many organisms’ genomes is a mystery, and figuring out how to fill these information gaps is one of the central questions in genetics research, said study author Buz Barstow, a Burroughs-Wellcome Fund Research Fellow in Princeton’s Department of Chemistry. “We have no idea what a large fraction of genes do,” he said.

One of the best strategies that scientists have to determine what a particular gene does is to remove it from the genome, and then evaluate what the organism can no longer do. The end result, known as a whole-genome knockout collection, provides full sets of genomic copies, or mutants, in which single genes have been deleted or “knocked out.” Researchers then test the entire knockout collection against a specific chemical reaction. If a mutant organism fails to perform the reaction, it must be missing the particular gene responsible for that task.
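
As a toy illustration of that screening logic (the gene names, the tested "reaction" and the phenotype table below are all invented, not taken from the study):

```python
# Minimal sketch of a knockout-collection screen: the mutant that cannot do the
# job points to the gene responsible. All names and phenotypes are invented.
knockout_phenotypes = {
    "mutant_lacking_geneA": {"grows_on_glucose": True,  "reduces_metal": True},
    "mutant_lacking_geneB": {"grows_on_glucose": True,  "reduces_metal": False},
    "mutant_lacking_geneC": {"grows_on_glucose": False, "reduces_metal": True},
}

implicated = [name.replace("mutant_lacking_", "")
              for name, phenotype in knockout_phenotypes.items()
              if not phenotype["reduces_metal"]]
print("genes implicated in metal reduction:", implicated)   # -> ['geneB']
```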

It can take several years and millions of dollars to build a whole-genome knockout collection through targeted gene deletion. Because it’s so costly, whole-genome knockout collections only exist for a handful of organisms such as yeast and the bacterium Escherichia coli. Yet, these collections have proven to be incredibly useful as thousands of studies have been conducted on the yeast gene-deletion collection since its release.

The Princeton and Harvard researchers are the first to create a collection quickly and affordably, doing so in less than a month for several thousand dollars. Their strategy, called “Knockout Sudoku,” relies on a combination of randomized gene deletion and a powerful reconstruction algorithm. Though other research groups have attempted this randomized approach, none have come close to matching the speed and cost of Knockout Sudoku.

“We sort of see it as democratizing these powerful tools of genetics,” said Michael Baym, a co-author on the work and a Harvard Medical School postdoctoral researcher. “Hopefully it will allow the exploration of genetics outside of model organisms,” he said.

Their approach began with steep pizza bills and a technique called transposon mutagenesis that ‘knocks out’ genes by randomly inserting a single disruptive DNA sequence into the genome. This technique is applied to large populations of microbes to make it likely that every single gene gets disrupted. For example, the team started with about 40,000 mutants of the bacterium Shewanella oneidensis, whose genome contains approximately 3,600 genes.

Barstow recruited undergraduates and graduate students to manually transfer 40,000 mutants out of laboratory Petri dishes into separate wells using toothpicks. He offered pizza as an incentive, but after a full day of labor, they only managed to move a couple thousand mutants. “I thought to myself, ‘Wait a second, this pizza is going to ruin me,’” Barstow said.

Instead, they decided to rent a colony-picking robot. In just two days, the robot was able to transfer each mutant microbe to individual homes in 96-well plates, 417 plates in total.

But the true challenge and opportunity for innovation was in identifying and cataloging the mutants that could comprise a whole-genome knockout collection in a fast and practical way.

DNA amplification and sequencing is a straightforward way to identify each mutant, but doing it individually quickly gets very expensive and time-consuming. So the researchers proposed a scheme in which mutants could be combined into pools that would require only 61 amplification reactions and a single sequencing run.

But still, after sequencing each of the pools, the researchers had an enormous amount of data. They knew the identities of all the mutants, but now they had to figure out exactly where each mutant came from in the grid of plates. This is where the Sudoku aspect of the method came in. The researchers built an algorithm that could deduce the location of an individual mutant from its repeated appearances in the row, column, plate-row and plate-column pools.
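
The sketch below illustrates the pooling-and-intersection idea in miniature. The plate-grid dimensions and mutant names are invented, and the real collection involved 417 96-well plates combined into 61 pools, but the logic is the same: each well contributes to one pool of each type, so a mutant that shows up in exactly one row, column, plate-row and plate-column pool can be pinned to a single well.

```python
# Toy version of the pooled "Sudoku" localization: intersect the pools a mutant
# appears in to recover its well. Plate-grid size and mutant names are invented.
from collections import defaultdict

PLATE_ROWS, PLATE_COLS = 4, 5    # plates arranged in a 4 x 5 grid (toy numbers)
ROWS, COLS = 8, 12               # 96-well plate format

# Place one hypothetical mutant in every well.
wells = {}
for pr in range(PLATE_ROWS):
    for pc in range(PLATE_COLS):
        for r in range(ROWS):
            for c in range(COLS):
                wells[(pr, pc, r, c)] = f"mutant_{len(wells):04d}"

# Four pool types; every well contributes to exactly one pool of each type.
pools = defaultdict(set)
for (pr, pc, r, c), mutant in wells.items():
    pools[("plate-row", pr)].add(mutant)
    pools[("plate-col", pc)].add(mutant)
    pools[("row", r)].add(mutant)
    pools[("col", c)].add(mutant)
# 4 + 5 + 8 + 12 = 29 pools cover 4 * 5 * 8 * 12 = 1,920 wells in this toy.

def locate(mutant):
    """Intersect the pools a mutant's sequence reads show up in to recover its well.
    Assumes the mutant occurs exactly once; duplicates, which appear in several
    pools of the same type, are what the statistical step described below handles."""
    found = {kind: index for (kind, index), members in pools.items() if mutant in members}
    return (found["plate-row"], found["plate-col"], found["row"], found["col"])

target = wells[(2, 3, 5, 7)]
print(target, "is located at (plate-row, plate-col, row, col) =", locate(target))
```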

Knockout Sudoku helps find genes' functions.

But there was a problem. Because the initial gene-disruption process is random, it’s possible that the same mutant is formed more than once, which means that playing Sudoku wouldn’t be simple. For inspiration, Barstow recalled watching the movie “The Imitation Game,” about Alan Turing’s work on the Enigma code.

“I felt like the problem in some ways was very similar to code breaking,” he said. There are simple codes that substitute one letter for another that can be easily solved by looking at the frequency of the letter, Barstow said. “For instance, in English the letter A is used 8.2 percent of the time. So, if you find that the letter X appears in the message about 8.2 percent of the time, you can tell this is supposed to be decoded as an A. This is a very simple example of Bayesian inference.”
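
A tiny worked version of that code-breaking analogy follows; the message and the one-letter substitution are invented, and this captures only the intuition, not the paper's actual algorithm.

```python
# Frequency analysis in miniature: the most common symbol in the ciphertext is
# probably standing in for the most common letter of English.
from collections import Counter

plain = "the quick brown fox jumps over the lazy dog and then naps in the shade"
cipher = plain.replace("e", "x")          # toy cipher: every 'e' becomes 'x'

counts = Counter(ch for ch in cipher if ch.isalpha())
total = sum(counts.values())
most_common, n = counts.most_common(1)[0]

# 'e' is the most frequent letter in typical English text (roughly 12-13 percent),
# so a symbol appearing at about that rate is the prime suspect for 'e'.
print(f"'{most_common}' appears {n / total:.0%} of the time -> probably decodes to 'e'")
```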

With that same logic, Barstow and colleagues developed a statistical picture of what a real location assignment should look like, based on mutants that appeared only once, and used it to rate the likelihood that each possible location was real.

“One of the things I really like about this technique is that it’s a prime example of designing a technique with the mathematics in mind at the outset which lets you do much more powerful things than you could do otherwise,” Baym said. “Because it was designed with the mathematics built in, it allows us to get much, much more data out of much less experiments,” he said.

Using their expedient strategy, the researchers created a collection for the microbe Shewanella oneidensis. These microbes are especially good at transferring electrons, and understanding their powers could prove highly valuable for developing sustainable energy sources, such as artificial photosynthesis, and for environmental remediation, such as the neutralization of radioactive waste.

Using the resultant collection, the team was able to recapitulate 15 years of research, Barstow said, bolstering their confidence in their method. In an early validation test, they noticed a startlingly poor accuracy rate. After finding no fault with the math, they looked at the original plates to realize that one of the researchers had grabbed the wrong sample. “The least reliable part of this is the human,” Barstow said.

The work was supported by a Career Award at the Scientific Interface from the Burroughs Wellcome Fund, Princeton University startup funds and Fred Fox Class of 1939 funds.

Read the full article here:

Baym, M.; Shaker, L.; Anzai, I. A.; Adesina, O.; Barstow, B. “Rapid construction of a whole-genome transposon insertion collection for Shewanella oneidensis by Knockout Sudoku.” Nature Comm. Available online on Nov. 10, 2016.

New method identifies protein-protein interactions on basis of sequence alone (PNAS)

By Catherine Zandonella, Office of the Dean for Research

Researchers can now identify which proteins will interact just by looking at their sequences. Pictured are surface representations of a histidine kinase dimer (HK, top) and a response regulator (RR, bottom), two proteins that interact with each other to carry out cellular signaling functions. (Image based on work by Casino et al. Credit: Bitbol et al. 2016/PNAS.)

Genomic sequencing has provided an enormous amount of new information, but researchers haven’t always been able to use that data to understand living systems.

Now a group of researchers has used mathematical analysis to figure out whether two proteins interact with each other, just by looking at their sequences and without having to train their computer model using any known examples. The research, which was published online today in the journal Proceedings of the National Academy of Sciences, is a significant step forward because protein-protein interactions underlie a multitude of biological processes, from how bacteria sense their surroundings to how enzymes turn our food into cellular energy.

“We hadn’t dreamed we’d be able to address this,” said Ned Wingreen, Princeton University‘s Howard A. Prior Professor in the Life Sciences, and a professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics, and a senior co-author of the study with Lucy Colwell of the University of Cambridge. “We can now figure out which protein families interact with which other protein families, just by looking at their sequences,” he said.

Although researchers have been able to use genomic analysis to obtain the sequences of amino acids that make up proteins, until now there has been no way to use those sequences to accurately predict protein-protein interactions. The main roadblock was that each cell can contain many similar copies of the same protein, called paralogs, and it wasn’t possible to predict which paralog from one protein family would interact with which paralog from another protein family.  Instead, scientists have had to conduct extensive laboratory experiments involving sorting through protein paralogs one by one to see which ones stick.

In the current paper, the researchers use a mathematical procedure, or algorithm, to examine the possible interactions among paralogs and identify pairs of proteins that interact. The method was able to correctly predict 93% of the protein-protein paralog pairs that were present in a dataset of more than 20,000 known paired protein sequences, without being first provided any examples of correct pairs.

Interactions between proteins happen when two proteins come into physical contact and stick together via weak bonds. They may do this to form part of a larger piece of machinery used in cellular metabolism. Or two proteins might interact to pass a signal from the exterior of the cell to the DNA, to enable a bacterial organism to react to its environment.

When two proteins come together, some amino acids on one chain stick to the amino acids on the other chain. Each site on the chain contains one of 20 possible amino acids, yielding a very large number of possible amino-acid pairings. But not all such pairings are equally probable, because proteins that interact tend to evolve together over time, causing their sequences to be correlated.

The algorithm takes advantage of this correlation. It starts with two protein families, each with multiple paralogs in any given organism. The algorithm then pairs protein paralogs randomly within each organism and asks, do particular pairs of amino acids, one on each of the proteins, occur much more or less frequently than chance? Then using this information it asks, given an amino acid in a particular location on the first protein, which amino acids are especially favored at a particular location on the second protein, a technique known as direct coupling analysis. The algorithm in turn uses this information to calculate the strengths of interactions, or “interaction energies,” for all possible protein paralog pairs, and ranks them. It eliminates the unlikely pairings and then runs again using only the top most likely protein pairs.
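
The sketch below mimics the shape of that iterative procedure on synthetic data: pair paralogs at random within each organism, learn inter-protein residue statistics from the current pairing, re-score and re-rank all candidate pairings, and repeat. Everything here is invented for illustration: a four-letter alphabet, a handful of "interface" columns that co-evolve, and a pointwise-mutual-information score standing in for the direct coupling analysis and interaction energies used in the actual study.

```python
# Toy version of the iterative paralog-pairing loop described above. Synthetic
# sequences, the 4-letter alphabet, the "interface" columns and the PMI scoring
# are all invented stand-ins for the paper's direct coupling analysis.
import itertools

import numpy as np

rng = np.random.default_rng(0)
Q = 4                    # toy alphabet size (real proteins have 20 amino acids)
L = 12                   # columns per protein family
IFACE = np.arange(4)     # columns assumed to co-evolve between true partners

def make_organism():
    """1 or 2 true partner pairs; partners share a 'spec' residue at IFACE columns."""
    n = int(rng.integers(1, 3))
    specs = rng.choice(Q, size=n, replace=False)
    A = rng.integers(0, Q, size=(n, L))
    B = rng.integers(0, Q, size=(n, L))
    for i, s in enumerate(specs):
        A[i, IFACE] = s
        B[i, IFACE] = s
    return A, B          # by construction, row i of A truly partners row i of B

organisms = [make_organism() for _ in range(300)]

def learn_pmi(pairs):
    """Pointwise mutual information of inter-protein residue pairs in the current pairing."""
    joint = np.full((L, L, Q, Q), 0.1)                 # pseudocounts
    rows, cols = np.arange(L)[:, None], np.arange(L)[None, :]
    for a, b in pairs:
        joint[rows, cols, a[:, None], b[None, :]] += 1
    joint /= joint.sum(axis=(2, 3), keepdims=True)
    fa, fb = joint.sum(axis=3), joint.sum(axis=2)
    return np.log(joint / (fa[:, :, :, None] * fb[:, :, None, :]))

def score(pmi, a, b):
    """Proxy 'interaction energy' of a candidate A-B pair under the learned statistics."""
    return pmi[np.arange(L)[:, None], np.arange(L)[None, :], a[:, None], b[None, :]].sum()

# Start from a random pairing inside each organism, then iterate a few rounds.
pairing = [rng.permutation(len(A)) for A, _ in organisms]
for _ in range(4):
    pairs = [(A[i], B[perm[i]]) for (A, B), perm in zip(organisms, pairing)
             for i in range(len(A))]
    pmi = learn_pmi(pairs)
    pairing = [max(itertools.permutations(range(len(A))),
                   key=lambda p: sum(score(pmi, A[i], B[p[i]]) for i in range(len(A))))
               for A, B in organisms]

accuracy = np.mean([tuple(perm) == tuple(range(len(A)))
                    for (A, _), perm in zip(organisms, pairing)])
print(f"fraction of organisms paired correctly: {accuracy:.2f}")
```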

The most challenging part of identifying protein-protein pairs arises from the fact that proteins fold and kink into complicated shapes that bring amino acids into proximity with others that are not close by in sequence, and that amino acids may be correlated with each other via chains of interactions, not just when they are neighbors in 3D. The direct coupling analysis works surprisingly well at finding the true underlying couplings that occur between neighbors.

The work on the algorithm was initiated by Wingreen and Robert Dwyer, who earned his Ph.D. in the Department of Molecular Biology at Princeton in 2014, and was continued by first author Anne-Florence Bitbol, who was a postdoctoral researcher in the Lewis-Sigler Institute for Integrative Genomics and the Department of Physics at Princeton and is now a CNRS researcher at Universite Pierre et Marie Curie – Paris 6. Bitbol was advised by Wingreen and Colwell, an expert in this kind of analysis who joined the collaboration while a member at the Institute for Advanced Study in Princeton, NJ, and is now a lecturer in chemistry at the University of Cambridge.

The researchers thought that the algorithm would only work accurately if it first “learned” what makes a good protein-protein pair by studying ones discovered in experiments. This required that the researchers give the algorithm some known protein pairs, or “gold standards,” against which to compare new sequences. The team used two well-studied families of proteins, histidine kinases and response regulators, which interact as part of a signaling system in bacteria.

But known examples are often scarce, and there are tens of millions of undiscovered protein-protein interactions in cells. So the team decided to see if they could reduce the amount of training they gave the algorithm. They gradually lowered the number of known histidine kinase-response regulator pairs that they fed into the algorithm, and were surprised to find that the algorithm continued to work. Finally, they ran the algorithm without giving it any such training pairs, and it still predicted new pairs with 93 percent accuracy.

“The fact that we didn’t need a gold standard was a big surprise,” Wingreen said.

Upon further exploration, Wingreen and colleagues figured out that their algorithm’s good performance was due to the fact that true protein-protein interactions are relatively rare. There are many pairings that simply don’t work, and the algorithm quickly learned not to include them in future attempts. In other words, there is only a small number of distinctive ways that protein-protein interactions can happen, and a vast number of ways that they cannot happen. Moreover, the few successful pairings were found to repeat with little variation across many organisms. This, it turns out, makes it relatively easy for the algorithm to reliably sort interactions from non-interactions.

Wingreen compared this observation – that correct pairs are more similar to one another than incorrect pairs are to each other – to the opening line of Leo Tolstoy’s Anna Karenina, which states, “All happy families are alike; each unhappy family is unhappy in its own way.”

The work was done using protein sequences from bacteria, and the researchers are now extending the technique to other organisms.

The approach has the potential to enhance the systematic study of biology, Wingreen said. “We know that living organisms are based on networks of interacting proteins,” he said. “Finally we can begin to use sequence data to explore these networks.”

The research was supported in part by the National Institutes of Health (Grant R01-GM082938) and the National Science Foundation (Grant PHY–1305525).

Read the abstract.

The paper, “Inferring interaction partners from protein sequences,” by Anne-Florence Bitbol, Robert S. Dwyer, Lucy J. Colwell and Ned S. Wingreen, was published in the Early Edition of the journal Proceedings of the National Academy of Sciences on September 23, 2016.
doi: 10.1073/pnas.1606762113

Ice cores reveal a slow decline in atmospheric oxygen over the last 800,000 years (Science)

Princeton University researchers used ice cores collected in Greenland (pictured here) and Antarctica to study 800,000 years of atmospheric oxygen. Image source: Stolper, et al.

By Morgan Kelly, Office of Communications

Princeton University researchers have compiled 30 years of data to construct the first ice core-based record of atmospheric oxygen concentrations spanning the past 800,000 years, according to a paper published today in the journal Science.

The record shows that atmospheric oxygen has declined 0.7 percent relative to current atmospheric-oxygen concentrations, a reasonable pace by geological standards, the researchers said. During the past 100 years, however, atmospheric oxygen has declined by a comparatively speedy 0.1 percent because of the burning of fossil fuels, which consumes oxygen and produces carbon dioxide.
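
A back-of-the-envelope comparison of those two figures (the only inputs are the percentages and time spans quoted above) shows just how different the two rates are:

```python
# Rough rate comparison using the figures quoted in the article.
geologic_decline_pct, geologic_years = 0.7, 800_000
modern_decline_pct, modern_years = 0.1, 100

geologic_rate = geologic_decline_pct / geologic_years   # percent per year
modern_rate = modern_decline_pct / modern_years         # percent per year

print(f"past 800,000 years: {geologic_rate:.1e} percent per year")
print(f"past 100 years:     {modern_rate:.1e} percent per year")
print(f"the modern decline is roughly {modern_rate / geologic_rate:,.0f} times faster")
```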

Curiously, the decline in atmospheric oxygen over the past 800,000 years was not accompanied by any significant increase in the average amount of carbon dioxide in the atmosphere, though carbon dioxide concentrations do vary over individual ice age cycles. To explain this apparent paradox, the researchers called upon a theory for how the global carbon cycle, atmospheric carbon dioxide and Earth’s temperature are linked on geologic timescales.

“The planet has various processes that can keep carbon dioxide levels in check,” said first author Daniel Stolper, a postdoctoral research associate in Princeton’s Department of Geosciences. The researchers discuss a process known as silicate weathering in particular, wherein carbon dioxide reacts with exposed rock to produce, eventually, calcium carbonate minerals, which trap carbon dioxide in a solid form. As temperatures rise due to higher carbon dioxide in the atmosphere, silicate-weathering rates are hypothesized to increase and remove carbon dioxide from the atmosphere faster.
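
The overall chemistry is often summarized by the idealized silicate-weathering (Urey) reaction below. This simplified stoichiometry is standard textbook shorthand rather than anything taken from the Science paper; real weathering proceeds through dissolved bicarbonate and a range of silicate minerals.

```latex
% Idealized silicate-weathering (Urey) reaction -- textbook shorthand, not from the paper:
\[
  \mathrm{CaSiO_3} + \mathrm{CO_2} \;\longrightarrow\; \mathrm{CaCO_3} + \mathrm{SiO_2}
\]
```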

Researchers at Princeton University analyzed ice cores collected in Greenland (pictured here) and Antarctica to determine levels of atmospheric oxygen over the last 800,000 years. (Image: Stolper, et al.)

Stolper and his co-authors suggest that the extra carbon dioxide emitted due to declining oxygen concentrations in the atmosphere stimulated silicate weathering, which stabilized carbon dioxide but allowed oxygen to continue to decline.

“The oxygen record is telling us there’s also a change in the amount of carbon dioxide [that was created when oxygen was removed] entering the atmosphere and ocean,” said co-author John Higgins, Princeton assistant professor of geosciences. “However, atmospheric carbon dioxide levels aren’t changing because the Earth has had time to respond via increased silicate-weathering rates.

“The Earth can take care of extra carbon dioxide when it has hundreds of thousands or millions of years to get its act together. In contrast, humankind is releasing carbon dioxide today so quickly that silicate weathering can’t possibly respond fast enough,” Higgins continued. “The Earth has these long processes that humankind has short-circuited.”

The researchers built their history of atmospheric oxygen using measured ratios of oxygen-to-nitrogen found in air trapped in Antarctic ice. This method was established by co-author Michael Bender, professor of geosciences, emeritus, at Princeton.

Because oxygen is critical to many forms of life and geochemical processes, numerous models and indirect proxies for the oxygen content in the atmosphere have been developed over the years, but there was no consensus on whether oxygen concentrations were rising, falling or flat during the past million years (and before fossil fuel burning). The Princeton team analyzed the ice-core data to create a single account of how atmospheric oxygen has changed during the past 800,000 years.

“This record represents an important benchmark for the study of the history of atmospheric oxygen,” Higgins said. “Understanding the history of oxygen in Earth’s atmosphere is intimately connected to understanding the evolution of complex life. It’s one of these big, fundamental ongoing questions in Earth science.”

Read the abstract

Daniel A. Stolper, Michael L. Bender, Gabrielle B. Dreyfus, Yuzhen Yan, and John A. Higgins. 2016. A Pleistocene ice core record of atmospheric oxygen concentrations. Science. Article published Sept. 22, 2016. DOI: 10.1126/science.aaf5445

The work was supported by a National Oceanic and Atmospheric Administration Climate and Global Change postdoctoral fellowship, and the National Science Foundation (grant no. ANT-1443263).

Major next steps proposed for fusion energy based on the spherical tokamak design (Nuclear Fusion)

Test cell of the NSTX-U with tokamak in the center (Credit: Princeton Plasma Physics Laboratory)

By John Greenwald, Princeton Plasma Physics Laboratory

Among the top puzzles in the development of fusion energy is the best shape for the magnetic facility — or “bottle” — that will provide the next steps toward a fusion reactor. Leading candidates include spherical tokamaks, compact machines that are shaped like cored apples, compared with the doughnut-like shape of conventional tokamaks. The spherical design produces high-pressure plasmas — essential ingredients for fusion reactions — with relatively low and cost-effective magnetic fields.

A possible next step is a device called a Fusion Nuclear Science Facility (FNSF) that could develop the materials and components for a fusion reactor. Such a device could precede a pilot plant that would demonstrate the ability to produce net energy.

Spherical tokamaks as excellent models

Spherical tokamaks could be excellent models for an FNSF, according to a paper published online in the journal Nuclear Fusion on August 16. The two most advanced spherical tokamaks in the world today are the recently completed National Spherical Torus Experiment-Upgrade (NSTX-U) at the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL), which is managed by Princeton University, and the Mega Ampere Spherical Tokamak (MAST), which is being upgraded at the Culham Center for Fusion Energy in the United Kingdom.

“We are opening up new options for future plants,” said Jonathan Menard, program director for the NSTX-U and lead author of the paper, which discusses the fitness of both spherical tokamaks as possible models. Support for this work comes from the DOE Office of Science.

Jonathan Menard, program director for the NSTX-U and lead author of the paper (Credit: Elle Stark, PPPL)

The 43-page paper considers the spherical design for a combined next-step bottle: an FNSF that could become a pilot plant and serve as a forerunner for a commercial fusion reactor. Such a facility could provide a pathway leading from ITER, the international tokamak under construction in France to demonstrate the feasibility of fusion power, to a commercial fusion power plant.

A key issue for this bottle is the size of the hole in the center of the tokamak that holds and shapes the plasma. In spherical tokamaks, this hole can be half the size of the hole in conventional tokamaks. These differences, reflected in the shape of the magnetic field that confines the superhot plasma, have a profound effect on how the plasma behaves.

Designs for the Fusion Nuclear Science Facility

First up for a next-step device would be the FNSF. It would test the materials that must face and withstand the neutron bombardment that fusion reactions produce, while also generating a sufficient amount of its own fusion fuel. According to the paper, recent studies have for the first time identified integrated designs that would be up to the task.

These integrated capabilities include:

  • A blanket system able to breed tritium, a rare isotope — or form — of hydrogen that fuses with deuterium, another hydrogen isotope, to generate the fusion reactions. The spherical design could breed approximately one tritium atom for each one consumed in the reaction, making the facility self-sufficient in tritium.
  • A lengthy configuration of the magnetic field that vents exhaust heat from the tokamak. This configuration, called a “divertor,” would reduce the amount of heat that strikes and could damage the interior wall of the tokamak.
  • A vertical maintenance scheme in which the central magnet and the blanket structures that breed tritium can be removed independently from the tokamak for installation, maintenance, and repair. Maintenance of these complex nuclear facilities represents a significant design challenge. Once a tokamak operates with fusion fuel, this maintenance must be done with remote-handling robots; the new paper describes how this can be accomplished.

For pilot plant use, superconducting coils that operate at high temperature would replace the copper coils in the FNSF to reduce power loss. The plant would generate a small amount of net electricity in a facility that would be as compact as possible and could more easily scale to a commercial fusion power station.

High-temperature superconductors

High-temperature superconductors could have both positive and negative effects. While they would reduce power loss, they would require additional shielding to protect the magnets from heating and radiation damage. This would make the machine larger and less compact.

Recent advances in high-temperature superconductors could help overcome this problem. The advances enable higher magnetic fields using much thinner magnets than are presently achievable, reducing the refrigeration power needed to cool the magnets. Such superconducting magnets open the possibility that an FNSF and associated pilot plants based on the spherical tokamak design could minimize the mass and cost of the main confinement magnets.

For now, the increased power of the NSTX-U and the soon-to-be-completed MAST facility moves them closer to the capability of a commercial plant that will create safe, clean and virtually limitless energy. “NSTX-U and MAST-U will push the physics frontier, expand our knowledge of high temperature plasmas, and, if successful, lay the scientific foundation for fusion development paths based on more compact designs,” said PPPL Director Stewart Prager.

Twice the power and five times the pulse length

The NSTX-U has twice the power and five times the pulse length of its predecessor and will explore how plasma confinement and sustainment are influenced by higher plasma pressure in the spherical geometry. The MAST upgrade will have comparable prowess and will explore a new, state-of-the art method for exhausting plasmas that are hotter than the core of the sun without damaging the machine.

“The main reason we research spherical tokamaks is to find a way to produce fusion at much less cost than conventional tokamaks require,” said Ian Chapman, the newly appointed chief executive of the United Kingdom Atomic Energy Authority and leader of the UK’s magnetic confinement fusion research program at the Culham Science Center.

The ability of these machines to create high plasma performance within their compact geometries demonstrates their fitness as possible models for next-step fusion facilities. The wide range of considerations, calculations and figures detailed in this study strongly support the concept of a combined FNSF and pilot plant based on the spherical design. The NSTX-U and MAST-U devices must now successfully prototype the necessary high-performance scenarios.

Read the abstract

J.E. Menard, T. Brown, L. El-Guebaly, M. Boyer, J. Canik, B. Colling, R. Raman, Z. Wang, Y. Zhai, P. Buxton, B. Covele, C. D’Angelo, A. Davis, S. Gerhardt, M. Gryaznevich, M. Harb, T.C. Hender, S. Kaye, D. Kingham, M. Kotschenreuther, S. Mahajan, R. Maingi, E. Marriott, E.T. Meier, L. Mynsberge, C. Neumeyer, M. Ono, J.-K. Park, S.A. Sabbagh, V. Soukhanovskii, P. Valanju and R. Woolley. Fusion nuclear science facilities and pilot plants based on the spherical tokamak. Nucl. Fusion 56 (2016) — Published 16 August 2016.

PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

PPPL researchers combine quantum mechanics and Einstein’s theory of special relativity to clear up puzzles in plasma physics (Phys. Rev. A)

Sketch of a pulsar, center, in binary star system (Photo credit: NASA Goddard Space Flight Center)

By John Greenwald, Princeton Plasma Physics Laboratory Communications

Among the intriguing issues in plasma physics are those surrounding X-ray pulsars — collapsed stars that orbit around a cosmic companion and beam light at regular intervals, like lighthouses in the sky.  Physicists want to know the strength of the magnetic field and density of the plasma that surrounds these pulsars, which can be millions of times greater than the density of plasma in stars like the sun.

Researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) have developed a theory of plasma waves that can infer these properties in greater detail than in standard approaches. The new research analyzes the plasma surrounding the pulsar by coupling Einstein’s theory of relativity with quantum mechanics, which describes the motion of subatomic particles such as the atomic nuclei — or ions — and electrons in plasma. Supporting this work is the DOE Office of Science.

Quantum field theory

Graduate student Yuan Shi (Photo by Elle Starkman/PPPL Office of Communications)

The key insight comes from quantum field theory, which describes charged particles that are relativistic, meaning that they travel at near the speed of light. “Quantum theory can describe certain details of the propagation of waves in plasma,” said Yuan Shi, a graduate student in the Princeton Program in Plasma Physics in Princeton University’s Department of Astrophysical Sciences, and lead author of a paper published July 29 in the journal Physical Review A. Understanding the interactions behind the propagation can then reveal the composition of the plasma.

Shi developed the paper with assistance from co-authors Nathaniel Fisch, director of the Princeton Program in Plasma Physics and professor and associate chair of astrophysical sciences at Princeton University, and Hong Qin, a physicist at PPPL and executive dean of the School of Nuclear Science and Technology at the University of Science and Technology of China. “When I worked out the mathematics, they showed me how to apply it,” said Shi.

In pulsars, relativistic particles in the magnetosphere, which is the magnetized atmosphere surrounding the pulsar, absorb light waves, and this absorption displays peaks. “The question is, what do these peaks mean?” asks Shi. Analysis of the peaks with equations from special relativity and quantum field theory, he found, can determine the density and field strength of the magnetosphere.

Combining physics techniques

The process combines the techniques of high-energy physics, condensed matter physics, and plasma physics.  In high-energy physics, researchers use quantum field theory to describe the interaction of a handful of particles. In condensed matter physics, people use quantum mechanics to describe the states of a large collection of particles. Plasma physics uses model equations to explain the collective movement of millions of particles. The new method utilizes aspects of all three techniques to analyze the plasma waves in pulsars.

The same technique can be used to infer the density of the plasma and strength of the magnetic field created by inertial confinement fusion experiments. Such experiments use lasers to ablate — or vaporize — a target that contains plasma fuel. The ablation then causes an implosion that compresses the fuel into plasma and produces fusion reactions.

Standard formulas give inconsistent answers

Researchers want to know the precise density, temperature and field strength of the plasma that this process creates. Standard mathematical formulas give inconsistent answers when lasers of different color are used to measure the plasma parameters. This is because the extreme density of the plasma gives rise to quantum effects, while the high energy density of the magnetic field gives rise to relativistic effects, says Shi. So formulations that draw upon both fields are needed to reconcile the results.

For Shi, the new technique shows the benefits of combining physics disciplines that don’t often interact. “Putting fields together gives tremendous power to explain things that we couldn’t understand before,” he said.

Read the abstract

Yuan Shi, Nathaniel J. Fisch, and Hong Qin. Effective-action approach to wave propagation in scalar QED plasmas. Phys. Rev. A 94, 012124 – Published 29 July 2016.

PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by Princeton University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.

Unconventional quasiparticles predicted in conventional crystals (Science)

Two electronic states known as Fermi arcs, localized on the surface of a material, stem from the projection of a new 3-fold degenerate bulk fermion. This new fermion is a cousin of the Weyl fermion discovered last year in another class of topological semimetals. Unlike the recently discovered Weyl fermions, which carry spin-½, the new fermion carries spin-1, a reflection of its 3-fold degeneracy.

By Staff

An international team of researchers has predicted the existence of several previously unknown types of quantum particles in materials. The particles — which belong to the class of particles known as fermions — can be distinguished by several intrinsic properties, such as their responses to applied magnetic and electric fields. In several cases, fermions in the interior of the material show their presence on the surface via the appearance of electron states called Fermi arcs, which link the different types of fermion states in the material’s bulk.

The research, published online this week in the journal Science, was conducted by a team at Princeton University in collaboration with researchers at the Donostia International Physics Center (DIPC) in Spain and the Max Planck Institute for Chemical Physics of Solids in Germany. The investigators propose that many of the materials hosting the new types of fermions are “protected metals,” which are metals that do not allow, in most circumstances, an insulating state to develop. This research represents the newest avenue in the physics of “topological materials,” an area of science that has already fundamentally changed the way researchers see and interpret states of matter.

The team at Princeton included Barry Bradlyn and Jennifer Cano, both associate research scholars at the Princeton Center for Theoretical Science; Zhijun Wang, a postdoctoral research associate in the Department of Physics; Robert Cava, the Russell Wellman Moore Professor of Chemistry; and B. Andrei Bernevig, associate professor of physics. The research team also included Maia Vergniory, a postdoctoral research fellow at DIPC, and Claudia Felser, a professor of physics and chemistry and director of the Max Planck Institute for Chemical Physics of Solids.

For the past century, gapless fermions, which are quantum particles with no energy gap between their highest filled and lowest unfilled states, were thought to come in three varieties: Dirac, Majorana and Weyl. Condensed matter physics, which pioneers the study of quantum phases of matter, has become fertile ground for the discovery of these fermions in different materials through experiments conducted in crystals. These experiments enable researchers to explore exotic particles using relatively inexpensive laboratory equipment rather than large particle accelerators.

In the past four years, all three varieties of gapless fermions have been theoretically predicted and experimentally observed in different types of crystalline materials grown in laboratories around the world. The Weyl fermion was thought to be the last of the group of predicted quasiparticles in nature. Research published earlier this year in the journal Nature (Wang et al., doi:10.1038/nature17410) has shown, however, that this is not the case, with the discovery of a bulk insulator that hosts an exotic surface fermion.

In the current paper, the team predicted and classified the possible exotic fermions that can appear in the bulk of materials. The energy of these fermions can be characterized as a function of their momentum into so-called energy bands, or branches. Unlike the Weyl and Dirac fermions, which, roughly speaking, exhibit an energy spectrum with 2- and 4-fold branches of allowed energy states, the new fermions can exhibit 3-, 6- and 8-fold branches. The 3-, 6-, or 8-fold branches meet up at points – called degeneracy points – in the Brillouin zone, which is the parameter space where the fermion momentum takes its values.

“Symmetries are essential to keep the fermions well-defined, as well as to uncover their physical properties,” Bradlyn said. “Locally, by inspecting the physics close to the degeneracy points, one can think of them as new particles, but this is only part of the story,” he said.

Cano added, “The new fermions know about the global topology of the material. Crucially, they connect to other points in the Brillouin zone in nontrivial ways.”

During the search for materials exhibiting the new fermions, the team uncovered a fundamentally new and systematic way of finding metals in nature. Until now, searching for metals involved performing detailed calculations of the electronic states of matter.

“The presence of the new fermions allows for a much easier way to determine whether a given system is a protected metal or not, in some cases without the need to do a detailed calculation,” Wang said.

Vergniory added, “One can just count the number of electrons of a crystal, and figure out, based on symmetry, if a new fermion exists within observable range.”

The researchers suggest that this is because the new fermions require multiple electronic states to meet in energy: The 8-branch fermion requires the presence of 8 electronic states. As such, a system with only 4 electrons can only occupy half of those states and cannot be insulating, thereby creating a protected metal.

“The interplay between symmetry, topology and material science hinted by the presence of the new fermions is likely to play a more fundamental role in our future understanding of topological materials – both semimetals and insulators,” Cava said.

Felser added, “We all envision a future for quantum physical chemistry where one can write down the formula of a material, look at both the symmetries of the crystal lattice and at the valence orbitals of each element, and, without a calculation, be able to tell whether the material is a topological insulator or a protected metal.”

Read the abstract.

Funding for this study was provided by the US Army Research Office Multidisciplinary University Research Initiative, the US Office of Naval Research, the National Science Foundation, the David and Lucile Packard Foundation, the W. M. Keck Foundation, and the Spanish Ministry of Economy and Competitiveness.

Study Models How the Immune System Might Evolve to Conquer HIV (PLOS Genetics)

By Katherine Unger Baillie, courtesy of the University of Pennsylvania

It has remained frustratingly difficult to develop a vaccine for HIV/AIDS, in part because the virus, once in our bodies, rapidly reproduces and evolves to escape being killed by the immune system.

“The viruses are constantly producing mutants that evade detection,” said Joshua Plotkin, a professor in the University of Pennsylvania’s Department of Biology in the School of Arts & Sciences. “A single person with HIV may have millions of strains of the virus circulating in the body.”

Yet the body’s immune system can also evolve. Antibody-secreting B-cells compete among themselves to survive and proliferate depending on how well they bind to foreign invaders. They dynamically produce diverse types of antibodies during the course of an infection.

In a new paper in PLOS Genetics, Plotkin, along with postdoctoral researcher Jakub Otwinowski and Armita Nourmohammad, an associate research scholar at Princeton University’s Lewis-Sigler Institute for Integrative Genomics, mathematically modeled these dueling evolutionary processes to understand the conditions that influence how antibodies and viruses interact and adapt to one another over the course of a chronic infection.

Notably, the researchers considered the conditions under which the immune system gives rise to broadly neutralizing antibodies, which can defeat broad swaths of viral strains by targeting the most vital and immutable parts of the viral genome. Their findings, which suggest that presenting the immune system with a large diversity of viral antigens may be the best way to encourage the emergence of such potent antibodies, have implications for designing vaccines against HIV and other chronic infections.

“This isn’t a prescription for how to design an HIV vaccine,” Plotkin said, “but our work provides some quantitative guidance for how to prompt the immune system to elicit broadly neutralizing antibodies.”

The biggest challenge in attempting to model the co-evolution of antibodies and viruses is keeping track of the vast quantity of different genomic sequences that arise in each population during the course of an infection. So the researchers focused on the statistics of the binding interactions between the virus and antibodies.

“This is the key analytical trick to simplify the problem,” said Otwinowski. “It would otherwise be impossible to track and write equations for all the interactions.”

The researchers constructed a model to examine how mutations would affect the binding affinity between antibodies and viruses. Their model calculated the average binding affinities between the entire population of viral strains and the repertoire of antibodies over time to understand how they co-evolve.

“It’s one of the things that is unique about our work,” said Nourmohammad. “We’re not only looking at one virus binding to one antibody but the whole diversity of interactions that occur over the course of a chronic infection.”

What they saw was an S-shaped curve, in which sometimes the immune system appeared to control the infection with high levels of binding, but subsequently a viral mutation would arise that could evade neutralization, and then binding affinities would go down.

“The immune system does well if there is active binding between antibodies and virus,” Plotkin said, “and the virus does well if there is not strong binding.”

Such a signature is indicative of a system that is out of equilibrium where the viruses are responding to the antibodies and vice versa. The researchers note that this signature is likely common to many antagonistically co-evolving populations.

To see how well their model matched with data from an actual infection, the researchers looked at time-shifted experimental data from two HIV patients, in which their antibodies were collected at different time points and then “competed” against the viruses that had been in their bodies at different times during their infections.

They saw that these patient data are consistent with their model: Viruses from earlier time points would be largely neutralized by antibodies collected at later time points but could outcompete antibodies collected earlier in infection.

Finally, the researchers used the model to try to understand the conditions under which broadly neutralizing antibodies, which could defeat most strains of virus, would emerge and rise to prominence.

“Despite the effectiveness of broadly neutralizing antibodies, none of the patients with these antibodies has been cured of HIV,” Plotkin said. “It’s just that by the time they develop them, it’s too late and their T-cell repertoire is depleted. This raises the intriguing idea that, if only they could develop these antibodies earlier in infection, they might be prepared to combat an evolving target.”

“The model that we built,” Nourmohammad said, “was able to show that, if viral diversity is very large, the chance that these broadly neutralizing antibodies outcompete more specifically targeted antibodies and proliferate goes up.”

The finding suggests that, in order for a vaccine to elicit these antibodies, it should present a diverse set of viral antigens to the host. That way no one specialist antibody would have a significant fitness advantage, leaving room for the generalist, broadly neutralizing antibodies to succeed.

The researchers said that there has been little theoretical modeling of co-evolutionary systems such as this one. As such, their work could have implications for other co-evolution scenarios.

“Our theory can also apply to other systems, such as bacteria-phage co-evolution,” said Otwinowski, in which viruses infect bacteria, a process that drives bacterial evolution and ecology.

“It could also shed light on the co-evolution of the influenza virus in the context of evolving global immune systems,” Nourmohammad said.

Read the article.

The work was supported by funding from the U.S. National Science Foundation, James S. McDonnell Foundation, David and Lucile Packard Foundation, U.S. Army Research Office and National Institutes of Health.