New method identifies protein-protein interactions on the basis of sequence alone (PNAS)

By Catherine Zandonella, Office of the Dean for Research

Researchers can now identify which proteins will interact just by looking at their sequences. Pictured are surface representations of a histidine kinase dimer (HK, top) and a response regulator (RR, bottom), two proteins that interact with each other to carry out cellular signaling functions. (Image based on work by Casino et al.; credit: Bitbol et al. 2016/PNAS.)

Genomic sequencing has provided an enormous amount of new information, but researchers haven’t always been able to use that data to understand living systems.

Now a group of researchers has used mathematical analysis to figure out whether two proteins interact with each other, just by looking at their sequences and without having to train their computer model using any known examples. The research, which was published online today in the journal Proceedings of the National Academy of Sciences, is a significant step forward because protein-protein interactions underlie a multitude of biological processes, from how bacteria sense their surroundings to how enzymes turn our food into cellular energy.

“We hadn’t dreamed we’d be able to address this,” said Ned Wingreen, Princeton University’s Howard A. Prior Professor in the Life Sciences, a professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics, and a senior co-author of the study with Lucy Colwell of the University of Cambridge. “We can now figure out which protein families interact with which other protein families, just by looking at their sequences,” he said.

Although researchers have been able to use genomic analysis to obtain the sequences of amino acids that make up proteins, until now there has been no way to use those sequences to accurately predict protein-protein interactions. The main roadblock was that each cell can contain many similar copies of the same protein, called paralogs, and it wasn’t possible to predict which paralog from one protein family would interact with which paralog from another protein family.  Instead, scientists have had to conduct extensive laboratory experiments involving sorting through protein paralogs one by one to see which ones stick.

In the current paper, the researchers use a mathematical procedure, or algorithm, to examine the possible interactions among paralogs and identify pairs of proteins that interact. The method correctly predicted 93 percent of the protein-protein paralog pairs present in a dataset of more than 20,000 known paired protein sequences, without first being provided any examples of correct pairs.

Interactions between proteins happen when two proteins come into physical contact and stick together via weak bonds. They may do this to form part of a larger piece of machinery used in cellular metabolism. Or two proteins might interact to pass a signal from the exterior of the cell to the DNA, to enable a bacterial organism to react to its environment.

When two proteins come together, some amino acids on one chain stick to the amino acids on the other chain. Each site on the chain contains one of 20 possible amino acids, yielding a very large number of possible amino-acid pairings. But not all such pairings are equally probable, because proteins that interact tend to evolve together over time, causing their sequences to be correlated.

The algorithm takes advantage of this correlation. It starts with two protein families, each with multiple paralogs in any given organism. The algorithm first pairs protein paralogs randomly within each organism and asks whether particular pairs of amino acids, one on each protein, occur much more or less frequently than expected by chance. Using this information, it then asks: given an amino acid at a particular location on the first protein, which amino acids are especially favored at a particular location on the second protein? This approach is known as direct coupling analysis. The algorithm in turn uses this information to calculate the strengths of interactions, or “interaction energies,” for all possible protein paralog pairs, and ranks them. It eliminates the unlikely pairings and then runs again using only the most likely protein pairs.
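
As a rough, hypothetical sketch of the pairing step described above (this is not the published algorithm or its code), the snippet below scores every possible pairing of family-A and family-B paralogs within a single organism with a stand-in ‘energy’ function and greedily keeps the best non-conflicting pairs. In the real method, the energies come from the cross-family amino-acid statistics of direct coupling analysis, and the whole procedure is repeated using only its highest-confidence pairs.

```python
# Toy sketch of pairing paralogs within one organism (illustrative only; the
# published algorithm derives interaction energies from sequence statistics
# via direct coupling analysis and iterates over all organisms).
import random

def toy_energy(seq_a, seq_b):
    """Hypothetical stand-in for an interaction energy: more matching
    positions between the two sequences means a lower (better) score."""
    return -sum(x == y for x, y in zip(seq_a, seq_b))

def pair_within_organism(paralogs_a, paralogs_b):
    """Greedily pair family-A and family-B paralogs by ascending energy."""
    scored = sorted(
        (toy_energy(a, b), i, j)
        for i, a in enumerate(paralogs_a)
        for j, b in enumerate(paralogs_b)
    )
    used_a, used_b, pairs = set(), set(), []
    for energy, i, j in scored:
        if i not in used_a and j not in used_b:
            pairs.append((i, j, energy))
            used_a.add(i)
            used_b.add(j)
    return pairs

# One made-up organism with three paralogs per family (random toy "sequences").
random.seed(0)
family_a = ["".join(random.choice("ACDE") for _ in range(8)) for _ in range(3)]
family_b = ["".join(random.choice("ACDE") for _ in range(8)) for _ in range(3)]
print(pair_within_organism(family_a, family_b))  # best non-conflicting (A, B, energy) triples
```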

The most challenging part of identifying protein-protein pairs arises from the fact that proteins fold and kink into complicated shapes that bring amino acids into proximity with others that are not close by in sequence, and that amino acids may be correlated with each other via chains of interactions, not just when they are neighbors in 3D. The direct coupling analysis works surprisingly well at finding the true underlying couplings that occur between neighbors.

The work on the algorithm was initiated by Wingreen and Robert Dwyer, who earned his Ph.D. in the Department of Molecular Biology at Princeton in 2014, and was continued by first author Anne-Florence Bitbol, who was a postdoctoral researcher in the Lewis-Sigler Institute for Integrative Genomics and the Department of Physics at Princeton and is now a CNRS researcher at Université Pierre et Marie Curie – Paris 6. Bitbol was advised by Wingreen and Colwell, an expert in this kind of analysis who joined the collaboration while a member at the Institute for Advanced Study in Princeton, NJ, and is now a lecturer in chemistry at the University of Cambridge.

The researchers thought that the algorithm would only work accurately if it first “learned” what makes a good protein-protein pair by studying ones discovered in experiments. This required that the researchers give the algorithm some known protein pairs, or “gold standards,” against which to compare new sequences. The team used two well-studied families of proteins, histidine kinases and response regulators, which interact as part of a signaling system in bacteria.

But known examples are often scarce, and there are tens of millions of undiscovered protein-protein interactions in cells. So the team decided to see if they could reduce the amount of training they gave the algorithm. They gradually lowered the number of known histidine kinase-response regulator pairs that they fed into the algorithm, and were surprised to find that the algorithm continued to work. Finally, they ran the algorithm without giving it any such training pairs, and it still predicted new pairs with 93 percent accuracy.

“The fact that we didn’t need a gold standard was a big surprise,” Wingreen said.

Upon further exploration, Wingreen and colleagues figured out that their algorithm’s good performance was due to the fact that true protein-protein interactions are relatively rare. There are many pairings that simply don’t work, and the algorithm quickly learned not to include them in future attempts. In other words, there is only a small number of distinctive ways that protein-protein interactions can happen, and a vast number of ways that they cannot happen. Moreover, the few successful pairings were found to repeat with little variation across many organisms. This, it turns out, makes it relatively easy for the algorithm to reliably sort interactions from non-interactions.

Wingreen compared this observation – that correct pairs are more similar to one another than incorrect pairs are to each other – to the opening line of Leo Tolstoy’s Anna Karenina, which states, “All happy families are alike; each unhappy family is unhappy in its own way.”

The work was done using protein sequences from bacteria, and the researchers are now extending the technique to other organisms.

The approach has the potential to enhance the systematic study of biology, Wingreen said. “We know that living organisms are based on networks of interacting proteins,” he said. “Finally we can begin to use sequence data to explore these networks.”

The research was supported in part by the National Institutes of Health (Grant R01-GM082938) and the National Science Foundation (Grant PHY–1305525).

Read the abstract.

The paper, “Inferring interaction partners from protein sequences,” by Anne-Florence Bitbol, Robert S. Dwyer, Lucy J. Colwell and Ned S. Wingreen, was published in the Early Edition of the journal Proceedings of the National Academy of Sciences on September 23, 2016.
doi: 10.1073/pnas.1606762113

Ice cores reveal a slow decline in atmospheric oxygen over the last 800,000 years (Science)

Princeton University researchers used ice cores collected in Greenland (pictured here) and Antarctica to study 800,000 years of atmospheric oxygen. Image source: Stolper, et al.

By Morgan Kelly, Office of Communications

Princeton University researchers have compiled 30 years of data to construct the first ice core-based record of atmospheric oxygen concentrations spanning the past 800,000 years, according to a paper published today in the journal Science.

The record shows that atmospheric oxygen has declined 0.7 percent relative to current atmospheric-oxygen concentrations, a reasonable pace by geological standards, the researchers said. During the past 100 years, however, atmospheric oxygen has declined by a comparatively speedy 0.1 percent because of the burning of fossil fuels, which consumes oxygen and produces carbon dioxide.
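
For a sense of scale, the two figures quoted above can be compared directly; this back-of-the-envelope calculation uses only the numbers in this article:

```python
# Back-of-the-envelope comparison of the two oxygen-decline rates quoted above.
slow_rate = 0.007 / 800_000   # ~0.7 percent spread over 800,000 years
fast_rate = 0.001 / 100       # ~0.1 percent over the past 100 years
print(f"The recent decline is roughly {fast_rate / slow_rate:.0f} times faster.")  # ~1000x
```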

Curiously, the decline in atmospheric oxygen over the past 800,000 years was not accompanied by any significant increase in the average amount of carbon dioxide in the atmosphere, though carbon dioxide concentrations do vary over individual ice age cycles. To explain this apparent paradox, the researchers called upon a theory for how the global carbon cycle, atmospheric carbon dioxide and Earth’s temperature are linked on geologic timescales.

“The planet has various processes that can keep carbon dioxide levels in check,” said first author Daniel Stolper, a postdoctoral research associate in Princeton’s Department of Geosciences. The researchers discuss a process known as silicate weathering in particular, wherein carbon dioxide reacts with exposed rock to produce, eventually, calcium carbonate minerals, which trap carbon dioxide in a solid form. As temperatures rise due to higher carbon dioxide in the atmosphere, silicate-weathering rates are hypothesized to increase and remove carbon dioxide from the atmosphere faster.

Researchers at Princeton University analyzed ice cores collected in Greenland (pictured here) and Antarctica to determine levels of atmospheric oxygen over the last 800,000 years. (Image: Stolper, et al.)

Stolper and his co-authors suggest that the extra carbon dioxide emitted due to declining oxygen concentrations in the atmosphere stimulated silicate weathering, which stabilized carbon dioxide but allowed oxygen to continue to decline.

“The oxygen record is telling us there’s also a change in the amount of carbon dioxide [that was created when oxygen was removed] entering the atmosphere and ocean,” said co-author John Higgins, Princeton assistant professor of geosciences. “However, atmospheric carbon dioxide levels aren’t changing because the Earth has had time to respond via increased silicate-weathering rates.

“The Earth can take care of extra carbon dioxide when it has hundreds of thousands or millions of years to get its act together. In contrast, humankind is releasing carbon dioxide today so quickly that silicate weathering can’t possibly respond fast enough,” Higgins continued. “The Earth has these long processes that humankind has short-circuited.”

The researchers built their history of atmospheric oxygen using measured ratios of oxygen-to-nitrogen found in air trapped in Antarctic ice. This method was established by co-author Michael Bender, professor of geosciences, emeritus, at Princeton.

Because oxygen is critical to many forms of life and geochemical processes, numerous models and indirect proxies for the oxygen content in the atmosphere have been developed over the years, but there was no consensus on whether oxygen concentrations were rising, falling or flat during the past million years (and before fossil fuel burning). The Princeton team analyzed the ice-core data to create a single account of how atmospheric oxygen has changed during the past 800,000 years.

“This record represents an important benchmark for the study of the history of atmospheric oxygen,” Higgins said. “Understanding the history of oxygen in Earth’s atmosphere is intimately connected to understanding the evolution of complex life. It’s one of these big, fundamental ongoing questions in Earth science.”

Read the abstract

Daniel A. Stolper, Michael L. Bender, Gabrielle B. Dreyfus, Yuzhen Yan, and John A. Higgins. 2016. A Pleistocene ice core record of atmospheric oxygen concentrations. Science. Article published Sept. 22, 2016. DOI: 10.1126/science.aaf5445

The work was supported by a National Oceanic and Atmospheric Administration Climate and Global Change postdoctoral fellowship, and the National Science Foundation (grant no. ANT-1443263).

Major next steps proposed for fusion energy based on the spherical tokamak design (Nuclear Fusion)

Test cell of the NSTX-U with tokamak in the center (Credit: Princeton Plasma Physics Laboratory)

By John Greenwald, Princeton Plasma Physics Laboratory

Among the top puzzles in the development of fusion energy is the best shape for the magnetic facility — or “bottle” — that will provide the next steps in the development of fusion reactors. Leading candidates include spherical tokamaks, compact machines that are shaped like cored apples, compared with the doughnut-like shape of conventional tokamaks.  The spherical design produces high-pressure plasmas — essential ingredients for fusion reactions — with relatively low and cost-effective magnetic fields.

A possible next step is a device called a Fusion Nuclear Science Facility (FNSF) that could develop the materials and components for a fusion reactor. Such a device could precede a pilot plant that would demonstrate the ability to produce net energy.

Spherical tokamaks as excellent models

Spherical tokamaks could be excellent models for an FNSF, according to a paper published online in the journal Nuclear Fusion on August 16. The two most advanced spherical tokamaks in the world today are the recently completed National Spherical Torus Experiment-Upgrade (NSTX-U) at the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL), which is managed by Princeton University, and the Mega Ampere Spherical Tokamak (MAST), which is being upgraded at the Culham Centre for Fusion Energy in the United Kingdom.

“We are opening up new options for future plants,” said Jonathan Menard, program director for the NSTX-U and lead author of the paper, which discusses the fitness of both spherical tokamaks as possible models. Support for this work comes from the DOE Office of Science.

Jonathan Menard, program director for the NSTX-U and lead author of the paper (Credit: Elle Stark, PPPL)

The 43-page paper considers the spherical design for a combined next-step bottle: an FNSF that could become a pilot plant and serve as a forerunner for a commercial fusion reactor. Such a facility could provide a pathway leading from ITER, the international tokamak under construction in France to demonstrate the feasibility of fusion power, to a commercial fusion power plant.

A key issue for this bottle is the size of the hole in the center of the tokamak that holds and shapes the plasma. In spherical tokamaks, this hole can be half the size of the hole in conventional tokamaks. These differences, reflected in the shape of the magnetic field that confines the superhot plasma, have a profound effect on how the plasma behaves.

Designs for the Fusion Nuclear Science Facility

First up for a next-step device would be the FNSF. It would test the materials that must face and withstand the neutron bombardment that fusion reactions produce, while also generating a sufficient amount of its own fusion fuel. According to the paper, recent studies have for the first time identified integrated designs that would be up to the task.

These integrated capabilities include:

  • A blanket system able to breed tritium, a rare isotope — or form — of hydrogen that fuses with deuterium, another hydrogen isotope, to generate the fusion reactions. The spherical design could breed approximately one tritium atom for each one consumed in the reactions, achieving tritium self-sufficiency.
  • A lengthy configuration of the magnetic field that vents exhaust heat from the tokamak. This configuration, called a “divertor,” would reduce the amount of heat that strikes and could damage the interior wall of the tokamak.
  • A vertical maintenance scheme in which the central magnet and the blanket structures that breed tritium can be removed independently from the tokamak for installation, maintenance, and repair. Maintenance of these complex nuclear facilities represents a significant design challenge. Once a tokamak operates with fusion fuel, this maintenance must be done with remote-handling robots; the new paper describes how this can be accomplished.

For pilot plant use, superconducting coils that operate at high temperature would replace the copper coils in the FNSF to reduce power loss. The plant would generate a small amount of net electricity in a facility that would be as compact as possible and could more easily scale to a commercial fusion power station.

High-temperature superconductors

High-temperature superconductors could have both positive and negative effects. While they would reduce power loss, they would require additional shielding to protect the magnets from heating and radiation damage. This would make the machine larger and less compact.

Recent advances in high-temperature superconductors could help overcome this problem. The advances enable higher magnetic fields using much thinner magnets than are presently achievable, leading to a reduction in the refrigeration power needed to cool the magnets. Such magnets raise the possibility that an FNSF and associated pilot plants based on the spherical tokamak design could minimize the mass and cost of the main confinement magnets.

For now, the increased power of the NSTX-U and the soon-to-be-completed MAST facility moves them closer to the capability of a commercial plant that will create safe, clean and virtually limitless energy. “NSTX-U and MAST-U will push the physics frontier, expand our knowledge of high temperature plasmas, and, if successful, lay the scientific foundation for fusion development paths based on more compact designs,” said PPPL Director Stewart Prager.

Twice the power and five times the pulse length

The NSTX-U has twice the power and five times the pulse length of its predecessor and will explore how plasma confinement and sustainment are influenced by higher plasma pressure in the spherical geometry. The MAST upgrade will have comparable prowess and will explore a new, state-of-the-art method for exhausting plasmas that are hotter than the core of the sun without damaging the machine.

“The main reason we research spherical tokamaks is to find a way to produce fusion at much less cost than conventional tokamaks require,” said Ian Chapman, the newly appointed chief executive of the United Kingdom Atomic Energy Authority and leader of the UK’s magnetic confinement fusion research program at the Culham Science Centre.

The ability of these machines to create high plasma performance within their compact geometries demonstrates their fitness as possible models for next-step fusion facilities. The wide range of considerations, calculations and figures detailed in this study strongly support the concept of a combined FNSF and pilot plant based on the spherical design. The NSTX-U and MAST-U devices must now successfully prototype the necessary high-performance scenarios.

Read the abstract

J.E. Menard, T. Brown, L. El-Guebaly, M. Boyer, J. Canik, B. Colling, R. Raman, Z. Wang, Y. Zhai, P. Buxton, B. Covele, C. D’Angelo, A. Davis, S. Gerhardt, M. Gryaznevich, M. Harb, T.C. Hender, S. Kaye, D. Kingham, M. Kotschenreuther, S. Mahajan, R. Maingi, E. Marriott, E.T. Meier, L. Mynsberge, C. Neumeyer, M. Ono, J.-K. Park, S.A. Sabbagh, V. Soukhanovskii, P. Valanju and R. Woolley. Fusion nuclear science facilities and pilot plants based on the spherical tokamak. Nucl. Fusion 56 (2016). Published 16 August 2016.

PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit

PPPL researchers combine quantum mechanics and Einstein’s theory of special relativity to clear up puzzles in plasma physics (Phys. Rev. A)

Sketch of a pulsar, center, in binary star system (Photo credit: NASA Goddard Space Flight Center)

By John Greenwald, Princeton Plasma Physics Laboratory Communications

Among the intriguing issues in plasma physics are those surrounding X-ray pulsars — collapsed stars that orbit around a cosmic companion and beam light at regular intervals, like lighthouses in the sky.  Physicists want to know the strength of the magnetic field and density of the plasma that surrounds these pulsars, which can be millions of times greater than the density of plasma in stars like the sun.

Researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) have developed a theory of plasma waves that can infer these properties in greater detail than in standard approaches. The new research analyzes the plasma surrounding the pulsar by coupling Einstein’s theory of relativity with quantum mechanics, which describes the motion of subatomic particles such as the atomic nuclei — or ions — and electrons in plasma. Supporting this work is the DOE Office of Science.

Quantum field theory

Graduate student Yuan Shi (Photo by Elle Starkman/PPPL Office of Communications)

The key insight comes from quantum field theory, which describes charged particles that are relativistic, meaning that they travel at near the speed of light. “Quantum theory can describe certain details of the propagation of waves in plasma,” said Yuan Shi, a graduate student in the Princeton Program in Plasma Physics in Princeton University’s Department of Astrophysical Sciences and lead author of a paper published July 29 in the journal Physical Review A. Understanding the interactions behind the propagation can then reveal the composition of the plasma.

Shi developed the paper with assistance from co-authors Nathaniel Fisch, director of the Princeton Program in Plasma Physics and professor and associate chair of astrophysical sciences at Princeton University, and Hong Qin, a physicist at PPPL and executive dean of the School of Nuclear Science and Technology at the University of Science and Technology of China. “When I worked out the mathematics, they showed me how to apply it,” said Shi.

In pulsars, relativistic particles in the magnetosphere, which is the magnetized atmosphere surrounding the pulsar, absorb light waves, and this absorption displays peaks. “The question is, what do these peaks mean?” asks Shi. Analysis of the peaks with equations from special relativity and quantum field theory, he found, can determine the density and field strength of the magnetosphere.

Combining physics techniques

The process combines the techniques of high-energy physics, condensed matter physics, and plasma physics.  In high-energy physics, researchers use quantum field theory to describe the interaction of a handful of particles. In condensed matter physics, people use quantum mechanics to describe the states of a large collection of particles. Plasma physics uses model equations to explain the collective movement of millions of particles. The new method utilizes aspects of all three techniques to analyze the plasma waves in pulsars.

The same technique can be used to infer the density of the plasma and the strength of the magnetic field created by inertial confinement fusion experiments. Such experiments use lasers to ablate — or vaporize — a target that contains plasma fuel. The ablation then causes an implosion that compresses the fuel into plasma and produces fusion reactions.

Standard formulas give inconsistent answers

Researchers want to know the precise density, temperature and field strength of the plasma that this process creates. Standard mathematical formulas give inconsistent answers when lasers of different colors are used to measure the plasma parameters. This is because the extreme density of the plasma gives rise to quantum effects, while the high energy density of the magnetic field gives rise to relativistic effects, Shi said. So formulations that draw upon both fields are needed to reconcile the results.

For Shi, the new technique shows the benefits of combining physics disciplines that don’t often interact. “Putting fields together gives tremendous power to explain things that we couldn’t understand before,” he said.

Read the abstract

Yuan Shi, Nathaniel J. Fisch, and Hong Qin. Effective-action approach to wave propagation in scalar QED plasmas. Phys. Rev. A 94, 012124 – Published 29 July 2016.

PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by Princeton University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, visit

Unconventional quasiparticles predicted in conventional crystals (Science)

Fermi arcs on the surface of unconventional materials
Two electronic states known as Fermi arcs, localized on the surface of a material, stem from the projection of a new 3-fold degenerate bulk fermion. This new fermion is a cousin of the Weyl fermion discovered last year in another class of topological semimetals. The new fermion has spin-1, a reflection of the 3-fold degeneracy, unlike the spin-½ carried by the recently discovered Weyl fermions.

By Staff

An international team of researchers has predicted the existence of several previously unknown types of quantum particles in materials. The particles — which belong to the class of particles known as fermions — can be distinguished by several intrinsic properties, such as their responses to applied magnetic and electric fields. In several cases, fermions in the interior of the material show their presence on the surface via the appearance of electron states called Fermi arcs, which link the different types of fermion states in the material’s bulk.

The research, published online this week in the journal Science, was conducted by a team at Princeton University in collaboration with researchers at the Donostia International Physics Center (DIPC) in Spain and the Max Planck Institute for Chemical Physics of Solids in Germany. The investigators propose that many of the materials hosting the new types of fermions are “protected metals,” which are metals that do not allow, in most circumstances, an insulating state to develop. This research represents the newest avenue in the physics of “topological materials,” an area of science that has already fundamentally changed the way researchers see and interpret states of matter.

The team at Princeton included Barry Bradlyn and Jennifer Cano, both associate research scholars at the Princeton Center for Theoretical Science; Zhijun Wang, a postdoctoral research associate in the Department of Physics; Robert Cava, the Russell Wellman Moore Professor of Chemistry; and B. Andrei Bernevig, associate professor of physics. The research team also included Maia Vergniory, a postdoctoral research fellow at DIPC, and Claudia Felser, a professor of physics and chemistry and director of the Max Planck Institute for Chemical Physics of Solids.

For the past century, gapless fermions, which are quantum particles with no energy gap between their highest filled and lowest unfilled states, were thought to come in three varieties: Dirac, Majorana and Weyl. Condensed matter physics, which pioneers the study of quantum phases of matter, has become fertile ground for the discovery of these fermions in different materials through experiments conducted in crystals. These experiments enable researchers to explore exotic particles using relatively inexpensive laboratory equipment rather than large particle accelerators.

In the past four years, all three varieties of gapless fermions have been theoretically predicted and experimentally observed in different types of crystalline materials grown in laboratories around the world. The Weyl fermion was thought to be the last of the group of predicted quasiparticles in nature. Research published earlier this year in the journal Nature (Wang et al., doi:10.1038/nature17410) has shown, however, that this is not the case, with the discovery of a bulk insulator that hosts an exotic surface fermion.

In the current paper, the team predicted and classified the possible exotic fermions that can appear in the bulk of materials. The energy of these fermions can be characterized as a function of their momentum into so-called energy bands, or branches. Unlike the Weyl and Dirac fermions, which, roughly speaking, exhibit an energy spectrum with 2- and 4-fold branches of allowed energy states, the new fermions can exhibit 3-, 6- and 8-fold branches. The 3-, 6-, or 8-fold branches meet up at points – called degeneracy points – in the Brillouin zone, which is the parameter space where the fermion momentum takes its values.
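
As a purely illustrative aside (not a calculation from the paper), the 3-fold case can be pictured with a toy spin-1 Hamiltonian H(k) = k·S: its three energy branches all meet at a single degeneracy point at k = 0 and split apart away from it.

```python
# Toy spin-1 Hamiltonian H(k) = kx*Sx + ky*Sy + kz*Sz whose three bands meet at
# k = 0 (an illustrative model only; the paper classifies fermions by the actual
# crystal space-group symmetries).
import numpy as np

# Spin-1 angular-momentum matrices (hbar = 1).
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

def bands(k):
    """Energy eigenvalues of the toy Hamiltonian at momentum k."""
    H = k[0] * Sx + k[1] * Sy + k[2] * Sz
    return np.linalg.eigvalsh(H)

print(bands(np.zeros(3)))                # [0, 0, 0]: the 3-fold degeneracy point
print(bands(np.array([0.3, 0.4, 0.0])))  # approximately [-0.5, 0.0, +0.5]: bands split
```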

“Symmetries are essential to keep the fermions well-defined, as well as to uncover their physical properties,” Bradlyn said. “Locally, by inspecting the physics close to the degeneracy points, one can think of them as new particles, but this is only part of the story,” he said.

Cano added, “The new fermions know about the global topology of the material. Crucially, they connect to other points in the Brillouin zone in nontrivial ways.”

During the search for materials exhibiting the new fermions, the team uncovered a fundamentally new and systematic way of finding metals in nature. Until now, searching for metals involved performing detailed calculations of the electronic states of matter.

“The presence of the new fermions allows for a much easier way to determine whether a given system is a protected metal or not, in some cases without the need to do a detailed calculation,” Wang said.

Vergniory added, “One can just count the number of electrons of a crystal, and figure out, based on symmetry, if a new fermion exists within observable range.”

The researchers suggest that this is because the new fermions require multiple electronic states to meet in energy: The 8-branch fermion requires the presence of 8 electronic states. As such, a system with only 4 electrons can only occupy half of those states and cannot be insulating, thereby creating a protected metal.
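
That counting argument amounts to a one-line check. The toy function below is purely schematic (the paper’s actual criterion rests on a detailed symmetry analysis); it simply asks whether the available electrons can completely fill a symmetry-enforced group of degenerate states.

```python
# Schematic version of the electron-counting argument above: if the filling stops
# partway through a symmetry-enforced group of degenerate states, the material
# cannot be insulating (illustrative only; the real criterion uses the space group).
def can_be_insulating(electrons_per_cell, degenerate_group_size):
    return electrons_per_cell % degenerate_group_size == 0

print(can_be_insulating(4, 8))   # False: 4 electrons half-fill an 8-fold group -> protected metal
print(can_be_insulating(8, 8))   # True: the group can be filled completely
```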

“The interplay between symmetry, topology and material science hinted by the presence of the new fermions is likely to play a more fundamental role in our future understanding of topological materials – both semimetals and insulators,” Cava said.

Felser added, “We all envision a future for quantum physical chemistry where one can write down the formula of a material, look at both the symmetries of the crystal lattice and at the valence orbitals of each element, and, without a calculation, be able to tell whether the material is a topological insulator or a protected metal.”

Read the abstract.

Funding for this study was provided by the US Army Research Office Multidisciplinary University Research Initiative, the US Office of Naval Research, the National Science Foundation, the David and Lucile Packard Foundation, the W. M. Keck Foundation, and the Spanish Ministry of Economy and Competitiveness.

Study models how the immune system might evolve to conquer HIV (PLOS Genetics)

By Katherine Unger Baillie, courtesy of the University of Pennsylvania

It has remained frustratingly difficult to develop a vaccine for HIV/AIDS, in part because the virus, once in our bodies, rapidly reproduces and evolves to escape being killed by the immune system.

“The viruses are constantly producing mutants that evade detection,” said Joshua Plotkin, a professor in the University of Pennsylvania’s Department of Biology in the School of Arts & Sciences. “A single person with HIV may have millions of strains of the virus circulating in the body.”

Yet the body’s immune system can also evolve. Antibody-secreting B-cells compete among themselves to survive and proliferate depending on how well they bind to foreign invaders. They dynamically produce diverse types of antibodies during the course of an infection.

In a new paper in PLOS Genetics, Plotkin, along with postdoctoral researcher Jakub Otwinowski and Armita Nourmohammad, an associate research scholar at Princeton University’s Lewis-Sigler Institute for Integrative Genomics, mathematically modeled these dueling evolutionary processes to understand the conditions that influence how antibodies and viruses interact and adapt to one another over the course of a chronic infection.

Notably, the researchers considered the conditions under which the immune system gives rise to broadly neutralizing antibodies, which can defeat broad swaths of viral strains by targeting the most vital and immutable parts of the viral genome. Their findings, which suggest that presenting the immune system with a large diversity of viral antigens may be the best way to encourage the emergence of such potent antibodies, have implications for designing vaccines against HIV and other chronic infections.

“This isn’t a prescription for how to design an HIV vaccine,” Plotkin said, “but our work provides some quantitative guidance for how to prompt the immune system to elicit broadly neutralizing antibodies.”

The biggest challenge in attempting to model the co-evolution of antibodies and viruses is keeping track of the vast quantity of different genomic sequences that arise in each population during the course of an infection. So the researchers focused on the statistics of the binding interactions between the virus and antibodies.

“This is the key analytical trick to simplify the problem,” said Otwinowski. “It would otherwise be impossible to track and write equations for all the interactions.”

The researchers constructed a model to examine how mutations would affect the binding affinity between antibodies and viruses. Their model calculated the average binding affinities between the entire population of viral strains and the repertoire of antibodies over time to understand how they co-evolve.

“It’s one of the things that is unique about our work,” said Nourmohammad. “We’re not only looking at one virus binding to one antibody but the whole diversity of interactions that occur over the course of a chronic infection.”

What they saw was an S-shaped curve, in which sometimes the immune system appeared to control the infection with high levels of binding, but subsequently a viral mutation would arise that could evade neutralization, and then binding affinities would go down.
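
A deliberately crude toy simulation conveys the flavor of that behavior (the dynamics below are invented and are not the paper’s equations): antibodies drift toward the current viral population, and once binding becomes strong an escape mutant sweeps through and resets it.

```python
# Invented "chase in trait space" toy (not the published model): binding rises
# while antibodies catch up to the virus, then collapses after each viral escape.
import math
import random

random.seed(1)
virus, antibody = 0.0, 0.0
binding_history = []
for step in range(200):
    binding = math.exp(-(virus - antibody) ** 2)   # stand-in binding affinity in [0, 1]
    binding_history.append(binding)
    antibody += 0.2 * (virus - antibody)           # antibodies evolve toward the virus
    if binding > 0.9:                              # strong binding -> an escape mutant sweeps
        virus += random.choice([-1.0, 1.0]) * 2.0

print([round(b, 2) for b in binding_history[:20]])  # repeated rise-and-crash cycles
```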

“The immune system does well if there is active binding between antibodies and virus,” Plotkin said, “and the virus does well if there is not strong binding.”

Such a signature is indicative of a system that is out of equilibrium where the viruses are responding to the antibodies and vice versa. The researchers note that this signature is likely common to many antagonistically co-evolving populations.

To see how well their model matched data from an actual infection, the researchers looked at time-shifted experimental data from two HIV patients, whose antibodies were collected at different time points and then “competed” against the viruses that had been in their bodies at different times during their infections.

They saw that these patient data are consistent with their model: Viruses from earlier time points would be largely neutralized by antibodies collected at later time points but could outcompete antibodies collected earlier in infection.

Finally, the researchers used the model to try to understand the conditions under which broadly neutralizing antibodies, which could defeat most strains of virus, would emerge and rise to prominence.

“Despite the effectiveness of broadly neutralizing antibodies, none of the patients with these antibodies has been cured of HIV,” Plotkin said. “It’s just that by the time they develop them, it’s too late and their T-cell repertoire is depleted. This raises the intriguing idea that, if only they could develop these antibodies earlier in infection, they might be prepared to combat an evolving target.”

“The model that we built,” Nourmohammad said, “was able to show that, if viral diversity is very large, the chance that these broadly neutralizing antibodies outcompete more specifically targeted antibodies and proliferate goes up.”

The finding suggests that, in order for a vaccine to elicit these antibodies, it should present a diverse set of viral antigens to the host. That way no one specialist antibody would have a significant fitness advantage, leaving room for the generalist, broadly neutralizing antibodies to succeed.

The researchers said that there has been little theoretical modeling of co-evolutionary systems such as this one. As such, their work could have implications for other co-evolution scenarios.

“Our theory can also apply to other systems, such as bacteria-phage co-evolution,” said Otwinowski, in which viruses infect bacteria, a process that drives bacterial evolution and ecology.

“It could also shed light on the co-evolution of the influenza virus in the context of evolving global immune systems,” Nourmohammad said.

Read the article.

The work was supported by funding from the U.S. National Science Foundation, James S. McDonnell Foundation, David and Lucile Packard Foundation, U.S. Army Research Office and National Institutes of Health.


Role for enhancers in bursts of gene activity (Cell)


By Marisa Sanders for the Office of the Dean for Research

A new study by researchers at Princeton University suggests that sporadic bursts of gene activity may be important features of genetic regulation rather than just occasional mishaps. The researchers found that snippets of DNA called enhancers can boost the frequency of bursts, suggesting that these bursts play a role in gene control.

The researchers analyzed videos of Drosophila fly embryos undergoing DNA transcription, the first step in the activation of genes to make proteins. In a study published on July 14 in the journal Cell, the researchers found that placing enhancers in different positions relative to their target genes resulted in dramatic changes in the frequency of the bursts.

“The importance of transcriptional bursts is controversial,” said Michael Levine, Princeton’s Anthony B. Evnin ’62 Professor in Genomics and director of the Lewis-Sigler Institute for Integrative Genomics. “While our study doesn’t prove that all genes undergo transcriptional bursting, we did find that every gene we looked at showed bursting, and these are the critical genes that define what the embryo is going to become. If we see bursting here, the odds are we are going to see it elsewhere.”

The transcription of DNA occurs when an enzyme known as RNA polymerase converts the DNA code into a corresponding RNA code, which is later translated into a protein. Researchers were puzzled to find about ten years ago that transcription can be sporadic and variable rather than smooth and continuous.

In the current study, Takashi Fukaya, a postdoctoral research fellow, and Bomyi Lim, a postdoctoral research associate, both working with Levine, explored the role of enhancers on transcriptional bursting. Enhancers are recognized by DNA-binding proteins to augment or diminish transcription rates, but the exact mechanisms are poorly understood.

Until recently, visualizing transcription in living embryos was impossible due to limits in the sensitivity and resolution of light microscopes. A new method developed three years ago has now made that possible. The technique, developed by two separate research groups, one at Princeton led by Thomas Gregor, associate professor of physics and the Lewis-Sigler Institute for Integrative Genomics, and the other led by Nathalie Dostatni at the Curie Institute in Paris, involves placing fluorescent tags on RNA molecules to make them visible under the microscope.

The researchers used this live-imaging technique to study fly embryos at a key stage in their development, approximately two hours after the onset of embryonic life, when the genes undergo fast and furious transcription for about one hour. During this period, the researchers observed a significant ramping up of bursting, in which the RNA polymerase enzymes cranked out a newly transcribed segment of RNA every 10 or 15 seconds over a period of perhaps 4 or 5 minutes per burst. The genes then relaxed for a few minutes, followed by another episode of bursting.

The team then looked at whether the location of the enhancer – either upstream from the gene or downstream – influenced the amount of bursting. In two different experiments, Fukaya placed the enhancer either upstream of the gene’s promoter or downstream of the gene, and saw that the different enhancer positions resulted in distinct responses. When the researchers positioned the enhancer downstream of the gene, they observed periodic bursts of transcription. However, when they positioned the enhancer upstream of the gene, the researchers saw some fluctuations but no discrete bursts. They found that the closer the enhancer is to the promoter, the more frequent the bursting.

To confirm their observations, Lim applied further data analysis methods to tally the amount of bursting that they saw in the videos. The team found that the frequency of the bursts was related to the strength of the enhancer in upregulating gene expression. Strong enhancers produced more bursts than weak enhancers. The team also showed that inserting a segment of DNA called an insulator reduced the number of bursts and dampened gene expression.
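
A minimal, hypothetical version of such a tally (not the study’s analysis pipeline; the trace and threshold below are invented) simply counts contiguous stretches of a fluorescence trace that rise above a threshold:

```python
# Hypothetical threshold-based burst counting on a fluorescence-intensity trace
# (a stand-in for the study's actual analysis; values and threshold are invented).
def count_bursts(trace, threshold):
    """Count contiguous runs of frames whose intensity exceeds the threshold."""
    bursts, in_burst = 0, False
    for value in trace:
        if value > threshold and not in_burst:
            bursts += 1
            in_burst = True
        elif value <= threshold:
            in_burst = False
    return bursts

trace = [0.1, 0.2, 1.5, 1.8, 1.2, 0.3, 0.2, 2.1, 1.9, 0.4, 0.1]
print(count_bursts(trace, threshold=1.0))   # 2 bursts in this made-up trace
```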

In a second series of experiments, Fukaya showed that a single enhancer can simultaneously activate two genes that are located some distance apart on the genome and have separate promoters. It was originally thought that such an enhancer would facilitate bursting at one promoter at a time—that is, it would arrive at a promoter, linger, produce a burst, and come off. Then, it would randomly select one of the two genes for another round of bursting. However, what was instead observed was bursting occurring simultaneously at both genes.

“We were surprised by this result,” Levine said. “Back to the drawing board! This means that traditional models for enhancer-promoter looping interactions are just not quite correct,” Levine said. “It may be that the promoters can move to the enhancer due to the formation of chromosomal loops. That is the next area to explore in the future.”

The study was funded by grants from the National Institutes of Health (U01EB021239 and GM46638).

Access the paper here:

Takashi Fukaya, Bomyi Lim and Michael Levine. Enhancer Control of Transcriptional Bursting. Cell (2016). Published July 14; ePub ahead of print June 9.

Study of individual neurons in flies reveals memory-related changes in gene activity (Cell Reports)

Image of the Drosophila brain (magenta) with a subset of mushroom body neurons expressing green fluorescent protein (GFP) via a genetic marker. This marker was used to harvest these neurons following the learning and memory assay. (Credit: Crocker, et al.)

By Kristin Qian for the Office of the Dean for Research

Researchers at Princeton University have developed a highly sensitive and precise method to explore genes important for memory formation within single neurons of the Drosophila fly brain. With this method, the researchers found an unexpected result: certain genes involved in creating long-term memories in the brain are the same ones that the eye uses for sensing light.

The study, published in the May 17 issue of the journal Cell Reports, demonstrated the utility of the new method and also identified new patterns of gene expression that drive long-term memory formation.

“Ultimately, to understand the brain, we want to know what individual neurons are doing,” said Mala Murthy, assistant professor in the Princeton Neuroscience Institute and the Department of Molecular Biology. “We found that single neurons can be defined by the pattern of their gene expression, even if they’re all in the same brain network.”

To their surprise, the researchers found that many of the active genes in these neurons produce proteins that are best known for their roles in detecting light in the fly’s eye or sensing odor in the fly’s nose. “It is possible that these sensory proteins have been repurposed by the brain for a different function,” Murthy said.

“Even though the paper is focused on the methodology, which I think will be impactful for the field, there is this new science here—a whole new class of molecules we found that is in the central brain and seems to be involved in memory formation,” Murthy said.

Researchers have known that genes “turn on,” or start making proteins, during the formation of long-term memories in Drosophila, a widely used organism in studies of neurobiology, but they didn’t know exactly which genes in which neurons were involved.

To investigate this question, the researchers first trained flies to form long-term memories. Then they extracted single neurons from the fly brains and evaluated all of the gene readouts, or transcripts, which encode proteins. By comparing the transcripts of the memory-trained flies to those of non-trained flies, researchers were able to identify genes involved in long-term memory formation.
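
Conceptually, that comparison boils down to a per-gene fold change between trained and control cells. The sketch below is a minimal illustration with invented gene names and counts; the published analysis uses proper statistical modeling of the transcriptomes rather than a bare ratio.

```python
# Minimal sketch of comparing transcript counts between memory-trained and
# control neurons (gene names and counts are invented; the real analysis uses
# statistical models, not a bare ratio).
import math

trained = {"rhodopsin_like": 120, "odor_receptor_like": 85, "housekeeping": 300}
control = {"rhodopsin_like": 15, "odor_receptor_like": 10, "housekeeping": 290}

for gene in trained:
    log2_fc = math.log2((trained[gene] + 1) / (control[gene] + 1))  # +1 avoids dividing by zero
    flag = "candidate memory-related gene" if abs(log2_fc) > 1 else ""
    print(f"{gene:20s} log2 fold change = {log2_fc:+.2f}  {flag}")
```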

The task was complicated by the tiny size of the fly’s head, which is just one millimeter across, and contains fewer than 100,000 neurons. Murthy’s team focused on neuron types in one part of the brain, the mushroom body, named for its distinctive shape.

First author Amanda Crocker, a former postdoctoral fellow in Murthy’s lab and now an assistant professor of neuroscience at Middlebury College, conducted the experiments in collaboration with co-authors Xiao-Juan Guan, a senior research specialist in the Princeton Neuroscience Institute; Coleen Murphy, professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics; and Murthy.

“Our work opens up the ability to use Drosophila as a way to study how gene expression in single neurons relates to brain function,” Crocker said. “This has been a challenge because the fly brain is very small and contains fewer neurons than other organisms that neuroscientists study. The advantage of using flies is that they have significantly less redundancy in the neurons that they do have. We can look at specific neurons and gene expression, and ask what the genes are doing in that cell to cause the behavior.”

The researchers trained the flies to form long-term memories by exposing them to an odor – either an earthy, mushroom-like smell (3-octanol) or a menthol-like smell (4-methylcyclohexanol) – while simultaneously delivering a negative stimulus in the form of an electric shock.

Flies experience two odor spaces in each tube. If neither odor has been paired with electric shock, flies spend an equal amount of time on both sides of the tube (control). If one of the odors is paired with electric shock, flies avoid that side of the tube. For example, flies trained to associate the odor 3-OCT with electric shock avoided the red side (containing 3-OCT) of the tube. (Credit: Murthy lab, Princeton University)

The training took place in a tube containing the two odors, one at each end of the tube. Researchers paired one of the odors with the electric shock, and as a result the fly avoided that end of the tube. The assay was conducted in the dark, so that the flies could use only their sense of smell, not their vision, to navigate the tube.

A second group of flies received the electric shock and the odor, but not at the same time, so they did not form the memory that linked odor to shock.

The researchers then isolated single neurons from the fly brains using tiny glass tubes to suction out the cells. Harvesting neurons using this technique is not common, Murthy said, and it had not been combined with a complete analysis of gene activity in fly neurons before. With this novel method, they were able to use only 10 to 90 femtograms – a quintillionth of a kilogram – of genetic material.

They evaluated gene activity by looking at the production of messenger ribonucleic acid (mRNA), an intermediary between DNA and proteins. The result is a “transcriptome,” or readout of all of the genetic messages that the cell uses to produce proteins. The researchers then read the transcriptome to see which genes produced proteins in the memory-trained flies versus the non-trained flies, and found that some of the active genes in memory-trained flies were the same as ones used in the sensory organs to detect light, odors and taste.

To follow up, the researchers bred mutant flies that lacked genes for some of the light-sensing proteins and thus could not see. The same memory experiments as before were carried out, and the researchers confirmed that the flies lacking light-sensing proteins were both unable to see and unable to form long-term memories.

The discovery of the expression of genes for classical ‘light-sensing’ proteins, such as rhodopsin, as well as other sensory-related proteins for odor and taste detection, was unexpected because these proteins were not known to be utilized in mushroom bodies, Murthy said. Although studies in other organisms, including humans, have detected sensory genes in areas of the brain unrelated to the sensory organ itself, this may be the first study to link these genes to memory formation.

The study was funded by a National Institutes of Health Ruth L. Kirschstein Institutional National Research Service Award, the Alfred P. Sloan Foundation, the Human Frontier Science Program, a National Science Foundation (NSF) CAREER award, the McKnight Endowment Fund for Neuroscience, the Klingenstein Foundation, a National Institutes of Health New Innovator award, and an NSF BRAIN Initiative EAGER award. The study was also funded in part through Princeton’s Glenn Center for Quantitative Aging Research, directed by Coleen Murphy.

The paper, “Cell-Type-Specific Transcriptome Analysis in the Drosophila Mushroom Body Reveals Memory-Related Changes in Gene Expression,” was published in the May 17 issue of Cell Reports.

Read the journal article.

Scientists capture the elusive structure of essential digestive enzyme (JACS)

Stylized graphic of data on the structure of an active form of an important digestive enzyme, phenylalanine hydroxylase. The cyan cross-section shows the elution profile and the magenta cross-section shows the scattering profile. At right is the structure of the activated phenylalanine hydroxylase. Image source: Ando et al.

By Tien Nguyen, Department of Chemistry

Using a powerful combination of techniques from biophysics to mathematics, researchers have revealed new insights into the mechanism of a liver enzyme that is critical for human health. The enzyme, phenylalanine hydroxylase, turns the essential amino acid phenylalanine – found in eggs, beef and many other foods and as an additive in diet soda —into tyrosine, a precursor for multiple important neurotransmitters.

“We need phenylalanine hydroxylase to control levels of phenylalanine in the blood because too much is toxic to the brain,” said Steve Meisburger, lead author on the study and a post-doctoral researcher in the Ando lab. Genetic mutations in phenylalanine hydroxylase can lead to disorders such as phenylketonuria, an inherited condition that can cause intellectual and behavioral disabilities unless detected at birth and managed through dietary restrictions.

Published earlier this month in the Journal of the American Chemical Society, the article presented detailed structural data on the enzyme’s active state – the shape it adopts when performing its chemical duties – that has eluded scientists for years.

“It’s a floppy enzyme which means it’s dynamic,” said Nozomi Ando, an assistant professor of chemistry at Princeton and corresponding author on the paper. “That also means it doesn’t like to crystallize,” she said. This is problematic for the classic method used to study enzymatic structure, known as x-ray crystallography, which requires solid crystal samples. Efforts to crystallize phenylalanine hydroxylase have just recently met success, but still only captured the enzyme in its inactive state.

The researchers in the Ando lab were able to bypass the tricky task of growing crystals of the active enzyme by using their expertise in a special technique akin to crystallography, called small angle x-ray scattering (SAXS), which allows scientists to study enzymes in a solution. And because the enzyme is susceptible to aggregation or clumping up in solution, the researchers coupled their scattering method with a purification technique called size exclusion chromatography (SEC), in which different species in a sample flow through a column at different speeds based on their size.

Steve Meisburger (left) and Nozomi Ando (right)

“Pairing SEC with SAXS is an emergent technique. Our contribution is that we saw a clever way to use it,” Ando said. The experiment is highly specialized and relies on powerful x-rays emitted by particles speeding around the circular track at a synchrotron facility. The research team traveled from Princeton to the Cornell High Energy Synchrotron Source in Ithaca, New York, for multiple intensive data-collection sessions. “Any time on the machine that is available, we use it. Not a single photon gets wasted,” Ando said.

As the enzyme solution flows through the purification column and across the path of the x-ray beam, the researchers record snapshots of the x-ray scattering patterns. The resulting dataset is quite complex, as the sample also contains phenylalanine, the compound that “turns on” phenylalanine hydroxylase so that researchers can catch the dynamic enzyme in action.

“Current approaches for analyzing this type of dataset are very crude,” Meisburger said. Essentially, these methods assume that each signal – known as an elution peak – represents a single species, when each peak is actually a mixture of species. In this work, the team used an advanced linear algebra method known as evolving factor analysis that allowed them to separate the scattering components. “We can use these linear algebra methods to ‘un-mix’ species that are overlapping,” Meisburger said. “That’s the piece that I think is really exciting.”
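
The underlying linear algebra can be illustrated with synthetic data (this is a toy, not the authors’ evolving-factor-analysis code): a series of scattering snapshots forms a matrix that is ideally the product of per-species concentration profiles and per-species scattering curves, and its singular values reveal how many species are hiding under the overlapping peaks.

```python
# Toy illustration of un-mixing overlapping SEC-SAXS signals: the snapshot matrix
# D is (approximately) concentration profiles C times per-species scattering
# curves S, so the singular values of D reveal how many species are present.
# (Synthetic data; not the paper's evolving factor analysis.)
import numpy as np

frames, q_points = 100, 50
x = np.linspace(0, 1, frames)
# Two overlapping elution peaks (concentration profiles for two species).
C = np.column_stack([np.exp(-((x - 0.40) / 0.1) ** 2),
                     np.exp(-((x - 0.55) / 0.1) ** 2)])
# Two distinct, made-up scattering curves on a momentum-transfer grid q.
q = np.linspace(0.01, 0.5, q_points)
S = np.vstack([np.exp(-(20 * q) ** 2 / 3), np.exp(-(35 * q) ** 2 / 3)])
D = C @ S + 1e-4 * np.random.default_rng(0).standard_normal((frames, q_points))

singular_values = np.linalg.svd(D, compute_uv=False)
print(np.round(singular_values[:4], 3))   # two large values, then noise-level ones
```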

By applying their unique approach, the researchers were able to provide evidence for a model of the active structure of phenylalanine hydroxylase that builds upon recent work by their collaborators in Paul Fitzpatrick’s group at UT Health Science Center at San Antonio. In this model, two phenylalanine molecules dock to a pair of sites on the enzyme, bringing a pair of arms together and freeing up the active sites for doing chemistry once more phenylalanine molecules come along.

“I’m very proud that this is our first paper [published since Ando joined the faculty at Princeton]. We wanted it to be very quantitative and heavy on the biochemistry plus heavy on the physical chemistry. I’m really pleased with the way it turned out,” Ando said.

This work was supported by National Institutes of Health grants GM100008 and GM098140 and Welch Foundation grant AQ-1245.

Access the paper here:

Meisburger, S. P.; Taylor, A. B.; Khan, C. A.; Zhang, S.; Fitzpatrick, P. F.; Ando, N. “Domain movements upon activation of phenylalanine hydroxylase characterized by crystallography and chromatography-coupled small-angle X-ray scattering.” J. Am. Chem. Soc. 2016, 138 (20), 6506–6516. DOI: 10.1021/jacs.6b01563. Published online May 4, 2016.



Theorists smooth the way to solving one of quantum mechanics’ oldest problems: Modeling quantum friction (J. Phys. Chem. Letters)

From left to right: Herschel Rabitz, Renan Cabrera, Andre Campos and Denys Bondar. Photo credit: C. Todd Reichart

By Tien Nguyen, Department of Chemistry

Theoretical chemists at Princeton University have pioneered a strategy for modeling quantum friction, or how a particle’s environment drags on it, a vexing problem in quantum mechanics since the birth of the field. The study was published in the Journal of Physical Chemistry Letters.

“It was truly a most challenging research project in terms of technical details and the need to draw upon new ideas,” said Denys Bondar, a research scholar in the Rabitz lab and corresponding author on the work.

Quantum friction may operate at the smallest scale, but its consequences can be observed in everyday life. For example, when fluorescent molecules are excited by light, it’s because of quantum friction that the atoms are returned to rest, releasing photons that we see as fluorescence. Realistically modeling this phenomenon has stumped scientists for almost a century and recently has gained even more attention due to its relevance to quantum computing.

“The reason why this problem couldn’t be solved is that everyone was looking at it through a certain lens,” Bondar said. Previous models attempted to describe quantum friction by considering the quantum system as interacting with a surrounding, larger system. This larger system presents an impossible amount of calculations, so in order to simplify the equations to the pertinent interactions, scientists introduced numerous approximations.

These approximations led to numerous different models that could each only satisfy one or the other of two critical requirements. In particular, they could either produce useful observations about the system, or they could obey the Heisenberg Uncertainty Principle, which states that there is a fundamental limit to the precision with which a particle’s position and momentum can be simultaneously measured. Even famed physicist Werner Heisenberg’s attempt to derive an equation for quantum friction was incompatible with his own uncertainty principle.

The researchers’ approach, called operational dynamic modeling (ODM) and introduced in 2012 by the Rabitz group, led to the first model for quantum friction to satisfy both demands. “To succeed with the problem, we had to literally rethink the physics involved, not merely mathematically but conceptually,” Bondar said.

Bondar and his colleagues focused on the two ultimate requirements for their model – that it should obey the Heisenberg principle and produce real observations – and worked backwards to create the proper model.

“Rather than starting with approximations, Denys and the team built in the proper physics in the beginning,” said Herschel Rabitz, the Charles Phelps Smyth ’16 *17 Professor of Chemistry and co-author on the paper. “The model is built on physical and mathematical truisms that must hold. This distinct approach creates a new rigorous and practical formulation for quantum friction,” he said.

The research team included research scholar Renan Cabrera and Ph.D. student Andre Campos as well as Shaul Mukamel, professor of chemistry at the University of California, Irvine.

Their model opens a way forward to understand not only quantum friction but other dissipative phenomena as well. The researchers are interested in exploring the means to manipulate these forces to their advantage. Other theorists are rapidly taking up the new paradigm of operational dynamic modeling, Rabitz said.

Reflecting on how they arrived at such a novel approach, Bondar recalled the unique circumstances under which he first started working on this problem. After he received the offer to work at Princeton, Bondar spent four months awaiting a US work visa (he is a citizen of Ukraine) and pondering fundamental physics questions. It was during this time that he first thought of this strategy. “The idea was born out of bureaucracy, but it seems to be holding up,” Bondar said.

Read the full article here:

Bondar, D. I.; Cabrera, R.; Campos, A.; Mukamel, S.; Rabitz, H. A. “Wigner-Lindblad Equations for Quantum Friction.” J. Phys. Chem. Lett. 2016, 7, 1632.

This work was supported by the US National Science Foundation CHE 1058644, the US Department of Energy DE-FG02-02ER-15344, and ARO-MURI W911NF-11-1-0268.