Warm nights could flood the atmosphere with carbon under climate change (PNAS)


Amazonian tropical rainforest near Manaus, Brazil. Photo courtesy of William Anderegg, Princeton University.

By Morgan Kelly, Office of Communications

The warming effects of climate change usually conjure up ideas of parched and barren landscapes broiling under a blazing sun, its heat amplified by greenhouse gases. But a study led by Princeton University researchers suggests that hotter nights may actually wield much greater influence over the planet’s atmosphere as global temperatures rise — and could eventually lead to more carbon flooding the atmosphere.

Since measurements began in 1959, nighttime temperatures in the tropics have had a strong influence over year-to-year shifts in the land’s carbon-storage capacity, or “sink,” the researchers report in the journal Proceedings of the National Academy of Sciences. Earth’s ecosystems absorb about 25 percent of the excess carbon from the atmosphere, and tropical forests account for about one-third of land-based plant productivity.

During the past 50 years, the land-based carbon sink’s “interannual variability” has grown by 50 to 100 percent, the researchers found. They used climate data and satellite imagery to determine which of various climate factors — including rainfall, drought and daytime temperatures — had the greatest effect on the carbon sink’s swings. The strongest association was with variations in tropical nighttime temperatures, which have risen by about 0.6 degrees Celsius since 1959.
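The driver-comparison step can be sketched with synthetic data. Everything below is fabricated for illustration — it is not the study’s dataset or statistical method, just the general shape of a “which driver correlates most strongly?” analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated yearly anomalies for several candidate climate drivers
# (roughly the 1959-2011 span the article describes).
years = 53
night_temp = rng.normal(0, 1, years)   # tropical nighttime temperature anomaly
rainfall   = rng.normal(0, 1, years)
day_temp   = rng.normal(0, 1, years)

# Suppose the sink anomaly is driven mostly by nighttime temperature
# (warmer nights -> more respiration -> weaker sink), plus noise.
sink_anomaly = -0.8 * night_temp + 0.2 * rainfall + rng.normal(0, 0.3, years)

drivers = {"night_temp": night_temp, "rainfall": rainfall, "day_temp": day_temp}
corrs = {name: np.corrcoef(series, sink_anomaly)[0, 1]
         for name, series in drivers.items()}

# The driver with the largest |correlation| is flagged as the dominant one.
strongest = max(corrs, key=lambda name: abs(corrs[name]))
print(strongest, round(corrs[strongest], 2))
```

With the planted signal, the comparison correctly singles out nighttime temperature as the dominant driver.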

First author William Anderegg, an associate research scholar in the Princeton Environmental Institute, explained that he and his colleagues determined that warm nighttime temperatures lead plants to put more carbon into the atmosphere through a process known as respiration.

Just as people are more active on warm nights, so too are plants. Although plants take up carbon dioxide from the atmosphere, they also internally consume sugars to stay alive. That process, known as respiration, produces carbon dioxide. Plants step up respiration in warm weather, Anderegg said. The researchers found that yearly variations in the carbon sink strongly correlated with variations in plant respiration.

“When you heat up a system, biological processes tend to increase,” Anderegg said. “At hotter temperatures, plant respiration rates go up and this is what’s happening during hot nights. Plants lose a lot more carbon than they would during cooler nights.”
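One standard way to formalize this temperature sensitivity is the Q10 rule, under which respiration multiplies by a fixed factor for every 10 °C of warming. The formalism and the Q10 ≈ 2 value below are textbook conventions in ecophysiology, not figures taken from the paper:

```python
def respiration_rate(temp_c, r_ref=1.0, temp_ref=25.0, q10=2.0):
    """Q10 temperature response: the rate multiplies by q10 per 10 deg C
    of warming. r_ref is the rate at temp_ref; q10 = 2 is a common
    illustrative value, not the paper's fitted model."""
    return r_ref * q10 ** ((temp_c - temp_ref) / 10.0)

# The article notes tropical nights have warmed about 0.6 deg C since 1959.
baseline = respiration_rate(25.0)
warmed = respiration_rate(25.6)
print(f"relative increase: {warmed / baseline - 1:.1%}")  # roughly 4%
```

Even a fraction-of-a-degree shift compounds across an entire biome, which is why small nighttime warming can move the carbon sink measurably.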

Previous research has shown that nighttime temperatures have risen significantly faster as a result of climate change than daytime temperatures, Anderegg said. This means that in future climate scenarios respiration rates could increase to the point that the land is putting more carbon into the atmosphere than it’s taking out, “which would be disastrous,” he said.

Of course, plants consume carbon dioxide as a part of photosynthesis, during which they convert sunlight into energy. Photosynthesis also is sensitive to rises in temperature, but it occurs only during the day, whereas respiration occurs at all hours and thus is more sensitive to nighttime warming, Anderegg said.

“Nighttime temperatures have been increasing faster than daytime temperatures and will continue to rise faster,” Anderegg said. “This suggests that tropical ecosystems might be more vulnerable to climate change than previously thought, risking crossing the threshold from a carbon sink to a carbon source. But there’s certainly potential for plants to acclimate their respiration rates and that’s an area that needs future study.”

This research was supported by the National Science Foundation MacroSystems Biology Grant (EF-1340270), RAPID Grant (DEB-1249256) and EAGER Grant (1550932); and a National Oceanic and Atmospheric Administration (NOAA) Climate and Global Change postdoctoral fellowship administered by the University Corporation of Atmospheric Research.

William R. L. Anderegg, Ashley P. Ballantyne, W. Kolby Smith, Joseph Majkut, Sam Rabin, Claudie Beaulieu, Richard Birdsey, John P. Dunne, Richard A. Houghton, Ranga B. Myneni, Yude Pan, Jorge L. Sarmiento, Nathan Serota, Elena Shevliakova, Pieter Tan and Stephen W. Pacala. “Tropical nighttime warming as a dominant driver of variability in the terrestrial carbon sink.” Proceedings of the National Academy of Sciences, published online in advance of print Dec. 7, 2015. DOI: 10.1073/pnas.1521479112.

 

PPPL physicists propose new plasma-based method to treat radioactive waste (Journal of Hazardous Materials)

Caption: Securing a shipment of mixed, low-level waste from Hanford for treatment and disposal. Credit: U.S. Department of Energy


By Raphael Rosen, Princeton Plasma Physics Laboratory Communications

Physicists at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) are proposing a new way to process nuclear waste that uses a plasma-based centrifuge. Known as plasma mass filtering, the new mass-separation technique would supplement chemical techniques, and the combined approach could reduce both the cost of nuclear waste disposal and the amount of byproducts produced during the process. This work was supported by PPPL’s Laboratory Directed Research and Development Program.

“The safe disposal of nuclear waste is a colossal problem,” said Renaud Gueroult, staff physicist at PPPL and lead author of the paper that appeared in the Journal of Hazardous Materials in October. “One solution might be to supplement existing chemical separation techniques with plasma separation techniques, which could be economically attractive, ideally leading to a reevaluation of how nuclear waste is processed.”

The immediate motivation for safe disposal is the radioactive waste currently stored at the Hanford Site, a facility in Washington State that produced plutonium for nuclear weapons during the Cold War. This waste originally totaled 54 million gallons, stored in 177 underground tanks.

In 2000, Hanford engineers began building machinery that would encase the radioactive waste in glass. The method, known as “vitrification,” had been used at another Cold War-era nuclear production facility since 1996. A multibillion-dollar vitrification plant is currently under construction at the Hanford site.

To reduce the cost of high-level waste vitrification and disposal, it may be advantageous to pack more waste into each glass canister, reducing the number of canisters needed. To shrink the volume to be vitrified, nonradioactive components, such as aluminum and iron, could be separated out, leaving less material to be vitrified. However, in its 2014 report, the DOE Task Force on Technology Development for Environmental Management argued that, “without the development of new technology, it is not clear that the cleanup can be completed satisfactorily or at any reasonable cost.”

The high-throughput, plasma-based, mass separation techniques advanced at PPPL offer the possibility of reducing the volume of waste that needs to be immobilized in glass. “The interesting thing about our ideas on mass separation is that it is a form of magnetic confinement, so it fits well within the Laboratory’s culture,” said physicist Nat Fisch, co-author of the paper and director of the Princeton University Program in Plasma Physics. “To be more precise, it is ‘differential magnetic confinement’ in that some species are confined while others are lost quickly, which is what makes it a high-throughput mass filter.”

How would a plasma-based mass filter system work? The method begins by atomizing and ionizing the hazardous waste and injecting it into the rotating filter, where the individual elements can be influenced by electric and magnetic fields. The filter then separates the lighter elements from the heavier ones using centrifugal and magnetic forces. The lighter elements are typically less radioactive than the heavier ones and often do not need to be vitrified. Processing the high-level waste would therefore require fewer high-level glass canisters overall, while the less radioactive material could be immobilized in less costly waste forms (e.g., concrete, bitumen).
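The separation principle can be illustrated with a toy force balance for a rotating plasma column. All parameters below are invented, and this is deliberately simplified mechanics, not the PPPL or Archimedes filter design: an ion stays radially confined while the inward electric force q·E_r exceeds the outward centrifugal force m·ω²·r, giving a cutoff mass m_cut = q·E_r / (ω²·r) above which ions are flung out.

```python
# Toy rotating-plasma mass filter: parameters chosen only so the cutoff
# falls between a light structural element and a heavy actinide-like ion.
AMU = 1.660539e-27   # kg per atomic mass unit
Q   = 1.602177e-19   # C, singly charged ion

def is_confined(mass_amu, e_r=52.0, omega=1.0e4, r=0.5):
    """True if the inward electric force beats the centrifugal force.
    e_r (V/m), omega (rad/s) and r (m) are illustrative values."""
    m_cut = Q * e_r / (omega ** 2 * r)   # cutoff mass in kg
    return mass_amu * AMU < m_cut

print(is_confined(27))    # aluminum-like light ion: stays confined
print(is_confined(238))   # heavy actinide-like ion: ejected
```

With these made-up numbers the cutoff sits near 100 amu, so light nonradioactive elements remain in the plasma while heavy species are centrifuged out for vitrification.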

The new technique would also be more widely applicable than traditional chemical-based methods since it would depend less on the nuclear waste’s chemical composition. While “the waste’s composition would influence the performance of the plasma mass filter in some ways, the effect would most likely be less than that associated with chemical techniques,” said Gueroult.

Gueroult explained why the savings from plasma techniques could be significant. “For only about $10 a kilogram in energy cost, solid waste can be ionized. In its ionized form, the waste can then be separated into heavy and light components. Because the waste is atomized, the separation proceeds only on the basis of atomic mass, without regard to the chemistry. Since the total cost of chemical-based techniques can be $2,000 per kilogram of the vitrified waste, as explained in the Journal of Hazardous Materials paper, it stands to reason that even if several plasma-based steps are needed to achieve pure enough separation, there is in principle plenty of room to cut the overall costs. That is the point of our recent paper. It is also why we are excited about our plasma-based methods.”
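The arithmetic behind this argument is simple enough to check directly, using only the round per-kilogram figures quoted above (other plasma-system costs are excluded here, as in the quote):

```python
# Round figures quoted in the article, per kilogram of waste.
ionization_cost_per_kg = 10.0     # energy cost to ionize solid waste
chemical_total_per_kg = 2000.0    # total cost of the chemical route

# Even if purity demanded several plasma passes, the ionization energy
# budget stays a small fraction of the chemical-route cost.
for n_passes in (1, 5, 10):
    plasma_energy = n_passes * ionization_cost_per_kg
    print(f"{n_passes:2d} passes: ${plasma_energy:6.0f}/kg "
          f"({plasma_energy / chemical_total_per_kg:.1%} of chemical route)")
```

Ten full passes still amount to only 5% of the quoted chemical-processing cost, which is the headroom Gueroult is pointing to.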

Fisch notes that “our original ideas grew out of the thesis of Abe Fetterman, who began by considering centrifugal mirror confinement for nuclear fusion, but then realized the potential for mass separation. Now the key role on this project is being played by Renaud, who has developed the concept substantially further.”

According to Fisch, the current developments are a variation and refinement of a plasma-based mass separation system first advanced by a private company called Archimedes Technology Group. That company, started by the late Dr. Tihiro Ohkawa, a fusion pioneer, raised private capital to advance a plasma-based centrifuge concept to clean up the legacy waste at Hanford, but ceased operation in 2006 after failing to receive federal funding.

Now an updated understanding of the complexity of the Hanford problem, combined with an increased appreciation of new ideas, has led to renewed federal interest in waste-treatment solutions. Completion of the main waste processing operations, which in 2002 was projected for 2028, has slipped by 20 years over the last 13 years, and the total cleanup cost is now estimated by the Department of Energy to exceed $250 billion, according to the DOE Office of Inspector General, Office of Audits and Inspections. DOE, which has the responsibility of cleaning up the legacy nuclear waste at Hanford and other sites, conducted a Basic Research Needs Workshop on nuclear waste cleanup in July that both Fisch and Gueroult attended. The report of that workshop, which is expected to highlight new approaches to the cleanup problem, is due out this fall.

PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by Princeton University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Read the abstract.

Renaud Gueroult, David T. Hobbs, Nathaniel J. Fisch. “Plasma filtering techniques for nuclear waste remediation.” Journal of Hazardous Materials, published October 2015. DOI: 10.1016/j.jhazmat.2015.04.058.

Dolphin-disease outbreak shows how to account for the unknown when tracking epidemics (Journal of the Royal Society Interface)

By Morgan Kelly, Office of Communications

Common bottlenose dolphin. Image credit: Allison Henry, NOAA.


Stopping the outbreak of a disease hinges on a wealth of data such as what makes a suitable host and how a pathogen spreads. But gathering these data can be difficult for diseases in remote areas of the world, or for epidemics involving wild animals.

A new study led by Princeton University researchers and published in the Journal of the Royal Society Interface explores an approach to studying epidemics for which details are difficult to obtain. The researchers analyzed the 2013 outbreak of dolphin morbillivirus — a potentially fatal pathogen from the same family as the human measles virus — that resulted in more than 1,600 bottlenose dolphins becoming stranded along the Atlantic coast of the United States by 2015. Because scientists were able to observe dolphins only after they washed up on shore, little is known about how the disease is transmitted and persists in the wild.

The researchers used a Poisson process — a statistical tool used to model the random nature of disease transmission — to determine from sparse data how dolphin morbillivirus can spread. They found that individual bottlenose dolphins may be infectious for up to a month and can spread the disease over hundreds of miles, particularly during seasonal migrations. In 2013, the height of disease transmission occurred toward the end of summer around an area offshore of Virginia Beach, Virginia, where multiple migratory dolphin groups are thought to cross paths.

In the interview below, first author Sinead Morris, a graduate student in ecology and evolutionary biology, explains what the researchers learned about the dolphin morbillivirus outbreak, and how the Poisson process can help scientists understand human epidemics. Morris is in the research group of co-author Bryan Grenfell, Princeton’s Kathryn Briger and Sarah Fenton Professor of Ecology and Evolutionary Biology and Public Affairs.

Q: How does the Poisson process track indirectly observed epidemics and what specific challenges does it overcome?

A: One of the main challenges in modeling indirectly observed epidemics is a lack of data. In our case, we had information on all infected dolphins that had been found stranded on shore, but had no data on the number of individuals that became infected but did not strand. The strength of the Poisson process is that its simple framework can be used to extract important information about how the disease is spreading across space and time, despite such incomplete data. Essentially, the process keeps track of where and when individual dolphins stranded; at each new point in the epidemic, it uses the history of what has happened to project what will happen next. For example, an infected individual is more likely to transmit the disease onwards to other individuals in close spatial proximity than to those far away. So, by keeping track of all these infections, the model can identify where and when the largest risk of new infections will be.
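A minimal sketch of such a history-dependent (self-exciting) spatiotemporal intensity follows. The kernel shapes and every parameter below are invented for illustration — this is the general flavor of the approach, not the paper’s fitted model:

```python
import math

# Fabricated stranding records: (day of epidemic, latitude in degrees).
strandings = [(10.0, 36.5), (12.0, 36.6), (14.0, 36.4)]

def intensity(t, x, mu=0.01, alpha=1.0, decay_days=30.0, sigma_deg=0.5):
    """Expected rate of new strandings at time t and latitude x.
    Each past stranding adds a contribution that fades exponentially in
    time and falls off as a Gaussian in space (illustrative kernels)."""
    lam = mu  # constant background rate
    for t_i, x_i in strandings:
        if t_i < t:  # only the history before t contributes
            temporal = math.exp(-(t - t_i) / decay_days)
            spatial = math.exp(-((x - x_i) ** 2) / (2 * sigma_deg ** 2))
            lam += alpha * temporal * spatial
    return lam

# Risk is far higher near the recent cluster than hundreds of km away.
print(intensity(15.0, 36.5))   # near the cluster off Virginia Beach
print(intensity(15.0, 30.0))   # well to the south
```

This captures the key idea Morris describes: each observed stranding raises the projected risk nearby and soon after, so sparse stranding data alone can map where and when transmission risk peaks.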

Q: Why was this 2013-15 outbreak of dolphin morbillivirus selected for study, and what key insights does this work provide?

A: The recent outbreak of dolphin morbillivirus spread rapidly along the northwestern Atlantic coast from New York to Florida, causing substantial mortality among coastal bottlenose dolphin populations. Despite the clear detrimental impact that this disease can have, however, it is still poorly understood. Therefore, our aim in modeling the epidemic was to gain much needed information about how the virus spreads. We found that a dolphin may be infectious for up to 24 days and can travel substantial distances (up to 220 kilometers, or 137 miles) within this time. This is important because such long-range movements — for example, during periods of seasonal migration — are likely to create many transmission opportunities from infected to uninfected individuals, and may have thus facilitated the rapid spread of the virus down the Atlantic coast.

Q: Can this model be used for human epidemics?

A: The Poisson process framework was originally developed to model the occurrence of earthquakes, and has since been used in a variety of other contexts that also tend to suffer from noisy, indirectly observed data, such as urban crime distribution. To model dolphin morbillivirus, we adapted the framework to incorporate more biological information, and similar techniques have also been applied to model meningococcal disease in humans, which can cause meningitis and sepsis. Generally, the data characterizing human epidemics are more detailed than the data we had for this project and, as such, models that can incorporate greater complexity are more widely used. However, we hope that our methods will stimulate the greater use of Poisson process models in epidemiological systems that also suffer from indirectly observed data.


A new study led by Princeton University researchers used a Poisson process to analyze sparse data from the 2013 outbreak of morbillivirus among bottlenose dolphins along the United States’ Atlantic coast. This graph shows the model predictions of how the risk of disease transmission (marginal hazard) changes over space (A) and time (B) since the beginning of the epidemic. The peaks indicate that the greatest risk of transmission occurred around day 70 of the epidemic between 36 and 37 degrees north latitude, which is an area that encompasses the offshore waters of Virginia Beach, Virginia. These peaks coincide with a period towards the end of summer when large numbers of dolphins are known to gather around Virginia Beach as their seasonal migratory ranges overlap. (Image courtesy of Sinead Morris, Princeton University)

This research was supported by the RAPIDD program of the Science and Technology Directorate of the Department of Homeland Security; the National Institutes of Health Fogarty International Center; the Bill and Melinda Gates Foundation; and the Marine Mammal Unusual Mortality Event Contingency Fund and John H. Prescott Marine Mammal Rescue Assistance Grant Program operated by the National Oceanic and Atmospheric Administration.

Read the abstract.

Sinead E. Morris, Jonathan L. Zelner, Deborah A. Fauquier, Teresa K. Rowles, Patricia E. Rosel, Frances Gulland and Bryan T. Grenfell. “Partially observed epidemics in wildlife hosts: modeling an outbreak of dolphin morbillivirus in the northwestern Atlantic, June 2013–2014.” Journal of the Royal Society Interface, published Nov. 18, 2015. DOI: 10.1098/rsif.2015.0676.

 

Identifying new sources of turbulence in spherical tokamaks (Physics of Plasmas)

By John Greenwald, Princeton Plasma Physics Laboratory Communications


Computer simulation of turbulence in a model of the NSTX-U, a spherical tokamak fusion facility at the U.S. Dept. of Energy’s Princeton Plasma Physics Laboratory. Credit: Eliot Feibush

For fusion reactions to take place efficiently, the atomic nuclei that fuse together in plasma must be kept sufficiently hot. But turbulence in the plasma that flows in facilities called tokamaks can cause heat to leak from the core of the plasma to its outer edge, causing reactions to fizzle out.

Researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) have for the first time modeled previously unsuspected sources of turbulence in spherical tokamaks, an alternative design for producing fusion energy. The findings, published online in October in Physics of Plasmas, could influence the development of future fusion facilities. This work was supported by the DOE Office of Science.

Spherical tokamaks, like the recently completed National Spherical Torus Experiment-Upgrade (NSTX-U) at PPPL, are shaped like cored apples compared with the doughnut-like shape of conventional tokamaks that are more widely used. The cored-apple shape provides some distinct characteristics for the behavior of the plasma inside.

The paper, with PPPL principal research physicist Weixing Wang as lead author, identifies two important new sources of turbulence based on data from experiments on the National Spherical Torus Experiment prior to its upgrade. The discoveries were made by using state-of-the-art large-scale computer simulations. These sources are:

  • Instabilities caused by plasma that flows faster in the center of the fusion facility than toward the edge when rotating strongly in L-mode — or low-confinement — regimes. These instabilities, called “Kelvin-Helmholtz modes” after physicists Lord Kelvin and Hermann von Helmholtz, act like wind that stirs up waves as it blows over water, and this study is the first to find them relevant for realistic fusion experiments. Such non-uniform plasma flows have been known to play favorable roles in fusion plasmas in conventional and spherical tokamaks; the new results suggest that these flows may also need to be kept at an optimized level.
  • Trapped electrons that bounce between two points in a section of the tokamak instead of swirling all the way around the facility. These electrons were shown to cause significant leakage of heat in H-mode — or high-confinement — regimes by driving a specific instability when they collide frequently. This type of instability is believed to play little role in conventional tokamaks but can provide a robust source of plasma turbulence in spherical tokamaks.

Most interestingly, the model predicts a range of trapped electron collisions in spherical tokamaks that can be turbulence-free, thus improving the plasma confinement. Such favorable plasmas could possibly be achieved by future advanced spherical tokamaks operating at high temperature.

Findings of the new model can be tested on the NSTX-U and will help guide experiments to identify non-traditional sources of turbulence in the spherical facility. Results of this research can shed light on the physics behind key obstacles to plasma confinement in spherical facilities and on ways to overcome them in future machines.


Read the abstract.

Weixing X. Wang, Stephane Ethier, Yang Ren, Stanley Kaye, Jin Chen, Edward Startsev, Zhixin Lu, and Zhengqian Li. “Identification of new turbulence contributions to plasma transport and confinement in spherical tokamak regime.” Physics of Plasmas, published October 2015. DOI: 10.1063/1.4933216.

‘Material universe’ yields surprising new particle (Nature)

By Staff


A crystal of tungsten ditelluride is shown. Image courtesy of Wudi Wang and N. Phuan Ong, Princeton University.

An international team of researchers has predicted the existence of a new type of particle called the type-II Weyl fermion in metallic materials. When subjected to a magnetic field, the materials containing the particle act as insulators for current applied in some directions and as conductors for current applied in other directions. This behavior suggests a range of potential applications, from low-energy devices to efficient transistors.

The researchers theorize that the particle exists in a material known as tungsten ditelluride (WTe2), which the researchers liken to a “material universe” because it contains several particles, some of which exist under normal conditions in our universe and others that may exist only in these specialized types of crystals. The research appeared in the journal Nature this week.

The new particle is a cousin of the Weyl fermion, one of the particles in standard quantum field theory. However, the type-II particle exhibits very different responses to electromagnetic fields, being a near-perfect conductor in some directions of the field and an insulator in others.

The research was led by Princeton University Associate Professor of Physics B. Andrei Bernevig, as well as Matthias Troyer and Alexey Soluyanov of ETH Zurich, and Xi Dai of the Chinese Academy of Sciences Institute of Physics. The team included Postdoctoral Research Associates Zhijun Wang at Princeton and QuanSheng Wu at ETH Zurich, and graduate student Dominik Gresch at ETH Zurich.

The particle’s existence was missed by physicist Hermann Weyl during the initial development of quantum theory 85 years ago, say the researchers, because it violated a fundamental rule, called Lorentz symmetry, that does not apply in the materials where the new type of fermion arises.

Particles in our universe are described by relativistic quantum field theory, which combines quantum mechanics with Einstein’s theory of relativity. Under this theory, solids are formed of atoms that consist of nuclei surrounded by electrons. Because of the sheer number of electrons interacting with each other, it is not possible to solve exactly the problem of many-electron motion in solids using quantum mechanical theory.

Instead, our current knowledge of materials is derived from a simplified perspective in which electrons in solids are described as special non-interacting particles, called quasiparticles, that move in the effective field created by the ions and the remaining electrons. These quasiparticles, dubbed Bloch electrons, are also fermions.

Just as electrons are elementary particles in our universe, Bloch electrons can be considered the elementary particles of a solid. In other words, the crystal itself becomes a “universe,” with its own elementary particles.

In recent years, researchers have discovered that such a “material universe” can host all other particles of relativistic quantum field theory. Three of these quasiparticles, the Dirac, Majorana, and Weyl fermions, were discovered in such materials, despite the fact that the latter two had long been elusive in experiments, opening the path to simulate certain predictions of quantum field theory in relatively inexpensive and small-scale experiments carried out in these “condensed matter” crystals.

These crystals can be grown in the laboratory, so experiments can be done to look for the newly predicted fermion in WTe2 and another candidate material, molybdenum ditelluride (MoTe2).

“One’s imagination can go further and wonder whether particles that are unknown to relativistic quantum field theory can arise in condensed matter,” said Bernevig. There is reason to believe they can, according to the researchers.

The universe described by quantum field theory is subject to the stringent constraint of a certain rule-set, or symmetry, known as Lorentz symmetry, which is characteristic of high-energy particles. However, Lorentz symmetry does not apply in condensed matter because typical electron velocities in solids are very small compared to the speed of light, making condensed matter physics an inherently low-energy theory.

“One may wonder,” Soluyanov said, “if it is possible that some material universes host non-relativistic ‘elementary’ particles that are not Lorentz-symmetric?”

This question was answered positively by the work of the international collaboration. The work started when Soluyanov and Dai were visiting Bernevig in Princeton in November 2014 and the discussion turned to strange unexpected behavior of certain metals in magnetic fields (Nature 514, 205-208, 2014, doi:10.1038/nature13763). This behavior had already been observed by experimentalists in some materials, but more work is needed to confirm it is linked to the new particle.

The researchers found that while relativistic theory only allows a single species of Weyl fermions to exist, in condensed matter solids two physically distinct Weyl fermions are possible. The standard type-I Weyl fermion has only two possible states in which it can reside at zero energy, similar to the states of an electron which can be either spin-up or spin-down. As such, the density of states at zero energy is zero, and the fermion is immune to many interesting thermodynamic effects. This Weyl fermion exists in relativistic field theory, and is the only one allowed if Lorentz invariance is preserved.

The newly predicted type-II Weyl fermion has a thermodynamic number of states in which it can reside at zero energy – it has what is called a Fermi surface. Its Fermi surface is exotic, in that it appears along with touching points between electron and hole pockets. This endows the new fermion with a scale, a finite density of states, which breaks Lorentz symmetry.
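The type-I/type-II distinction can be seen in a minimal tilted-cone toy model, in units where the cone velocity is 1. This is an illustration of the generic dispersion, not the WTe2 band structure: along a unit direction n, the two bands grow per unit |k| as e± = c·n_x ± 1, where c is the tilt. A band can reach zero energy at finite k only if c·n_x = ∓1 for some |n_x| ≤ 1, i.e. only when the tilt dominates, |c| > 1, which produces the Fermi surface of touching electron and hole pockets:

```python
import numpy as np

def is_type_ii(tilt, samples=2001):
    """True if the tilted Weyl cone has zero-energy states at finite k.
    Scans unit directions n_x in [-1, 1]; per unit |k| the two band
    slopes are tilt*n_x + 1 and tilt*n_x - 1 (cone velocity set to 1)."""
    n_x = np.linspace(-1.0, 1.0, samples)
    e_plus = tilt * n_x + 1.0
    e_minus = tilt * n_x - 1.0
    # A band reaches zero at finite k iff its slope changes sign
    # somewhere over the sampled directions.
    return bool((e_plus.min() < 0 < e_plus.max())
                or (e_minus.min() < 0 < e_minus.max()))

print(is_type_ii(0.5))  # type-I: point-like node, no Fermi surface
print(is_type_ii(2.0))  # type-II: finite density of states at zero energy
```

The overtilted case is exactly the "pinch removed" picture in the figure below: zero energy intersects both bands away from the node, leaving a large number of allowed states.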

Left: Allowed states for the standard type-I Weyl fermion. When energy is tuned from below, at zero energy, a pinch in the number of allowed states guarantees the absence of many-body phenomena such as superconductivity or ordering. Right: The newly discovered type-II Weyl fermion. At zero energy, a large number of allowed states are still available. This allows for the presence of superconductivity, magnetism, and pair-density wave phenomena. Credit B. Andrei Bernevig et al.


The discovery opens many new directions. Most normal metals exhibit an increase in resistivity when subject to magnetic fields, a known effect used in many current technologies. The recent prediction and experimental realization of standard type-I Weyl fermions in semimetals by two groups at Princeton and one group at IOP Beijing showed that the resistivity can actually decrease if the electric field is applied in the same direction as the magnetic field, an effect called negative longitudinal magnetoresistance. The new work shows that materials hosting a type-II Weyl fermion have mixed behavior: While for some directions of magnetic fields the resistivity increases just like in normal metals, for other directions of the fields, the resistivity can decrease like in the Weyl semimetals, offering possible technological applications.

“Even more intriguing is the perspective of finding more ‘elementary’ particles in other condensed matter systems,” the researchers say. “What kind of other particles can be hidden in the infinite variety of material universes? The large variety of emergent fermions in these materials has only begun to be unraveled.”

Researchers at Princeton University were supported by the U.S. Department of Defense, the U.S. Office of Naval Research, the U.S. National Science Foundation, the David and Lucile Packard Foundation and the W.M. Keck Foundation. Researchers at ETH Zurich were supported by Microsoft Research, the Swiss National Science Foundation and the European Research Council. Xi Dai was supported by the National Natural Science Foundation of China, the 973 program of China and the Chinese Academy of Sciences.

The article, “Type-II Weyl semimetals,” by Alexey A. Soluyanov, Dominik Gresch, Zhijun Wang, QuanSheng Wu, Matthias Troyer, Xi Dai and B. Andrei Bernevig was published in the journal Nature on November 26, 2015.

Read the abstract.

Army ants’ ‘living’ bridges span collective intelligence, ‘swarm’ robotics (PNAS)

By Morgan Kelly, Office of Communications

Columns of workers penetrate the forest, furiously gathering as much food and supplies as they can. They are a massive army that living things know to avoid, and that few natural obstacles can waylay. So determined are these legions that should a chasm or gap disrupt the most direct path to their spoils they simply build a new path — out of themselves.

Without any orders or direction, individuals from the rank and file instinctively stretch across the opening, clinging to one another as their comrades-in-arms swarm across their bodies. But this is no force of superhumans. They are army ants of the species Eciton hamatum, which form “living” bridges across breaks and gaps in the forest floor that allow their famously large raiding swarms to travel efficiently.

Researchers from Princeton University and the New Jersey Institute of Technology (NJIT) report that these structures are far more sophisticated than scientists previously knew. The ants exhibit a level of collective intelligence that could provide new insights into animal behavior and even help in the development of intuitive robots that can cooperate as a group, the researchers said.

Ants of E. hamatum automatically form living bridges without any oversight from a “lead” ant, the researchers report in the journal Proceedings of the National Academy of Sciences. The action of each individual coalesces into a group unit that can adapt to the terrain and that operates by a clear cost-benefit trade-off: the ants will create a path over an open space only up to the point when too many workers are being diverted from collecting food and prey.

“These ants are performing a collective computation. At the level of the entire colony, they’re saying they can afford this many ants locked up in this bridge, but no more than that,” said co-first author Matthew Lutz, a graduate student in Princeton’s Department of Ecology and Evolutionary Biology.

“There’s no single ant overseeing the decision; they’re making that calculation as a colony,” Lutz said. “Thinking about this cost-benefit framework might be a new insight that can be applied to other animal structures that people haven’t thought of before.”

The research could help explain how large groups of animals balance cost and benefit, about which little is known, said co-author Iain Couzin, a Princeton visiting senior research scholar in ecology and evolutionary biology, and director of the Max Planck Institute for Ornithology and chair of biodiversity and collective behavior at the University of Konstanz in Germany.

Previous studies have shown that individual creatures use “rules of thumb” to weigh costs and benefits, said Couzin, who also is Lutz’s graduate adviser. This new work shows that in large groups, these same individual rules can scale up to coordinate behavior group-wide, he said — the ants acted as a unit even though each ant knew only its immediate circumstances.

“They don’t know how many other ants are in the bridge, or what the overall traffic situation is. They only know about their local connections to others, and the sense of ants moving over their bodies,” Couzin said. “Yet, they have evolved simple rules that allow them to keep reconfiguring until, collectively, they have made a structure of an appropriate size for the prevailing conditions.

“Finding out how sightless ants can achieve such feats certainly could change the way we think of self-configuring structures in nature — and those made by man,” he said.

Ant-colony behavior has been the basis of algorithms related to telecommunications and vehicle routing, among other areas, explained co-first author Chris Reid, a postdoctoral research associate at the University of Sydney who conducted the work while at NJIT. Ants exemplify “swarm intelligence,” in which individual-level interactions produce coordinated group behavior. E. hamatum crossings assemble when the ants detect congestion along their raiding trail, and disassemble when normal traffic has resumed.
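Ant colony optimization (ACO) is the best-known family of such routing algorithms. As a rough illustration of the underlying mechanism — pheromone feedback favoring shorter paths — here is a toy sketch; the function name, parameters and graph are illustrative assumptions, not anything from the paper:

```python
import random

def ant_colony_paths(lengths, n_ants=100, n_iters=50, rho=0.5, seed=42):
    """Toy ant colony optimization over parallel paths: each ant picks a
    path with probability proportional to its pheromone level, then
    deposits pheromone inversely proportional to the path's length."""
    rng = random.Random(seed)
    pheromone = [1.0] * len(lengths)
    for _ in range(n_iters):
        deposits = [0.0] * len(lengths)
        for _ in range(n_ants):
            # roulette-wheel choice weighted by current pheromone
            i = rng.choices(range(len(lengths)), weights=pheromone)[0]
            deposits[i] += 1.0 / lengths[i]  # shorter path => more pheromone
        # evaporation plus this round's deposits
        pheromone = [(1 - rho) * p + d for p, d in zip(pheromone, deposits)]
    return pheromone

# with two paths of lengths 1 and 2, the shorter one accumulates
# far more pheromone and ends up carrying most of the traffic
levels = ant_colony_paths([1.0, 2.0])
```

The positive feedback loop — more pheromone attracts more ants, which deposit more pheromone — is the same congestion-driven self-reinforcement that assembles and disassembles the living bridges.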

The video below shows how E. hamatum confronted a gap they encountered on an apparatus that Lutz and Reid built and deployed in the forests of Barro Colorado Island, Panama. Previously, scientists thought that ant bridges were static structures — their appearance over large gaps that ants clearly could not cross in midair was something of a mystery, Reid said. The researchers found, however, that when confronted with an open space, the ants start from the narrowest point of the expanse and work toward the widest point, expanding the bridge as they go to shorten the distance their compatriots must travel.

“The amazing thing is that a very elegant solution to a colony-level problem arises from the individual interactions of a swarm of simple worker ants, each with only local information,” Reid said. “By extracting the rules used by individual ants about whether to initiate, join or leave a living structure, we could program swarms of simple robots to build bridges and other structures by connecting to each other.

“These robot bridges would exhibit the beneficial properties we observe in the ant bridges, such as adaptability to local conditions, real-time optimization of shape and position, and rapid construction and deconstruction without the need for external building materials,” Reid continued. “Such a swarm of robots would be especially useful in dangerous and unpredictable conditions, such as natural disaster zones.”
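The join-or-leave logic Reid describes can be caricatured in a few lines. In the sketch below, each bridge member senses only the traffic passing over its own body; the threshold rule and numbers are illustrative assumptions, not measured ant parameters:

```python
def bridge_size_over_time(traffic_per_step, threshold=5, steps=100):
    """Toy local rule with no global controller: an ant joins the bridge
    when the per-member traffic load is above a threshold, and an ant
    leaves when the load falls below it."""
    size = 1
    history = []
    for _ in range(steps):
        load = traffic_per_step / size  # ants crossing per bridge member
        if load > threshold:
            size += 1          # congestion: another ant locks into place
        elif load < threshold and size > 1:
            size -= 1          # idle: an ant leaves and resumes foraging
        history.append(size)
    return history

# with 50 ants crossing per step, the bridge grows until each member
# carries the threshold load, then holds that size
history = bridge_size_over_time(50)
```

Even in this crude model, the structure sizes itself to the prevailing traffic and shrinks back when traffic drops, purely from local sensing — the qualitative behavior the researchers observed in the field.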

Radhika Nagpal, a professor of computer science at Harvard University who studies robotics and self-organizing biological systems, said that the findings reveal that there is “something much more fundamental about how complex structures are assembled and adapted in nature, and that it is not through a supervisor or planner making decisions.”

Individual ants adjusted to one another’s choices to create a successful structure, despite the fact that each ant didn’t necessarily know everything about the size of the gap or the traffic flow, said Nagpal, who is familiar with the research but was not involved in it.

“The goal wasn’t known ahead of time, but ’emerged’ as the collective continually adapted its solution to the environmental factors,” she said. “The study really opens your eyes to new ways of thinking about collective power, and has tremendous potential as a way to think about engineering systems that are more adaptive and able to solve complex cost-benefit ratios at the network level just through peer-to-peer interactions.”

She compared the ant bridges to human-made bridges that automatically widened to accommodate heavy vehicle traffic or a growing population. While self-assembling road bridges may be a ways off, the example illustrates the potential that technologies built with the same self-assembling capabilities seen in E. hamatum could have.

“There’s a deep interest in creating robots that don’t just rely on themselves, but can exploit the group to do more — and self-assembly is the ultimate in doing more,” Nagpal said. “If you could have small simple robots that were able to navigate complex spaces, but could self-assemble into larger structures — bridges, towers, pulling chains, rafts — when they face something they individually did not have the ability to do, that’s a huge increase in power in what robots would be capable of.”

The gaps that E. hamatum bridges span are not dramatic by human standards — small rifts in the leaf cover, or between the ends of two sticks. A bridge runs the length of 10 to 20 ants, which is only a few centimeters, Lutz said. That said, E. hamatum swarms form several bridges during the course of a day, which can see the back-and-forth of thousands of ants. Many ants pass over a living bridge even as it is assembling.

Bridging a gap. Image courtesy of Matthew Lutz at Princeton University and Chris Reid at the University of Sydney.

“The bridges are something that happen numerous times every day. They’re creating bridges to optimize their traffic flow and maximize their time,” Lutz said.

“When you’re moving hundreds of thousands of ants, creating a little shortcut can save a lot of energy,” he said. “This is such a unique behavior. You have other types of ants forming structures out of their bodies, but it’s not such a huge part of their lives and daily behavior.”

The research also included Scott Powell, an army-ant expert and assistant professor of biology at George Washington University; Albert Kao, a postdoctoral fellow at Harvard who received his doctorate in ecology and evolutionary biology from Princeton in 2015; and Simon Garnier, an assistant professor of biological sciences at NJIT who studies swarm intelligence and was once a postdoctoral researcher in Couzin’s lab at Princeton.

To conduct their field experiments, Lutz and Reid constructed a 1.5-foot-tall apparatus with ramps on both sides and movable arms in the center that let them vary the size of the gap. They then inserted the apparatus into active E. hamatum raiding trails in the jungle in Panama. Because ants follow one another’s chemical scent, Lutz and Reid used sticks and leaves from the ants’ trail to coax the column to re-form across the device.

Lutz and Reid observed how the ants formed bridges across gaps set at angles of 12, 20, 40 and 60 degrees. They gauged how much travel distance the ants saved with each bridge versus the surface area (in square centimeters) of the bridge itself. Twelve-degree angles saved considerable distance (around 11 centimeters) while occupying the fewest workers, whereas 60-degree angles had the highest cost-to-benefit ratio. Interestingly, the ants were willing to commit workers at 20-degree angles, forming bridges of up to 8 square centimeters to cut their travel distance by almost 12 centimeters, indicating that the loss in manpower was worth the distance saved.
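The trade-off the researchers quantified reduces to a simple benefit-to-cost ratio: travel distance saved per unit of bridge area (a proxy for ants locked into the structure). A sketch with illustrative numbers in the spirit of the experiment, not the paper’s exact measurements:

```python
def benefit_cost_ratio(distance_saved_cm, bridge_area_cm2):
    """Benefit (travel distance saved, in cm) per unit cost
    (bridge surface area, in cm^2, standing in for workers
    diverted from foraging)."""
    return distance_saved_cm / bridge_area_cm2

# e.g., an 8 cm^2 bridge that saves about 12 cm of travel
ratio = benefit_cost_ratio(12.0, 8.0)  # 1.5 cm saved per cm^2 of bridge
```

Under this framing, the colony stops enlarging a bridge once the marginal distance saved no longer justifies the additional workers immobilized in it.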

Lutz said that future research based on this work might compare these findings to the living bridges of another army ant species, E. burchellii, to determine if the same principles are in action.

The paper, “Army ants dynamically adjust living bridges in response to a cost-benefit trade-off,” was published Nov. 23 by Proceedings of the National Academy of Sciences. The work was supported by the National Science Foundation (grant nos. PHY-0848755, IOS-1355061 and EAGER IOS-1251585); the Army Research Office (grant nos. W911NF-11-1-0385 and W911NF-14-1-0431); and the Human Frontier Science Program (grant no. RGP0065/2012).

Read the abstract.

Group living: For baboons intermediate size is optimal (PNAS)

New research reveals that intermediate-sized groups of baboons (50 to 70 individuals) exhibit optimal ranging behavior and low stress levels. Pictured is a group of wild baboons in East Africa. Credit: Beth Archie


By Gregory Filiano, Stony Brook University

Living with others can offer tremendous benefits for social animals, including primates, but these benefits can come at a high cost. New research from a project that originated at Princeton University reveals that intermediate-sized groups provide the most benefits to wild baboons. The study, led by Catherine Markham at Stony Brook University and published in the journal Proceedings of the National Academy of Sciences, offers new insight into the costs and benefits of group living.

In the paper titled “Optimal group size in a highly social mammal,” the authors reveal that while wild baboon groups range in size from 20 to 100 members, groups of about 50 to 70 individuals (intermediate size) exhibit optimal ranging behavior and low physiological stress levels in individual baboons, which translates to a social environment that fosters the health and well-being of individual members. The finding provides novel empirical support for an ongoing theory in the fields of evolutionary biology and anthropology that living in intermediate-sized groups has advantages for social mammals.

“Strikingly, we found evidence that intermediate-sized groups have energetically optimal space-use strategies and both large and small groups experience ranging disadvantages,” said Markham, lead author and an assistant professor in the Department of Anthropology at Stony Brook University. “It appears that large, socially dominant groups are constrained by within-group competition whereas small, socially subordinate groups are constrained by between-group competition and/or predation pressures.”

The researchers compiled their findings based on observing five social wild baboon groups in East Africa over 11 years. This population of wild baboons has been studied continuously for over 40 years by the Amboseli Baboon Research Project. They observed and examined the effects of group size and ranging patterns for all of the groups. To gauge stress levels of individuals, they measured the glucocorticoid (stress hormone) levels found in individual waste droppings.

“The combination of an 11-year data set and more intensive short-term data, together with the levels of stress hormones, led to the important finding that there really is a cost to living in too small a group,” said Jeanne Altmann, the Eugene Higgins Professor of Ecology and Evolutionary Biology, Emeritus, and a senior scholar at Princeton University. Altmann co-directs the Amboseli Baboon Research Project and co-founded it in 1971 with Stuart Altmann, a senior scholar in the Department of Ecology and Evolutionary Biology at Princeton.

“The cost of living in smaller groups is a concern from a conservation perspective,” Jeanne Altmann said. “Due to the fragmentation of animal habitats, many animals will be living in smaller groups. Understanding these dynamics is one of the next things to study.”

The research was supported primarily by the National Science Foundation and the National Institute on Aging.

Markham, who earned her Ph.D. at Princeton University in 2012 with Jeanne Altmann as her thesis adviser, explained that for highly social species, the key questions in the analysis are how trade-offs are balanced and whether those trade-offs actually result in an optimal group size.

She said that their findings provide a testable hypothesis for evaluating group-size constraints in other group-living species in which the costs of intra- and intergroup competition vary as a function of group size. The findings also have implications for new research and a broader understanding of both why some animals live with others and how many neighbors will be best for various species and situations.

The research was conducted in collaboration with Susan Alberts, a professor of biology at Duke University and co-director of the Amboseli Baboon Research Project, and with Laurence Gesquiere, a former postdoctoral researcher at Princeton who is now a senior research scientist working with Alberts. Altmann and Alberts are also affiliated with the Institute for Primate Research, National Museums of Kenya.

Additional support was provided by the American Society of Primatologists, the Animal Behavior Society, the International Primatological Society, and Sigma Xi.

Article courtesy of Stony Brook University.

Read the abstract.

A. Catherine Markham, Laurence R. Gesquiere, Susan C. Alberts and Jeanne Altmann. “Optimal group size in a highly social mammal.” Proceedings of the National Academy of Sciences, published online before print October 26, 2015. doi: 10.1073/pnas.1517794112