By Raphael Rosen, Princeton Plasma Physics Laboratory Communications
A team of physicists led by Stephen Jardin of the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) has discovered a mechanism that prevents the electrical current flowing through fusion plasma from repeatedly peaking and crashing. This behavior is known as a “sawtooth cycle” and can cause instabilities within the plasma’s core. The results have been published online in Physical Review Letters. The research was supported by the DOE Office of Science.
The team, which included scientists from General Atomics and the Max Planck Institute for Plasma Physics, performed calculations on the Edison computer at the National Energy Research Scientific Computing Center, a division of the Lawrence Berkeley National Laboratory. Using M3D-C1, a program they developed that creates three-dimensional simulations of fusion plasmas, the team found that under certain conditions a helix-shaped whirlpool of plasma forms around the center of the tokamak. The swirling plasma acts like a dynamo — a moving fluid that creates electric and magnetic fields. Together these fields prevent the current flowing through plasma from peaking and crashing.
The researchers found two specific conditions under which the plasma behaves like a dynamo. First, the magnetic field lines that circle the plasma must rotate exactly once, both the long way and the short way around the doughnut-shaped configuration, so that an electron or ion following a magnetic field line would end up exactly where it began. Second, the pressure in the center of the plasma must be significantly greater than at the edge, creating a gradient between the two sections. This gradient combines with the rotating magnetic field lines to create spinning rolls of plasma that swirl around the tokamak and give rise to the dynamo that maintains equilibrium and produces stability.
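The first condition corresponds to a magnetic "safety factor" q of 1: a field line winds once the short way around for each trip the long way around, so it closes on itself. A minimal sketch of checking this condition, using the standard large-aspect-ratio approximation q ≈ (r·B_toroidal)/(R·B_poloidal); all field and geometry values below are invented for illustration and are not from the paper:

```python
# Illustrative check of the q = 1 condition using the standard
# large-aspect-ratio approximation for a circular cross-section tokamak.
# All numbers are made up for illustration, not taken from the study.

def safety_factor(r, R, B_tor, B_pol):
    """Approximate safety factor: toroidal vs. poloidal winding of a field line."""
    return (r * B_tor) / (R * B_pol)

# A field line with q = 1 winds once the short way for each long way around,
# so an electron or ion following it ends up exactly where it began.
q = safety_factor(r=0.5, R=2.0, B_tor=2.0, B_pol=0.5)
print(q)  # 1.0 -> the resonant surface where the dynamo can form
```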
This dynamo behavior arises only under certain conditions. Both the electrical current running through the plasma and the pressure that the plasma’s electrons and ions exert on their neighbors must be in a range that is “not too large and not too small,” said Jardin. In addition, the speed at which the conditions for the fusion reaction are established must be “not too fast and not too slow.”
Jardin stressed that once a range of conditions like pressure and current are set, the dynamo phenomenon occurs all by itself. “We don’t have to do anything else from the outside,” he noted. “It’s something like when you drain your bathtub and a whirlpool forms over the drain by itself. But because a plasma is more complicated than water, the whirlpool that forms in the tokamak needs to also generate the voltage to sustain itself.”
During the simulations the scientists were able to virtually add new diagnostics, or probes, to the computer code. “These diagnostics were able to measure the helical velocity fields, electric potential, and magnetic fields to clarify how the dynamo forms and persists,” said Jardin. The persistence produces the “voltage in the center of the discharge that keeps the plasma current from peaking.”
Physicists have indirectly observed what they believe to be the dynamo behavior on the DIII-D National Fusion Facility that General Atomics operates for the Department of Energy in San Diego and on the ASDEX Upgrade in Garching, Germany. They hope to learn to create these conditions on demand, especially in ITER, the huge multinational fusion machine being constructed in France to demonstrate the practicality of fusion power. “Now that we understand it better, we think that computer simulations will show us under what conditions this will occur in ITER,” said Jardin. “That will be the focus of our research in the near future.”
Learning how to create these conditions will be particularly important for ITER, which will produce helium nuclei that could amplify the sawtooth disruptions. If large enough, these disruptions could cause other instabilities that could halt the fusion process. Preventing the cycle from starting would therefore be highly beneficial for the ITER experiment.
The warming effects of climate change usually conjure up ideas of parched and barren landscapes broiling under a blazing sun, its heat amplified by greenhouse gases. But a study led by Princeton University researchers suggests that hotter nights may actually wield much greater influence over the planet’s atmosphere as global temperatures rise — and could eventually lead to more carbon flooding the atmosphere.
Since measurements began in 1959, nighttime temperatures in the tropics have had a strong influence over year-to-year shifts in the land’s carbon-storage capacity, or “sink,” the researchers report in the journal Proceedings of the National Academy of Sciences. Earth’s ecosystems absorb about 25% of the excess carbon from the atmosphere, and tropical forests account for about one-third of land-based plant productivity.
During the past 50 years, the land-based carbon sink’s “interannual variability” has grown by 50 to 100 percent, the researchers found. The researchers used climate- and satellite-imaging data to determine which of various climate factors — including rainfall, drought and daytime temperatures — had the most effect on the carbon sink’s swings. They found the strongest association with variations in tropical nighttime temperatures, which have risen by about 0.6 degrees Celsius since 1959.
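The attribution step described above boils down to comparing how strongly each candidate climate record correlates with the sink's year-to-year swings. The sketch below runs that comparison on synthetic data; the series, coefficients, and the simple Pearson correlation are illustrative stand-ins for the study's observed records and more careful statistics:

```python
import random

random.seed(0)
n = 53  # roughly the 1959-2011 record length

# Synthetic stand-ins for the observed records (illustration only).
night_temp = [0.01 * i + random.gauss(0, 0.1) for i in range(n)]
rainfall = [random.gauss(0, 1.0) for _ in range(n)]
# Construct the carbon-sink anomaly to depend mostly on nighttime temperature.
sink = [-2.0 * t + 0.2 * r + random.gauss(0, 0.2)
        for t, r in zip(night_temp, rainfall)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print("night temp vs sink:", round(pearson(night_temp, sink), 2))
print("rainfall   vs sink:", round(pearson(rainfall, sink), 2))
# Nighttime temperature shows the strongest association, by construction here.
```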
First author William Anderegg, an associate research scholar in the Princeton Environmental Institute, explained that he and his colleagues determined that warm nighttime temperatures lead plants to put more carbon into the atmosphere through a process known as respiration.
Just as people are more active on warm nights, so too are plants. Although plants take up carbon dioxide from the atmosphere, they also internally consume sugars to stay alive. That process, known as respiration, produces carbon dioxide. Plants step up respiration in warm weather, Anderegg said. The researchers found that yearly variations in the carbon sink strongly correlated with variations in plant respiration.
“When you heat up a system, biological processes tend to increase,” Anderegg said. “At hotter temperatures, plant respiration rates go up and this is what’s happening during hot nights. Plants lose a lot more carbon than they would during cooler nights.”
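A common way to quantify this temperature sensitivity is the ecophysiological "Q10" rule of thumb, under which respiration roughly doubles for every 10 degrees Celsius of warming. The sketch below uses that rule purely for illustration; it is not necessarily the model used in the study, and all values are invented:

```python
# Toy Q10 model of plant respiration (a standard ecophysiology rule of thumb,
# not necessarily the exact model used in the study). With Q10 = 2,
# respiration roughly doubles per 10 C of warming.

def respiration(temp_c, r_ref=1.0, t_ref=25.0, q10=2.0):
    """Respiration rate relative to r_ref at reference temperature t_ref."""
    return r_ref * q10 ** ((temp_c - t_ref) / 10.0)

cool_night = respiration(20.0)   # ~0.71 of the reference rate
warm_night = respiration(26.0)   # ~1.07 of the reference rate
print(warm_night / cool_night)   # ~1.52: substantially more carbon released
```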
Previous research has shown that nighttime temperatures have risen significantly faster as a result of climate change than daytime temperatures, Anderegg said. This means that in future climate scenarios respiration rates could increase to the point that the land is putting more carbon into the atmosphere than it’s taking out, “which would be disastrous,” he said.
Of course, plants consume carbon dioxide as a part of photosynthesis, during which they convert sunlight into energy. Photosynthesis also is sensitive to rises in temperature, but it occurs only during the day, whereas respiration occurs at all hours and thus is more sensitive to nighttime warming, Anderegg said.
“Nighttime temperatures have been increasing faster than daytime temperatures and will continue to rise faster,” Anderegg said. “This suggests that tropical ecosystems might be more vulnerable to climate change than previously thought, risking crossing the threshold from a carbon sink to a carbon source. But there’s certainly potential for plants to acclimate their respiration rates and that’s an area that needs future study.”
This research was supported by the National Science Foundation MacroSystems Biology Grant (EF-1340270), RAPID Grant (DEB-1249256) and EAGER Grant (1550932); and a National Oceanic and Atmospheric Administration (NOAA) Climate and Global Change postdoctoral fellowship administered by the University Corporation for Atmospheric Research.
William R. L. Anderegg, Ashley P. Ballantyne, W. Kolby Smith, Joseph Majkut, Sam Rabin, Claudie Beaulieu, Richard Birdsey, John P. Dunne, Richard A. Houghton, Ranga B. Myneni, Yude Pan, Jorge L. Sarmiento, Nathan Serota, Elena Shevliakova, Pieter Tans and Stephen W. Pacala. “Tropical nighttime warming as a dominant driver of variability in the terrestrial carbon sink.” Proceedings of the National Academy of Sciences, published online in advance of print Dec. 7, 2015. DOI: 10.1073/pnas.1521479112.
By Raphael Rosen, Princeton Plasma Physics Laboratory Communications
Physicists at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) are proposing a new way to process nuclear waste that uses a plasma-based centrifuge. Known as plasma mass filtering, the new mass-separation technique would supplement chemical techniques. It is hoped that this combined approach would reduce both the cost of nuclear waste disposal and the amount of byproducts produced during the process. This work was supported by PPPL’s Laboratory Directed Research and Development Program.
“The safe disposal of nuclear waste is a colossal problem,” said Renaud Gueroult, staff physicist at PPPL and lead author of the paper that appeared in the Journal of Hazardous Materials in October. “One solution might be to supplement existing chemical separation techniques with plasma separation techniques, which could be economically attractive, ideally leading to a reevaluation of how nuclear waste is processed.”
The immediate motivation for safe disposal is the radioactive waste currently stored at the Hanford Site, a facility in Washington State that produced plutonium for nuclear weapons during the Cold War. The waste, which originally totaled 54 million gallons, is held in 177 underground tanks.
In 2000, Hanford engineers began building machinery that would encase the radioactive waste in glass. The method, known as “vitrification,” had been used at another Cold War-era nuclear production facility since 1996. A multibillion-dollar vitrification plant is currently under construction at the Hanford site.
To reduce the cost of high-level waste vitrification and disposal, it would be advantageous to pack more waste into each glass canister, cutting the number of canisters needed. Reducing the volume to be vitrified, in turn, means separating nonradioactive components, such as aluminum and iron, out of the waste stream, leaving less material to be vitrified. However, in its 2014 report, the DOE Task Force on Technology Development for Environmental Management argued that, “without the development of new technology, it is not clear that the cleanup can be completed satisfactorily or at any reasonable cost.”
The high-throughput, plasma-based, mass separation techniques advanced at PPPL offer the possibility of reducing the volume of waste that needs to be immobilized in glass. “The interesting thing about our ideas on mass separation is that it is a form of magnetic confinement, so it fits well within the Laboratory’s culture,” said physicist Nat Fisch, co-author of the paper and director of the Princeton University Program in Plasma Physics. “To be more precise, it is ‘differential magnetic confinement’ in that some species are confined while others are lost quickly, which is what makes it a high-throughput mass filter.”
How would a plasma-based mass filter system work? The method begins by atomizing and ionizing the hazardous waste and injecting it into the rotating filter, where the individual elements can be influenced by electric and magnetic fields. The filter then separates the lighter elements from the heavier ones using centrifugal and magnetic forces. The lighter elements are typically less radioactive than the heavier ones and often do not need to be vitrified. Processing the high-level waste would therefore require fewer high-level glass canisters overall, while the less radioactive material could be immobilized in less costly waste forms such as concrete or bitumen.
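The separation step can be caricatured as a mass threshold: in the rotating plasma column, heavier ions feel a stronger centrifugal pull and migrate to the "heavy" output, while lighter ions remain confined. The toy classifier below is only a cartoon of that idea; the cutoff mass and the species list are invented for illustration, not taken from the paper:

```python
# Cartoon of a plasma mass filter: centrifugal force scales with ion mass,
# so species above a cutoff mass drift outward ("heavy" stream, to vitrify)
# while lighter ions stay confined ("light" stream, for cheaper waste forms).
# The cutoff and the example species are illustrative, not from the study.

CUTOFF_AMU = 60.0  # hypothetical mass cutoff, in atomic mass units

waste_stream = {
    "aluminum-27": 27.0,   # nonradioactive bulk component
    "iron-56": 56.0,       # nonradioactive bulk component
    "strontium-90": 90.0,  # radioactive fission product
    "cesium-137": 137.0,   # radioactive fission product
}

light = {species for species, amu in waste_stream.items() if amu <= CUTOFF_AMU}
heavy = set(waste_stream) - light

print("light (low-cost waste form):", sorted(light))
print("heavy (vitrify):", sorted(heavy))
```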
The new technique would also be more widely applicable than traditional chemical-based methods since it would depend less on the nuclear waste’s chemical composition. While “the waste’s composition would influence the performance of the plasma mass filter in some ways, the effect would most likely be less than that associated with chemical techniques,” said Gueroult.
Gueroult explained why the savings from plasma techniques could be significant. “For only about $10 a kilogram in energy cost, solid waste can be ionized. In its ionized form, the waste can then be separated into heavy and light components. Because the waste is atomized, the separation proceeds only on the basis of atomic mass, without regard to the chemistry. Since the total cost of chemical-based techniques can be $2,000 per kilogram of the vitrified waste, as explained in the Journal of Hazardous Materials paper, it stands to reason that even if several plasma-based steps are needed to achieve pure enough separation, there is in principle plenty of room to cut the overall costs. That is the point of our recent paper. It is also why we are excited about our plasma-based methods.”
Fisch notes that “our original ideas grew out of the thesis of Abe Fetterman, who began by considering centrifugal mirror confinement for nuclear fusion, but then realized the potential for mass separation. Now the key role on this project is being played by Renaud, who has developed the concept substantially further.”
According to Fisch, the current developments are a variation and refinement of a plasma-based mass separation system first advanced by a private company called Archimedes Technology Group. That company, started by the late Dr. Tihiro Ohkawa, a fusion pioneer, raised private capital to advance a plasma-based centrifuge concept to clean up the legacy waste at Hanford, but ceased operation in 2006 after failing to receive federal funding.
Now an updated understanding of the complexity of the Hanford problem, combined with an increased appreciation of new ideas, has led to renewed federal interest in waste-treatment solutions. Completion of the main waste-processing operations, which in 2002 was projected for 2028, has slipped by 20 years over the last 13 years, and the total cleanup cost is now estimated by the Department of Energy to be greater than $250 billion, according to the DOE Office of Inspector General, Office of Audits and Inspections. DOE, which is responsible for cleaning up the legacy nuclear waste at Hanford and other sites, conducted a Basic Research Needs Workshop on nuclear waste cleanup in July that both Fisch and Gueroult attended. The report of that workshop, which is expected to highlight new approaches to the cleanup problem, is due out this fall.
PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
Renaud Gueroult, David T. Hobbs and Nathaniel J. Fisch. “Plasma filtering techniques for nuclear waste remediation.” Journal of Hazardous Materials, published October 2015. DOI: 10.1016/j.jhazmat.2015.04.058.
Stopping the outbreak of a disease hinges on a wealth of data such as what makes a suitable host and how a pathogen spreads. But gathering these data can be difficult for diseases in remote areas of the world, or for epidemics involving wild animals.
A new study led by Princeton University researchers and published in the Journal of the Royal Society Interface explores an approach to studying epidemics for which details are difficult to obtain. The researchers analyzed the 2013 outbreak of dolphin morbillivirus — a potentially fatal pathogen from the same family as the human measles virus — that resulted in more than 1,600 bottlenose dolphins becoming stranded along the Atlantic coast of the United States by 2015. Because scientists were able to observe dolphins only after they washed up on shore, little is known about how the disease transmits and persists in the wild.
The researchers used a Poisson process — a statistical tool used to model the random nature of disease transmission — to determine from sparse data how dolphin morbillivirus can spread. They found that individual bottlenose dolphins may be infectious for up to a month and can spread the disease over hundreds of miles, particularly during seasonal migrations. In 2013, the height of disease transmission occurred toward the end of summer around an area offshore of Virginia Beach, Virginia, where multiple migratory dolphin groups are thought to cross paths.
In the interview below, first author Sinead Morris, a graduate student in ecology and evolutionary biology, explains what the researchers learned about the dolphin morbillivirus outbreak, and how the Poisson process can help scientists understand human epidemics. Morris is in the research group of co-author Bryan Grenfell, Princeton’s Kathryn Briger and Sarah Fenton Professor of Ecology and Evolutionary Biology and Public Affairs.
Q: How does the Poisson process track indirectly observed epidemics and what specific challenges does it overcome?
A: One of the main challenges in modeling indirectly observed epidemics is a lack of data. In our case, we had information on all infected dolphins that had been found stranded on shore, but had no data on the number of individuals that became infected but did not strand. The strength of the Poisson process is that its simple framework means it can be used to extract important information about how the disease is spreading across space and time, despite such incomplete data. Essentially, the way the process works is that it keeps track of where and when individual dolphins stranded, and then at each new point in the epidemic it uses the history of what has happened before to project what will happen in the future. For example, an infected individual is more likely to transmit the disease onwards to other individuals in close spatial proximity than to those far away. So, by keeping track of all these infections, the model can identify where and when the largest risk of new infections will be.
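The "history projects the future" behavior described above is characteristic of a self-exciting point process, in which each past event adds a contribution to the current infection intensity that decays in both time and space. A minimal one-dimensional sketch with invented parameters and data follows; the study's actual model and fitted values differ:

```python
import math

# Minimal self-exciting point-process sketch: each past stranding
# (time in days, position in km along the coast) boosts the expected
# rate of new infections nearby, decaying in time and space.
# All parameters and data points are invented for illustration.

strandings = [(0.0, 0.0), (5.0, 30.0), (12.0, 80.0)]  # (day, km)

def intensity(t, x, mu=0.01, alpha=0.5, t_scale=7.0, x_scale=50.0):
    """Conditional intensity: background rate plus triggering from history."""
    rate = mu  # background (spontaneous) infection rate
    for t_i, x_i in strandings:
        if t_i < t:  # only past events can trigger future ones
            rate += alpha * math.exp(-(t - t_i) / t_scale) \
                          * math.exp(-((x - x_i) / x_scale) ** 2)
    return rate

# Risk is highest close, in space and time, to recent strandings.
near = intensity(t=13.0, x=85.0)   # just after the day-12 stranding
far = intensity(t=13.0, x=400.0)   # far down the coast
print(near > far)  # True
```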
Q: Why was this 2013-15 outbreak of dolphin morbillivirus selected for study, and what key insights does this work provide?
A: The recent outbreak of dolphin morbillivirus spread rapidly along the northwestern Atlantic coast from New York to Florida, causing substantial mortality among coastal bottlenose dolphin populations. Despite the clear detrimental impact that this disease can have, however, it is still poorly understood. Therefore, our aim in modeling the epidemic was to gain much needed information about how the virus spreads. We found that a dolphin may be infectious for up to 24 days and can travel substantial distances (up to 220 kilometers, or 137 miles) within this time. This is important because such long-range movements — for example, during periods of seasonal migration — are likely to create many transmission opportunities from infected to uninfected individuals, and may have thus facilitated the rapid spread of the virus down the Atlantic coast.
Q: Can this model be used for human epidemics?
A: The Poisson process framework was originally developed to model the occurrence of earthquakes, and has since been used in a variety of other contexts that also tend to suffer from noisy, indirectly observed data, such as urban crime distribution. To model dolphin morbillivirus, we adapted the framework to incorporate more biological information, and similar techniques have also been applied to model meningococcal disease in humans, which can cause meningitis and sepsis. Generally, the data characterizing human epidemics are more detailed than the data we had for this project and, as such, models that can incorporate greater complexity are more widely used. However, we hope that our methods will stimulate the greater use of Poisson process models in epidemiological systems that also suffer from indirectly observed data.
This research was supported by the RAPIDD program of the Science and Technology Directorate of the Department of Homeland Security; the National Institutes of Health Fogarty International Center; the Bill and Melinda Gates Foundation; and the Marine Mammal Unusual Mortality Event Contingency Fund and John H. Prescott Marine Mammal Rescue Assistance Grant Program operated by the National Oceanic and Atmospheric Administration.
Sinead E. Morris, Jonathan L. Zelner, Deborah A. Fauquier, Teresa K. Rowles, Patricia E. Rosel, Frances Gulland and Bryan T. Grenfell. “Partially observed epidemics in wildlife hosts: modeling an outbreak of dolphin morbillivirus in the northwestern Atlantic, June 2013–2014.” Journal of the Royal Society Interface, published Nov. 18, 2015. DOI: 10.1098/rsif.2015.0676.
By John Greenwald, Princeton Plasma Physics Laboratory Communications
For fusion reactions to take place efficiently, the atomic nuclei that fuse together in plasma must be kept sufficiently hot. But turbulence in the plasma that flows in facilities called tokamaks can cause heat to leak from the core of the plasma to its outer edge, causing reactions to fizzle out.
Researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) have for the first time modeled previously unsuspected sources of turbulence in spherical tokamaks, an alternative design for producing fusion energy. The findings, published online in October in Physics of Plasmas, could influence the development of future fusion facilities. This work was supported by the DOE Office of Science.
Spherical tokamaks, like the recently completed National Spherical Torus Experiment-Upgrade (NSTX-U) at PPPL, are shaped like cored apples, compared with the doughnut-like shape of the conventional tokamaks that are more widely used. The cored-apple shape gives rise to some distinct characteristics in the behavior of the plasma inside.
The paper, with PPPL principal research physicist Weixing Wang as lead author, identifies two important new sources of turbulence based on data from experiments on the National Spherical Torus Experiment prior to its upgrade. The discoveries were made by using state-of-the-art large-scale computer simulations. These sources are:
Instabilities caused by plasma that flows faster in the center of the fusion facility than toward the edge when rotating strongly in L-mode — or low-confinement — regimes. These instabilities, called “Kelvin-Helmholtz modes” after physicists Lord Kelvin and Hermann von Helmholtz, act like wind that stirs up waves as it blows over water, and were found for the first time to be relevant to realistic fusion experiments. Such non-uniform plasma flows have been known to play favorable roles in fusion plasmas in conventional and spherical tokamaks. The new results suggest that these flows may also need to be kept within an optimal range.
Trapped electrons that bounce between two points in a section of the tokamak instead of swirling all the way around the facility. These electrons were shown to cause significant leakage of heat in H-mode — or high-confinement — regimes by driving a specific instability when they collide frequently. This type of instability is believed to play little role in conventional tokamaks but can provide a robust source of plasma turbulence in spherical tokamaks.
Most interestingly, the model predicts a range of trapped-electron collision rates in spherical tokamaks for which the plasma can be turbulence-free, thus improving confinement. Such favorable plasmas could possibly be achieved by future advanced spherical tokamaks operating at high temperature.
Findings of the new model can be tested on the NSTX-U and will help guide experiments to identify non-traditional sources of turbulence in the spherical facility. Results of this research can shed light on the physics behind key obstacles to plasma confinement in spherical facilities and on ways to overcome them in future machines.
An international team of researchers has predicted the existence of a new type of particle called the type-II Weyl fermion in metallic materials. When subjected to a magnetic field, the materials containing the particle act as insulators for current applied in some directions and as conductors for current applied in other directions. This behavior suggests a range of potential applications, from low-energy devices to efficient transistors.
The researchers theorize that the particle exists in a material known as tungsten ditelluride (WTe2), which they liken to a “material universe” because it contains several particles, some of which exist under normal conditions in our universe and others that may exist only in these specialized types of crystals. The research appeared in the journal Nature this week.
The new particle is a cousin of the Weyl fermion, one of the particles in standard quantum field theory. However, the type-II particle exhibits very different responses to electromagnetic fields, being a near perfect conductor in some directions of the field and an insulator in others.
The particle’s existence was missed by physicist Hermann Weyl during the initial development of quantum theory 85 years ago, say the researchers, because it violated a fundamental rule, called Lorentz symmetry, that does not apply in the materials where the new type of fermion arises.
Particles in our universe are described by relativistic quantum field theory, which combines quantum mechanics with Einstein’s theory of relativity. Under this theory, solids are formed of atoms that consist of nuclei surrounded by electrons. Because of the sheer number of electrons interacting with each other, it is not possible to solve exactly the problem of many-electron motion in solids using quantum mechanical theory.
Instead, our current knowledge of materials is derived from a simplified perspective where electrons in solids are described in terms of special non-interacting particles, called quasiparticles, that move in the effective field created by charged entities called ions and electrons. These quasiparticles, dubbed Bloch electrons, are also fermions.
Just as electrons are elementary particles in our universe, Bloch electrons can be considered the elementary particles of a solid. In other words, the crystal itself becomes a “universe,” with its own elementary particles.
In recent years, researchers have discovered that such a “material universe” can host all other particles of relativistic quantum field theory. Three of these quasiparticles, the Dirac, Majorana, and Weyl fermions, were discovered in such materials, despite the fact that the latter two had long been elusive in experiments, opening the path to simulate certain predictions of quantum field theory in relatively inexpensive and small-scale experiments carried out in these “condensed matter” crystals.
These crystals can be grown in the laboratory, so experiments can be done to look for the newly predicted fermion in WTe2 and another candidate material, molybdenum ditelluride (MoTe2).
“One’s imagination can go further and wonder whether particles that are unknown to relativistic quantum field theory can arise in condensed matter,” said B. Andrei Bernevig, a professor of physics at Princeton University. There is reason to believe they can, according to the researchers.
The universe described by quantum field theory is subject to the stringent constraint of a certain rule-set, or symmetry, known as Lorentz symmetry, which is characteristic of high-energy particles. However, Lorentz symmetry does not apply in condensed matter because typical electron velocities in solids are very small compared to the speed of light, making condensed matter physics an inherently low-energy theory.
“One may wonder,” said Alexey Soluyanov, a physicist at ETH Zurich, “if it is possible that some material universes host non-relativistic ‘elementary’ particles that are not Lorentz-symmetric?”
This question was answered positively by the work of the international collaboration. The work started when Soluyanov and Dai were visiting Bernevig in Princeton in November 2014 and the discussion turned to the strange, unexpected behavior of certain metals in magnetic fields (Nature 514, 205-208, 2014, doi:10.1038/nature13763). This behavior had already been observed by experimentalists in some materials, but more work is needed to confirm it is linked to the new particle.
The researchers found that while relativistic theory only allows a single species of Weyl fermions to exist, in condensed matter solids two physically distinct Weyl fermions are possible. The standard type-I Weyl fermion has only two possible states in which it can reside at zero energy, similar to the states of an electron which can be either spin-up or spin-down. As such, the density of states at zero energy is zero, and the fermion is immune to many interesting thermodynamic effects. This Weyl fermion exists in relativistic field theory, and is the only one allowed if Lorentz invariance is preserved.
The newly predicted type-II Weyl fermion has a thermodynamic number of states in which it can reside at zero energy – it has what is called a Fermi surface. Its Fermi surface is exotic, in that it appears along with touching points between electron and hole pockets. This endows the new fermion with a scale, a finite density of states, which breaks Lorentz symmetry.
The discovery opens many new directions. Most normal metals exhibit an increase in resistivity when subjected to magnetic fields, a known effect used in many current technologies. The recent prediction and experimental realization of standard type-I Weyl fermions in semimetals by two groups at Princeton and one at the Institute of Physics in Beijing showed that the resistivity can actually decrease if the electric field is applied in the same direction as the magnetic field, an effect called negative longitudinal magnetoresistance. The new work shows that materials hosting a type-II Weyl fermion have mixed behavior: while for some directions of the magnetic field the resistivity increases just as in normal metals, for other directions it can decrease as in the Weyl semimetals, offering possible technological applications.
“Even more intriguing is the perspective of finding more ‘elementary’ particles in other condensed matter systems,” the researchers say. “What kind of other particles can be hidden in the infinite variety of material universes? The large variety of emergent fermions in these materials has only begun to be unraveled.”
Researchers at Princeton University were supported by the U.S. Department of Defense, the U.S. Office of Naval Research, the U.S. National Science Foundation, the David and Lucile Packard Foundation and the W.M. Keck Foundation. Researchers at ETH Zurich were supported by Microsoft Research, the Swiss National Science Foundation and the European Research Council. Xi Dai was supported by the National Natural Science Foundation of China, the 973 program of China and the Chinese Academy of Sciences.
The article, “Type II Weyl Semimetals,” by Alexey A. Soluyanov, Dominik Gresch, Zhijun Wang, QuanSheng Wu, Matthias Troyer, Xi Dai, and B. Andrei Bernevig was published in the journal Nature on November 26, 2015.
Columns of workers penetrate the forest, furiously gathering as much food and supplies as they can. They are a massive army that living things know to avoid, and that few natural obstacles can waylay. So determined are these legions that should a chasm or gap disrupt the most direct path to their spoils they simply build a new path — out of themselves.
Without any orders or direction, individuals from the rank and file instinctively stretch across the opening, clinging to one another as their comrades-in-arms swarm across their bodies. But this is no force of superhumans. They are army ants of the species Eciton hamatum, which form “living” bridges across breaks and gaps in the forest floor that allow their famously large raiding swarms to travel efficiently.
Researchers from Princeton University and the New Jersey Institute of Technology (NJIT) report for the first time that these structures are more sophisticated than scientists knew. The ants exhibit a level of collective intelligence that could provide new insights into animal behavior and even help in the development of intuitive robots that can cooperate as a group, the researchers said.
Ants of E. hamatum automatically form living bridges without any oversight from a “lead” ant, the researchers report in the journal Proceedings of the National Academy of Sciences. The action of each individual coalesces into a group unit that can adapt to the terrain and that operates according to a clear cost-benefit trade-off: the ants will extend a path over an open space only up to the point at which too many workers are being diverted from collecting food and prey.
“These ants are performing a collective computation. At the level of the entire colony, they’re saying they can afford this many ants locked up in this bridge, but no more than that,” said co-first author Matthew Lutz, a graduate student in Princeton’s Department of Ecology and Evolutionary Biology.
“There’s no single ant overseeing the decision, they’re making that calculation as a colony,” Lutz said. “Thinking about this cost-benefit framework might be a new insight that can be applied to other animal structures that people haven’t thought of before.”
The research could help explain how large groups of animals balance cost and benefit, about which little is known, said co-author Iain Couzin, a Princeton visiting senior research scholar in ecology and evolutionary biology, and director of the Max Planck Institute for Ornithology and chair of biodiversity and collective behavior at the University of Konstanz in Germany.
Previous studies have shown that single creatures use “rules of thumb” to weigh costs and benefits, said Couzin, who also is Lutz’s graduate adviser. This new work shows that in large groups, these same individual guidelines can eventually coordinate group-wide, he said — the ants acted as a unit although each ant only knew its immediate circumstances.
“They don’t know how many other ants are in the bridge, or what the overall traffic situation is. They only know about their local connections to others, and the sense of ants moving over their bodies,” Couzin said. “Yet, they have evolved simple rules that allow them to keep reconfiguring until, collectively, they have made a structure of an appropriate size for the prevailing conditions.
“Finding out how sightless ants can achieve such feats certainly could change the way we think of self-configuring structures in nature — and those made by man,” he said.
Ant-colony behavior has been the basis of algorithms related to telecommunications and vehicle routing, among other areas, explained co-first author Chris Reid, a postdoctoral research associate at the University of Sydney who conducted the work while at NJIT. Ants exemplify “swarm intelligence,” in which individual-level interactions produce coordinated group behavior. E. hamatum crossings assemble when the ants detect congestion along their raiding trail, and disassemble when normal traffic has resumed.
In field experiments, Lutz and Reid watched E. hamatum confront a gap on an apparatus the researchers built and deployed in the forests of Barro Colorado Island, Panama. Previously, scientists thought that ant bridges were static structures; how they appeared over large gaps that ants clearly could not cross in midair was something of a mystery, Reid said. The researchers found, however, that when confronted with an open space, the ants start from the narrowest point of the expanse and work toward the widest point, expanding the bridge as they go to shorten the distance their compatriots must travel around the gap.
“The amazing thing is that a very elegant solution to a colony-level problem arises from the individual interactions of a swarm of simple worker ants, each with only local information,” Reid said. “By extracting the rules used by individual ants about whether to initiate, join or leave a living structure, we could program swarms of simple robots to build bridges and other structures by connecting to each other.
“These robot bridges would exhibit the beneficial properties we observe in the ant bridges, such as adaptability to local conditions, real-time optimization of shape and position, and rapid construction and deconstruction without the need for external building materials,” Reid continued. “Such a swarm of robots would be especially useful in dangerous and unpredictable conditions, such as natural disaster zones.”
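The local rules Couzin describes lend themselves to simulation. The sketch below is our illustration, not the authors' model — the `patience` rule and every number in it are invented — but it shows the essential point: agents that sense only the traffic passing over their own bodies hold a bridge together while the raid flows, and dissolve it on their own once the raid has passed.

```python
import random

# Minimal agent sketch (illustrative; not the published model): an ant in the
# bridge knows nothing about the gap or the colony-wide traffic. It senses
# only whether another ant just walked over it, and leaves the bridge after
# too many consecutive steps without that stimulus.

class BridgeAnt:
    def __init__(self, patience=3):
        self.patience = patience   # idle steps tolerated before leaving
        self.idle = 0
        self.in_bridge = True

    def step(self, felt_traffic):
        """Update using only local information: did an ant just cross my body?"""
        self.idle = 0 if felt_traffic else self.idle + 1
        if self.idle >= self.patience:
            self.in_bridge = False  # rejoin the raiding column
        return self.in_bridge

random.seed(1)
bridge = [BridgeAnt() for _ in range(20)]
for t in range(30):
    raid_passing = t < 15  # the raid crosses for 15 steps, then moves on
    for ant in bridge:
        ant.step(raid_passing and random.random() < 0.9)

remaining = sum(a.in_bridge for a in bridge)
print(remaining)  # → 0: with no traffic left to sense, the bridge disassembles
```

No ant ever receives an instruction to leave; the structure's assembly and disassembly emerge entirely from the one local rule.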
Radhika Nagpal, a professor of computer science at Harvard University who studies robotics and self-organizing biological systems, said that the findings reveal that there is “something much more fundamental about how complex structures are assembled and adapted in nature, and that it is not through a supervisor or planner making decisions.”
Individual ants adjusted to one another’s choices to create a successful structure, despite the fact that each ant didn’t necessarily know everything about the size of the gap or the traffic flow, said Nagpal, who is familiar with the research but was not involved in it.
“The goal wasn’t known ahead of time, but ’emerged’ as the collective continually adapted its solution to the environmental factors,” she said. “The study really opens your eyes to new ways of thinking about collective power, and has tremendous potential as a way to think about engineering systems that are more adaptive and able to solve complex cost-benefit ratios at the network level just through peer-to-peer interactions.”
She compared the ant bridges to human-made bridges that automatically widened to accommodate heavy vehicle traffic or a growing population. While self-assembling road bridges may be a ways off, the example illustrates the potential that technologies built with the same self-assembling capabilities seen in E. hamatum could have.
“There’s a deep interest in creating robots that don’t just rely on themselves, but can exploit the group to do more — and self-assembly is the ultimate in doing more,” Nagpal said. “If you could have small simple robots that were able to navigate complex spaces, but could self-assemble into larger structures — bridges, towers, pulling chains, rafts — when they face something they individually did not have the ability to do, that’s a huge increase in power in what robots would be capable of.”
The gaps that E. hamatum bridges are not dramatic by human standards: small rifts in the leaf cover, or the space between the ends of two sticks. A bridge will be 10 to 20 ants long, which is only a few centimeters, Lutz said. That said, E. hamatum swarms form several bridges during the course of a day, and these can see the back-and-forth of thousands of ants. Many ants pass over a living bridge even as it is assembling.
“The bridges are something that happen numerous times every day. They’re creating bridges to optimize their traffic flow and maximize their time,” Lutz said.
“When you’re moving hundreds of thousands of ants, creating a little shortcut can save a lot of energy,” he said. “This is such a unique behavior. You have other types of ants forming structures out of their bodies, but it’s not such a huge part of their lives and daily behavior.”
The research also included Scott Powell, an army-ant expert and assistant professor of biology at George Washington University; Albert Kao, a postdoctoral fellow at Harvard who received his doctorate in ecology and evolutionary biology from Princeton in 2015; and Simon Garnier, an assistant professor of biological sciences at NJIT who studies swarm intelligence and was once a postdoctoral researcher in Couzin’s lab at Princeton.
To conduct their field experiments, Lutz and Reid constructed a 1.5-foot-tall apparatus with ramps on both sides and adjustable arms in the center with which they could adjust the size of the gap. They then inserted the apparatus into active E. hamatum raiding trails that they found in the jungle in Panama. Because ants follow one another’s chemical scent, Lutz and Reid used sticks and leaves from the ants’ trail to get them to reform their column across the device.
Lutz and Reid observed how the ants formed bridges across gaps that were set at angles of 12, 20, 40 and 60 degrees. They gauged how much travel distance the ants saved with their bridge versus the surface area (in square centimeters) of the bridge itself. Twelve-degree angles shaved off the most distance (around 11 centimeters) while taking up the fewest workers. Sixty-degree angles had the highest cost-to-benefit ratio. Interestingly, the ants were willing to expend members for 20-degree angles, forming bridges up to 8 square centimeters to decrease their travel time by almost 12 centimeters, indicating that the loss in manpower was worth the distance saved.
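The trade-off the researchers measured reduces to simple arithmetic: the benefit is trail distance saved, and the cost is the bridge area of workers withheld from foraging. Using the figures reported above for the 20-degree gaps (a back-of-the-envelope illustration, not a calculation from the paper):

```python
# Benefit-per-cost of a living bridge: centimeters of trail saved per
# square centimeter of bridge (i.e., per unit of workers locked up).

def benefit_per_cost(distance_saved_cm, bridge_area_cm2):
    return distance_saved_cm / bridge_area_cm2

# 20-degree gaps: bridges of up to ~8 cm^2 saved almost 12 cm of travel.
ratio_20deg = benefit_per_cost(12.0, 8.0)
print(ratio_20deg)  # → 1.5 cm of trail saved per cm^2 of bridge
```

The colony keeps extending a bridge only while this ratio stays favorable, which is the "collective computation" Lutz describes.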
Lutz said that future research based on this work might compare these findings to the living bridges of another army ant species, E. burchellii, to determine if the same principles are in action.
The paper, “Army ants dynamically adjust living bridges in response to a cost-benefit trade-off,” was published Nov. 23 by Proceedings of the National Academy of Sciences. The work was supported by the National Science Foundation (grant nos. PHY-0848755, IOS0-1355061 and EAGER IOS-1251585); the Army Research Office (grant nos. W911NG-11-1-0385 and W911NF-14-1-0431); and the Human Frontier Science Program (grant no. RGP0065/2012).
Living with others can offer tremendous benefits for social animals, including primates, but these benefits can come at a high cost. New research from a project that originated at Princeton University reveals that intermediate-sized groups provide the most benefits to wild baboons. The study, led by Catherine Markham at Stony Brook University and published in the journal Proceedings of the National Academy of Sciences, offers new insight into the costs and benefits of group living.
In the paper, titled “Optimal group size in a highly social mammal,” the authors report that while wild baboon groups range in size from 20 to 100 members, intermediate-sized groups of about 50 to 70 individuals exhibit optimal ranging behavior and low physiological stress levels in individual baboons, translating to a social environment that fosters members’ health and well-being. The finding provides novel empirical support for a long-standing theory in evolutionary biology and anthropology that living in intermediate-sized groups has advantages for social mammals.
“Strikingly, we found evidence that intermediate-sized groups have energetically optimal space-use strategies and both large and small groups experience ranging disadvantages,” said Markham, lead author and an assistant professor in the Department of Anthropology at Stony Brook University. “It appears that large, socially dominant groups are constrained by within-group competition whereas small, socially subordinate groups are constrained by between-group competition and/or predation pressures.”
The researchers compiled their findings based on observing five social wild baboon groups in East Africa over 11 years. This population of wild baboons has been studied continuously for over 40 years by the Amboseli Baboon Research Project. They observed and examined the effects of group size and ranging patterns for all of the groups. To gauge stress levels of individuals, they measured the glucocorticoid (stress hormone) levels found in individual waste droppings.
“The combination of an 11-year data set and more intensive short-term data, together with the levels of stress hormones, led to the important finding that there really is a cost to living in too small a group,” said Jeanne Altmann, the Eugene Higgins Professor of Ecology and Evolutionary Biology, Emeritus and a senior scholar at Princeton University. Altmann is a co-director of the Amboseli Baboon Research Project and co-founded the project in 1971 with Stuart Altmann, a senior scholar in the Department of Ecology and Evolutionary Biology at Princeton.
“The cost of living in smaller groups is a concern from a conservation perspective,” Jeanne Altmann said. “Due to the fragmentation of animal habitats, many animals will be living in smaller groups. Understanding these dynamics is one of the next things to study.” The research was supported primarily by the National Science Foundation and the National Institute on Aging.
Markham, who earned her Ph.D. at Princeton University in 2012 with Jeanne Altmann as her thesis adviser, explained that for highly social species, the key to the analysis is how trade-offs are balanced, and whether those trade-offs actually result in an optimal group size.
She said that their findings provide a testable hypothesis for evaluating group-size constraints in other group-living species in which the costs of intra- and intergroup competition vary as a function of group size. The findings also have implications for new research and for a broader understanding of both why some animals live with others and how many neighbors are best for various species and situations.
The research was conducted in collaboration with Susan Alberts, a professor of biology at Duke University and co-director of the Amboseli Baboon Research Project, and with Laurence Gesquiere, a former postdoctoral researcher at Princeton who is now a senior research scientist working with Alberts. Altmann and Alberts are also affiliated with the Institute for Primate Research, National Museums of Kenya.
Additional support was provided by the American Society of Primatologists, the Animal Behavior Society, the International Primatological Society, and Sigma Xi.
A. Catherine Markham, Laurence R. Gesquiere, Susan C. Alberts and Jeanne Altmann. “Optimal group size in a highly social mammal.” Proceedings of the National Academy of Sciences, published online Oct. 26, 2015. doi: 10.1073/pnas.1517794112.
Scientists have predicted a new phase of superionic ice, a special form of ice that could exist on Uranus and Neptune, in a theoretical study performed by a team of researchers at Princeton University.
“Superionic ice is this in-between state of matter that we can’t really relate to anything we know of — that’s why it’s interesting,” said Salvatore Torquato, a professor of chemistry who jointly led the work with Roberto Car, the Ralph W. ’31 Dornte Professor in Chemistry. Unlike water or regular ice, superionic ice is made up of water molecules that have dissociated into charged atoms called ions, with the oxygen ions locked in a solid lattice and the hydrogen ions moving like the molecules in a liquid.
Published on August 28 in Nature Communications, the research revealed an entirely new type of superionic ice that the investigators call the P21/c-SI phase, which occurs at pressures even higher than those found in the interior of the giant ice planets of our solar system. Two other phases of superionic ice thought to exist on the planets are body-centered cubic superionic ice (BCC-SI) and close-packed superionic ice (CP-SI).
Each phase has a unique arrangement of oxygen ions that gives rise to distinct properties. For example, each of the phases allows hydrogen ions to flow in a characteristic way. The effects of this ionic conductivity may someday be observed by planetary scientists in search of superionic ice. “These unique properties could essentially be used as signatures of superionic ice,” said Torquato. “Now that you know what to look for, you have a better chance of finding it.”
Unlike Earth, which has two magnetic poles (north and south), ice giants can have many local magnetic poles, which leading theories suggest may be due to superionic ice and ionic water in the mantle of these planets. In ionic water both oxygen and hydrogen ions show liquid-like behavior. Scientists have proposed that heat emanating outward from the planet’s core may pass through an inner layer of superionic ice, and through convection, create vortices on the outer layer of ionic water that give rise to local magnetic fields.
By using theoretical simulations, the researchers were able to model states of superionic ice that would be difficult to study experimentally. They simulated pressures that were beyond the highest possible pressures attainable in the laboratory with instruments called diamond anvil cells. Extreme pressure can be achieved through shockwave experiments but these rely on creating an explosion and are difficult to interpret, Professor Car explained.
The researchers calculated the ionic conductivity of each phase of superionic ice and found unusual behavior at the transition where the low-temperature crystal, in which both oxygen and hydrogen ions are locked in place, transforms into superionic ice. In known superionic materials, the conductivity generally changes either abruptly (type I) or gradually (type II) at this transition, with the type of change specific to the material. Superionic ice, however, breaks from convention: its conductivity changes abruptly with temperature at the crystal-to-close-packed-superionic transition, but continuously at the crystal-to-P21/c-SI transition.
As a foundational study, the research treated the ions as classical particles; in future studies the team plans to take quantum effects into account to further understand the properties of the material.
by Angela Page for the Princeton Environmental Institute
In 2011, an influx of remote sensing data from satellites scanning the African savannas revealed a mystery: these rolling grasslands, with their heavy rainfalls and spells of drought, were home to significantly fewer trees than researchers had previously expected given the biome’s high annual precipitation. In fact, the 2011 study found that the more instances of heavy rainfall a savanna received, the fewer trees it had.
This paradox may finally have a solution thanks to new work from Princeton University recently published in the Proceedings of the National Academy of Sciences. In the study, researchers use mathematical equations to show that physiological differences between trees and grasses are enough to explain the curious phenomenon.
“A simple way to view this is to think of rainfall as annual income,” said Xiangtao Xu, a doctoral candidate in David Medvigy’s lab and first author on the paper. “Trees and grasses are competing over the amount of money the savanna gets every year and it matters how they use their funds.” Xu explained that when the bank is full and there is a lot of rain, the grasses, which build relatively cheap structures, thrive. When there is a deficit, the trees suffer less than grasses and therefore win out.
To establish these findings, Xu and his Princeton collaborators Medvigy, an assistant professor of geosciences, and Ignacio Rodriguez-Iturbe, a professor of civil and environmental engineering, created a numerical model that mimics the actual mechanistic functions of the trees and grasses. “We put in equations for how they photosynthesize, how they absorb water, how they steal water from each other — and then we coupled it all with a stochastic rainfall generator,” said Xu.
Whereas former analyses only considered total annual or monthly rainfall, understanding how rainfall is distributed across the days is critical here, Xu said, because it determines who will win in a competition between grasses and trees for the finite resource of water availability.
The stochastic rainfall generator draws on rainfall parameters derived from station observations across the savanna. By coupling it with the mechanistic equations describing how the trees and grasses function, the team was able to observe how the plants would respond under different local climate conditions.
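The structure of that coupling can be sketched schematically. In the toy version below, every parameter is invented for illustration — the actual model solves mechanistic equations for photosynthesis and water uptake — but the two pieces Xu describes are both present: a stochastic daily rainfall generator, and a water-use competition in which grasses absorb water quickly and cheaply while trees take up water slowly but tolerate dry spells.

```python
import random

# Schematic sketch of the model's two coupled pieces (all numbers invented):
# (1) stochastic rainfall, (2) competition for the resulting soil water.

def daily_rain(storm_prob, mean_depth_mm, rng):
    """Stochastic rainfall: a storm on any given day, with a random depth."""
    return rng.expovariate(1.0 / mean_depth_mm) if rng.random() < storm_prob else 0.0

def simulate(days, storm_prob, mean_depth_mm, seed=0):
    rng = random.Random(seed)
    soil = 0.0
    grass, tree = 1.0, 1.0                      # relative "biomass" scores
    for _ in range(days):
        soil = min(soil + daily_rain(storm_prob, mean_depth_mm, rng), 60.0)
        g_use = min(soil, 6.0); soil -= g_use   # grass: fast, shallow uptake
        t_use = min(soil, 2.0); soil -= t_use   # tree: slower uptake, leftovers
        grass = max(grass + 0.010 * g_use - 0.030, 0.0)  # cheap tissue, costly upkeep
        tree  = max(tree  + 0.004 * t_use - 0.004, 0.0)  # drought-hardy, low upkeep
    return grass, tree

# Same expected annual total (~365 mm), delivered two ways:
few_big    = simulate(365, storm_prob=0.05, mean_depth_mm=20.0)  # rare, intense storms
many_small = simulate(365, storm_prob=0.50, mean_depth_mm=2.0)   # frequent light rain
print(few_big, many_small)
```

Holding the annual total fixed while varying the storm pattern is exactly the experiment the paragraph above describes: it isolates rainfall intensity, rather than rainfall amount, as the variable that decides the competition.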
The research team found that under very wet conditions, grasses have an advantage because they can quickly absorb water and support high photosynthesis rates. Trees, with their tougher leaves and roots, are able to survive better in dry periods because of their ability to withstand water stress. But this amounts to a disadvantage for trees in periods of intense rainfall, as they are comparatively less effective at utilizing the newly abundant water.
“We put realistic rainfall schemes into the model, then generated corresponding grass or tree abundance, and compared the numerical results with real-world observations,” Xu said. If the model’s output matched the real-world data, they could say it offered a viable explanation for the unexpected phenomenon, which traditional models cannot reproduce — and that is exactly what they found. They tested the model using field measurements from a well-studied savanna in Nylsvley, South Africa, and nine other sites along the Kalahari Transect, as well as remote sensing data across the whole continent. At each site, the model accurately predicted the observed tree abundance.
The work rejects the long-held theory of root niche separation, which predicts that trees will outcompete grasses under intense rainfall when the soil becomes saturated, because their heavy roots penetrate deeper into the ground. “But this ignores the fact that grasses and trees have different abilities for absorbing and utilizing water,” Xu said. “And that’s one of the most important parts of what we found. Grasses are more efficient at absorbing water, so in a big rainfall event, grasses win.”
“Models are developed to understand and predict the past and present state — they offer a perspective on future states given the shift in climatic conditions,” said Gaby Katul, a professor of hydrology and micrometeorology in the Nicholas School of the Environment at Duke University, who was not involved in the research. “This work offers evidence of how shifts in rainfall affect the tree-grass interaction because rainfall variations are large. The approach can be used not only to ‘diagnose’ the present state where rainfall pattern variations dominate but also offers a ‘prognosis’ as to what may happen in the future.”
Several high-profile papers over the last decade predict that periods of intense rainfall like those described in the paper will become more frequent around the globe, especially in tropical areas, Xu said. His work suggests that these global climate changes will eventually lead to diminished tree abundance on the savannas.
“Because the savanna takes up a large area, which is home to an abundance of both wild animals and livestock, this will influence many people who live in those areas,” Xu said. “It’s important to understand how the biome would change under global climate change.”
Furthermore, the study highlights the importance of understanding the structure and pattern of rainfall, not just the total annual precipitation—which is where most research in this area has traditionally focused. Fifty years from now, a region may still experience the same overall depth of precipitation, but if the intensity has changed, that will induce changes to the abundance of grasses and trees. This, in turn, will influence the herbivores that subsist on them, and other animals in the biome — essentially, affecting the entire complex ecosystem.
Xu said it would be difficult to predict whether such changes would have positive or negative impacts. But he did say that more grasses mean more support for cows and horses and other herbivores. On the other hand, fewer trees mean less CO2 is captured out of the atmosphere, as well as diminished habitat for birds and other animals that rely on the trees for survival.
What the model does offer is an entry point for better policies and decisions to help communities adapt to future changes. “It’s just like with the weather,” Xu said. “If you don’t read the weather report, you have to take what nature gives you. But if you know in advance that it will rain tomorrow, you know to bring an umbrella.”
Xiangtao Xu, David Medvigy and Ignacio Rodriguez-Iturbe. “Relation between rainfall intensity and savanna tree abundance explained by water use strategies.” Proceedings of the National Academy of Sciences, published online Sept. 29, 2015. doi: 10.1073/pnas.1517382112.