New research will help forecast bad ozone days over the western U.S. (Nature Communications)

The contribution of stratospheric ozone to US surface ozone peaks in the western Rockies during late spring. This map shows mean contribution in parts per billion by volume (ppbv) for May to June. Credit: NOAA

New research published in Nature Communications, led by Meiyun Lin of NOAA’s Geophysical Fluid Dynamics Laboratory and NOAA’s cooperative institute at Princeton University, reveals a strong connection between high ozone days in the western U.S. during late spring and La Niña, an ocean-atmosphere phenomenon that affects global weather patterns.

Recognizing this link offers an opportunity to forecast ozone several months in advance, which could improve public education to reduce health effects. It would also help western U.S. air quality managers prepare to track these events, which can have implications for attaining the national ozone standard.

Exposure to ozone is harmful to human health: it can cause breathing difficulty, coughing, scratchy and sore throats, and asthma attacks, and it can damage sensitive plants.

NOAA scientists used a lidar aboard this Twin Otter aircraft to study the movement of ozone from the stratosphere to the lower atmosphere above California in 2010. Credit: NOAA

“Ozone in the stratosphere, located 6 to 30 miles (10 to 48 kilometers) above the ground, typically stays in the stratosphere,” said Lin, an associate research scholar in the Program in Atmospheric and Oceanic Sciences at Princeton University. “But not on some days in late spring following a strong La Niña winter. That’s when the polar jet stream meanders southward over the western U.S. and facilitates intrusions of stratospheric ozone to ground level where people live.”

Over the last two decades, there have been three strong La Niña events: 1998-1999, 2007-2008 and 2010-2011. After each of these events, scientists saw spikes in ground-level ozone, lasting two to three days at a time, during late spring at high-altitude locations in the U.S. West.

While high ozone typically occurs on muggy summer days when pollution from cars and power plants fuels the formation of regional ozone pollution, high-altitude regions of the U.S. West sometimes have a different source of high ozone levels in late spring. On these days, strong gusts of cold dry air associated with downward transport of ozone from the stratosphere pose a risk to these communities.

Lin and her colleagues found that these deep intrusions of stratospheric ozone could add 20 to 40 parts per billion to the ground-level ozone concentration, enough to provide over half the ozone needed to exceed the standard set by the U.S. Environmental Protection Agency. The EPA has proposed tightening that standard, currently set at 75 parts per billion for an eight-hour average, to between 65 and 70 parts per billion.
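The arithmetic behind that “over half” figure is easy to check. A quick illustrative sketch in Python, using only the numbers quoted above:

```python
# Share of the EPA 8-hour ozone standard (75 ppb) that a deep
# stratospheric intrusion can supply, per the 20-40 ppb range above.
STANDARD_PPB = 75

for intrusion_ppb in (20, 40):
    share = intrusion_ppb / STANDARD_PPB
    print(f"{intrusion_ppb} ppb intrusion = {share:.0%} of the standard")
# -> 20 ppb intrusion = 27% of the standard
# -> 40 ppb intrusion = 53% of the standard
```

At the upper end of the range, an intrusion alone supplies just over half of the ozone needed to exceed the current standard, consistent with the study’s finding.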

In the spring after La Niña winters, when the polar jet stream meanders southward over the western US, it facilitates intrusions of stratospheric ozone to ground level where people live. Credit: NOAA

Under the Clean Air Act, these deep stratospheric ozone intrusions can be classified as “exceptional events” that are not counted toward EPA attainment determinations. As the national ozone standard becomes more stringent, the relative importance of these stratospheric intrusions grows, leaving less room for human-caused emissions before an area exceeds the level set by the EPA.

“Regardless of whether these events count towards non-attainment, people are living in these regions and the possibility of predicting a high-ozone season might allow for public education to minimize adverse health effects,” said Arlene Fiore, an atmospheric scientist at Columbia University and a co-author of the research.

Predicting where and when stratospheric ozone intrusions may occur would also provide time to deploy air sensors to gather evidence on how much ground-level ozone can be attributed to these naturally occurring intrusions and how much is due to human-caused emissions.

The study involved collaboration across two NOAA laboratories, NOAA’s cooperative institutes at Princeton and the University of Colorado Boulder, and scientists at partner institutions in the U.S., Canada and Austria. It was also supported in part by the NASA Air Quality Applied Sciences Team whose mission is to apply earth science data to help address air quality management needs.

“This study brings together observations and chemistry-climate modeling to help understand the processes that contribute to springtime high-ozone events in the western U.S.,” said Andrew Langford, an atmospheric scientist at NOAA’s Earth System Research Laboratory in Boulder, Colorado, whose teams measure ozone concentrations using lidar and balloon-borne sensors.

“You’ve heard about good ozone, the kind found high in the stratosphere that protects the earth from harmful ultraviolet radiation,” said Langford. “And you’ve heard about bad ozone at ground level. This study looks at the factors that cause good ozone to go bad.”

Lin, Fiore and Langford conducted the research with Larry Horowitz of NOAA’s Geophysical Fluid Dynamics Laboratory; Samuel Oltmans of the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder, who works in NOAA’s Earth System Research Laboratory; David Tarasick of Environment Canada; and Harald Rieder of the University of Graz in Austria.

Read the article in Nature Communications.


Meiyun Lin, Arlene M. Fiore, Larry W. Horowitz, Andrew O. Langford, Samuel J. Oltmans, David Tarasick and Harald E. Rieder. Climate variability modulates western US ozone air quality in spring via deep stratospheric intrusions. Nature Communications 6, Article 7105. doi:10.1038/ncomms8105

Courtesy of National Oceanic and Atmospheric Administration (NOAA) Communications & External Affairs

Dissecting the ocean’s unseen waves to learn where the heat, energy and nutrients go (Nature)

By Morgan Kelly, Office of Communications

Sonya Legg, a senior research oceanographer in the Program in Atmospheric and Oceanic Sciences at Princeton University, and colleagues from collaborating institutions created the first “cradle to grave” model of the world’s most powerful internal ocean waves.

Beyond the pounding surf loved by novelists and beachgoers alike, the ocean contains rolling internal waves beneath the surface that displace massive amounts of water and push heat and vital nutrients up from the deep ocean.

Internal waves have long been recognized as essential components of the ocean’s nutrient cycle, and as key to how oceans will store and distribute the additional heat brought on by global warming. Yet until now, scientists have not had a thorough understanding of how internal waves start, move and dissipate.

Researchers from the Office of Naval Research’s multi-institutional Internal Waves In Straits Experiment (IWISE) have published in the journal Nature the first “cradle-to-grave” model of the world’s most powerful internal waves. Caused by the tide, the waves move through the Luzon Strait, between southern Taiwan and the Philippine island of Luzon, which connects the Pacific Ocean to the South China Sea.

Simulation of waves in Luzon Strait

The complexity of the Luzon Strait’s two-ridge system was not previously known. The Princeton researchers’ simulations showed that the two ridges of the Luzon Strait greatly amplify the size and energy of the wave, well beyond the sum of what the two ridges would generate separately. The simulation above of the tide moving over the second, or western, ridge shows that the tidally-driven flow reaches a high velocity (top) as it moves down the slope (left to right), creating a large wave in density (black lines) with concentrated turbulent energy dissipation (bottom). As the tide moves back over the ridge, the turbulence is swept away. For both the velocity and energy dissipation panels, the color scale indicates the greatest velocity or energy (red) to the least amount (blue). (Image by Maarten Buijsman, University of Southern Mississippi)

Combining computer models constructed largely by Princeton University researchers with shipboard observations, the researchers determined the movement and energy of the waves from their origin on a double ridge between Taiwan and the Philippines to where they fade off the coast of China. Known to provide nutrients for whales and to pose a hazard to shipping, the Luzon Strait internal waves move west at speeds as fast as 3 meters (about 10 feet) per second and can be as much as 500 meters (1,640 feet) from trough to crest, the researchers found.
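The unit conversions in those figures can be verified directly (illustrative Python; standard meters-to-feet conversion only):

```python
M_PER_FT = 0.3048  # meters per foot

speed_ms = 3.0    # westward wave speed, m/s
height_m = 500.0  # trough-to-crest displacement, m

print(f"{speed_ms} m/s = {speed_ms / M_PER_FT:.1f} ft/s")   # -> 9.8 ft/s
print(f"{height_m:.0f} m = {height_m / M_PER_FT:.0f} ft")   # -> 1640 ft
```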

The Luzon Strait internal waves provide an ideal archetype for understanding internal waves, explained co-author Sonya Legg, a Princeton senior research oceanographer in the Program in Atmospheric and Oceanic Sciences and a lecturer in geosciences. The distance from the Luzon Strait to China is relatively short — compared, say, to the Hawaiian internal waves that cross the Pacific to Oregon — and the South China Sea is relatively free of obstructions such as islands, crosscurrents and eddies, Legg said. Not only did these factors make the waves much more manageable to model and study in the field, but they also yielded a clearer understanding of wave dynamics that can be applied to internal waves elsewhere in the ocean, she said.

Model of internal waves

Researchers from the Office of Naval Research’s multi-institutional Internal Waves In Straits Experiment (IWISE) — including from Princeton University — have published the first “cradle-to-grave” model of internal waves, which are subsurface ocean displacements recognized as essential to the distribution of nutrients and heat. The researchers modeled the internal waves that move through the Luzon Strait between southern Taiwan and the Philippine island of Luzon. Part of the Princeton researchers’ role was to simulate when and where the Luzon Strait’s internal waves are strongest as the tide moves westward from the Pacific Ocean into the South China Sea over a unique double-ridge formation in the strait. The above image shows the two underwater ridges — indicated in green, orange and red — between Taiwan (top) and the island of Luzon (bottom). The color scale indicates elevation from lowest (blue) to highest (red). (Image by Maarten Buijsman, University of Southern Mississippi)

“We know there are these waves in other parts of the ocean, but they’re hard to look at because there are other things in the way,” Legg said. “The Luzon Strait waves are in a mini-basin, so instead of the whole Pacific to focus on, we had this small sea — it’s much more manageable. It’s a place you can think of as a laboratory in the ocean that’s much simpler than other parts of the ocean.”

Legg and co-author Maarten Buijsman, who worked on the project while a postdoctoral researcher at Princeton and is now an assistant professor of physical oceanography at the University of Southern Mississippi, created computer simulations of the Luzon Strait waves that the researchers in the South China Sea used to determine the best locations to gather data.

For instance, Legg and Buijsman used their models to pinpoint where and when the waves begin with the most energy as the ocean tide crosses westward over the strait’s two underwater ridges. Notably, their models showed that the two ridges greatly amplify the size and energy of the wave, well beyond the sum of what the two ridges would generate separately. The complexity of a two-ridge system was not previously known, Legg said.

The energy coming off the strait’s two ridges steepens as it moves toward China, evolving from a gentle rolling wave into a steep “saw-tooth” pattern, Legg said. These are the kinds of data the researchers sought to gather: where the energy behind internal waves goes and how it changes along the way. How an internal wave’s energy is dissipated determines the amount of heat and nutrients that are transferred from the cold depths of the lower ocean to the warm surface waters, or vice versa.

Models used to project conditions on an Earth warmed by climate change especially need to consider how the ocean will move excess heat around, Legg said. Heat that stays at the surface will ultimately result in greater sea-level rise, because warm water expands more readily as it heats up. The cold water of the deep, however, expands less for the same input of heat, so if heat goes to the deep ocean, that could greatly increase how much heat the oceans can absorb without a comparable rise in sea level, Legg said.
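Legg’s point about surface versus deep heat storage can be sketched with the linear thermal-expansion relation Δh = α·h·ΔT. The expansion coefficients below are rough textbook values for seawater, not numbers from the study:

```python
# Sea-level rise from warming a single ocean layer: dh = alpha * h * dT.
# alpha (thermal expansion coefficient) is several times larger for warm
# surface water than for cold deep water -- approximate values only.
ALPHA_WARM = 3.0e-4  # 1/K, surface water near 25 C (approximate)
ALPHA_COLD = 0.5e-4  # 1/K, deep water near 2 C (approximate)

def rise_mm(alpha_per_k, layer_m, warming_k):
    """Sea-level rise (mm) from uniformly warming one layer."""
    return alpha_per_k * layer_m * warming_k * 1000.0

# Add the same 0.1 K of warming to a 700 m layer in each case:
print(f"surface layer: {rise_mm(ALPHA_WARM, 700.0, 0.1):.1f} mm")  # -> 21.0 mm
print(f"deep layer:    {rise_mm(ALPHA_COLD, 700.0, 0.1):.1f} mm")  # ->  3.5 mm
```

The same heat produces roughly six times the sea-level rise when stored in the warm surface layer, under these assumed coefficients.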

As researchers learn more about internal waves such as those in the Luzon Strait, climate models can be tested against what becomes known about ocean mechanics to more accurately project conditions on a warmer Earth, she said.

“Ultimately, we want to know what effect the transportation and storage of heat has on the ocean. Internal waves are a significant piece in the puzzle in telling us where heat is stored,” Legg said. “We have in the Luzon Strait an oceanic laboratory where we can test our theoretical models and simulations to see them play out on a small scale.”

This work was supported by the U.S. Office of Naval Research and the Taiwan National Science Council.

Read the abstract

Matthew H. Alford et al. The formation and fate of internal waves in the South China Sea. Nature. Published online ahead of print May 7, 2015. DOI: 10.1038/nature14399



An improvement to the global software standard for analyzing fusion plasmas (Nuclear Fusion)

By Raphael Rosen, Princeton Plasma Physics Laboratory

The gold standard for analyzing the behavior of fusion plasmas may have just gotten better. Mario Podestà, a staff physicist at the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL), has updated the worldwide computer program known as TRANSP to better simulate the interaction between energetic particles and instabilities – disturbances in plasma that can halt fusion reactions. The program’s updates, reported in the journal Nuclear Fusion, could lead to improved capability for predicting the effects of some types of instabilities in future facilities such as ITER, the international experiment under construction in France to demonstrate the feasibility of fusion power.

Podestà and his co-authors saw a need for better modeling techniques when they noticed that while TRANSP could accurately simulate an entire plasma discharge, the code was not able to properly represent the interaction between energetic particles and instabilities. The reason was that TRANSP, which PPPL developed and regularly updates, treated all fast-moving particles within the plasma the same way. Instabilities, however, can affect different parts of the plasma in different ways through so-called “resonant processes.”

The authors first figured out how to condense information from other codes that do model the interaction accurately – albeit over short time periods – so that TRANSP could incorporate that information into its simulations. Podestà then teamed up with TRANSP developer Marina Gorelenkova at PPPL to update a TRANSP module called NUBEAM to enable it to make sense of this condensed data. “Once validated, the updated module will provide a better and more accurate way to compute the transport of energetic particles,” said Podestà. “Having a more accurate description of the particle interactions with instabilities can improve the fidelity of the program’s simulations.”

Schematic of NSTX tokamak at PPPL with a cross-section showing perturbations of the plasma profiles caused by instabilities. Without instabilities, energetic particles would follow closed trajectories and stay confined inside the plasma (blue orbit). With instabilities, trajectories can be modified and some particles may eventually be pushed out of the plasma boundary and lost (red orbit). Credit: Mario Podestà

Fast-moving particles, which result from neutral beam injection into tokamak plasmas, cause the instabilities that the updated code models. These particles begin their lives as electrically neutral atoms but are ionized inside the plasma into negatively charged electrons and positively charged ions, or atomic nuclei. This scheme is used to heat the plasma and to drive part of the electric current that completes the magnetic field confining the plasma.

The improved simulation tool may have applications for ITER, which will use fusion end-products called alpha particles to sustain high plasma temperatures. But just like the neutral-beam particles in current-day tokamaks, alpha particles could cause instabilities that degrade the yield of fusion reactions. “In present research devices, only very few, if any, alpha particles are generated,” said Podestà. “So we have to study and understand the effects of energetic ions from neutral beam injectors as a proxy for what will happen in future fusion reactors.”

PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, visit the PPPL website.

Read the paper

M. Podestà, M. Gorelenkova, D.S. Darrow, E.D. Fredrickson, S.P. Gerhardt and R.B. White. Nucl. Fusion 55, 053018

Decoding the Cell’s Genetic Filing System (Nature Chemistry)

By Tien Nguyen, Department of Chemistry

A fully extended strand of human DNA measures about five feet in length. Yet it occupies a space just one-tenth the size of a cell by wrapping itself around histones—spool-like proteins—to form a dense hub of information called chromatin.
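The degree of compaction involved is easy to estimate. Assuming a nucleus roughly 6 micrometers across (a typical textbook value, not a figure from the article), illustrative Python:

```python
# Linear compaction: ~5 ft of DNA packed into a ~6-micrometer nucleus.
dna_length_m = 5 * 0.3048     # five feet in meters (~1.52 m)
nucleus_diameter_m = 6e-6     # typical human nucleus diameter (assumed)

factor = dna_length_m / nucleus_diameter_m
print(f"linear compaction: ~{factor:,.0f}-fold")  # -> ~254,000-fold
```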

Access to these meticulously packed genes is regulated by post-translational modifications, chemical changes to the structure of histones that act as on-off signals for gene transcription. Mistakes or mutations in histones can cause diseases such as glioblastoma, a devastating pediatric brain cancer.

Source: Nature Chemistry

Researchers at Princeton University have developed a facile method to introduce non-native chromatin into cells to interrogate these signaling pathways. Published on April 6 in the journal Nature Chemistry, this work is the latest chemical contribution from the Muir lab towards understanding nature’s remarkable information indexing system.

Tom Muir, the Van Zandt Williams, Jr. Class of ’65 Professor of Chemistry, began investigating transcriptional pathways in the field of epigenetics almost a decade ago. Deciphering such a complex and dynamic system posed a formidable challenge, but his research lab was undeterred. “It’s better to fail at something important than to succeed at something trivial,” he said.

Muir recognized the value of introducing chemical approaches to epigenetics to complement early contributions that came mainly from molecular biologists and geneticists. If epigenetics was like a play, he said, molecular biology and genetics could identify the characters but chemistry was needed to understand the subplots.

These subplots, or post-translational modifications of histones, of which there are more than 100, can occur cooperatively and simultaneously. Traditional methods to probe post-translational modifications involved synthesizing modified histones one at a time, which was a very slow process that required large amounts of biological material.

Last year, the Muir group introduced a method that massively accelerates this process. The researchers generated a library of 54 nucleosomes—single units of chromatin, like pearls on a necklace—encoded with DNA barcodes, unique genetic tags that can be easily identified. Published in the journal Nature Methods, the high-throughput method required only microgram amounts of each nucleosome to run approximately 4,500 biochemical assays.

“The speed and sensitivity of the assay was shocking,” Muir said. Each biochemical assay involved treatment of the DNA-barcoded nucleosome with a writer, reader or nuclear extract, to reveal a particular binding preference of the histone. The products were then isolated using a technique called chromatin immunoprecipitation and characterized by DNA sequencing, essentially an ordered readout of the nucleotides.
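The readout logic of such a barcoded assay can be sketched in a few lines. The barcodes and histone marks below are hypothetical, for illustration only:

```python
# Toy model of a DNA-barcoded nucleosome readout: each sequencing read
# starts with a barcode identifying which modified nucleosome it came
# from; counting barcodes in the pulled-down pool ranks binding preference.
from collections import Counter

# Hypothetical barcode -> histone-mark table (not the actual library).
BARCODES = {"ACGTAC": "H3K4me3", "TTGCAA": "H3K27ac", "GGATCC": "unmodified"}

def count_enrichment(reads):
    """Count reads per histone mark, keyed by the leading 6-nt barcode."""
    counts = Counter()
    for read in reads:
        mark = BARCODES.get(read[:6])
        if mark is not None:
            counts[mark] += 1
    return counts

reads = ["ACGTACTTTT", "ACGTACGGGG", "TTGCAAAAAA", "GGATCCCCCC"]
print(count_enrichment(reads))
# -> Counter({'H3K4me3': 2, 'H3K27ac': 1, 'unmodified': 1})
```

In the real assay, thousands of such counts across writers, readers and nuclear extracts map out binding preferences in parallel.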

“There have been incredible advances in genetic sequencing over the last 10 years that have made this work possible,” said Manuel Müller, a postdoctoral researcher in the Muir lab and co-author on the Nature Methods article.

Schematic of approach using split inteins

With this method, researchers could systematically interrogate the signaling system to propose mechanistic pathways. But these mechanistic insights would remain hypotheses unless they could be validated in vivo, meaning inside the cellular environment.

The only method for modifying histones in vivo was extremely complicated and specific, said Yael David, a postdoctoral researcher in the Muir lab and lead author on the recent Nature Chemistry study that demonstrated a new and easily customizable approach.

The method relied on using ultra-fast split inteins, protein fragments that have a great affinity for one another. First, one intein fragment was attached to a modified histone, by encoding it into a cell. Then, the other intein fragment was synthetically fused to a label, which could be a small protein tag, fluorophore or even an entire protein like ubiquitin.

Within minutes of being introduced into the cell, the labeled intein fragment bound to the histone intein fragment. Then like efficient and courteous matchmakers, the inteins excised themselves and created a new bond between the label and modified histone. “It’s really a beautiful way to engineer proteins in a cell,” David said.

Regions of the histone may be loosely or tightly packed, depending on signals from the cell indicating whether or not to transcribe a gene. By gradually lowering the amount of labeled intein introduced, the researchers could learn about the structure of chromatin and tease out which areas were more accessible than others.

Future plans in the Muir lab will employ these methods to ask specific biological questions, such as whether disease outcomes can be altered by manipulating signaling pathways. “Ultimately, we’re developing methods at the service of biological questions,” Muir said.

This research was supported by the US National Institutes of Health (grants R37-GM086868 and R01 GM107047).

Read the articles:

Nguyen, U.T.T.; Bittova, L.; Müller, M.; Fierz, B.; David, Y.; Houck-Loomis, B.; Feng, V.; Dann, G.P.; Muir, T.W. “Accelerated chromatin biochemistry using DNA-barcoded nucleosome libraries.” Nature Methods, 2014, 11, 834.

David, Y.; Vila-Perelló, M.; Verma, S.; Muir, T.W. “Chemical tagging and customizing of cellular chromatin states using ultrafast trans-splicing inteins.” Nature Chemistry, advance online publication, April 6, 2015.

Frustrated magnets – new experiment reveals clues to their discontent (Science)

By Catherine Zandonella, Office of the Dean for Research

A crystal of frustrated magnet (Tb2Ti2O7). Image credit: Jason Krizan.

An experiment conducted by Princeton researchers has revealed an unexpected behavior in a class of materials called frustrated magnets, addressing a long-debated question about the nature of these discontented quantum materials.

The work represents a surprising discovery that down the road may suggest new research directions for advanced electronics. Published this week in the journal Science, the study also someday may help clarify the mechanism of high-temperature superconductivity, the frictionless transmission of electricity.

The researchers tested the frustrated magnets — so-named because they should be magnetic at low temperatures but aren’t — to see if they exhibit a behavior called the Hall Effect. When a magnetic field is applied to an electric current flowing in a conductor such as a copper ribbon, the current deflects to one side of the ribbon. This deflection, first observed in 1879 by E.H. Hall, is used today in sensors for devices such as computer printers and automobile anti-lock braking systems.
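For the classical picture in numbers: the Hall voltage across a conducting ribbon is V_H = I·B/(n·q·t), a standard textbook relation. The values below are illustrative, not from the study:

```python
# Classical Hall voltage for a copper ribbon, V_H = I*B / (n*q*t).
I = 1.0        # current through the ribbon, A
B = 1.0        # perpendicular magnetic field, T
n = 8.5e28     # carrier density of copper, electrons per m^3
q = 1.602e-19  # elementary charge, C
t = 1e-4       # ribbon thickness, m (0.1 mm)

V_H = I * B / (n * q * t)
print(f"V_H = {V_H:.2e} V")  # -> V_H = 7.34e-07 V (under a microvolt)
```

The smallness of this signal even in a good metal hints at why detecting an analogous deflection of a heat current, carried by neutral excitations, is so delicate.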

Because the Hall Effect happens in charge-carrying particles, most physicists thought it would be impossible to see such behavior in non-charged, or neutral, particles like those in frustrated magnets. “To talk about the Hall Effect for neutral particles is an oxymoron, a crazy idea,” said N. Phuan Ong, Princeton’s Eugene Higgins Professor of Physics.

Nevertheless, some theorists speculated that the neutral particles in frustrated magnets might bend to the Hall rule under extremely cold conditions, near absolute zero, where particles behave according to the laws of quantum mechanics rather than the classical physical laws we observe in our everyday world. Harnessing quantum behavior could enable game-changing innovations in computing and electronic devices.

Ong and colleague Robert Cava, Princeton’s Russell Wellman Moore Professor of Chemistry, and their graduate students Max Hirschberger and Jason Krizan decided to see if they could settle the debate and demonstrate conclusively that the Hall Effect exists for frustrated magnets.

To do so, the research team turned to a class of the magnets called pyrochlores. They contain magnetic moments that, at very low temperatures near absolute zero, should line up in an orderly manner so that all of their “spins,” a quantum-mechanical property, point in the same direction. Instead, experiments have found that the spins point in random directions. These frustrated materials are also referred to as “quantum spin ice.”

“These materials are very interesting because theorists think the tendency for spins to align is still there, but, due to a concept called geometric frustration, the spins are entangled but not ordered,” Ong said. Entanglement is a key property of quantum systems that researchers hope to harness for building a quantum computer, which could solve problems that today’s computers cannot handle.

A chance conversation in a hallway between Cava and Ong revealed that Cava had the know-how and experimental infrastructure to make such materials. He tasked chemistry graduate student Krizan with growing the crystals while Hirschberger, a graduate student in physics, set up the experiments needed to look for the Hall Effect.

Graduate student Max Hirschberger lowers the assembled experimental setup into a high-field magnet system, capable of creating fields as strong as 250,000 times the earth's magnetic field.  (Image credit: Jason Krizan.)

“The main challenge was how to measure the Hall Effect at an extremely low temperature where the quantum nature of these materials comes out,” Hirschberger said. The experiments were performed at temperatures of 0.5 kelvin, and required Hirschberger to resolve temperature differences as small as a thousandth of a degree between opposite edges of a crystal.

To grow the crystals, Krizan first synthesized the material from terbium oxide and titanium oxide in a furnace similar to a kiln. After forming the pyrochlore powder into a cylinder suitable for feeding the crystal growth, Krizan suspended it in a chamber filled with pure oxygen and blasted it with enough focused light from four 1,000-watt halogen light bulbs to heat a small region to 1,800 degrees Celsius. The final products were thin, flat, transparent orange slabs about the size of a sesame seed.

To test each crystal, Hirschberger attached tiny gold electrodes to either end of the slab, using microheaters to drive a heat current through the crystal. At such low temperatures, this heat current is analogous to the electric current in the ordinary Hall Effect experiment.

At the same time, he applied a magnetic field perpendicular to the heat current. To his surprise, he saw that the heat current was deflected to one side of the crystal. He had observed the Hall Effect in a material with no mobile charged particles.

Surprised by the results, Ong suggested that Hirschberger repeat the experiment, this time by reversing the direction of the heat current. If Hirschberger was really seeing the Hall Effect, the current should deflect to the opposite side of the crystal. Reconfiguring the experiment at such low temperatures was not easy, but eventually he demonstrated that the signal did indeed reverse in a manner consistent with the Hall Effect.
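A common way to separate a genuine Hall signal from background in such measurements is to exploit exactly this sign reversal: combine readings taken with the drive in opposite directions and keep only the antisymmetric part. A toy sketch, illustrative only and not a claim about this team’s exact procedure:

```python
# The Hall signal is odd under reversal, while backgrounds from, e.g.,
# slightly misaligned contacts are even; antisymmetrizing isolates it.
def antisymmetrize(signal_forward, signal_reversed):
    """Return the component that flips sign on reversal."""
    return 0.5 * (signal_forward - signal_reversed)

# Toy numbers: a 1 mK transverse Hall signal on a 5 mK even background.
forward = 5.0e-3 + 1.0e-3    # background + Hall, kelvin
reversed_ = 5.0e-3 - 1.0e-3  # background - Hall, kelvin

hall = antisymmetrize(forward, reversed_)
print(f"{hall * 1000:.2f} mK")  # -> 1.00 mK
```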

“All of us were very surprised because we work and play in the classical, non-quantum world,” Ong said. “Quantum behavior can seem very strange, and this is one example where something that shouldn’t happen is really there. It really exists.”

The use of experiments to probe the quantum behavior of materials is essential for broadening our understanding of fundamental physical properties and the eventual exploitation of this understanding in new technologies, according to Cava. “Every technological advance has a basis in fundamental science through our curiosity about how the world works,” he said.

Further experiments on these materials may provide insights into how superconductivity occurs in certain copper-containing materials called cuprates, also known as “high-temperature” superconductors because they work well above the frigid temperatures required for today’s superconductors, such as those used in MRI machines.

One of the ideas for how high-temperature superconductivity could occur is based on the possible existence of a particle called the spinon. Theorists, including the Nobel laureate Philip Anderson, Princeton’s Joseph Henry Professor of Physics, Emeritus and a senior physicist, and others have speculated that spinons could be the carrier of a heat current in a quantum system such as the one explored in the present study.

Although the team does not claim to have observed the spinon, Ong said that the work could lead in such a direction in the future. “This work sets the stage for hunting the spinon,” Ong said. “We have seen its tracks, so to speak.”


The research was funded by the Army Research Office (ARO W911NF-11-1-0379, ARO W911NF-12-1-0461), the U.S. National Science Foundation (DMR 1420541), and the U.S. Department of Energy’s Division of Basic Energy Sciences (DE-FG-02-08ER46544).


Max Hirschberger, Jason W. Krizan, R. J. Cava, N. P. Ong. Large thermal Hall conductivity of neutral spin excitations in a frustrated quantum magnet. Science. DOI: 10.1126/science.1257340

Revisiting the mechanics of the action potential (Nature Communications)

By Staff


The action potential (AP) and the accompanying action wave (AW) constitute an electromechanical pulse traveling along the axon.

The action potential is widely understood as an electrical phenomenon. However, a long experimental history has documented the existence of co-propagating mechanical signatures.

In a new paper in the journal Nature Communications, two Princeton University researchers have proposed a theoretical model to explain these mechanical signatures, which they term “action waves.” The research was conducted by Ahmed El Hady, a visiting postdoctoral research associate at the Princeton Neuroscience Institute and a postdoctoral fellow at the Howard Hughes Medical Institute, and Benjamin Machta, an associate research scholar and lecturer in physics and the Lewis-Sigler Institute for Integrative Genomics.

In the model, the co-propagating waves are driven by changes in charge separation across the axonal membrane, just as a speaker uses charge separation to drive sound waves through the air. The researchers argue that these forces drive surface waves involving the axonal membrane and cytoskeleton as well as the surrounding fluid. Their model may help shed light on the functional role of the surprisingly structured axonal cytoskeleton that recent super-resolution techniques have uncovered, and suggests a wider role for mechanics in neuronal function.

Read the paper.

Ahmed El Hady & Benjamin B. Machta. Mechanical surface waves accompany action potential propagation. Nature Communications 6, No. 6697 doi:10.1038/ncomms7697

Do biofuel policies seek to cut emissions by cutting food? (Science)

By Catherine Zandonella, Office of the Dean for Research

A study published today in the journal Science found that government biofuel policies rely on reductions in food consumption to generate greenhouse gas savings.

Shrinking the amount of food that people and livestock eat decreases the amount of carbon dioxide that they breathe out or excrete as waste. The reduction in food available for consumption, rather than any inherent fuel efficiency, drives the decline in carbon dioxide emissions in government models, the researchers found.

“Without reduced food consumption, each of the models would estimate that biofuels generate more emissions than gasoline,” said Timothy Searchinger, first author on the paper and a research scholar at Princeton University’s Woodrow Wilson School of Public and International Affairs and the Program in Science, Technology, and Environmental Policy.

Searchinger’s co-authors were Robert Edwards and Declan Mulligan of the Joint Research Center at the European Commission; Ralph Heimlich of the consulting practice Agricultural Conservation Economics; and Richard Plevin of the University of California-Davis.

The study looked at three models used by U.S. and European agencies, and found that all three estimate that some of the crops diverted from food to biofuels are not replaced by planting crops elsewhere. About 20 percent to 50 percent of the net calories diverted to make ethanol are not replaced through the planting of additional crops, the study found.

The result is that less food is available, and, according to the study, these missing calories are not simply extras enjoyed in resource-rich countries. Instead, when less food is available, prices go up. “The impacts on food consumption result not from a tailored tax on excess consumption but from broad global price increases that will disproportionately affect some of the world’s poor,” Searchinger said.

The emissions reductions from switching from gasoline to ethanol have been debated for several years. Automobiles that run on ethanol emit less carbon dioxide, but this is offset by the fact that making ethanol from corn or wheat requires energy that is usually derived from traditional greenhouse gas-emitting sources, such as natural gas.

The models used by both the U.S. Environmental Protection Agency and the California Air Resources Board indicate that ethanol made from corn and wheat generates modestly fewer emissions than gasoline. The fact that these lowered emissions come from reductions in food production is buried in the methodology and not explicitly stated, the study found.

The European Commission’s model found an even greater reduction in emissions. It includes reductions in both quantity and overall food quality due to the replacement of oils and vegetables by corn and wheat, which are of lesser nutritional value. “Without these reductions in food quantity and quality, the [European] model would estimate that wheat ethanol generates 46% higher emissions than gasoline and corn ethanol 68% higher emissions,” Searchinger said.
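The accounting at issue can be sketched in a few lines: a biofuel's net emissions combine production and land-use emissions, minus any credit for food calories that are diverted and never replaced. The point of the study is that removing that food-reduction credit can flip the comparison with gasoline. All numbers below are hypothetical illustrations, not figures from the models discussed.

```python
# Toy lifecycle accounting (arbitrary units; every number is hypothetical).
# net biofuel emissions = production + land-use change - food-reduction credit

GASOLINE = 100.0  # baseline lifecycle emissions of gasoline, per unit fuel energy

def ethanol_emissions(food_reduction_credit):
    production = 70.0  # growing and refining the crop (hypothetical)
    land_use = 45.0    # emissions from replacement cropland (hypothetical)
    return production + land_use - food_reduction_credit

with_credit = ethanol_emissions(food_reduction_credit=30.0)   # 85.0
without_credit = ethanol_emissions(food_reduction_credit=0.0) # 115.0

print(with_credit < GASOLINE)     # True: ethanol looks modestly better
print(without_credit > GASOLINE)  # True: without the credit, ethanol looks worse
```

The sign flip is the whole story: in this toy setup, the apparent savings versus gasoline exist only because of the credit assigned to unreplaced food calories.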

The paper recommends that modelers try to show their results more transparently so that policymakers can decide if they wish to seek greenhouse gas reductions from food reductions. “The key lesson is the trade-offs implicit in the models,” Searchinger said.

The research was supported by The David and Lucile Packard Foundation.

Read the abstract.

T. Searchinger, R. Edwards, D. Mulligan, R. Heimlich, and R. Plevin. Do biofuel policies seek to cut emissions by cutting food? Science 27 March 2015: 1420-1422. DOI: 10.1126/science.1261221.