
Too many chefs: Smaller groups exhibit more accurate decision-making (Proceedings of the Royal Society B)


Smaller groups actually tend to make more accurate decisions, according to a new study from Princeton University Professor Iain Couzin and graduate student Albert Kao. (Photo credit: Gabriel Miller)

By Morgan Kelly, Office of Communications

The trope that the likelihood of an accurate group decision increases with the abundance of brains involved might not hold up when a collective faces a variety of factors — as often happens in life and nature. Instead, Princeton University researchers report that smaller groups actually tend to make more accurate decisions, while larger assemblies may become excessively focused on only certain pieces of information.

The findings present a significant caveat to what is known about collective intelligence, or the “wisdom of crowds,” wherein individual observations — even if imperfect — coalesce into a single, accurate group decision. A classic example of crowd wisdom is English statistician Sir Francis Galton’s 1907 observation of a contest in which villagers attempted to guess the weight of an ox. Although not one of the 787 estimates was correct, the average of the guessed weights was a mere one pound short of the animal’s recorded heft. Along those lines, the consensus has been that group decisions are enhanced as more individuals have input.

But collective decision-making has rarely been tested under complex, “realistic” circumstances where information comes from multiple sources, the Princeton researchers report in the journal Proceedings of the Royal Society B. In these scenarios, crowd wisdom peaks early then becomes less accurate as more individuals become involved, explained senior author Iain Couzin, a professor of ecology and evolutionary biology.

“This is an extension of the wisdom-of-crowds theory that allows us to relax the assumption that being in big groups is always the best way to make a decision,” Couzin said.

“It’s a starting point that opens up the possibility of capturing collective decision-making in a more realistic environment,” he said. “When we do see small groups of animals or organisms making decisions they are not necessarily compromising accuracy. They might actually do worse if more individuals were involved. I think that’s the new insight.”

Couzin and first author Albert Kao, a graduate student of ecology and evolutionary biology in Couzin’s group, created a theoretical model in which a “group” had to decide between two potential food sources. The group’s decision accuracy was determined by how well individuals could use two types of information: one known to all members of the group, called correlated information, and another perceived by only some individuals, called uncorrelated information. The researchers found that the communal ability to pool both pieces of information into a correct, or accurate, decision was highest in groups of five to 20 individuals. After that, the accurate decision increasingly eluded the expanding group.
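The peak-and-decline pattern can be illustrated with a toy majority-vote simulation. The sketch below is not Kao and Couzin’s actual model; the cue reliabilities and the weight individuals give the shared cue are assumptions invented for this example. With these numbers, accuracy climbs as independent cues are pooled, peaks at a modest group size, and then falls back toward the reliability of the shared cue as that cue comes to dominate.

# A minimal sketch (not the published model) of how a shared ("correlated") cue
# and independent ("uncorrelated") cues interact under majority voting.
# All parameter values are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

P_SHARED = 0.65    # chance the shared cue points to the correct option
P_PRIVATE = 0.60   # chance an individual's private cue is correct
W_SHARED = 0.30    # chance an individual follows the shared cue instead of its own

def group_accuracy(n, trials=20000):
    """Fraction of trials in which a majority vote picks the correct option."""
    shared_correct = rng.random(trials) < P_SHARED          # one draw per trial, common to the whole group
    follows_shared = rng.random((trials, n)) < W_SHARED     # who relies on the shared cue
    private_correct = rng.random((trials, n)) < P_PRIVATE   # independent private cues
    votes_correct = np.where(follows_shared, shared_correct[:, None], private_correct)
    return (votes_correct.sum(axis=1) > n / 2).mean()

for n in [1, 3, 5, 9, 15, 25, 51, 101, 201]:
    print(f"group size {n:>3}: accuracy ~ {group_accuracy(n):.3f}")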

At work, Kao said, was the dynamic between correlated and uncorrelated cues. With more individuals, that which is known by all members comes to dominate the decision-making process. The uncorrelated information gets drowned out, even if individuals within the group are still well aware of it.

In smaller groups, on the other hand, the lesser-known cues nonetheless earn as much consideration as the more common information. This is due to the more random nature of small groups, which is known as “noise” and typically seen as an unwelcome distraction. Couzin and Kao, however, found that noise is surprisingly advantageous in these smaller arrangements.

“It’s surprising that noise can enhance the collective decision,” Kao said. “The typical assumption is that the larger the group, the greater the collective intelligence.

“We found that if you increase group size, you see the wisdom-of-crowds benefit, but if the group gets too large there is an over-reliance on high-correlation information,” he said. “You would find yourself in a situation where the group uses that information to the point that it dominates the group’s decision-making.”

None of this is to suggest that large groups would benefit from axing members, Couzin said. The size threshold he and Kao found corresponds with the number of individuals making the decisions, not the size of the group overall. The researchers cite numerous studies — including many from Couzin’s lab — showing that decisions in animal groups such as schools of fish can often fall to a select few members. Thus, these organisms can exhibit highly coordinated movements despite vast numbers of individuals. (Such hierarchies could help animals realize a dual benefit of efficient decision-making and defense via strength-in-numbers, Kao said.)

“What’s important is the number of individuals making the decision,” Couzin said. “Just looking at group size per se is not necessarily relevant. It depends on the number of individuals making the decision.”

Read the abstract.

Kao, Albert B., Iain D. Couzin. 2014. Decision accuracy in complex environments is often maximized by small group sizes. Proceedings of the Royal Society B. Article published online April 23, 2014. DOI: 10.1098/rspb.2013.3305

This work was supported by a National Science Foundation Graduate Research Fellowship, a National Science Foundation Doctoral Dissertation Improvement grant (no. 1210029), the National Science Foundation (grant no. PHY-0848755), the Office of Naval Research (award no. N00014-09-1-1074), the Human Frontier Science Program (grant no. RGP0065/2012), the Army Research Office (grant no. W911NG-11-1-0385), and an NSF EAGER grant (no. IOS-1251585).

Study resolves controversy over nitrogen’s ocean “exit strategies” (Science)

By Catherine Zandonella, Office of the Dean for Research


Princeton University graduate student Andrew Babbin (left) prepares a seawater collection device known as a rosette. The team used samples of seawater to determine how nitrogen is removed from the oceans. (Research photos courtesy of A. Babbin)

A decades-long debate over how nitrogen is removed from the ocean may now be settled by new findings from researchers at Princeton University and their collaborators at the University of Washington.

The debate centers on how nitrogen — one of the most important food sources for ocean life and a controller of atmospheric carbon dioxide — becomes converted to a form that can exit the ocean and return to the atmosphere where it is reused in the global nitrogen cycle.

Researchers have argued over which of two nitrogen-removal mechanisms, denitrification and anammox, is most important in the oceans. The question is not just a scientific curiosity, but has real world applications because one mechanism contributes more greenhouse gases to the atmosphere than the other.


Bess Ward, Princeton’s William J. Sinclair Professor of Geosciences (Photo courtesy of Georgette Chalker)

“Nitrogen controls much of the productivity of the ocean,” said Andrew Babbin, first author of the study and a graduate student who works with Bess Ward, Princeton’s William J. Sinclair Professor of Geosciences. “Understanding nitrogen cycling is crucial to understanding the productivity of the oceans as well as the global climate,” he said.

In the new study, the researchers found that both of these nitrogen “exit strategies” are at work in the oceans, with denitrification mopping up about 70 percent of the nitrogen and anammox disposing of the rest.

The researchers also found that this 70-30 ratio could shift in response to changes in the quantity and quality of the nitrogen in need of removal. The study was published online this week in the journal Science.

The two other members of the research team were Richard Keil and Allan Devol, both professors at the University of Washington’s School of Oceanography.


The researchers collected the samples in 2012 in the ocean off Baja California.

Essential for the Earth’s life and climate, nitrogen is an element that cycles between soils and the atmosphere and between the atmosphere and the ocean. Bacteria near the surface help shuttle nitrogen into the ocean food chain by converting or “fixing” atmospheric nitrogen into forms that phytoplankton can use.

Without this fixed nitrogen, phytoplankton could not absorb carbon dioxide from the air, a feat which is helping to check today’s rising carbon dioxide levels in the atmosphere. When these tiny marine algae die or are consumed by predators, their biomass sinks to the ocean interior where it becomes food for other types of bacteria.


Researchers added specific amounts and types of nitrogen and organic compounds to test tubes containing seawater, and then noted whether denitrification or anammox occurred.

Until about 20 years ago, most scientists thought that denitrification, carried out by some of these bacteria, was the primary way that fixed nitrogen was recycled back to nitrogen gas. The second process, known as anaerobic ammonium oxidation, or anammox, was discovered by Dutch researchers studying how nitrogen is removed in sewage treatment plants.

Both processes occur in regions of the ocean that are naturally low in oxygen, or anoxic, due to local lack of water circulation and intense phytoplankton productivity overlying these regions. Within the world’s ocean, such regions occur only in the Arabian Sea, and off the coasts of Peru and Mexico.

In these anoxic environments, anaerobic bacteria feast on the decaying phytoplankton, and in the process cause the denitrification of nitrate into nitrogen gas, which cannot be used as a nutrient by most phytoplankton. During this process, ammonium is also produced, although marine geochemists had never been able to detect the ammonium that they knew must be there.

That riddle was solved in the early 2000s by the discovery of the anammox reaction in the marine environment, in which anaerobic bacteria feed on ammonium and convert it to nitrogen gas.
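In simplified textbook form (a general illustration for orientation, not stoichiometry reported in this study), the two pathways can be written as:

Denitrification: 5 CH2O + 4 NO3- + 4 H+ → 2 N2 + 5 CO2 + 7 H2O

Anammox: NH4+ + NO2- → N2 + 2 H2O

Here the organic matter is idealized as carbohydrate (CH2O); real marine organic matter also contains nitrogen, which is released as the ammonium described above and which anammox bacteria can then convert to nitrogen gas.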


Graduate student Andrew Babbin filling incubation vials under an anoxic atmosphere in a transparent container called a “glove bag.” (Photo courtesy of Andrew Babbin)

But another riddle soon appeared: the anammox rates that Dutch and German teams of researchers measured in the oceans appeared to account for the entire nitrogen loss, leaving no role for denitrification.

Then in 2009, Ward’s team published a study in the journal Nature showing that denitrification was still a major actor in returning nitrogen to the air, at least in the Arabian Sea. The paper further fueled the controversy.

Back at Princeton, Ward suspected that both processes were necessary, with denitrification churning out the ammonium that anammox then converted to nitrogen gas.

To settle the issue, Ward and Babbin decided to look at exactly what was going on in anoxic ocean water when bacteria were given nitrogen and other nutrients to chew on.

They collected water samples from an anoxic region in the ocean south of Baja California and brought test tubes of the water into an on-ship laboratory. Working inside a sturdy, flexible “glove bag” to keep air from contaminating the low-oxygen water, Babbin added specific amounts and types of nitrogen and organic compounds to each test tube, and then noted whether denitrification or anammox occurred.

“We conducted a suite of experiments in which we added different types of organic matter, with variable ammonium content, to see if the ratio between denitrification and anammox would change,” said Babbin. “We found that not only did increased ammonia favor anammox as predicted, but that the precise proportions of nitrogen loss matched exactly as predicted based on the ammonium content.”

The explanation of why, in past experiments, some researchers found mostly denitrification while others found only anammox comes down to a sort of “bloom and bust” cycle of phytoplankton life, explained Ward.

“If you have a big plankton bloom, then when those organisms die, a large amount of organic matter will sink and be degraded,” she said, “but we scientists are not always there to measure this. In other words, if you aren’t there on the day lunch is delivered, you won’t measure these processes.”

The researchers also linked the rates of nitrogen loss with the supply of organic material that drives the rates: more organic material equates to more nitrogen loss, so the quantity of the material matters too, Babbin said.

The two pathways have distinct metabolisms that turn out to be important in global climate change, he said. “Denitrification produces carbon dioxide and both produces and consumes nitrous oxide, which is another major greenhouse gas and an ozone depletion agent,” he said. “Anammox, however, consumes carbon dioxide and has no known nitrous oxide byproduct. The balance between the two therefore has a significant impact on the production and consumption of greenhouse gases in the ocean.”

The research was funded by National Science Foundation grant OCE-1029951.

Read the abstract.

Andrew R. Babbin, Richard G. Keil, Allan H. Devol, and Bess B. Ward. Organic Matter Stoichiometry, Flux, and Oxygen Control Nitrogen Loss in the Ocean. Science. Published online April 10, 2014. DOI: 10.1126/science.1248364

 

A promising concept on the path to fusion energy (IEEE Transactions on Plasma Science)

By John Greenwald, Princeton Plasma Physics Laboratory


QUASAR stellarator design (Source: PPPL)

Completion of a promising experimental facility at the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) could advance the development of fusion as a clean and abundant source of energy for generating electricity, according to a PPPL paper published this month in the journal IEEE Transactions on Plasma Science.

The facility, called the Quasi-Axisymmetric Stellarator Research (QUASAR) experiment, represents the first of a new class of fusion reactors based on the innovative theory of quasi-axisymmetry, which makes it possible to design a magnetic bottle that combines the advantages of the stellarator with the more widely used tokamak design. Experiments in QUASAR would test this theory. Construction of QUASAR — originally known as the National Compact Stellarator Experiment — was begun in 2004 and halted in 2008 when costs exceeded projections after some 80 percent of the machine’s major components had been built or procured.

“This type of facility must have a place on the roadmap to fusion,” said physicist George “Hutch” Neilson, the head of the Advanced Projects Department at PPPL.

Both stellarators and tokamaks use magnetic fields to control the hot, charged plasma gas that fuels fusion reactions. While tokamaks put electric current into the plasma to complete the magnetic confinement and hold the gas together, stellarators don’t require such a current to keep the plasma bottled up. Stellarators rely instead on twisting — or 3D — magnetic fields to contain the plasma in a controlled “steady state.”

Stellarator plasmas thus run little risk of disrupting — or falling apart — as can happen in tokamaks if the internal current abruptly shuts off. Developing systems to suppress or mitigate such disruptions is a challenge that builders of tokamaks like ITER, the international fusion experiment under construction in France, must face.

Stellarators had been the main line of fusion development in the 1950s and early 1960s before taking a back seat to tokamaks, whose symmetrical, doughnut-shaped magnetic field geometry produced good plasma confinement and proved easier to create. But breakthroughs in computing and physics understanding have revitalized interest in the twisty, cruller-shaped stellarator design and made it the subject of major experiments in Japan and Germany.

PPPL developed the QUASAR facility with both stellarators and tokamaks in mind. Tokamaks produce magnetic fields and a plasma shape that are the same all the way around the axis of the machine — a feature known as “axisymmetry.” QUASAR is symmetrical too, but in a different way. While QUASAR was designed to produce a twisting and curving magnetic field, the strength of that field varies gently as in a tokamak — hence the name “quasi-symmetry” (QS) for the design. This property of the field strength was predicted to produce plasma confinement properties identical to those of tokamaks.

“If the predicted near-equivalence in the confinement physics can be validated experimentally,” Neilson said, “then the development of the QS line may be able to continue as essentially a ‘3D tokamak.’”

Such development would test whether a QUASAR-like design could be a candidate for a demonstration — or DEMO — fusion facility that would pave the way for construction of a commercial fusion reactor that would generate electricity for the power grid.

Read the paper.

George Neilson, David Gates, Philip Heitzenroeder, Joshua Breslau, Stewart Prager, Timothy Stevenson, Peter Titus, Michael Williams, and Michael Zarnstorff. Next Steps in Quasi-Axisymmetric Stellarator Research. IEEE Transactions on Plasma Science, vol. 42, no. 3, March 2014.

The research was supported by the U.S. Department of Energy under contract DE-AC02-09CH11466. Princeton University manages PPPL, which is part of the national laboratory system funded by the U.S. Department of Energy through the Office of Science.

Emissions of methane, a greenhouse gas more potent than CO2, will leap as Earth warms (Nature)

Freshwater wetlands can release methane, a potent greenhouse gas, as the planet warms. (Image source: RGBstock.com)

By Morgan Kelly, Office of Communications

While carbon dioxide is typically painted as the bad boy of greenhouse gases, methane is roughly 30 times more potent as a heat-trapping gas. New research in the journal Nature indicates that for each degree that the Earth’s temperature rises, the amount of methane entering the atmosphere from microorganisms dwelling in lake sediment and freshwater wetlands — the primary sources of the gas — will increase several times. As temperatures rise, the relative increase of methane emissions will outpace that of carbon dioxide from these sources, the researchers report.

The findings condense the complex and varied process by which methane — currently the third most prevalent greenhouse gas after carbon dioxide and water vapor — enters the atmosphere into a measurement scientists can use, explained co-author Cristian Gudasz, a visiting postdoctoral research associate in Princeton’s Department of Ecology and Evolutionary Biology. In freshwater systems, methane is produced as microorganisms digest organic matter, a process known as “methanogenesis.” This process hinges on a slew of temperature, chemical, physical and ecological factors that can bedevil scientists working to model how the Earth’s systems will contribute, and respond, to a hotter future.

The researchers’ findings suggest that methane emissions from freshwater systems will likely rise with the global temperature, Gudasz said. But without knowing the extent of the methane contribution from such widely dispersed ecosystems, which include lakes, swamps, marshes and rice paddies, climate projections are left with a glaring hole.

“The freshwater systems we talk about in our paper are an important component to the climate system,” Gudasz said. “There is more and more evidence that they have a contribution to the methane emissions. Methane produced from natural or manmade freshwater systems will increase with temperature.”

To provide a simple and accurate way for climate modelers to account for methanogenesis, Gudasz and his co-authors analyzed nearly 1,600 measurements of temperature and methane emissions from 127 freshwater ecosystems across the globe.

New research in the journal Nature found that for each degree that the Earth’s temperature rises, the amount of methane entering the atmosphere from microorganisms dwelling in freshwater wetlands — a primary source of the gas — will increase several times. The researchers analyzed nearly 1,600 measurements of temperature and methane emissions from 127 freshwater ecosystems across the globe (above), including lakes, swamps, marshes and rice paddies. The size of each point corresponds with the average rate of methane emissions in milligrams per square meter, per day, during the course of the study. The smallest points indicate less than one milligram per square meter, while the largest-sized point represents more than three milligrams. (Image courtesy of Cristian Gudasz)

The researchers found that a common effect emerged from those studies: freshwater methane generation thrives on high temperatures. Methane emissions would be 57 times higher at 30 degrees Celsius than at 0 degrees Celsius, the researchers report. For those inclined to model it, the researchers’ results translated to a temperature dependence of 0.96 electron volts (eV), an indication of the temperature sensitivity of the methane-emitting ecosystems.
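That 57-fold figure is consistent with a standard Boltzmann-Arrhenius temperature scaling, in which rates vary as exp(-E/kT) for an apparent activation energy E. The short calculation below is an illustrative check of the arithmetic, not the paper’s analysis; it simply plugs 0.96 eV into that scaling for 0 and 30 degrees Celsius.

# Boltzmann-Arrhenius scaling: rate proportional to exp(-E / (k*T)).
# Illustrative check of the ~57-fold figure quoted above; not the paper's code.
import math

E = 0.96        # apparent activation energy, eV
k = 8.617e-5    # Boltzmann constant, eV per kelvin
T0 = 273.15     # 0 degrees Celsius, in kelvin
T1 = 303.15     # 30 degrees Celsius, in kelvin

ratio = math.exp(E / k * (1 / T0 - 1 / T1))
print(f"Predicted increase from 0 to 30 C: about {ratio:.0f}-fold")  # ~57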

“We all want to make predictions about greenhouse gas emissions and their impact on global warming,” Gudasz said. “Looking across these scales and constraining them as we have in this paper will allow us to make better predictions.”

Read the abstract.

Yvon-Durocher, Gabriel, Andrew P. Allen, David Bastviken, Ralf Conrad, Cristian Gudasz, Annick St-Pierre, Nguyen Thanh-Duc, Paul A. del Giorgio. 2014. Methane fluxes show consistent temperature dependence across microbial to ecosystem scales. Nature. Article published online before print: March 19, 2014. DOI: 10.1038/nature13164 and in the March 27, 2014 print edition.

It slices, it dices, and it protects the body from harm (Science)

By Catherine Zandonella, Office of the Dean for Research


Researchers at Princeton have deciphered the 3D structure of RNase L, an enzyme that slices through RNA and is a first responder in the innate immune system. The structure contains two subunits, represented in red as two parts of a pair of scissors. Illustration by Sneha Rath; inset courtesy of Science.

An essential weapon in the body’s fight against infection has come into sharper view. Researchers at Princeton University have discovered the 3D structure of an enzyme that cuts to ribbons the genetic material of viruses and helps defend against bacteria.

The discovery of the structure of this enzyme, a first-responder in the body’s “innate immune system,” could enable new strategies for fighting infectious agents and possibly prostate cancer and obesity. The work was published Feb. 27 in the journal Science.

Until now, the research community has lacked a structural model of the human form of this enzyme, known as RNase L, said Alexei Korennykh, an assistant professor of molecular biology and leader of the team that made the discovery.

“Now that we have the human RNase L structure, we can begin to understand the effects of carcinogenic mutations in the RNase L gene. For example, families with hereditary prostate cancers often carry genetic mutations in the region, or locus, encoding RNase L,” Korennykh said. The connection is so strong that the RNase L locus also goes by the name “hereditary prostate cancer 1.” The newly found structure reveals the positions of these mutations and explains why some of these mutations could be detrimental, perhaps leading to cancer, Korennykh said. RNase L is also essential for insulin function and has been implicated in obesity.

The Princeton team’s work has also led to new insights on the enzyme’s function.

The enzyme is an important player in the innate immune system, a rapid and broad response to invaders that includes the production of a molecule called interferon. Interferon relays distress signals from infected cells to neighboring healthy cells, thereby activating RNase L’s ability to slice through RNA, a type of genetic material that is similar to DNA. The result is cells newly armed for destruction of the foreign RNA.

The 3D structure uncovered by Korennykh and his team consists of two nearly identical subunits called protomers. The researchers found that one protomer finds and attaches to the RNA, while the other protomer snips it.

The initial protomer latches onto one of the four “letters” that make up the RNA code, in particular, the “U,” which stands for a component of RNA called uridine. The other protomer “counts” RNA letters starting from the U, skips exactly one letter, then cuts the RNA.
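As a toy illustration of that counting rule (a sketch of the description above, not a model of the enzyme’s actual chemistry), a few lines of code can scan an RNA sequence for a U, skip exactly one letter, and record the position after which a cut would fall:

# Toy illustration of the counting rule described above: find a U, skip exactly
# one letter, and cut after that letter. A sketch, not a model of RNase L itself.
def cut_sites(rna: str):
    """Return 0-based indices after which the toy rule would cut the strand."""
    sites = []
    for i, base in enumerate(rna):
        if base == "U" and i + 1 < len(rna):   # need one letter to skip past
            sites.append(i + 1)                # cut after the letter following the U
    return sites

print(cut_sites("AUGGCUAACUUAGC"))   # -> [2, 6, 10, 11]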

Although the enzyme can slice any RNA, even that of the body’s own cells, it only does so when activated by interferon.

“We were surprised to find that the two protomers were identical but have different roles, one binding and one slicing,” Korennykh said. “Enzymes usually have distinct sites that bind the substrate and catalyze reactions. In the case of RNase L, it appears that the same exact protein surface can do both binding and catalysis. One RNase L subunit randomly adopts a binding role, whereas the other identical subunit has no other choice but to do catalysis.”

To discover the enzyme’s structure, the researchers first created a crystal of the RNase L enzyme. The main challenge was finding the right combination of chemical treatments that would force the enzyme to crystallize without destroying it.

After much trial and error and with the help of an automated system, postdoctoral research associate Jesse Donovan and graduate student Yuchen Han succeeded in making the crystals.

Next, the crystals were bombarded with powerful X-rays, which diffract when they hit the atoms in the crystal and form patterns indicative of the crystal’s structure. The diffraction patterns revealed how the atoms of RNase L are arranged in 3D space.

At the same time Sneha Rath, a graduate student in Korennykh’s laboratory, worked on understanding the RNA cleavage mechanism of RNase L using synthetic RNA fragments. Rath’s results matched the structural findings of Han and Donovan, and the two pieces of data ultimately revealed how RNase L cleaves its RNA targets.

Han, Donovan and Rath contributed equally to the paper and are listed as co-first authors.

Finally, senior research specialist Gena Whitney and graduate student Alisha Chitrakar conducted additional studies of RNase L in human cells, confirming the 3D structure.

Now that the human structure has been solved, researchers can explore ways to either enhance or dampen RNase L activity for medical and therapeutic uses, Korennykh said.

“This work illustrates the wonderful usefulness of doing both crystallography and careful kinetic and enzymatic studies at the same time,” said Peter Walter, professor of biochemistry and biophysics at the University of California-San Francisco School of Medicine. “Crystallography gives a static picture which becomes vastly enhanced by studies of the kinetics.”

Support for the work was provided by Princeton University.

Read the abstract.

Han, Yuchen, Jesse Donovan, Sneha Rath, Gena Whitney, Alisha Chitrakar, and Alexei Korennykh. Structure of Human RNase L Reveals the Basis for Regulated RNA Decay in the IFN Response. Science. Published online 27 February 2014. DOI: 10.1126/science.1249845

Now in 3D: Video of virus-sized particle trying to enter cell (Nature Nanotechnology)


3D movie (below) of virus-like nanoparticle trying to gain entry to a cell

By Catherine Zandonella, Office of the Dean for Research

Tiny and swift, viruses are hard to capture on video. Now researchers at Princeton University have achieved an unprecedented look at a virus-like particle as it tries to break into and infect a cell. The technique they developed could help scientists learn more about how to deliver drugs via nanoparticles — which are about the same size as viruses — as well as how to prevent viral infection from occurring.

The video reveals a virus-like particle zipping around in a rapid, erratic manner until it encounters a cell, bounces and skids along the surface, and either lifts off again or, in much less time than it takes to blink an eye, slips into the cell’s interior. The work was published in Nature Nanotechnology.

Video caption: ‘Kiss and run’ on the cell surface. This 3D movie shows actual footage of a virus-like particle (red dot) approaching a cell (green with reddish brown nucleus), as captured by Princeton University researchers Kevin Welsher and Haw Yang. The color of the particle represents its speed, with red indicating rapid movement and blue indicating slower movement. The virus-like particle lands on the surface of the cell, appears to try to enter it, then takes off again. Source: Nature Nanotechnology.

“The challenge in imaging these events is that viruses and nanoparticles are small and fast, while cells are relatively large and immobile,” said Kevin Welsher, a postdoctoral researcher in Princeton’s Department of Chemistry and first author on the study. “That has made it very hard to capture these interactions.”

The problem can be compared to shooting video of a hummingbird as it roams around a vast garden, said Haw Yang, associate professor of chemistry and Welsher’s adviser. Focus the camera on the fast-moving hummingbird, and the background will be blurred. Focus on the background, and the bird will be blurred.

The researchers solved the problem by using two cameras, one that locked onto the virus-like nanoparticle and followed it faithfully, and another that filmed the cell and surrounding environment.

Putting the two images together yielded a level of detail about the movement of nano-sized particles that has never before been achieved, Yang said. Prior to this work, he said, the only way to see small objects at a similar resolution was to use a technique called electron microscopy, which requires killing the cell.

“What Kevin has done that is really different is that he can capture a three-dimensional view of a virus-sized particle attacking a living cell, whereas electron microscopy is in two dimensions and on dead cells,” Yang said. “This gives us a completely new level of understanding.”

In addition to simply viewing the particle’s antics, the researchers can use the technique to map the contours of the cell surface, which is bumpy with proteins that push up from beneath the surface. By following the particle’s movement along the surface of the cell, the researchers were able to map the protrusions, just as a blind person might use his or her fingers to construct an image of a person’s face.

“Following the motion of the particle allowed us to trace very fine structures with a precision of about 10 nanometers, which typically is only available with an electron microscope,” Welsher said. (A nanometer is one billionth of a meter; a human hair is roughly 100,000 nanometers wide.) He added that measuring changes in the speed of the particle allowed the researchers to infer the viscosity of the extracellular environment just above the cell surface.
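The paper’s own analysis is not spelled out here, but one common way to relate a tracked particle’s motion to the viscosity of its surroundings is the Stokes-Einstein relation, D = kT / (6πηr): estimate the diffusion coefficient D from the particle’s mean-squared displacement, then solve for the viscosity η. The sketch below uses invented numbers purely to illustrate that relationship; it is not the authors’ method.

# Illustrative only: estimating viscosity from a tracked particle's diffusion
# via the Stokes-Einstein relation. Numbers are invented for this sketch and
# this is not the analysis from the Nature Nanotechnology paper.
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0           # temperature, K (roughly body temperature)
r = 50e-9           # particle radius, m (a ~100-nanometer particle)
dt = 1e-3           # time between position samples, s

# Simulate a diffusing trajectory in place of real tracking data.
rng = np.random.default_rng(1)
D_true = 4.0e-12    # m^2/s, a plausible diffusion coefficient at this size
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(10000, 3))
positions = np.cumsum(steps, axis=0)

# Estimate D from the single-step mean-squared displacement: MSD = 6*D*dt in 3D.
msd = np.mean(np.sum(np.diff(positions, axis=0) ** 2, axis=1))
D_est = msd / (6 * dt)

# Stokes-Einstein: D = kB*T / (6*pi*eta*r)  ->  eta = kB*T / (6*pi*D*r)
eta = kB * T / (6 * np.pi * D_est * r)
print(f"Estimated viscosity: {eta * 1e3:.2f} mPa*s")   # water at 310 K is ~0.7 mPa*s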

The technology has potential benefits for both drug discovery and basic scientific discovery, Yang said. “We believe this will impact the study of how nanoparticles can deliver medicines to cells, potentially leading to some new lines of defense in antiviral therapies,” he said. “For basic research, there are a number of questions that can now be explored, such as how a cell surface receptor interacts with a viral particle or with a drug.”

Welsher added that such basic research could lead to new strategies for keeping viruses from entering cells in the first place.

“If we understand what is happening to the virus before it gets to your cells,” said Welsher, “then we can think about ways to prevent infection altogether. It is like deflecting missiles before they get there rather than trying to control the damage once you’ve been hit.”

To create the virus-like particle, the researchers coated a minuscule polystyrene ball with quantum dots, which are semiconductor bits that emit light and allow the camera to find the particle. Next, the particle was studded with protein segments known as Tat peptides, derived from the HIV-1 virus, which help the particle find the cell. The width of the final particle was about 100 nanometers.

The researchers then let loose the particles into a dish containing skin cells known as fibroblasts. One camera followed the particle while a second imaging system took pictures of the cell using a technique called laser scanning microscopy, which involves taking multiple images, each in a slightly different focal plane, and combining them to make a three-dimensional picture.

The research was supported by the US Department of Energy (DE-SC0006838) and by Princeton University.

Read the abstract.

Kevin Welsher and Haw Yang. 2014. Multi-resolution 3D visualization of the early stages of cellular uptake of peptide-coated nanoparticles. Nature Nanotechnology. Published online: 23 February 2014. DOI: 10.1038/NNANO.2014.12

Rife with hype, exoplanet study needs patience and refinement (PNAS)

By Morgan Kelly, Office of Communications


Exoplanet transiting in front of its star. Princeton’s Adam Burrows argues against drawing too many conclusions about such distant objects with today’s technologies. Photo credit: ESA/C. Carreau

Imagine someone spent months researching new cities to call home using low-resolution images of unidentified skylines. The pictures were taken from several miles away with a camera intended for portraits, and at sunset. From these fuzzy snapshots, that person claims to know the city’s air quality, the appearance of its buildings, and how often it rains.

This technique is similar to how scientists often characterize the atmosphere — including the presence of water and oxygen — of planets outside of Earth’s solar system, known as exoplanets, according to a review of exoplanet research published in the Proceedings of the National Academy of Sciences.

A planet’s atmosphere is the gateway to its identity, including how it was formed, how it developed and whether it can sustain life, stated Adam Burrows, author of the review and a Princeton University professor of astrophysical sciences.

But the dominant methods for studying exoplanet atmospheres are not intended for objects as distant, dim and complex as planets trillions of miles from Earth, Burrows said. They were instead designed to study much closer or brighter objects, such as planets in Earth’s solar system and stars.

Nonetheless, scientific reports and the popular media brim with excited depictions of Earth-like planets ripe for hosting life and other conclusions that are based on vague and incomplete data, Burrows wrote in the first in a planned series of essays that examine the current and future study of exoplanets. Despite many trumpeted results, few “hard facts” about exoplanet atmospheres have been collected since the first planet was detected in 1992, and most of these data are of “marginal utility.”

The good news is that the past 20 years of study have brought to the fore a new generation of exoplanet researchers who are establishing new techniques, technologies and theories. As with any relatively new field of study, fully understanding exoplanets will require a lot of time, resources and patience, Burrows said.

“Exoplanet research is in a period of productive fermentation that implies we’re doing something new that will indeed mature,” Burrows said. “Our observations just aren’t yet of a quality that is good enough to draw the conclusions we want to draw.

“There’s a lot of hype in this subject, a lot of irrational exuberance. Popular media have characterized our understanding as better than it actually is,” he said. “They’ve been able to generate excitement that creates a positive connection between the astrophysics community and the public at large, but it’s important not to hype conclusions too much at this point.”

The majority of data on exoplanet atmospheres come from low-resolution photometry, which captures the variation in light and radiation an object emits, Burrows reported. That information is used to determine a planet’s orbit and radius, but its clouds, surface, and rotation, among other factors, can easily skew the results. Even newer techniques such as capturing planetary transits — which is when a planet passes in front of its star, and was lauded by Burrows as an unforeseen “game changer” when it comes to discovering new planets — can be thrown off by a thick atmosphere and rocky planet core.

All this means that reliable information about a planet can be scarce, so scientists attempt to wring ambitious details out of a few data points. “We have a few hard-won numbers and not the hundreds of numbers that we need,” Burrows said. “We have in our minds that exoplanets are very complex because this is what we know about the planets in our solar system, but the data are not enough to constrain even a fraction of these conceptions.”

Burrows emphasizes that astronomers need to accept that they will never achieve a comprehensive understanding of exoplanets through the direct-observation, stationary methods inherited from the exploration of Earth’s neighbors. He suggests that exoplanet researchers should acknowledge photometric interpretations as inherently flawed and ambiguous. Instead, the future of exoplanet study should focus on the more difficult but comprehensive method of spectrometry, wherein the physical properties of objects are gauged by the interaction of their surfaces and elemental features with light wavelengths, or spectra. Spectrometry has been used to determine the age and expansion of the universe.

Existing telescopes and satellites are likewise vestiges of pre-exoplanet observation. Burrows calls for a mix of small, medium and large initiatives that will allow the time and flexibility scientists need to develop tools to detect and analyze exoplanet spectra. He sees this as a challenge in a research environment that often puts quick-payback results over deliberate research and observation. Once scientists obtain high-quality spectral data, however, Burrows predicted, “Many conclusions reached recently about exoplanet atmospheres will be overturned.”

“The way we study planets out of the solar system has to be radically different because we can’t ‘go’ to those planets with satellites or probes,” Burrows said. “It’s much more an observational science. We have to be detectives. We’re trying to find clues and the best clues since the mid-19th century have been in spectra. It’s the only means of understanding the atmosphere of these planets.”

A longtime exoplanet researcher, Burrows predicted the existence of “hot-Jupiter” planets — gas planets similar to Jupiter but orbiting very close to the parent star — in a paper in the journal Nature months before the first such planet, 51 Pegasi b, was discovered in 1995.

Read the abstract.

Citation: Burrows, Adam S. 2014. Spectra as windows into exoplanet atmospheres. Proceedings of the National Academy of Sciences. Article first published online: Jan. 13, 2014. DOI: 10.1073/pnas.1304208111