Author Archives: Catherine Zandonella


A farewell to arms? Scientists developing a novel technique that could facilitate nuclear disarmament (Nature)


Alexander Glaser and Robert Goldston with the British Test Object. Credit: Elle Starkman/PPPL Communications Office

By John Greenwald, Princeton Plasma Physics Laboratory Office of Communications

A proven system for verifying that apparent nuclear weapons slated to be dismantled contain true warheads could provide a key step toward the further reduction of nuclear arms. The system would achieve this verification while safeguarding classified information that could lead to nuclear proliferation.

Scientists at Princeton University and the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) are developing the prototype for such a system, as reported this week in the journal Nature. Their novel approach, called a “zero-knowledge protocol,” would verify the presence of warheads without collecting any classified information at all.

“The goal is to prove with as high confidence as required that an object is a true nuclear warhead while learning nothing about the materials and design of the warhead itself,” said physicist Robert Goldston, coauthor of the paper, a fusion researcher and former director of PPPL, and a professor of astrophysical sciences at Princeton.

While numerous efforts have been made over the years to develop systems for verifying the actual content of warheads covered by disarmament treaties, no such methods are currently in use for treaty verification.

Traditional nuclear arms negotiations focus instead on the reduction of strategic — or long-range — delivery systems, such as bombers, submarines and ballistic missiles, without verifying their warheads. But this approach could prove insufficient when future talks turn to tactical and nondeployed nuclear weapons that are not on long-range systems. “What we really want to do is count warheads,” said physicist Alexander Glaser, first author of the paper and an assistant professor in Princeton’s Woodrow Wilson School of Public and International Affairs and the Department of Mechanical and Aerospace Engineering.

The system Glaser and Goldston are mapping out would compare a warhead to be inspected with a known true warhead to see if the weapons matched. This would be done by beaming high-energy neutrons into each warhead and recording how many neutrons passed through to detectors positioned on the other side. Neutrons that passed through would be added to those already “preloaded” into the detectors by the warheads’ owner — and if the total number of neutrons were the same for each warhead, the weapons would be found to match. But different totals would show that the putative warhead was really a spoof. Prior to the test, the inspector would decide which preloaded detector would go with which warhead.
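To make the counting logic concrete, here is a minimal simulation of that comparison. All the numbers are hypothetical stand-ins, and a real test would also have to handle the statistical noise that the researchers analyze in the paper.

```python
# A toy sketch of the preloaded-detector comparison (all numbers are
# hypothetical; real measurements are noisy and analyzed statistically).
import random

DETECTOR_CAPACITY = 1_000            # assumed maximum count per detector

def preload(true_transmission):
    """The warheads' owner preloads each detector with the complement of
    the count a genuine warhead would produce, so every genuine test
    totals DETECTOR_CAPACITY -- a number that reveals nothing about the
    warhead's design."""
    return DETECTOR_CAPACITY - true_transmission

TRUE_SIGNAL = 620                    # neutrons a true warhead transmits (made up)

# The owner prepares one preloaded detector per item; the inspector then
# randomly decides which detector goes with which warhead, so a doctored
# preload cannot be steered toward a spoof.
detectors = [preload(TRUE_SIGNAL), preload(TRUE_SIGNAL)]
random.shuffle(detectors)

total_known = detectors[0] + TRUE_SIGNAL    # known true warhead
total_tested = detectors[1] + 580           # a spoof transmits a different count

print("match" if total_known == total_tested else "spoof detected")
```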

No classified data would be measured in this process, and no electronic components that might be vulnerable to tampering and snooping would be used. “This approach really is very interesting and elegant,” said Steve Fetter, a professor in the School of Public Policy at the University of Maryland and a former White House official. “The main question is whether it can be implemented in practice.”

A project to test this approach is under construction at PPPL. The project calls for firing high-energy neutrons at a non-nuclear target, called a British Test Object, that will serve as a proxy for warheads. Researchers will compare results of the tests by noting how many neutrons pass through the target to bubble detectors that Yale University is designing for the project. The gel-filled detectors will add the neutrons that pass through to those already preloaded to produce a total for each test.

The project was launched with a seed grant from The Simons Foundation of Vancouver, Canada, that came to Princeton through Global Zero, a nonprofit organization. Support also was provided by the U.S. Department of State, the DOE (via PPPL pre-proposal development funding), and most recently, a total of $3.5 million over five years from the National Nuclear Security Administration.

Glaser hit upon the idea for a zero-knowledge proof over a lunch hosted by David Dobkin, a computer scientist and, until June 2014, dean of the Princeton faculty. “I told him I was really interested in nuclear warhead verification without learning anything about the warhead itself,” Glaser said. “We call this a zero-knowledge proof in computer science,” Glaser said Dobkin replied. “That was the trigger,” Glaser recalled. “I went home and began reading about zero-knowledge proofs,” which are widely used in applications such as verifying online passwords.
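For readers curious what a zero-knowledge proof looks like in practice, below is a minimal sketch of Schnorr's identification protocol, a textbook example of the kind used in authentication. The tiny numbers are for illustration only and have nothing to do with the Nature paper.

```python
# Schnorr's identification protocol with toy parameters (real systems
# use groups hundreds of bits wide). The verifier ends up convinced the
# prover knows the secret x without learning anything about x itself.
import secrets

p, q, g = 23, 11, 2        # g generates a subgroup of prime order q modulo p

x = 7                      # prover's secret
y = pow(g, x, p)           # public value; recovering x from y is the hard problem

r = secrets.randbelow(q)   # prover's one-time random nonce
t = pow(g, r, p)           # commitment sent to the verifier
c = secrets.randbelow(q)   # verifier's random challenge
s = (r + c * x) % q        # response; the random r masks x, so s leaks nothing

# Verifier's check: g^s == t * y^c (mod p) holds only if the prover knew x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; secret never revealed")
```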

Glaser’s reading led him to Boaz Barak, a senior researcher at Microsoft New England who had taught computer science at Princeton and is an expert in cryptology, the science of disguising secret information. “We started having discussions,” Glaser said of Barak, who helped develop statistical measures for the PPPL project and is the third coauthor of the paper in Nature.

Glaser also reached out to Goldston, with whom he had taught a class for three years in the Princeton Department of Astrophysical Sciences. “I told Rob that we need neutrons for this project,” Glaser recalled. “And he said, ‘That’s what we do — we have 14 MeV [or high-energy] neutrons at the Laboratory.’” Glaser, Goldston and Barak then worked together to refine the concept, developing ways to assure that even the statistical noise — or random variation — in the measurements conveyed no information.

If proven successful, dedicated inspection systems based on radiation measurements, such as the one proposed here, could help to advance disarmament talks beyond the New Strategic Arms Reduction Treaty (New START) between the United States and Russia, which runs from 2011 to 2021. The treaty calls for each country to reduce its arsenal of deployed strategic nuclear arms to 1,550 weapons, for a total of 3,100, by 2018.

Not included in the New START treaty are more than 4,000 nondeployed strategic and tactical weapons in each country’s arsenal. These very weapons, note the authors of the Nature paper, are apt to become part of future negotiations, “which will likely require verification of individual warheads, rather than whole delivery systems.” Deep cuts in the nuclear arsenals and the ultimate march to zero, say the authors, will require the ability to verifiably count individual warheads.

Read the abstract: http://dx.doi.org/10.1038/nature13457

A. Glaser, B. Barak and R. Goldston. A zero-knowledge protocol for nuclear warhead verification. Nature, 26 June 2014. DOI: 10.1038/nature13457


Strange physics turns off laser (Nature Communications)

By Steve Schultz, School of Engineering Office of Communications


An electron microscope image shows two lasers placed just two microns apart from each other. (Image source: Türeci lab)

Inspired by anomalies that arise in certain mathematical equations, researchers have demonstrated a laser system that paradoxically turns off when more power is added rather than becoming continuously brighter.

The finding, by a team of researchers at Vienna University of Technology and Princeton University, could lead to new ways to manipulate the interaction of electronics and light, an important tool in modern communications networks and high-speed information processing.

The researchers published their results June 13 in the journal Nature Communications.

Their system involves two tiny lasers, each one-tenth of a millimeter in diameter, or about the width of a human hair. The two are nearly touching, separated by a distance 50 times smaller than the lasers themselves. One is pumped with electric current until it starts to emit light, as is normal for lasers. Power is then added slowly to the other, but instead of it also turning on and emitting even more light, the whole system shuts off.

“This is not the normal interference that we know,” said Hakan Türeci, assistant professor of electrical engineering at Princeton, referring to the common phenomenon of light waves or sound waves from two sources cancelling each other.  Instead, he said, the cancellation arises from the careful distribution of energy loss within an overall system that is being amplified.


Manipulating minute areas of gain and loss within individual lasers (shown as peaks and valleys in the image), researchers were able to create paradoxical interactions between two nearby lasers. (Image source: Türeci lab)

“Loss is something you normally are trying to avoid,” Türeci said. “In this case, we take advantage of it and it gives us a different dimension we can use – a new tool – in controlling optical systems.”

The research grows out of Türeci’s longstanding work on mathematical models that describe the behavior of lasers. In 2008, he established a mathematical framework for understanding the unique properties and complex interactions that are possible in extremely small lasers – devices with features measured in micrometers or nanometers. Unlike conventional desktop lasers, these devices fit on a computer chip.

That work opened the door to manipulating gain or loss (the amplification or loss of an energy input) within a laser system. In particular, it allowed researchers to judiciously control the spatial distribution of gain and loss within a single system, with one tiny sub-area amplifying light and an immediately adjacent area absorbing the generated light.

Türeci and his collaborators are now using similar ideas to pursue counterintuitive ideas for using distribution of gain and loss to make micro-lasers more efficient.

The researchers’ ideas for taking advantage of loss derive from their study of mathematical constructs called “non-Hermitian” matrices in which a normally symmetric table of values becomes asymmetric. Türeci said the work is related to certain ideas of quantum physics in which the fundamental symmetries of time and space in nature can break down even though the equations used to describe the system continue to maintain perfect symmetry.

Over the past several years, Türeci and his collaborators at Vienna worked to show how the mathematical anomalies at the heart of this work, called “exceptional points,” could be manifested in an actual system. In 2012, the team published a paper in the journal Physical Review Letters demonstrating computer simulations of a laser system that shuts off as energy is being added. In the current Nature Communications paper, the researchers created an experimental realization of their theory using a light source known as a quantum cascade laser.
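The shutoff can be seen in a two-coupled-mode toy model of the kind this theory builds on. The sketch below uses illustrative parameter values of our own choosing, not the paper's: the imaginary parts of the eigenvalues of a small non-Hermitian matrix give each mode's net amplification, and pumping the second resonator drives the lasing mode below threshold near the exceptional point before it eventually switches back on.

```python
# Two coupled laser modes as a 2x2 non-Hermitian matrix (toy values).
# Positive modal gain (largest imaginary eigenvalue part) means lasing.
import numpy as np

g1, loss2, k = 0.5, 2.0, 0.6        # gain in cavity 1, loss in cavity 2, coupling

def modal_gain(pump2):
    g2 = pump2 - loss2              # net gain of the second resonator
    H = np.array([[1j * g1, k],
                  [k, 1j * g2]])    # non-Hermitian: gain/loss on the diagonal
    return np.linalg.eigvals(H).imag.max()

for pump2 in [0.0, 1.0, 1.35, 1.45, 1.6, 2.0]:
    gain = modal_gain(pump2)
    print(f"pump2 = {pump2:.2f}  modal gain = {gain:+.3f}  laser {'ON' if gain > 0 else 'OFF'}")
# Adding pump turns the laser OFF near pump2 ~ 1.3-1.5 (the exceptional
# point), then back ON -- the reversed pump dependence reported here.
```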

The researchers report in the article that results could be of particular value in creating “lab-on-a-chip” devices – instruments that pack tiny optical devices onto a single computer chip. Understanding how multiple optical devices interact could provide ways to manipulate their performance electronically in previously unforeseen ways. Taking advantage of the way loss and gain are distributed within tightly coupled laser systems could lead to new types of highly accurate sensors, the researchers said.

“Our approach provides a whole new set of levers to create unforeseen and useful behaviors,” Türeci said.

The work at Vienna, including creation and demonstration of the actual device, was led by Stefan Rotter at Vienna along with Martin Brandstetter, Matthias Liertzer, C. Deutsch, P. Klang, J. Schöberl, G. Strasser and K. Unterrainer. Türeci participated in the development of the mathematical models underlying the phenomena. The work on the 2012 computer simulation of the system also included Li Ge, who was a post-doctoral researcher at Princeton at the time and is now an assistant professor at City University of New York.

The work was funded by the Vienna Science and Technology Fund and the Austrian Science Fund, as well as by the National Science Foundation through a major grant for the Mid-Infrared Technologies for Health and the Environment Center based at Princeton and by the Defense Advanced Research Projects Agency.

Read the abstract.

M. Brandstetter, M. Liertzer, C. Deutsch, P. Klang, J. Schöberl, H. E. Türeci, G. Strasser, K. Unterrainer and S. Rotter. Reversing the pump dependence of a laser at an exceptional point. Nature Communications, 13 June 2014. DOI: 10.1038/ncomms5034

Science 2 May 2008. DOI: 10.1126/science.1155311

Physical Review Letters 24 April 2012. DOI:10.1103/PhysRevLett.108.173901

 

Migrating north may trigger immediate health declines among Mexicans (Demography)


Photo credit: Ticiana Jardim Marini, Woodrow Wilson School

By B. Rose Huber, Woodrow Wilson School of Public and International Affairs

Mexican immigrants who relocate to the United States often face barriers like poorly paying jobs, crowded housing and family separation. Such obstacles – including the migration process itself – may be detrimental to the health of Mexican immigrants, especially those who have recently moved.

A study led by Princeton University’s Woodrow Wilson School of Public and International Affairs finds that Mexican immigrants who relocate to the United States are more likely to experience declines in health within a short time period compared with other Mexicans.

While past studies have attempted to examine the consequences of immigration for a person’s health, few have had adequate data to compare recent Mexican immigrants, those who moved years ago and individuals who never left Mexico. Published in the journal Demography, the Princeton-led study is one of the first to examine self-reported health at two stages among these groups.

“Our study demonstrates that declines in health appear quickly after migrants’ arrival in the United States,” said Noreen Goldman, lead author and professor of demography and public affairs at the Wilson School and faculty associate at the Wilson School’s Office of Population Research (OPR). “Overall, we find that recent Mexican migrants are more likely to experience rapid changes in health, both good and bad, than the other groups. The deteriorations in health within a year or two of migration far outweigh the improvements.”

For the study, the researchers used data from the Mexican Family Life Survey, a longitudinal survey containing demographic and health information on nearly 20,000 Mexicans who were 20 years or older at the time of the first interview in 2002. Follow-up interviews took place in 2005-06 with individuals who stayed in Mexico as well as with those who moved to the United States between 2002 and 2005. Goldman and her collaborators based their analysis on a sample of 14,257 adults, excluding those who didn’t report health conditions at the follow-up interview.

In order to assess whether migrants from Mexico to the United States experienced changes in their health after they moved, the researchers used two health assessments: self-rated health (compared to someone of the same age and sex) at each of the two interviews and perceived change in health at the second interview. The latter measure was based on the following question: “Comparing your health to a year ago, would you say your health is much better, better, the same, worse or much worse?” Goldman and her collaborators narrowed the original five response categories to three: better, worse or the same. Changes in health for Mexicans who migrated between 2002 and 2005 were compared with those of migrants from earlier time periods and with people who remained in Mexico.

The researchers also took health measures at the first wave into account: obesity, anemia and hypertension – which were all determined by at-home visits by trained health workers – and hospitalization within the past year. They also controlled for socioeconomic factors – years of schooling and household spending. Additionally, they included data from 136 municipalities in Mexico (as past research has found that migration decisions can differ based on place of origin).

Using statistical models, the researchers analyzed changes in health status. The two health measures revealed that recent migrants to the United States were more apt to experience both improvements and declines in their health than either earlier migrants or non-migrants. However, the overall net change was a substantial deterioration in the health of recent migrants relative to the other groups. The health of recent migrants was about 60 percent more likely to have worsened within a one- or two-year period than that of those who never left Mexico.
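As an illustration of what a figure like that means numerically, the sketch below fits a logistic regression to synthetic data with a built-in effect of roughly the study's size. The data, the variable names and the use of odds ratios are our simplifications for illustration, not the authors' actual models.

```python
# Synthetic illustration of a "60 percent more likely" effect: build
# data in which recent migrants have 1.6 times the odds of worsened
# health, then recover that odds ratio with a logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 14_257                                         # sample size from the study
recent_migrant = rng.integers(0, 2, size=n)        # 1 = migrated 2002-2005
logit = -1.5 + np.log(1.6) * recent_migrant        # baseline odds plus the effect
worsened = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

fit = sm.Logit(worsened, sm.add_constant(recent_migrant)).fit(disp=0)
print(f"estimated odds ratio: {np.exp(fit.params[1]):.2f}")   # ~1.6
```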

“The speed of the health decline for recent migrants suggests that the process of border crossing for both documented and undocumented immigrants combined with the physical and psychological costs of finding work, crowded housing, limited access to health care in the United States and isolation from family members can result in rapid deterioration of immigrants’ physical and mental wellbeing,” said Goldman.

“Immigrants are often assumed to be resilient and in good health because they have not yet adopted unhealthy American behaviors like poor diet and a sedentary lifestyle,” said co-author Anne Pebley from the California Center for Population Research at the University of California, Los Angeles. “But these results suggest that the image of the ‘healthy migrant’ is an illusion – at least for many recent immigrants.”

“These results demonstrate the high personal costs that many immigrants are willing to pay for a chance to improve their lives,” said Goldman. “From a humanitarian standpoint, the health declines underscore the need for public health, social service and immigration agencies to provide basic services for physical and psychological health to recent migrants.”

Given the limitations of the dataset, Goldman and her collaborators could not provide a more nuanced analysis of the causes of the changes in health status, but with the availability of the third wave of data (collected between 2009 and 2012), many of these questions can be addressed later.

In addition to Goldman and Pebley, study researchers include Chang Chung from OPR; Mathew Creighton from the University of Massachusetts; Graciela Teruel from the Universidad Iberoamericana; and Luis Rubalcava from the Centro de Análisis y Medición del Bienestar Social.

Support for this project was provided by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (R01HD051764, R24HD047879, R03HD040906, and R01HD047522) and by the Sector Research Fund for Social Development of the National Council for Science and Technology of Mexico.

Read the abstract.

Goldman N, Pebley AR, Creighton MJ, Teruel GM, Rubalcava LN, Chung C. 2014. The Consequences of Migration to the United States for Short-Term Changes in the Health of Mexican Immigrants. Demography, May 1 (Epub ahead of print).

Public interest in climate change unshaken by scandal, but unstirred by science (Environ. Res. Lett.)


Princeton University and University of Oxford researchers found that negative media reports seem to have only a passing effect on public opinion, but that positive stories don’t appear to possess much staying power, either. Measured by how often people worldwide scour the Internet for information related to climate change, overall public interest in the topic has steadily waned since 2007. To gauge public interest, the researchers used Google Trends to document the Internet search-engine activity for “global warming” (blue line) and “climate change” (red line) from 2004 to 2013. They examined activity both globally (top) and in the United States (bottom). The numbers on the left indicate how often people looked up each term based on its percentage of the maximum search volume at any given point in time. (Image courtesy of William Anderegg)

By Morgan Kelly, Office of Communications

The good news for any passionate supporter of climate-change science is that negative media reports seem to have only a passing effect on public opinion, according to Princeton University and University of Oxford researchers. The bad news is that positive stories don’t appear to possess much staying power, either. This dynamic suggests that climate scientists should reexamine how to effectively and more regularly engage the public, the researchers write.

Measured by how often people worldwide scour the Internet for information related to climate change, overall public interest in the topic has steadily waned since 2007, according to a report in the journal Environmental Research Letters. Yet, the downturn in public interest does not seem tied to any particular negative publicity regarding climate-change science, which is what the researchers primarily wanted to gauge.

First author William Anderegg, a postdoctoral research associate in the Princeton Environmental Institute who studies communication and climate change, and Gregory Goldsmith, a postdoctoral researcher at Oxford’s Environmental Change Institute, specifically looked into the effect on public interest and opinion of two widely reported, almost simultaneous events.

The first involved the November 2009 hacking of emails from the Climate Research Unit at the University of East Anglia in the United Kingdom, which has been a preeminent source of data confirming human-driven climate change. Known as “climategate,” this event was initially trumpeted as proving that dissenting scientific views related to climate change have been maliciously quashed. Thorough investigations later declared that no misconduct took place.

The second event was the revelation in late 2009 that an error in the 2007 Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) — an organization under the auspices of the United Nations that periodically evaluates the science and impacts of climate change — overestimated how quickly glaciers in the Himalayas would melt.

To first get a general sense of public interest in climate change, Anderegg and Goldsmith combed the freely available database Google Trends for “global warming,” “climate change” and all related terms that people around the world searched for between 2004 and 2013. The researchers documented search trends in English, Chinese and Spanish, which are the top three languages on the Internet. Google handles more than 80 percent of the world’s Internet search-engine activity, and Google Trends data are increasingly called upon for research in economics, political science and public health.

Internet searches related to climate change began to climb following the 2006 release of the documentary “An Inconvenient Truth” starring former vice president Al Gore, and continued their ascent with the release of the IPCC’s fourth report, the researchers found.

Anderegg and Goldsmith specifically viewed searches for “climategate” between Nov. 1 and Dec. 31, 2009. They found that the search trend had a six-day “half-life,” meaning that search frequency dropped by 50 percent every six days. After 22 days, the number of searches for climategate was a mere 10 percent of its peak. Information about climategate was most sought in the United States, Canada and Australia, while the cities with the most searches were Toronto, London and Washington, D.C.
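The half-life arithmetic is easy to verify. The snippet below is our own back-of-the-envelope check, not the authors' code: a six-day half-life leaves roughly 8 percent of the peak search volume after 22 days, consistent with the reported 10 percent, and a log-linear fit to such a decay recovers the half-life.

```python
# Back-of-the-envelope check of the six-day search-volume half-life.
import numpy as np

half_life = 6.0                                    # days
print(f"after 22 days: {0.5 ** (22 / half_life):.1%} of peak")   # ~7.9%

# Recovering a half-life from a decaying series with a log-linear fit,
# the standard way to estimate it from Google Trends-style data.
days = np.arange(30)
volume = 100 * 0.5 ** (days / half_life)           # synthetic, noise-free decay
slope = np.polyfit(days, np.log(volume), 1)[0]
print(f"fitted half-life: {np.log(2) / -slope:.1f} days")        # 6.0
```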


The researchers found that searchers for the phrase “global warming hoax” and related terms correlate in the United States with Republican or conservative political leanings. They compared the prevalence of searches for “global warming hoax” with the Cook Partisan Voting Index — which gauges how far toward Republicans or Democrats a congressional district leans — for 34 US states (above). They found that the more Republican/conservative the state (bottom measurement), the more frequently people in that state looked up related terms. The bottom graph shows how often a state votes Democrat (low numbers) versus Republican (high numbers). The numbers on the left indicate how often people looked up “global warming hoax” based on its percentage of the maximum search volume at any given point in time. (Image courtesy of William Anderegg)

The researchers tracked the popularity of the term “global warming hoax” to gauge the overall negative effect of climategate and the IPCC error on how the public perceives climate change. They found that searches for the term were actually higher the year before the events than during the year afterward.

“The search volume quickly returns to the same level as before the incident,” Goldsmith said. “This suggests no long-term change in the level of climate-change skepticism.

“We found that intense media coverage of an event such as ‘climategate’ was followed by bursts of public interest, but these bursts were short-lived.”

All of this is to say that moments of great consternation for climate scientists seem to barely register in the public consciousness, Anderegg said. The study notes that independent polling data also indicate that these events had very little effect on American public opinion. “There’s a lot of handwringing among scientists, and a belief that these events permanently damaged public trust. What these results suggest is that that’s just not true,” Anderegg said.

While that’s good in a sense, Anderegg said, his and Goldsmith’s results also suggest that climate change as a whole does not top the list of gripping public topics. For instance, he said, climategate had the same Internet half-life as the public fallout from pro-golfer Tiger Woods’ extramarital affair, which happened around the same time (but received far more searches).

A public with little interest in climate change is unlikely to push for policies that actually address the problem, Anderegg said. He and Goldsmith suggest communicating in terms familiar to the public rather than to scientists. For example, their findings suggest that most people still identify with the term “global warming” instead of “climate change,” though the shift toward embracing the more scientific term is clear.

“If public interest in climate change is falling, it may be more difficult to muster public concern to address climate change,” Anderegg said. “This long-term trend of declining interest is worrying and something I hope we can address soon.”

One outcome of the research might be to shift scientists’ focus away from battling short-lived, so-called scandals, said Michael Oppenheimer, Princeton’s Albert G. Milbank Professor of Geosciences and International Affairs. The study should remind climate scientists that every little misstep or controversy does not make or break the public’s confidence in their work, he said. Oppenheimer, who was not involved in the study, is a long-time participant in the IPCC and an author of the Fifth Assessment Report being released this year in sections.

“This is an important study because it puts scientists’ concerns about climate skepticism in perspective,” Oppenheimer said. “While scientists should maintain the aspirational goal of their work being error-free, they should be less distracted by concerns that a few missteps will seriously influence attitudes in the general public, which by-and-large has never heard of these episodes.”

Read the article.

Anderegg, William R. L., and Gregory R. Goldsmith. 2014. Public interest in climate change over the past decade and the effects of the ‘climategate’ media event. Environmental Research Letters 9: 054005. DOI: 10.1088/1748-9326/9/5/054005. Published online May 20, 2014.

Unlocking the potential of bacterial gene clusters to discover new antibiotics (Proc. Natl. Acad. Sci.)


High-throughput screening for the discovery of small molecules that activate silent bacterial gene clusters. Image courtesy of Mohammad Seyedsayamdost.

By Tien Nguyen, Department of Chemistry

Resistance to antibiotics has been steadily rising and poses a serious threat to the stronghold of existing treatments. Now, a method from Mohammad Seyedsayamdost, an assistant professor of chemistry at Princeton University, may open the door to the discovery of a host of potential drug candidates.

The vast majority of anti-infectives on the market today are bacterial natural products, made by biosynthetic gene clusters. Genome sequencing of bacteria has revealed that these active gene clusters are outnumbered roughly ten to one by so-called silent gene clusters.

“Turning these clusters on would really expand our available chemical space to search for new antibiotic or otherwise therapeutically useful molecules,” Seyedsayamdost said.

In an article published last week in the journal Proceedings of the National Academy of Sciences, Seyedsayamdost reported a strategy to quickly screen whole libraries of compounds to find elicitors, small molecules that can turn on a specific gene cluster. He used a genetic reporter that fluoresces or generates a color when the gene cluster is activated to easily identify positive hits. Using this method, two silent gene clusters were successfully activated and a new metabolite was discovered.
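Conceptually, the screen reduces to a simple loop: expose the reporter strain to each library compound and keep whatever pushes the fluorescent or colored signal well above baseline. The sketch below is a schematic of that logic with invented compound names and signal values, not data or code from the paper.

```python
# Schematic of a high-throughput elicitor screen (all values invented).
import random

random.seed(1)

BASELINE, NOISE_SD = 100.0, 15.0       # reporter signal with no elicitor present
library = {f"compound_{i}": random.gauss(BASELINE, NOISE_SD) for i in range(10_000)}
library["compound_42"] = 400.0         # planted elicitor for illustration

# Flag compounds whose reporter signal sits far above baseline (a
# stringent 5-sigma cutoff keeps false positives rare); hits then go on
# to validation and metabolite characterization.
threshold = BASELINE + 5 * NOISE_SD
hits = [name for name, signal in library.items() if signal > threshold]
print(hits)                            # ['compound_42']
```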

Application of this work promises to uncover new bacterial natural products and provide insights into the regulatory networks that control silent gene clusters.

Read the abstract.

Seyedsayamdost, M. R. “High-throughput platform for the discovery of elicitors of silent bacterial gene clusters.” Proc. Natl. Acad. Sci. 2014, Early edition.

Why a bacterium got its curve — and why biologists should know (Nature Communications)


Princeton University researchers found that the banana-like curve of the bacteria Caulobacter crescentus provides stability and helps them flourish as a group in the moving water they experience in nature. The findings suggest a new way of studying the evolution of bacteria that emphasizes using naturalistic settings. The illustration shows how C. crescentus divides asymmetrically into a “stalked” mother cell that anchors to a bacterium’s home surface, and an upper unattached portion that forms a new, juvenile cell known as a “swarmer.” Swarmer cells later morph into stalked cells and lay down roots nearby. They repeat the life cycle with their own swarmer cell and the bacterial colony grows. The Princeton researchers found that in moving water, curvature points the swarmer cell toward the surface to which it needs to attach. This ensures that the bacteria’s next generation does not stray too far from its progenitors. (Image by Laura Ancona)

By Morgan Kelly, Office of Communications

Drawing from his engineering background, Princeton University researcher Alexandre Persat had a notion as to why the bacteria Caulobacter crescentus are curved — a hunch that now could lead to a new way of studying the evolution of bacteria, according to research published in the journal Nature Communications.

Commonly used in labs to study cell division, C. crescentus naturally take on a banana-like curve, but they also can undergo a mutation in which they grow to be perfectly straight. The problem was that in a laboratory there was no apparent functional difference between the two shapes. So a question among biologists was, why would nature bother?

Then Persat, who is a postdoctoral researcher in the group of Associate Professor of Molecular Biology Zemer Gitai, considered that the bacteria dwell in large groups attached to surfaces in lakes, ponds and streams. That means that their curvature could be an adaptation that allows C. crescentus to better develop in the water currents the organisms experience in nature.

In the new paper, first author Persat, corresponding author Gitai and Howard Stone, Princeton’s Donald R. Dixon ’69 and Elizabeth W. Dixon Professor of Mechanical and Aerospace Engineering, report that curvature does more than just help C. crescentus hold their ground in moving fluid. The researchers monitored C. crescentus growth on surfaces in flow and found that the bacteria’s arched anatomy is crucial to flourishing as a group.

“It didn’t take a long time to figure out how flow brought out the advantages of curvature,” Persat said. “The obvious thing to me as someone with a fluid-dynamics background was that this shape had something to do with fluid flow.”

The findings emphasize the need to study bacteria in a naturalistic setting, said Gitai, whose group focuses on how bacterial shapes are genetically determined. While a petri dish generally suffices for this line of study, the functionality of bacterial genes and anatomy can be elusive in most lab settings, he said. For instance, he said, 80 percent of the genes in C. crescentus are seemingly disposable — but they might not be in nature.

“We now see there can be benefits to bacterial shapes that are only seen in a growth environment that is close to the bacteria’s natural environment,” Gitai said.

“For C. crescentus, the ecology was telling us there is an advantage to being curved, but nothing we previously did in the lab could detect what that was,” he said. “We need to not only think of the chemical environment of the bacteria — we also need to think of the physical environment. I think of this research as opening a whole new axis of studying bacteria.”

While most bacteria grow and divide as two identical “daughter” cells, C. crescentus divides asymmetrically. A “stalked” mother cell anchors to a bacterium’s home surface while the upper unattached portion forms a new, juvenile version of the stalked cell known as a “swarmer” cell. The swarmer cells eventually detach, then morph into stalked cells and lay down roots nearby. They repeat the life cycle with their own swarmer cell and the bacterial colony grows.

The Princeton researchers found that in moving water, curvature points the swarmer cell toward the surface to which it needs to attach. This ensures that the bacteria’s next generation does not stray too far from its progenitors, as well as from the nutrients that prompted cell division in the first place, Gitai said. On the other hand, the upper cells of straight bacteria — which are comparatively higher from the ground — are as likely to be carried far away as they are to stay near home.

But the advantage of curvature only goes so far. The researchers found that when the water current was too strong, both curved and straight bacteria were pressed flat against the surface, eliminating the curved cells’ colonization advantage.

These findings put some interesting boundaries on what is known about C. crescentus, starting with the upper limits of the current in which the organism can thrive, Gitai said. He and Persat also plan to pursue whether the bacteria are able to straighten out and cast offspring downstream when the home colony faces a decline in available nutrients.

At the same time, understanding why C. crescentus got its curve helps in figuring out the evolution of other bacteria, he said. Close relatives of the bacteria, for example, are not curved — could it have to do with the severity of their natural environment, such as the powerful turbulence of an ocean? Harmful bacteria such as Vibrio cholerae, strains of which cause cholera, are curved, though the reason is unclear. It’s possible this shape could be related to the organism’s environment in a way that might help treat those infected by it, Gitai said.

Whatever the reason for a specific bacteria’s shape, the Princeton research shows that exploring the influence of its natural habitat could be worthwhile, Gitai said.

“It was clear with C. crescentus that we needed to try something different,” Gitai said. “People didn’t really think of flow as a major driver of this bacteria’s evolution. That really is a new idea.”

Read the article.

Persat, Alexandre, Howard A. Stone, Zemer Gitai. 2014. The curved shape of Caulobacter crescentus enhances surface colonization in flow. Nature Communications. Article published online May 8, 2014. DOI: 10.1038/ncomms4824

The work was supported by the Gordon and Betty Moore Foundation (grant no. GBMF 2550.02), the National Science Foundation (grant no. CBET-1234500), and the National Institutes of Health Director’s New Investigator Innovator Award (grant no. 1DP2OD004389).

Too many chefs: Smaller groups exhibit more accurate decision-making (Proceedings of the Royal Society B)


Smaller groups actually tend to make more accurate decisions, according to a new study from Princeton University Professor Iain Couzin and graduate student Albert Kao. (Photo credit: Gabriel Miller)

By Morgan Kelly, Office of Communications

The trope that the likelihood of an accurate group decision increases with the abundance of brains involved might not hold up when a collective faces a variety of factors — as often happens in life and nature. Instead, Princeton University researchers report that smaller groups actually tend to make more accurate decisions, while larger assemblies may become excessively focused on only certain pieces of information.

The findings present a significant caveat to what is known about collective intelligence, or the “wisdom of crowds,” wherein individual observations — even if imperfect — coalesce into a single, accurate group decision. A classic example of crowd wisdom is English statistician Sir Francis Galton’s 1907 observation of a contest in which villagers attempted to guess the weight of an ox. Although not one of the 787 estimates was correct, the average of the guessed weights was a mere one pound short of the animal’s recorded heft. Along those lines, the consensus has been that group decisions are enhanced as more individuals have input.

But collective decision-making has rarely been tested under complex, “realistic” circumstances where information comes from multiple sources, the Princeton researchers report in the journal Proceedings of the Royal Society B. In these scenarios, crowd wisdom peaks early then becomes less accurate as more individuals become involved, explained senior author Iain Couzin, a professor of ecology and evolutionary biology.

“This is an extension of the wisdom-of-crowds theory that allows us to relax the assumption that being in big groups is always the best way to make a decision,” Couzin said.

“It’s a starting point that opens up the possibility of capturing collective decision-making in a more realistic environment,” he said. “When we do see small groups of animals or organisms making decisions they are not necessarily compromising accuracy. They might actually do worse if more individuals were involved. I think that’s the new insight.”

Couzin and first author Albert Kao, a graduate student of ecology and evolutionary biology in Couzin’s group, created a theoretical model in which a “group” had to decide between two potential food sources. The group’s decision accuracy was determined by how well individuals could use two types of information: One that was known to all members of the group — known as correlated information — and another that was perceived by only some individuals, or uncorrelated information. The researchers found that the communal ability to pool both pieces of information into a correct, or accurate, decision was highest in a band of five to 20. After that, the accurate decision increasingly eluded the expanding group.
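A stripped-down simulation conveys the flavor of the result. The sketch below uses our own parameter choices rather than the paper's model details: every member sees a single shared, low-reliability cue, some members also hold independent private cues, and the majority decides. Accuracy peaks at small sizes, then sinks toward the reliability of the shared cue as the group grows.

```python
# Toy version of correlated vs. uncorrelated information in group choice.
import random

def majority_correct(n, p_shared=0.55, p_private=0.7, f_private=0.5):
    shared_is_right = random.random() < p_shared   # one draw, seen by everyone
    votes = 0
    for _ in range(n):
        if random.random() < f_private:            # member holds a private cue
            votes += random.random() < p_private
        else:                                      # member follows the shared cue
            votes += shared_is_right
    return 2 * votes > n                           # True if the majority is correct

for n in [1, 5, 21, 101, 501]:
    trials = 20_000
    acc = sum(majority_correct(n) for _ in range(trials)) / trials
    print(f"group of {n:3d}: accuracy {acc:.3f}")
# Accuracy peaks for small groups (~5) and decays toward p_shared (0.55)
# as the shared, correlated cue dominates ever larger majorities.
```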

At work, Kao said, was the dynamic between correlated and uncorrelated cues. With more individuals, that which is known by all members comes to dominate the decision-making process. The uncorrelated information gets drowned out, even if individuals within the group are still well aware of it.

In smaller groups, on the other hand, the lesser-known cues nonetheless earn as much consideration as the more common information. This is due to the more random nature of small groups, which is known as “noise” and typically seen as an unwelcome distraction. Couzin and Kao, however, found that noise is surprisingly advantageous in these smaller arrangements.

“It’s surprising that noise can enhance the collective decision,” Kao said. “The typical assumption is that the larger the group, the greater the collective intelligence.

“We found that if you increase group size, you see the wisdom-of-crowds benefit, but if the group gets too large there is an over-reliance on high-correlation information,” he said. “You would find yourself in a situation where the group uses that information to the point that it dominates the group’s decision-making.”

None of this is to suggest that large groups would benefit from axing members, Couzin said. The size threshold he and Kao found corresponds with the number of individuals making the decisions, not the size of the group overall. The researchers cite numerous studies — including many from Couzin’s lab — showing that decisions in animal groups such as schools of fish can often fall to a select few members. Thus, these organisms can exhibit highly coordinated movements despite vast numbers of individuals. (Such hierarchies could help animals realize a dual benefit of efficient decision-making and defense via strength-in-numbers, Kao said.)

“What’s important is the number of individuals making the decision,” Couzin said. “Just looking at group size per se is not necessarily relevant. It depends on the number of individuals making the decision.”

Read the abstract.

Kao, Albert B., Iain D. Couzin. 2014. Decision accuracy in complex environments is often maximized by small group sizes. Proceedings of the Royal Society B. Article published online April 23, 2014. DOI: 10.1098/rspb.2013.3305

This work was supported by a National Science Foundation Graduate Research Fellowship, a National Science Foundation Doctoral Dissertation Improvement Grant (no. 1210029), the National Science Foundation (grant no. PHY-0848755), the Office of Naval Research (award no. N00014-09-1-1074), the Human Frontier Science Program (grant no. RGP0065/2012), the Army Research Office (grant no. W911NG-11-1-0385), and an NSF EAGER grant (no. IOS-1251585).