Princeton researchers report new system to study chronic hepatitis B

A co-culture of human hepatocytes and non-parenchymal stromal cells self-assembles into liver-like structures that can be infected for extended periods with the hepatitis B virus. Image courtesy of Benjamin Winer and Alexander Ploss, Princeton University Department of Molecular Biology.

By the Department of Molecular Biology

Scientists from Princeton University's Department of Molecular Biology have successfully tested a cell-culture system that will allow researchers to perform laboratory-based studies of long-term hepatitis B virus (HBV) infections. The technique, which is described in a paper published July 25 in the journal Nature Communications, will aid the study of viral persistence and accelerate the development of antiviral drugs to cure chronic hepatitis B, a condition that affects over 250 million people worldwide and can cause severe liver disease, including liver cancer.

HBV specifically infects the liver by binding to a protein called sodium-taurocholate co-transporting polypeptide (NTCP) that is only present on the surface of liver cells. Once inside the cell, HBV hijacks its host’s cellular machinery to convert the virus’s DNA into a stable “mini-chromosome.” This allows the virus to establish persistent, long-term infections that can ultimately cause liver fibrosis, cirrhosis and hepatocellular carcinoma. The World Health Organization estimates that 600,000 people die every year as a result of HBV infection.

Researchers have so far failed to develop drugs that can cure chronic HBV infections, partly because they have not been able to study the long-term infection of liver cells grown in the laboratory. Liver cells—also known as hepatocytes—lose their function within days of being isolated from donor livers, preventing researchers from studying anything other than the acute stage of HBV infection. Hepatocytes can be maintained for longer when they are co-cultured with other, supportive cells.

“In previous studies using hepatocytes and cells known as fibroblasts grown on micro-patterned surfaces, HBV infections worked with only a few donors, and infection lasted for no longer than 14-19 days and required the suppression of antiviral cell signaling pathways, which poses problems for studying host-cell responses to HBV and for antiviral drug testing,” said Alexander Ploss, an assistant professor of molecular biology at Princeton University.

Ploss and colleagues at Princeton and the Hurel Corporation, in work led by graduate student Benjamin Winer, tested a different system, in which primary human hepatocytes are co-cultured with non-parenchymal stromal cells, which are cells that support the function of the parenchymal hepatocytes in the liver. When plated in collagen-coated labware, the co-cultures self-assemble into liver-like structures. These self-assembling cultures could be persistently infected with HBV for more than 30 days without the aid of antiviral signaling inhibitors. Moreover, the system worked with hepatocytes from a variety of donors and with viruses isolated from chronically infected patients, which are harder to work with than lab-grown strains of HBV.

“The establishment of a co-culturing system of human primary hepatocytes and non-parenchymal stromal cells for extended HBV infection is a valuable addition to the armamentarium of cell culture model systems for the study of HBV biology and therapeutic development, which has been hampered by a relative lack of efficient infectious cell culture systems,” said T. Jake Liang, a senior investigator at the National Institute of Diabetes and Digestive and Kidney Diseases, who was not involved in the research.

Ploss and colleagues were able to scale down their co-culture infections to volumes as small as a few hundred microliters. This will be important for future high-throughput screens for anti-HBV drug candidates. As a proof of principle for such screens, the researchers showed that they could block HBV infections in their co-culture system using drugs that either prevent the virus's entry into hepatocytes or inhibit a viral enzyme essential for its replication. “The platform presented here may aid the identification and testing of novel therapeutic regimens,” Ploss said.

This study was supported in part by grants from the National Institutes of Health (R21AI117213 to Alexander Ploss and R37GM086868 to Tom W. Muir), a Burroughs Wellcome Fund Award for Investigators in Pathogenesis (to Alexander Ploss) and funds from Princeton University (to Alexander Ploss). Benjamin Y. Winer is the recipient of an NIH/NRSA Ruth L. Kirschstein F31 predoctoral fellowship awarded by the National Institute of Allergy and Infectious Diseases. Felix Wojcik is supported by a German Research Foundation (DFG) postdoctoral fellowship.

The study, “Long-term hepatitis B infection in a scalable hepatic co-culture system,” by Benjamin Y. Winer, Tiffany S. Huang, Eitan Pludwinski, Brigitte Heller, Felix Wojcik, Gabriel E. Lipkowitz, Amit Parekh, Cheul Cho, Anil Shrirao, Tom W. Muir, Eric Novik, Alexander Ploss, was published in Nature Communications on July 25, 2017. DOI: 10.1038/s41467-017-00200-8

Study reveals ways in which cells feel their surroundings

Researchers used computer modeling to show how cells can feel their way through their surroundings, which is important when, for example, a tumor cell invades a new tissue or organ. This computer simulation depicts collagen fibers that make up the extracellular matrix in which cells live. Local arrangements of these fibers are extremely variable in their flexibility, with some fibers (blue) responding strongly to the cell and others (red) responding hardly at all. This surprising local variability makes it difficult for cells (represented by green arrows) to determine the overall stiffness of their surroundings, and suggests that cells need to move or change shape to sample more of the surrounding area.

By Catherine Zandonella, Office of the Dean for Research

Cells push out tiny feelers to probe their physical surroundings, but how much can these tiny sensors really discover? A new study led by Princeton University researchers and colleagues finds that the typical cell’s environment is highly varied in the stiffness or flexibility of the surrounding tissue, and that to gain a meaningful amount of information about its surroundings, the cell must move around and change shape. The finding aids the understanding of how cells respond to mechanical cues and may help explain what happens when migrating tumor cells colonize a new organ or when immune cells participate in wound healing.

“Our study looks at how cells literally feel their way through an environment, such as muscle or bone,” said Ned Wingreen, Princeton’s Howard A. Prior Professor in the Life Sciences and professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics. “These tissues are highly disordered on the cellular scale, and the cell can only make measurements in the immediate area around it,” he said. “We wanted to model this process.” The study was published online on July 18 in the journal Nature Communications.

The organs and tissues of the body are enmeshed in a fiber-rich structure known as the extracellular matrix, which provides a scaffold for the cells to live, move and differentiate to carry out specific functions. Cells interact with this matrix by extending sticky proteins out from the cell surface to pull on nearby fibers. Previous work, mostly employing artificial flat surfaces, has shown that cells can use this tactile feedback to determine the elasticity or stiffness in a process called mechanosensing. But because the fibers of the natural matrix are all interconnected in a jumbled, three-dimensional network, it was not clear how much useful information the cell could glean from feeling its immediate surroundings.

To find out, the researchers built a computer simulation that mimicked a typical cell in a matrix made of collagen protein, which is found in skin, bones, muscles and connective tissue. The team also modeled a cell in a network of fibrin, the strong, stringy protein that makes up blood clots. To accurately capture the composition of these networks, the researchers worked with Chase Broedersz, a former Princeton Lewis-Sigler Fellow who is now a professor of physics at Ludwig-Maximilians-University of Munich, and his colleagues Louise Jawerth and Stefan Münster to first create physical models of the matrices, using approaches originally developed in the group of collaborator David Weitz, a physicist at Harvard University. Princeton graduate student Farzan Beroz then used those models to recreate virtual versions of the collagen and fibrin networks in computer models.

With these virtual networks, Beroz, Broedersz and Wingreen could then ask the question: can cells glean useful information about the elasticity or stiffness of their environment by feeling their surroundings? If the answer is yes, then the finding would shed light on how cells can change in response to those surroundings. For example, the work might help explain how cancer cells are able to detect that they’ve arrived at an organ that has the right type of scaffold to support tumor growth, or how cells that arrive at a wound know to start secreting proteins to promote healing.

Using mathematics, the researchers calculated how the networks would deform when nearby fibers are pulled on by cells. They found that both the collagen and fibrin networks contained configurations of fibers with remarkably broad ranges of collective stiffness, from rather bendable to very rigid, and that these regions could sit immediately next to each other. As a result, a cell could have two nearby probes, one detecting a stiff region and the other a soft one, making it difficult for the cell to learn by mechanosensing what type of tissue it inhabits. “We were surprised to find that the cell’s environment can vary quite a lot even across a small distance,” Wingreen said.
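The study's calculations are far more elaborate, but the core step, pulling on individual points of a disordered elastic network and measuring the local response, can be sketched with a simple spring-network model. Everything below (network size, the stiffness distribution, boundary conditions) is an illustrative assumption, not a parameter from the paper:

```python
# Toy probe of local stiffness in a disordered 2D fiber network (illustrative only).
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((300, 2))                 # random node positions in a unit square
edges = set()
for tri in Delaunay(pts).simplices:        # triangulate to get a connected network
    for a, b in ((0, 1), (1, 2), (0, 2)):
        i, j = sorted((tri[a], tri[b]))
        edges.add((i, j))

n = len(pts)
K = np.zeros((2 * n, 2 * n))               # global stiffness matrix, 2 DOFs per node
for i, j in edges:
    d = pts[j] - pts[i]
    u = d / np.linalg.norm(d)              # unit vector along the fiber
    k = rng.lognormal(0.0, 1.0)            # broadly distributed fiber stiffnesses
    block = k * np.outer(u, u)             # central-force spring contribution
    for a, b, s in ((i, i, 1), (j, j, 1), (i, j, -1), (j, i, -1)):
        K[2 * a:2 * a + 2, 2 * b:2 * b + 2] += s * block

pinned = (pts[:, 0] < 0.05) | (pts[:, 0] > 0.95) | (pts[:, 1] < 0.05) | (pts[:, 1] > 0.95)
free = np.repeat(~pinned, 2)               # pin the boundary nodes, keep interior DOFs
Kf = K[np.ix_(free, free)]

stiffness = []
for idx in range(np.count_nonzero(~pinned)):   # the "cell" pulls on one node at a time
    f = np.zeros(Kf.shape[0])
    f[2 * idx] = 1.0                           # unit force in x at the probe node
    disp = np.linalg.solve(Kf, f)
    stiffness.append(1.0 / disp[2 * idx])      # effective local stiffness at that node

print(f"local stiffness ranges from {min(stiffness):.1f} to {max(stiffness):.1f}")
```

In runs of this toy model, the stiffness felt at neighboring nodes typically spans a very wide range, the same qualitative picture the researchers describe for real collagen and fibrin networks.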

The researchers concluded that to obtain an accurate assessment of its environment, a cell must move around and also change shape, for example elongating to cover a different area of the matrix. “What we found in our simulation conforms to what experimentalists have found,” Wingreen said, “and reveals new, ‘intelligent’ strategies that cells could employ to feel their way through tissue environments.”

The study was supported in part by the National Science Foundation (grants DMR-1310266, DMR-1420570, PHY-1305525 and PHY-1066293), the German Excellence Initiative, and the Deutsche Forschungsgemeinschaft.

The study, “Physical limits to biomechanical sensing in disordered fiber networks,” by Farzan Beroz, Louise Jawerth, Stefan Münster, David Weitz, Chase Broedersz, and Ned Wingreen, was published in the journal Nature Communications on July 18, 2017. DOI: 10.1038/ncomms16096.

New model projects an increase in dust storms in the US

Drifting dust burying abandoned farm equipment in 1935. Image courtesy of NOAA.

By Pooja Makhijani for the Office of Communications

Could the storms that once engulfed the Great Plains in clouds of black dust in the 1930s once again wreak havoc in the U.S.? A new statistical model developed by researchers at Princeton University and the National Oceanic and Atmospheric Administration (NOAA)’s Geophysical Fluid Dynamics Laboratory (GFDL) predicts that climate change will amplify dust activity in parts of the U.S. in the latter half of the 21st century, which may lead to more frequent dust storms with far-reaching impacts on public health and infrastructure.

The model, detailed in a study published July 17 in the journal Scientific Reports, eliminates some of the uncertainty found in previous dust activity models by using present-day satellite data such as dust optical depth, which measures to what extent dust particles block sunlight, as well as leafy green coverage over land and other factors.
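As a point of reference (this is the standard textbook definition, not the paper's specific retrieval method), optical depth measures how strongly a layer of dust attenuates the sunlight passing through it:

$$I = I_{0}\,e^{-\tau},$$

where $I_0$ is the intensity of sunlight entering the dust layer, $I$ is the intensity emerging from it, and $\tau$ is the dust optical depth. A dust optical depth of 1, for example, dims the direct beam to about 37 percent of its original strength.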

“Few existing climate models have captured the magnitude and variability of dust across North America,” said Bing Pu, the study’s lead author and an associate research scholar in the Program in Atmospheric and Oceanic Sciences (AOS), a collaboration between Princeton and GFDL.

Dust storms happen when wind blows soil particles into the atmosphere. Dust storms are most frequent and destructive in arid climates with loose soil — especially on lands affected by drought and deforestation. Certain regions of the U.S., such as the southwestern deserts and the central plains, are dust-prone. Most importantly, existing climate models predict “unprecedented” dry conditions in the late-21st century due to an increase in greenhouse gases in these very areas.

It is this “perfect storm” of geography and predicted drought and drought-like conditions that led Pu and her colleague Paul Ginoux, a physical scientist at GFDL, to examine the influence of climate change on dust. They analyzed satellite data on the frequency of dust events and the land’s leafy green coverage over the contiguous U.S., as well as precipitation and surface wind speed. They report that climate change will increase dust activity in the southern Great Plains from spring to fall in the latter half of the 21st century due to reduced rainfall, increased land-surface bareness and increased surface wind speed. Conversely, they predict reduced dust activity in the northern Great Plains in spring during the same period due to increased precipitation and increased surface vegetation.
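Pu and Ginoux's statistical model is considerably more sophisticated, but the basic idea, relating dust optical depth to climate predictors and then applying the fitted relation to projected future conditions, can be sketched in a few lines of Python. All variable names, coefficients and data below are invented for illustration:

```python
# Illustrative regression of dust optical depth on climate predictors (toy data).
import numpy as np

rng = np.random.default_rng(42)
n = 240                                   # e.g., 20 years of monthly means at one grid cell

precip = rng.gamma(2.0, 1.5, n)           # precipitation (mm/day)
bareness = rng.uniform(0.0, 1.0, n)       # land-surface bareness (1 - vegetation fraction)
wind = rng.normal(4.0, 1.0, n)            # surface wind speed (m/s)

# Synthetic "observed" dust optical depth; the signs (rain suppresses dust, bare
# ground and wind enhance it) mirror the relationships described in the article.
dod = 0.05 - 0.02 * precip + 0.30 * bareness + 0.04 * wind + rng.normal(0.0, 0.02, n)

X = np.column_stack([np.ones(n), precip, bareness, wind])
coef, *_ = np.linalg.lstsq(X, dod, rcond=None)    # ordinary least-squares fit
print("fitted coefficients (intercept, precip, bareness, wind):", np.round(coef, 3))

# Applying the fitted relation to hypothetical late-century predictor values
# yields a projected dust optical depth for the same location.
future = np.array([1.0, 1.5, 0.7, 4.5])
print("projected dust optical depth:", round(float(future @ coef), 3))
```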

Although it is still unclear if rising temperatures themselves trigger the release of yet more dust into the atmosphere, this research offers a glimpse of what the future might hold. “This is an early attempt to project future changes in dust activity in parts of the United States caused by increasing greenhouse gases,” Pu said. Nonetheless, these findings are important given the huge economic and health consequences of severe dust storms, as they can disrupt public transportation systems and trigger respiratory disease epidemics. “Our specific projections may provide an early warning on erosion control, and help improve risk management and resource planning,” she said.

The paper, “Projection of American dustiness in the late 21st century due to climate change,” was published July 17, 2017 in the journal Scientific Reports (DOI: 10.1038/s41598-017-05431-9) and is available online.

This research was supported by NOAA, Princeton University’s Cooperative Institute for Climate Science, and NASA grant NNH14ZDA001N-ACMAP.

Read more about the research in this GFDL Research Highlight.

How TPX2 helps microtubules branch out

By Staff, Department of Molecular Biology

Branching microtubules, which are structures involved in cell division, form in response to a protein known as TPX2, according to a study conducted at Princeton University in the laboratory of Sabine Petry, assistant professor of molecular biology. The image was featured on the cover of the Journal of Cell Biology. Image credit: Alfaro-Aco et al.

A new study has revealed insights into how new microtubules branch from the sides of existing ones. Researchers at Princeton University investigated proteins that control the formation of the thin, hollow tubes, which play an essential role in cellular structure and cell division. In a study published in the Journal of Cell Biology in March, the team found that one of these microtubule regulators—a protein called TPX2—controls the formation of new microtubule branches.

“TPX2 is often overexpressed in various cancers, and, in many cases, serves as a prognostic indicator,” said Raymundo Alfaro-Aco, a graduate student in the Department of Molecular Biology. Alfaro-Aco conducted the study with Akanksha Thawani, a graduate student in the Department of Chemical and Biological Engineering, in the laboratory of Sabine Petry, assistant professor of molecular biology. “Therefore, elucidating the role of TPX2 in cell division in general can have important implications in our understanding of human diseases,” Alfaro-Aco said.

Microtubules are formed by the polymerization of two proteins, α- and β-tubulin, but a third form of tubulin—γ-tubulin—helps to initiate (or “nucleate”) microtubule polymerization inside cells. γ-Tubulin combines with several other proteins to form γ-tubulin ring complexes (γ-TuRCs) that localize, for example, to the cell’s centrosomes, which nucleate and organize most of the microtubules that assemble into the mitotic spindle, the cellular structure that segregates chromosomes into newly forming daughter cells during cell division.

While a postdoc at the University of California-San Francisco, Petry demonstrated that spindles also contain microtubules that are nucleated from the sides of other microtubules (Petry et al., Cell. 152: 768-777, 2013). This “branching nucleation” process depends, in part, on a microtubule-binding protein called TPX2. Petry and Alfaro-Aco decided to investigate exactly how this protein stimulates branching microtubule nucleation.

To explore this question, the researchers used cell-free extracts prepared from frog eggs, which are capable of forming functional spindles in vitro, Alfaro-Aco said. “This powerful system allows us to easily add or remove factors, such as proteins or small molecules, to probe different aspects of spindle assembly,” he said. “Combining this extract system with a powerful imaging method — known as total internal reflection fluorescence microscopy — allows us to observe and measure microtubule events, such as nucleation, at the level of single microtubules.”

By adding different fragments of TPX2 to egg extracts and observing their effects on microtubules, Alfaro-Aco found that a fragment containing three of the protein’s seven alpha-helical domains was the smallest piece capable of stimulating branching microtubule nucleation.

This minimal fragment contained three short stretches of amino acids that are similar to sequences found in proteins that bind and activate γ-TuRC. The researchers found that deleting or mutating these sequences eliminated the TPX2 fragment’s capacity to stimulate microtubule branching, without affecting the protein’s ability to bind to microtubules.

The team also found that this region of TPX2 binds to γ-TuRC. Mutating the three sequences found in other γ-TuRC-binding proteins didn’t inhibit this interaction but, because these mutants no longer stimulate branching microtubule nucleation, Alfaro-Aco and colleagues think that the sequences are required to activate γ-TuRC. TPX2 may therefore bind to existing spindle microtubules and then bind and activate γ-TuRC to initiate the formation of a new microtubule branch. This process is crucial for spindle assembly and the accurate segregation of chromosomes.

This work was supported by the National Institutes of Health/National Institute of General Medical Sciences (grant # 4R00GM100013), the Pew Scholars Program in the Biomedical Sciences, the Sidney Kimmel Foundation, and the David and Lucile Packard Foundation. In addition, Alfaro-Aco received support from the Howard Hughes Medical Institute and the National Science Foundation.

Alfaro-Aco, R., A. Thawani, and S. Petry. Structural analysis of the role of TPX2 in branching microtubule nucleation. Journal of Cell Biology, 216: 983-997, 2017. DOI: 10.1083/jcb.201607060 | Published March 6, 2017.

Mysterious force harnessed in a silicon chip

By Catherine Zandonella for the Office of the Dean for Research

Getting something from nothing sounds like a good deal, so for years scientists have been trying to exploit the tiny amount of energy found in nearly empty space. It’s an energy source so obscure it was once derided as a form of “perpetual motion.” Now, a research team including Princeton scientists has found a way to harness this energy using a silicon-chip device, potentially opening the door to practical applications.

This energy, predicted seven decades ago by the Dutch scientist Hendrik Casimir, arises from quantum effects and can be seen experimentally by placing two opposing plates very close to each other in a vacuum. At close range, the plates attract each other. Until recently, however, harnessing this “Casimir force” to do anything useful seemed impossible.
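For ideal, perfectly conducting parallel plates separated by a distance $a$ in vacuum, the textbook result for the Casimir pressure is

$$\frac{F}{A} = -\frac{\pi^{2}\hbar c}{240\,a^{4}},$$

where $\hbar$ is the reduced Planck constant and $c$ is the speed of light. The steep $1/a^{4}$ dependence explains why the force only becomes measurable when the plates are within tens or hundreds of nanometers of each other; real materials and patterned geometries, such as the silicon structures described below, modify this ideal result.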

A new silicon chip built by researchers at Hong Kong University of Science and Technology and Princeton University is a step toward harnessing the Casimir force. Using a clever assembly of micron-sized shapes etched into the plates, the researchers demonstrated that the plates can instead repel each other as they are brought close together. Constructing this device entirely out of a single silicon chip could open the way to using the Casimir force for practical applications such as keeping tiny machine parts from sticking to each other. The work was published in the February issue of the journal Nature Photonics.

Energy of a vacuum

Researchers created a silicon device that enabled them to observe the Casimir force. (Image credit: Nature Photonics)

“This is among the first experimental verifications of the Casimir effect on a silicon chip,” said Alejandro Rodriguez, an assistant professor of electrical engineering at Princeton University, who provided theoretical calculations for the device, which was built by a team led by Ho Bun Chan at Hong Kong University of Science and Technology. “And it also allows you to make measurements of forces in very nontrivial structures like these that cause repulsion. It is a double-whammy.”

The silicon structure looks like two plates lined with teeth that face each other across a tiny gap only about 100 nanometers wide. (A human hair is 60,000 to 80,000 nanometers wide.) As the two plates are pushed closer together, the Casimir force comes into play and pushes them apart.

This repulsive effect happens without any input of energy and to all appearances, in a vacuum. These characteristics led this energy to be called “zero-point energy.” They also fueled earlier claims that the Casimir force could not exist because its existence would imply some sort of perpetual motion, which would be impossible according to the laws of physics.

The force, which has since been experimentally confirmed to exist, arises from quantum fluctuations of the electromagnetic field, which persist in the gap even after all the air has been evacuated.

The team demonstrated that it is possible to build a device in silicon to control the Casimir force.

“Our paper shows that it is possible to control the Casimir force using structures of complex, tailor-made shapes,” said Ho Bun Chan, senior author on the paper and a scientist at the Hong Kong University of Science and Technology. His team drew on earlier work by Rodriguez published in 2008 that proposed shapes that would be expected to yield a Casimir force that could both attract and repel. “This paper is the experimental realization using a structure inspired by Rodriguez’s design,” Chan said.

Rodriguez and his team at Princeton developed techniques that allowed the researchers to compute interactions between two parallel plates as they approach each other. With these tools, they were then able to explore what would happen if more complex geometries were used. This led to some of the first predictions of a repulsive Casimir force in 2008.

The Rodriguez group used nanophotonic techniques, which involved measuring how light would interact with the structures, to get at the complex equations of how the force arises from the interaction of two plates.

The silicon device included a small mechanical spring that the researchers used to measure the force between the two plates, and to verify that the quantum force can be repulsive. The roughly T-shaped silicon teeth are what allow the repulsive force to form. The repulsion comes from how different parts of the surface interact with the opposite surface.

“We tried to think about what kind of shapes Chan’s group would have to fabricate to lead to a significant repulsive force, so we did some background studies and calculations to make sure they would see enough non-monotonicity as to be measurable,” Rodriguez said.

Going forward, the researchers plan to explore other configurations that may give rise to even larger repulsive forces and more well-defined repulsion at larger separations.

Funding for the study came from the Research Grants Council of Hong Kong and the National Science Foundation (grant no. DMR-1454836).

The paper, “Measurement of non-monotonic Casimir forces between silicon nanostructures,” by L. Tang, M. Wang, C. Y. Ng, M. Nikolic, C. T. Chan, A. W. Rodriguez and H. B. Chan was published in the journal Nature Photonics online Jan. 9, 2017 and in the February 2017 issue. Nature Photonics 11, 97–101 (2017). DOI: 10.1038/nphoton.2016.254.

Artificial topological matter opens new research directions

By Catherine Zandonella, Office of the Dean for Research

An international team of researchers has created a new structure that allows the tuning of topological properties in such a way as to turn these unique behaviors on or off. The structure could open up possibilities for new explorations into the properties of topological states of matter.

“This is an exciting new direction in topological matter research,” said M. Zahid Hasan, professor of physics at Princeton University and an investigator at Lawrence Berkeley National Laboratory in California, who led the study, which was published March 24 in the journal Science Advances. “We are engineering new topological states that do not occur naturally, opening up numerous exotic possibilities for controlling the behaviors of these materials.”

The new structure consists of alternating layers of topological and normal, or trivial, insulators, an architecture that allows the researchers to turn on or off the flow of current through the structure. The ability to control the current suggests possibilities for circuits based on topological behaviors, but perhaps more importantly presents a new artificial crystal lattice structure for studying quantum behaviors.

Theories behind the topological properties of matter were the subject of the 2016 Nobel Prize in physics awarded to Princeton University’s F. Duncan Haldane and two other scientists. One class of matter is topological insulators, which are insulators on the inside but allow current to flow without resistance on the surfaces.

In the new structure, interfaces between the layers create a one-dimensional lattice in which topological states can exist. The one-dimensional nature of the lattice can be thought of as if one were to cut into the material and remove a very thin slice, and then look at the thin edge of the slice. This one-dimensional lattice resembles a chain of artificial atoms. This behavior is emergent because it arises only when many layers are stacked together.

The researchers made different samples where they could control how the electrons tunnel from interface to interface through alternating layers of trivial and topological insulators, forming an emergent, tunable one-dimensional quantum lattice. The top panel (A, B, C, and D) shows a structure where the trivial layer is relatively thin, enabling electron-like particles to tunnel through the layers (topological phase). The bottom panel (G, H, I, and J) shows a structure where the trivial insulator is relatively thick and blocks tunneling (trivial phase). (Image courtesy of Science/AAAS)

By changing the composition of the layers, the researchers can control the hopping of electron-like particles, called Dirac fermions, through the material. For example, by making the trivial-insulator layer relatively thick – still only about four nanometers – the Dirac fermions cannot travel through it, making the entire structure effectively a trivial insulator. However, if the trivial-insulator layer is thin – about one nanometer – the Dirac fermions can tunnel from one topological layer to the next.
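This thickness dependence reflects the familiar exponential suppression of quantum tunneling through a barrier. As a textbook estimate (not the paper's actual calculation), the transmission probability scales as

$$T \sim e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2m^{*}\Delta}}{\hbar},$$

where $d$ is the thickness of the trivial layer, $\Delta$ is the effective barrier height and $m^{*}$ is the effective mass. Growing $d$ from roughly one nanometer to four nanometers can therefore cut the tunneling rate by orders of magnitude, which is what switches the stack between its topological and trivial phases.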

To fashion the two materials, the Princeton team worked with researchers at Rutgers University led by Seongshik Oh, associate professor of physics, who, in collaboration with Hasan and others, showed in 2012 that adding indium to a topological insulator, bismuth selenide, caused it to become a trivial insulator. Prior to that, bismuth selenide (Bi2Se3) had been theoretically and experimentally identified as a topological insulator by Hasan’s team, a finding published in the journal Nature in 2009.

“We had shown that, depending on how much indium you add, the resulting material had this nice tunable property from trivial to topological insulator,” Oh said, referring to the work published in Physical Review Letters in 2012.

Graduate students Ilya Belopolski of Princeton and Nikesh Koirala of Rutgers combined two state-of-the-art techniques with new instrumentation development and worked together on layering these two materials, bismuth selenide and indium bismuth selenide, to design the optimal structure. One of the challenges was getting the lattice structures of the two materials to match up so that the Dirac fermions could hop from one layer to the next. Belopolski and Suyang Xu worked with colleagues at Princeton University, Lawrence Berkeley National Laboratory and multiple other institutions to use high-resolution angle-resolved photoemission spectroscopy to optimize the behavior of the Dirac fermions through a growth-to-measurement feedback loop.

Princeton research team from L to R: Guang Bian, M. Zahid Hasan, Nasser Alidoust, Hao Zheng, Daniel Sanchez, Suyang Xu and Ilya Belopolski (Image credit: Princeton University)

Although no topologically similar states exist naturally, the researchers note that analogous behavior can be found in a chain of polyacetylene, a known model of one-dimensional topological behavior described by the 1979 Su-Schrieffer-Heeger theoretical model of an organic polymer.

The research presents a foray into making artificial topological materials, Hasan said. “In nature, whatever a material is, topological insulator or not, you are stuck with that,” Hasan said. “Here we are tuning the system in a way that we can decide in which phase it should exist; we can design the topological behavior.”

The ability to control the travel of light-like Dirac fermions could eventually lead future researchers to harness the resistance-less flow of current seen in topological materials. “These types of topologically tunable heterostructures are a step toward applications, making devices where topological effects can be utilized,” Hasan said.

The Hasan group plans to further explore ways to tune the thickness and explore the topological states in connection to the quantum Hall effect, superconductivity, magnetism, and Majorana and Weyl fermion states of matter.

In addition to work done at Princeton and Rutgers, the research featured contributions from the following institutions: South University of Science and Technology of China; Swiss Light Source, Paul Scherrer Institute; National University of Singapore; University of Central Florida; Universität Würzburg; Diamond Light Source, Didcot, U.K.; and Synchrotron SOLEIL, Saint-Aubin, France.

Work at Princeton University and synchrotron-based ARPES measurements led by Princeton researchers were supported by the U.S. Department of Energy under Basic Energy Sciences grant no. DE-FG-02-05ER46200 (to M.Z.H.). I.B. was supported by an NSF Graduate Research Fellowship. N.K., M.B., and S.O. were supported by the Emergent Phenomena in Quantum Systems Initiative of the Gordon and Betty Moore Foundation under grant no. GBMF4418 and by the NSF under grant no. NSF-EFMA-1542798. H.L. acknowledges support from the Singapore National Research Foundation under award no. NRF-NRFF2013-03. M.N. was supported by start-up funds from the University of Central Florida. The work acknowledges support of Diamond Light Source, Didcot, U.K., for time on beamline I05 under proposal SI11742-1. Some measurements were carried out at the ADRESS beamline of the Swiss Light Source, Paul Scherrer Institute, Switzerland. This study was in part supported by grant no. 11504159 of the National Natural Science Foundation of China (NSFC), grant no. 2016A030313650 of NSFC Guangdong, and project no. JCY20150630145302240 of the Shenzhen Science and Technology Innovations Committee.

The paper, “A novel artificial condensed matter lattice and a new platform for one-dimensional topological phases,” by Ilya Belopolski, Su-Yang Xu, Nikesh Koirala, Chang Liu, Guang Bian, Vladimir Strocov, Guoqing Chang, Madhab Neupane, Nasser Alidoust, Daniel Sanchez, Hao Zheng, Matthew Brahlek, Victor Rogalev, Timur Kim, Nicholas C. Plumb, Chaoyu Chen, François Bertran, Patrick Le Fèvre, Amina Taleb-Ibrahimi, Maria-Carmen Asensio, Ming Shi, Hsin Lin, Moritz Hoesch, Seongshik Oh and M. Zahid Hasan, was published in the journal Science Advances on March 24, 2017 (Sci. Adv. 3, e1501692).

Study reveals the multitasking secrets of an RNA-binding protein

Two views of one of Glo’s RNA-binding domains highlight the amino acids required for binding G-tract RNA (left) and U-A stem structures (right). Courtesy of Cell Reports.

By Staff, Department of Molecular Biology

Researchers from Princeton University and the National Institute of Environmental Health Sciences have discovered how a fruit fly protein binds and regulates two different types of RNA target sequence. The study, published April 4 in the journal Cell Reports, may help explain how various RNA-binding proteins, many of which are implicated in cancer and neurodegenerative disease, perform so many different functions in the cell.

The human genome encodes hundreds of RNA-binding proteins that together regulate the processing, turnover and localization of the many thousands of RNA molecules expressed in cells. These proteins also control the translation of RNA into proteins. RNA-binding proteins are crucial for maintaining normal cellular function, and defects in this family of proteins can lead to disease. For example, RNA-binding proteins are overexpressed in many human cancers, and mutations in some of these proteins have been linked to neurological and neurodegenerative disorders such as amyotrophic lateral sclerosis. “Understanding the fundamental properties of this class of proteins is very relevant,” said Elizabeth Gavis, the Damon B. Pfeiffer Professor in the Life Sciences and a professor of molecular biology.

Gavis and colleagues are particularly interested in a protein called Glorund (Glo), a type of RNA-binding protein that performs several functions in fruit fly development. This protein was originally identified through its ability to repress the translation of an RNA molecule called nanos into protein in fly eggs. By binding to a stem structure formed by uracil and adenine nucleotides in the nanos RNA, Glo prevents the production of Nanos protein at the front of the embryo, a step that enables the fly’s head to form properly.

Like many other RNA-binding proteins, however, Glo is multifunctional. It regulates several other steps in fly development, apparently by binding to RNAs other than nanos. The mammalian counterparts of Glo, known as heterogeneous nuclear ribonucleoprotein (hnRNP) F/H proteins, bind to RNAs containing stretches of guanine nucleotides known as G-tracts, and, rather than repressing translation, mammalian hnRNP F/H proteins regulate processes such as RNA splicing, in which RNAs are rearranged to produce alternative versions of the proteins they encode.

To understand how Glo might bind to diverse RNAs and regulate them in different ways, Gavis and graduate student Joel Tamayo collaborated with Traci Tanaka Hall and Takamasa Teramoto from the National Institute of Environmental Health Sciences to generate X-ray crystallographic structures of Glo’s three RNA-binding domains. As expected, the three domains were almost identical to the corresponding domains of mammalian hnRNP F/H proteins. They retained, for example, the amino acid residues that bind to G-tract RNA, and the researchers confirmed that, like their mammalian counterparts, each RNA-binding domain of Glo can bind to this type of RNA sequence.

However, the researchers also saw something new. “When we looked at the structures, we realized that there were also some basic amino acids that projected from a different part of the RNA-binding domains that could be involved in contacting RNA,” Gavis explained.

The researchers found that these basic amino acids mediate binding to uracil-adenine (U-A) stem structures like the one found in nanos RNA. Each of Glo’s RNA-binding domains therefore contains two distinct binding surfaces that interact with different types of RNA target sequence. “While there have been examples previously of RNA-binding proteins that carry more than one binding domain, each with a different specificity, this represents the first example of a single domain harboring two different specificities,” said Howard Lipshitz, a professor of molecular genetics at the University of Toronto who was not involved in the study.

To investigate which of Glo’s two RNA-binding modes was required for its different functions in flies, Gavis and colleagues generated insects carrying mutant versions of the RNA-binding protein. Glo’s ability to repress nanos translation during egg development required both of the protein’s RNA-binding modes. The researchers discovered that, as well as binding the U-A stem in the nanos RNA, Glo also recognized a nearby G-tract sequence. But Glo’s ability to regulate other RNAs at different developmental stages only depended on the protein’s capacity to bind G-tracts.

“We think that the binding mode may correlate with Glo’s activity towards a particular RNA,” said Gavis. “If it binds to a G-tract, Glo might promote RNA splicing. If it simultaneously binds to both a G-tract and a U-A stem, Glo acts as a translational repressor.”

The RNA-binding domains of mammalian hnRNP F/H proteins probably have a similar ability to bind two different types of RNA, allowing them to regulate diverse target RNAs within the cell. “This paper represents an exciting advance in a field that has become increasingly important with the discovery that defects in RNA-binding proteins contribute to human diseases such as metabolic disorders, cancer and neurodegeneration,” Lipshitz said. “Since these proteins are evolutionarily conserved from fruit flies to humans, experiments of this type tell us a lot about how their human versions normally work or can go wrong.”

The research was supported in part by a National Science Foundation Graduate Research Fellowship (DGE 1148900), a Japan Society for the Promotion of Science fellowship, the National Institutes of Health (R01 GM061107) and the Intramural Research Program of the National Institute of Environmental Health Sciences. The Advanced Photon Source used for this study is supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, under contract W-31-109-Eng-38.

The study, “The Drosophila hnRNP F/H Homolog Glorund Uses Two Distinct RNA-binding Modes to Diversify Target Recognition,” by Joel Tamayo, Takamasa Teramoto, Seema Chatterjee, Traci Tanaka Hall, and Elizabeth Gavis, was published in the journal Cell Reports on April 4, 2017. DOI: 10.1016/j.celrep.2017.03.022

Princeton-led team produces 3-D map of Earth’s interior

Magma plumes and hotspots, such as the one below Yellowstone, are visible in this subterranean simulation. (Credit: David Pugmire, ORNL)

By Jonathan Hines, Oak Ridge National Laboratory

Because of Earth’s layered composition, scientists have often compared the basic arrangement of its interior to that of an onion. There’s the familiar thin crust of continents and ocean floors; the thick mantle of hot, semisolid rock; the molten metal outer core; and the solid iron inner core.

But unlike an onion, peeling back Earth’s layers to better explore planetary dynamics isn’t an option, forcing scientists to make educated guesses about our planet’s inner life based on surface-level observations. Clever imaging techniques devised by computational scientists, however, offer the promise of illuminating Earth’s subterranean secrets.

Using advanced modeling and simulation, seismic data generated by earthquakes, and one of the world’s fastest supercomputers, a team led by Jeroen Tromp of Princeton University is creating a detailed 3-D picture of Earth’s interior. Currently, the team is focused on imaging the entire globe from the surface to the core–mantle boundary, a depth of 1,800 miles.

These high-fidelity simulations add context to ongoing debates related to Earth’s geologic history and dynamics, bringing prominent features like tectonic plates, magma plumes, and hotspots into view. In September 2016, the team published a paper in Geophysical Journal International on its first-generation global model. Created using data from 253 earthquakes captured by seismograms scattered around the world, the team’s model is notable for its global scope and high scalability.

“This is the first global seismic model where no approximations—other than the chosen numerical method—were used to simulate how seismic waves travel through the Earth and how they sense heterogeneities,” said Ebru Bozdag, a co-principal investigator of the project and an assistant professor of geophysics at the University of Nice Sophia Antipolis. “That’s a milestone for the seismology community. For the first time, we showed people the value and feasibility of running these kinds of tools for global seismic imaging.”

The project’s genesis can be traced to a seismic imaging theory first proposed in the 1980s. To fill in gaps within seismic data maps, the theory posited a method called adjoint tomography, an iterative full-waveform inversion technique. This technique leverages more information than competing methods, using forward waves that travel from the quake’s origin to the seismic receiver and adjoint waves, which are mathematically derived waves that travel from the receiver to the quake.

The problem with testing this theory? “You need really big computers to do this,” Bozdag said, “because both forward and adjoint wave simulations are performed in 3-D numerically.”

In 2012, just such a machine arrived in the form of the Titan supercomputer, a 27-petaflop Cray XK7 managed by the US Department of Energy’s (DOE’s) Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at DOE’s Oak Ridge National Laboratory. After trying out its method on smaller machines, Tromp, who is Princeton’s Blair Professor of Geology and a professor of geosciences and applied and computational mathematics, and his team gained access to Titan in 2013 through the Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program.

Working with OLCF staff, the team continues to push the limits of computational seismology to deeper depths.

Stitching together seismic slices

When an earthquake strikes, the release of energy creates seismic waves that often wreak havoc for life at the surface. Those same waves, however, present an opportunity for scientists to peer into the subsurface by measuring vibrations passing through the Earth.

As seismic waves travel, seismograms can detect variations in their speed. These changes provide clues about the composition, density, and temperature of the medium the wave is passing through. For example, waves move slower when passing through hot magma, such as mantle plumes and hotspots, than they do when passing through colder subduction zones, locations where one tectonic plate slides beneath another.

Each seismogram represents a narrow slice of the planet’s interior. By stitching many seismograms together, researchers can produce a 3-D global image, capturing everything from magma plumes feeding the Ring of Fire, to Yellowstone’s hotspots, to subducted plates under New Zealand.

This process, called seismic tomography, works in a manner similar to imaging techniques employed in medicine, where 2-D x-ray images taken from many perspectives are combined to create 3-D images of areas inside the body.

In the past, seismic tomography techniques have been limited in the amount of seismic data they can use. Traditional methods forced researchers to make approximations in their wave simulations and restrict observational data to major seismic phases only. Adjoint tomography based on 3-D numerical simulations employed by Tromp’s team isn’t constrained in this way. “We can use the entire data—anything and everything,” Bozdag said.

Running its GPU version of the SPECFEM3D_GLOBE code, Tromp’s team used Titan to apply full-waveform inversion at a global scale, generating simulated, or “synthetic,” seismograms. The team then compared these synthetic seismograms with observed seismic data supplied by the Incorporated Research Institutions for Seismology (IRIS), calculating the difference and feeding that information back into the model for further optimization. Each repetition of this process improves the global model.

“This is what we call the adjoint tomography workflow, and at a global scale it requires a supercomputer like Titan to be executed in a reasonable timeframe,” Bozdag said. “For our first-generation model, we completed 15 iterations, which is actually a small number for these kinds of problems. Despite the small number of iterations, our enhanced global model shows the power of our approach. This is just the beginning, however.”
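In the real workflow, each forward and adjoint step is a full 3-D wave simulation with SPECFEM3D_GLOBE on Titan. As a rough conceptual sketch only, the same iterate-compare-update loop can be written for a toy linear problem, where the "adjoint simulation" reduces to multiplying by the transpose of the forward operator (all sizes and values below are invented):

```python
# Toy adjoint-style iterative inversion: recover a model m from data d = G @ m.
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(200, 50))            # toy forward operator: model -> seismograms
m_true = rng.normal(size=50)              # the "true" Earth model to be recovered
d_obs = G @ m_true                        # observed seismograms

m = np.zeros(50)                          # starting model
step = 1.0 / np.linalg.norm(G, 2) ** 2    # conservative gradient-descent step size
for iteration in range(15):               # the first-generation model used 15 iterations
    d_syn = G @ m                         # "forward simulation": synthetic seismograms
    residual = d_syn - d_obs              # misfit between synthetic and observed data
    gradient = G.T @ residual             # "adjoint simulation": gradient of the misfit
    m -= step * gradient                  # update the model, then repeat
    print(iteration, round(float(np.linalg.norm(residual)), 4))
```

Each pass through the loop plays the role of one model update in the team's workflow; the difference is that their forward and adjoint operators are wave simulations costing many thousands of processor hours rather than two matrix multiplications.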

Automating to augment

For its initial global model, Tromp’s team selected earthquake events that registered between 5.8 and 7 on the Richter scale—a standard for measuring earthquake magnitude. That range can be extended slightly to include more than 6,000 earthquakes in the IRIS database—about 20 times the amount of data used in the original model.

Getting the most out of all the available data requires a robust automated workflow capable of accelerating the team’s iterative process. Collaborating with OLCF staff, Tromp’s team has made progress toward this goal.

For the team’s first-generation model, Bozdag carried out each step of the workflow manually, taking about a month to complete one model update. Team members Matthieu Lefebvre, Wenjie Lei, and Youyi Ruan of Princeton University and the OLCF’s Judy Hill developed new automated workflow processes that hold the promise of reducing that cycle to a matter of days.

“Automation will really make it more efficient, and it will also reduce human error, which is pretty easy to introduce,” Bozdag said.

Additional support from OLCF staff has contributed to the efficient use and accessibility of project data. Early in the project’s life, Tromp’s team worked with the OLCF’s Norbert Podhorszki to improve data movement and flexibility. The end result, called Adaptable Seismic Data Format (ASDF), leverages the Adaptable I/O System (ADIOS) parallel library and gives Tromp’s team a superior file format to record, reproduce, and analyze data on large-scale parallel computing resources.

In addition, the OLCF’s David Pugmire helped the team implement in situ visualization tools. These tools enabled team members to check their work more easily from local workstations by allowing visualizations to be produced in conjunction with simulation on Titan, eliminating the need for costly file transfers.

“Sometimes the devil is in the details, so you really need to be careful and know what you’re looking at,” Bozdag said. “David’s visualization tools help us to investigate our models and see what is there and what is not.”

With visualization, the magnitude of the team’s project comes to light. The billion-year cycle of molten rock rising from the core–mantle boundary and falling from the crust—not unlike the motion of globules in a lava lamp—takes form, as do other geologic features of interest.

At this stage, the resolution of the team’s global model is becoming advanced enough to inform continental studies, particularly in regions with dense data coverage. Making it useful at the regional level or smaller, such as the mantle activity beneath Southern California or the earthquake-prone crust of Istanbul, will require additional work.

“Most global models in seismology agree at large scales but differ from each other significantly at the smaller scales,” Bozdag said. “That’s why it’s crucial to have a more accurate image of Earth’s interior. Creating high-resolution images of the mantle will allow us to contribute to these discussions.”

Digging deeper

To improve accuracy and resolution further, Tromp’s team is experimenting with model parameters under its most recent INCITE allocation. For example, the team’s second-generation model will introduce anisotropic inversions, which are calculations that better capture the differing orientations and movement of rock in the mantle. This new information should give scientists a clearer picture of mantle flow, composition, and crust–mantle interactions.

Additionally, team members Dimitri Komatitsch of Aix-Marseille University in France and Daniel Peter of King Abdullah University in Saudi Arabia are leading efforts to update SPECFEM3D_GLOBE to incorporate capabilities such as the simulation of higher-frequency seismic waves. The frequency of a seismic wave, measured in hertz, is the number of waves passing through a fixed point in one second. For instance, the current minimum frequency used in the team’s simulation is about 0.05 hertz (1 wave per 20 seconds), but Bozdag said the team would also like to incorporate seismic waves of up to 1 hertz (1 wave per second). This would allow the team to model finer details in the Earth’s mantle and even begin mapping the Earth’s core.
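The conversion between frequency, period and resolution follows from the basic wave relations

$$T = \frac{1}{f}, \qquad \lambda = \frac{v}{f},$$

where $T$ is the period, $f$ the frequency, $\lambda$ the wavelength and $v$ the wave speed. Taking a representative mantle shear-wave speed of roughly 5 kilometers per second (an illustrative value, not a figure from the team), raising the frequency from 0.05 hertz to 1 hertz shortens the wavelength from about 100 kilometers to about 5 kilometers, which is why higher-frequency simulations can resolve much finer structure.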

To make this leap, Tromp’s team is preparing for Summit, the OLCF’s next-generation supercomputer. Set to arrive in 2018, Summit will provide at least five times the computing power of Titan. As part of the OLCF’s Center for Accelerated Application Readiness, Tromp’s team is working with OLCF staff to take advantage of Summit’s computing power upon arrival.

“With Summit, we will be able to image the entire globe from crust all the way down to Earth’s center, including the core,” Bozdag said. “Our methods are expensive—we need a supercomputer to carry them out—but our results show that these expenses are justified, even necessary.”

“Global Adjoint Tomography: First-Generation Model,” Ebru Bozdag, Daniel Peter, Matthieu Lefebvre, Dimitri Komatitsch, Jeroen Tromp, Judith Hill, Norbert Podhorszki, and David Pugmire, Geophysical Journal International 207, no. 3 (2016): 1739–1766.

This article appears courtesy of Oak Ridge National Laboratory. See the original article.

Oak Ridge National Laboratory is supported by the US Department of Energy’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Flexibility is key in mechanism of biological self-assembly

By Catherine Zandonella, Office of the Dean for Research

A new study has modeled a crucial first step in the self-assembly of cellular structures such as drug receptors and other protein complexes, and found that the flexibility of the structures has a dramatic impact on how fast they join together.

The study, published this week in the journal Proceedings of the National Academy of Sciences, explored what happens when two water-repelling surfaces connect to build more complex structures. Using molecular simulations, researchers at Princeton University illustrated the mechanism by which the process occurs and explored factors that favor self-assembly.

A surprise finding was the sensitivity with which the surfaces’ flexibility determined the rate at which the surfaces eventually came together, with more flexible surfaces favoring joining. “Flexibility is like a knob that nature can tune to control the self-assembly of molecules,” said Pablo Debenedetti, senior author on the study and Princeton’s Dean for Research. Debenedetti is the Class of 1950 Professor in Engineering and Applied Science and a professor of chemical and biological engineering.

Researchers have long been interested in how biological structures can self-assemble according to physical laws. Tapping the secrets of self-assembly could, for example, lead to new methods of building nanomaterials for future electronic devices. Self-assembled protein complexes are the basis not only of drug receptors but also many other cellular structures, including ion channels that facilitate the transmission of signals in the brain.

The study illustrated the process by which two water-repelling, or hydrophobic, structures come together. At the start of the simulation, the two surfaces were separated by a watery environment. Researchers knew from previous studies that these surfaces, due to their hydrophobic nature, will push water molecules away until only a very few water molecules remain in the gap. The evaporation of these last few molecules allows the two surfaces to snap together.

The new molecular simulation conducted at Princeton yielded a more detailed look at the mechanism behind this process. In the simulation, when the surfaces are sufficiently close to each other, their hydrophobic nature triggered fluctuations in the number of water molecules in the gap, causing the liquid water to evaporate and form bubbles on the surfaces. The bubbles grew as more water molecules evaporated. Eventually two bubbles on either surface connected to form a gap-spanning tube, which expanded and pushed away any remaining water until the two surfaces collided.

Biological surfaces, such as cellular membranes, are flexible, so the researchers explored how the surfaces’ flexibility affected the process. The researchers tuned the flexibility of the surfaces by varying the strength of the coupling between the surface atoms. The stronger the coupling, the less each atom can wiggle relative to its neighbors.
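A common way to implement this kind of tunable coupling (shown here as a generic assumption; the article does not reproduce the study's exact potential) is a harmonic spring between neighboring surface atoms:

$$U = \frac{k}{2}\sum_{\langle i,j\rangle}\left(\lvert\mathbf{r}_{i}-\mathbf{r}_{j}\rvert - r_{0}\right)^{2},$$

where $\mathbf{r}_i$ are the atomic positions, $r_0$ is the rest spacing and the sum runs over neighboring pairs. A larger spring constant $k$ holds each atom more tightly to its neighbors, giving a stiffer, less flexible surface; a smaller $k$ lets atoms wiggle more, which is the flexibility "knob" described above.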

The researchers found that the speed at which the two surfaces snap together depended greatly on flexibility. Small changes in flexibility led to large changes in the rate at which the surfaces stuck together. For example, two very flexible surfaces adhered in just nanoseconds, whereas two inflexible surfaces fused incredibly slowly, on the order of seconds.

Another finding was that the last step in the process, where the vapor tube expands, was critical for ensuring that the surfaces came together. In simulations where the tube failed to expand, the surfaces never joined. Flexibility was key to ensuring that the tube expanded, the researchers found. Making the material more flexible lowered the barriers to evaporation and stabilized the vapor tube, increasing the chances that the tube would expand.

The molecular simulation provides a foundation for understanding how biological structures assemble and function, according to Elia Altabet, a graduate student in Debenedetti’s group, and first author on the study. “A deeper understanding of the formation and function of protein assemblies such as drug receptors and ion channels could inform the design of new drugs to treat diseases,” he said.

Funding for this study was provided by National Science Foundation grants CHE-1213343 and CBET-1263565. Computations were performed at the Terascale Infrastructure for Groundbreaking Research in Engineering and Science (TIGRESS) at Princeton University.

The study, “Effect of material flexibility on the thermodynamics and kinetics of hydrophobically induced evaporation of water,” by Y. Elia Altabet, Amir Haji-Akbari and Pablo Debenedetti, was published online in the journal Proceedings of the National Academy of Sciences the week of March 13, 2017. doi: 10.1073/pnas.1620335114

Deep-sea corals reveal why atmospheric carbon was lower during the ice ages

Deep sea corals reveal that efficient nutrient consumption by plankton drove carbon sequestration in the deep ocean during the ice ages. Photo courtesy of Caltech.

By Robert Perkins, Caltech

We know a lot about how carbon dioxide (CO2) levels can drive climate change, but what about the way climate change can cause fluctuations in CO2 levels? New research from an international team of scientists reveals one of the mechanisms by which a colder climate was accompanied by depleted atmospheric CO2 during past ice ages.

The overall goal of the work is to better understand how and why Earth goes through periodic climate change, which could shed light on how man-made factors could affect the global climate.

The team showed that periods of colder climate are associated with more efficient nutrient consumption by phytoplankton and a reduction in nutrients in the surface of the Southern Ocean (the ocean surrounding Antarctica), conditions linked to an increase in carbon sequestration in the deep ocean. A paper about their research appears this week in the online edition of the Proceedings of the National Academy of Sciences.

“It is critical to understand why atmospheric CO2 concentration was lower during the ice ages. This will help us understand how the ocean will respond to ongoing anthropogenic CO2 emissions,” says Xingchen (Tony) Wang, lead author of the study. Wang was a graduate student at Princeton University while conducting the research in the lab of Daniel Sigman, the Dusenbury Professor of Geological and Geophysical Sciences. Wang is now a Simons Foundation Postdoctoral Fellow on the Origins of Life at Caltech. The study used a library of 10,000 deep-sea corals collected by Caltech’s Jess Adkins.

Xingchen (Tony) Wang and Jess Adkins. Photo courtesy of Caltech

Earth’s average temperature has naturally fluctuated by about 4 to 5 degrees Celsius over the course of the past million years as the planet has cycled in and out of glacial periods. During that time, Earth’s atmospheric CO2 levels have fluctuated between roughly 180 and 280 parts per million (ppm) every 100,000 years or so. (In recent years, man-made carbon emissions have boosted that concentration to over 400 ppm.)

About 10 years ago, researchers noticed a close correspondence between the fluctuations in CO2 levels and in temperature over the last million years. When Earth is at its coldest, the amount of CO2 in the atmosphere is also at its lowest. During the most recent ice age, which ended about 11,000 years ago, global temperatures were 5 degrees Celsius lower than they are today, and atmospheric CO2 concentrations were at 180 ppm.

There is 60 times more carbon in the ocean than in the atmosphere, partly because the ocean is so big: the mass of the world’s oceans is roughly 270 times greater than that of the atmosphere. As such, the ocean is the greatest regulator of carbon in the atmosphere, acting as both a sink and a source for atmospheric CO2.
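
A quick check with the article’s round numbers shows that the ocean’s dominance comes from its sheer mass rather than from being especially carbon-rich per kilogram:

\[
\frac{C_{\mathrm{ocean}}/M_{\mathrm{ocean}}}{C_{\mathrm{atm}}/M_{\mathrm{atm}}}
= \frac{60}{270} \approx 0.22 ,
\]

that is, a kilogram of seawater holds only about a fifth as much carbon as a kilogram of air, but there is vastly more seawater.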

Biological processes are the main driver of CO2 absorption from the atmosphere into the ocean. Just like photosynthesizing trees and plants on land, plankton at the surface of the sea turn CO2 into sugars that are eventually consumed by other creatures. When the sea creatures that consume those sugars, and the carbon they contain, die, they sink to the deep ocean, where the carbon is locked away from the atmosphere for a long time. This process is called the “biological pump.”

A healthy population of phytoplankton helps lock away carbon from the atmosphere. In order to thrive, phytoplankton need nutrients—notably, nitrogen, phosphorus, and iron. In most parts of the modern ocean, phytoplankton deplete all of the available nutrients in the surface ocean, and the biological pump operates at maximum efficiency.

However, in the modern Southern Ocean, iron is in short supply, which means there are not enough phytoplankton to fully consume the nitrogen and phosphorus in the surface waters. When there is less living biomass, there is also less that can die and sink to the bottom, which results in less carbon sequestration. The biological pump is not currently operating as efficiently as it theoretically could.

To track the efficiency of the biological pump over the span of the past 40,000 years, Adkins and his colleagues collected more than 10,000 fossils of the coral Desmophyllum dianthus.

Why coral? Two reasons: first, as it grows, coral accretes a skeleton around itself, precipitating calcium carbonate (CaCO3) and incorporating trace elements (including nitrogen) from the surrounding water. That process creates a rocky record of the chemistry of the ocean. Second, coral can be precisely dated using a combination of radiocarbon and uranium dating.
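
As a sketch of that second point, radiometric ages follow from the exponential-decay law; for radiocarbon, with a half-life of about 5,730 years,

\[
N(t) = N_0\, e^{-\lambda t}
\quad\Rightarrow\quad
t = \frac{1}{\lambda}\ln\frac{N_0}{N(t)},
\qquad
\lambda = \frac{\ln 2}{t_{1/2}} ,
\]

where N0 is the initial abundance of 14C and N(t) is what remains today. Uranium-series dating rests on the same decay law applied to longer-lived isotopes, which is what lets the two methods cross-check each other; the details of the study’s own age calibration are beyond this sketch.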

“Finding a few centimeter-tall fossil corals 2,000 meters deep in the ocean is no trivial task,” says Adkins, the Smits Family Professor of Geochemistry and Global Environmental Science at Caltech.

Adkins and his colleagues collected coral from, among other places, the relatively narrow (500-mile-wide) gap between South America and Antarctica known as the Drake Passage. Because the Southern Ocean flows around Antarctica, all of its waters funnel through that gap, making the samples Adkins collected a robust record of the water throughout the Southern Ocean.

Coauthors include scientists from Caltech, Princeton University, Pomona College, the Max Planck Institute for Chemistry in Germany, the University of Bristol, and ETH Zurich in Switzerland.

Wang analyzed the ratio of two nitrogen isotopes in these corals: nitrogen-14 (14N, the most common variety of the atom, with seven protons and seven neutrons in its nucleus) and nitrogen-15 (15N, which has an extra neutron). When phytoplankton consume nitrogen, they prefer 14N to 15N. As a result, there is a correlation between the ratio of nitrogen isotopes in sinking organic matter (which the corals then eat as it falls to the seafloor) and how much nitrogen is being consumed in the surface ocean and, by extension, the efficiency of the biological pump.
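
A standard way to formalize this relationship, commonly used in nitrogen-isotope work though not necessarily the exact model of this paper, is Rayleigh fractionation: as phytoplankton preferentially draw down 14N, the remaining nitrate, and the sinking organic matter built from it, becomes progressively enriched in 15N,

\[
\delta^{15}\mathrm{N}_{\mathrm{remaining}}
\approx \delta^{15}\mathrm{N}_{\mathrm{initial}} - \varepsilon \ln f ,
\]

where f is the fraction of nitrate still unconsumed and ε (typically around 5‰ for nitrate uptake) is the isotope effect. As f shrinks toward zero, meaning consumption is nearly complete and the pump is efficient, the 15N signal recorded downstream rises.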

A higher amount of 15N in the fossils indicates that the biological pump was operating more efficiently at that time. By analogy, if you monitor what a person eats at home and notice them eating more of their least-favorite foods, you can reasonably infer that the food in their pantry is running low.

Indeed, Wang found that higher amounts of 15N were present in fossils corresponding to the last ice age, indicating that the biological pump was operating more efficiently during that time. The evidence thus suggests that colder climates allow more biomass to grow in the surface Southern Ocean, likely because colder climates bring stronger winds, which can blow more iron into the Southern Ocean from the continents. That biomass consumes carbon, then dies and sinks, locking the carbon away from the atmosphere.

Adkins and his colleagues plan to continue probing the coral library for further details about the cycles of ocean chemistry changes over the past several hundred thousand years.

The research was funded by the National Science Foundation, Princeton University, the European Research Council, and the Natural Environment Research Council.

The study, “Deep-sea coral evidence for lower Southern Ocean surface nitrate concentrations during the last ice age,” by Xingchen (Tony) Wang, Daniel M. Sigman, Maria G. Prokopenko, Jess F. Adkins, Laura F. Robinson, Sophia K. Hines, Junyi Chai, Anja S. Studer, Alfredo Martínez-García, Tianyu Chen, and Gerald H. Haug, was published in the early online edition of the journal Proceedings of the National Academy of Sciences the week of March 13, 2017. doi: 10.1073/pnas.1615718114

Article provided courtesy of Caltech