Study reveals ways in which cells feel their surroundings

Model of fibrin network
Researchers used computer modeling to show how cells feel their way through their surroundings, which is important when, for example, a tumor cell invades a new tissue or organ. This computer simulation depicts collagen fibers that make up the extracellular matrix in which cells live. Local arrangements of these fibers vary greatly in their flexibility, with some fibers (blue) responding strongly to the cell and others (red) responding hardly at all. This surprising variability makes it difficult for cells (represented by green arrows) to determine the overall stiffness of a local area, and suggests that cells need to move or change shape to sample more of their surroundings.

By Catherine Zandonella, Office of the Dean for Research

Cells push out tiny feelers to probe their physical surroundings, but how much can these tiny sensors really discover? A new study led by Princeton University researchers and colleagues finds that the typical cell’s environment is highly varied in the stiffness or flexibility of the surrounding tissue, and that to gain a meaningful amount of information about its surroundings, the cell must move around and change shape. The finding aids the understanding of how cells respond to mechanical cues and may help explain what happens when migrating tumor cells colonize a new organ or when immune cells participate in wound healing.

“Our study looks at how cells literally feel their way through an environment, such as muscle or bone,” said Ned Wingreen, Princeton’s Howard A. Prior Professor in the Life Sciences and professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics. “These tissues are highly disordered on the cellular scale, and the cell can only make measurements in the immediate area around it,” he said. “We wanted to model this process.” The study was published online on July 18 in the journal Nature Communications.

The organs and tissues of the body are enmeshed in a fiber-rich structure known as the extracellular matrix, which provides a scaffold in which cells live, move and differentiate to carry out specific functions. Cells interact with this matrix by extending sticky proteins out from the cell surface to pull on nearby fibers. Previous work, mostly employing artificial flat surfaces, has shown that cells can use this tactile feedback to determine the elasticity or stiffness of a surface, in a process called mechanosensing. But because the fibers of the natural matrix are all interconnected in a jumbled, three-dimensional network, it was not clear how much useful information the cell could glean from feeling its immediate surroundings.

To find out, the researchers built a computer simulation that mimicked a typical cell in a matrix made of collagen protein, which is found in skin, bones, muscles and connective tissue. The team also modeled a cell in a network of fibrin, the strong, stringy protein that makes up blood clots. To accurately capture the composition of these networks, the researchers worked with Chase Broedersz, a former Princeton Lewis-Sigler Fellow who is now professor of physics at Ludwig-Maximilians-University of Munich, and his colleagues Louise Jawerth and Stefan Münster to first create physical models of the matrices, using approaches originally developed in the group of collaborator David Weitz, a physicist at Harvard University. Princeton graduate student Farzan Beroz then used those models to recreate virtual versions of the collagen and fibrin networks in computer models.

With these virtual networks, Beroz, Broedersz and Wingreen could then ask the question: can cells glean useful information about the elasticity or stiffness of their environment by feeling their surroundings? If the answer is yes, then the finding would shed light on how cells can change in response to those surroundings. For example, the work might help explain how cancer cells are able to detect that they’ve arrived at an organ that has the right type of scaffold to support tumor growth, or how cells that arrive at a wound know to start secreting proteins to promote healing.

Using mathematics, the researchers calculated how the networks would deform when cells pull on nearby fibers. They found that both the collagen and fibrin networks contained configurations of fibers with remarkably broad ranges of collective stiffness, from rather bendable to very rigid, and that these regions could be immediately next to each other. As a result, the cell could have two nearby probes, one detecting a stiff environment and the other a soft one, making it difficult for the cell to learn by mechanosensing what type of tissue it inhabits. “We were surprised to find that the cell’s environment can vary quite a lot even across a small distance,” Wingreen said.
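The effect can be illustrated with a toy calculation (a minimal sketch, not the authors' model): a small lattice of springs with randomly heterogeneous stiffnesses, probed by applying a point force at each interior node and measuring how far that node moves. Even this crude stand-in for a fiber network shows that the effective stiffness felt at one point can differ many-fold from the stiffness felt at a nearby point. The lattice size and spring-constant distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 15                          # nodes per side of a square lattice (assumed)
N = n * n

def idx(i, j):
    return i * n + j

# Assemble a stiffness matrix for a lattice of springs with widely varying
# spring constants (scalar displacements: a toy stand-in for a disordered
# fiber network, not a full mechanical fiber model).
K = np.zeros((N, N))
for i in range(n):
    for j in range(n):
        for di, dj in ((0, 1), (1, 0)):
            ii, jj = i + di, j + dj
            if ii < n and jj < n:
                k = rng.lognormal(0.0, 2.0)   # heterogeneous stiffness
                a, b = idx(i, j), idx(ii, jj)
                K[a, a] += k; K[b, b] += k
                K[a, b] -= k; K[b, a] -= k

# Pin the boundary nodes (the distant matrix); interior nodes stay free.
free = np.array([idx(i, j) for i in range(1, n - 1) for j in range(1, n - 1)])
Kff = K[np.ix_(free, free)]

def local_stiffness(pos):
    """Unit force at one free node; effective stiffness = force / displacement."""
    f = np.zeros(len(free))
    f[pos] = 1.0
    u = np.linalg.solve(Kff, f)
    return 1.0 / u[pos]

stiff = np.array([local_stiffness(p) for p in range(len(free))])
print(f"local stiffness varies {stiff.max() / stiff.min():.1f}-fold "
      f"across {len(free)} probe points")
```

Comparing the stiffness values at adjacent probe points in this sketch shows the same qualitative picture the study reports: two probes a single node apart can feel very different environments.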

The researchers concluded that to obtain an accurate assessment of its environment, a cell must move around and also change shape, for example elongating to cover a different area of the matrix. “What we found in our simulation conforms to what experimentalists have found,” Wingreen said, “and reveals new, ‘intelligent’ strategies that cells could employ to feel their way through tissue environments.”

The study was supported in part by the National Science Foundation (grants DMR-1310266, DMR-1420570, PHY-1305525 and PHY-1066293), the German Excellence Initiative, and the Deutsche Forschungsgemeinschaft.

The study, “Physical limits to biomechanical sensing in disordered fiber networks,” by Farzan Beroz, Louise Jawerth, Stefan Münster, David Weitz, Chase Broedersz, and Ned Wingreen, was published in the journal Nature Communications on July 18, 2017. DOI: 10.1038/ncomms16096.

New method identifies protein-protein interactions on basis of sequence alone (PNAS)

By Catherine Zandonella, Office of the Dean for Research

Protein-protein interaction
Researchers can now identify which proteins will interact just by looking at their sequences. Pictured are surface representations of a histidine kinase dimer (HK, top) and a response regulator (RR, bottom), two proteins that interact with each other to carry out cellular signaling functions. (Image based on work by Casino et al.; credit: Bitbol et al. 2016/PNAS.)

Genomic sequencing has provided an enormous amount of new information, but researchers haven’t always been able to use that data to understand living systems.

Now a group of researchers has used mathematical analysis to figure out whether two proteins interact with each other, just by looking at their sequences and without having to train their computer model using any known examples. The research, which was published online today in the journal Proceedings of the National Academy of Sciences, is a significant step forward because protein-protein interactions underlie a multitude of biological processes, from how bacteria sense their surroundings to how enzymes turn our food into cellular energy.

“We hadn’t dreamed we’d be able to address this,” said Ned Wingreen, Princeton University’s Howard A. Prior Professor in the Life Sciences, and a professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics, and a senior co-author of the study with Lucy Colwell of the University of Cambridge. “We can now figure out which protein families interact with which other protein families, just by looking at their sequences,” he said.

Although researchers have been able to use genomic analysis to obtain the sequences of amino acids that make up proteins, until now there has been no way to use those sequences to accurately predict protein-protein interactions. The main roadblock was that each cell can contain many similar copies of the same protein, called paralogs, and it wasn’t possible to predict which paralog from one protein family would interact with which paralog from another protein family. Instead, scientists have had to conduct extensive laboratory experiments, sorting through protein paralogs one by one to see which ones stick.

In the current paper, the researchers use a mathematical procedure, or algorithm, to examine the possible interactions among paralogs and identify pairs of proteins that interact. The method correctly predicted 93 percent of the protein-protein paralog pairs present in a dataset of more than 20,000 known paired protein sequences, without first being given any examples of correct pairs.

Interactions between proteins happen when two proteins come into physical contact and stick together via weak bonds. They may do this to form part of a larger piece of machinery used in cellular metabolism. Or two proteins might interact to pass a signal from the exterior of the cell to the DNA, to enable a bacterial organism to react to its environment.

When two proteins come together, some amino acids on one chain stick to the amino acids on the other chain. Each site on the chain contains one of 20 possible amino acids, yielding a very large number of possible amino-acid pairings. But not all such pairings are equally probable, because proteins that interact tend to evolve together over time, causing their sequences to be correlated.

The algorithm takes advantage of this correlation. It starts with two protein families, each with multiple paralogs in any given organism. The algorithm then pairs protein paralogs randomly within each organism and asks: do particular pairs of amino acids, one on each of the proteins, occur much more or less frequently than expected by chance? Using this information, it then asks: given an amino acid at a particular location on the first protein, which amino acids are especially favored at a particular location on the second protein? This technique is known as direct coupling analysis. The algorithm in turn uses this information to calculate the strengths of interactions, or “interaction energies,” for all possible protein paralog pairs, and ranks them. It eliminates the unlikely pairings and then runs again using only the most likely protein pairs.
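The iterative loop described above can be illustrated with a deliberately simplified toy model (a sketch of the idea, not the published algorithm). It generates synthetic paralog sequences in which true partners share letters at two hidden positions, scores candidate pairings by pointwise mutual information estimated from the current guesses, and re-pairs paralogs within each species; the correct pairings emerge from a random start. All sizes, the alphabet, and the scoring rule are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
ALPHA, L, P, S = 4, 6, 3, 300   # alphabet size, length, paralogs/species, species

# Synthetic data: paralog j of family A truly pairs with paralog j of family B.
# Columns 0 and 1 of each true pair share letters; all other columns are random.
A = rng.integers(0, ALPHA, (S, P, L))
B = rng.integers(0, ALPHA, (S, P, L))
B[:, :, 0] = A[:, :, 0]
B[:, :, 1] = A[:, :, 1]

def pmi_table(pairing):
    """Pointwise-mutual-information scores for every column pair (c, d),
    estimated from the currently predicted pairings."""
    pmi = np.zeros((L, L, ALPHA, ALPHA))
    for c in range(L):
        for d in range(L):
            joint = np.full((ALPHA, ALPHA), 0.5)   # pseudocounts
            for s in range(S):
                for p in range(P):
                    joint[A[s, p, c], B[s, pairing[s][p], d]] += 1
            joint /= joint.sum()
            marg_a = joint.sum(axis=1, keepdims=True)
            marg_b = joint.sum(axis=0, keepdims=True)
            pmi[c, d] = np.log(joint / (marg_a * marg_b))
    return pmi

def repair(pmi):
    """Re-pair paralogs within each species to maximize the total score."""
    new = []
    for s in range(S):
        best, best_score = None, -np.inf
        for perm in itertools.permutations(range(P)):
            score = sum(pmi[c, d, A[s, p, c], B[s, perm[p], d]]
                        for p in range(P) for c in range(L) for d in range(L))
            if score > best_score:
                best, best_score = perm, score
        new.append(best)
    return new

pairing = [tuple(rng.permutation(P)) for _ in range(S)]   # random start
for _ in range(3):                                        # iterate to convergence
    pairing = repair(pmi_table(pairing))

accuracy = np.mean([pairing[s][p] == p for s in range(S) for p in range(P)])
print(f"correctly paired: {accuracy:.0%}")
```

The weak signal present in a random pairing (roughly one in three guesses is right by chance) is enough to seed the statistics, and each round of re-pairing sharpens them, which mirrors the bootstrapping behavior the article describes.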

The most challenging part of identifying protein-protein pairs arises from the fact that proteins fold and kink into complicated shapes, bringing amino acids into proximity with others that are far apart in the sequence, and that amino acids may be correlated with each other via chains of interactions, not just when they are neighbors in 3D. The direct coupling analysis works surprisingly well at finding the true underlying couplings that occur between neighbors.

The work on the algorithm was initiated by Wingreen and Robert Dwyer, who earned his Ph.D. in the Department of Molecular Biology at Princeton in 2014, and was continued by first author Anne-Florence Bitbol, who was a postdoctoral researcher in the Lewis-Sigler Institute for Integrative Genomics and the Department of Physics at Princeton and is now a CNRS researcher at Université Pierre et Marie Curie – Paris 6. Bitbol was advised by Wingreen and Colwell, an expert in this kind of analysis who joined the collaboration while a member at the Institute for Advanced Study in Princeton, NJ, and is now a lecturer in chemistry at the University of Cambridge.

The researchers thought that the algorithm would only work accurately if it first “learned” what makes a good protein-protein pair by studying ones discovered in experiments. This required that the researchers give the algorithm some known protein pairs, or “gold standards,” against which to compare new sequences. The team used two well-studied families of proteins, histidine kinases and response regulators, which interact as part of a signaling system in bacteria.

But known examples are often scarce, and there are tens of millions of undiscovered protein-protein interactions in cells. So the team decided to see if they could reduce the amount of training they gave the algorithm. They gradually lowered the number of known histidine kinase-response regulator pairs that they fed into the algorithm, and were surprised to find that the algorithm continued to work. Finally, they ran the algorithm without giving it any such training pairs, and it still predicted new pairs with 93 percent accuracy.

“The fact that we didn’t need a gold standard was a big surprise,” Wingreen said.

Upon further exploration, Wingreen and colleagues figured out that their algorithm’s good performance was due to the fact that true protein-protein interactions are relatively rare. There are many pairings that simply don’t work, and the algorithm quickly learned not to include them in future attempts. In other words, there is only a small number of distinctive ways that protein-protein interactions can happen, and a vast number of ways that they cannot. Moreover, the few successful pairings were found to repeat with little variation across many organisms. This, it turns out, makes it relatively easy for the algorithm to reliably sort interactions from non-interactions.

Wingreen compared this observation – that correct pairs are more similar to one another than incorrect pairs are to each other – to the opening line of Leo Tolstoy’s Anna Karenina, which states, “All happy families are alike; each unhappy family is unhappy in its own way.”

The work was done using protein sequences from bacteria, and the researchers are now extending the technique to other organisms.

The approach has the potential to enhance the systematic study of biology, Wingreen said. “We know that living organisms are based on networks of interacting proteins,” he said. “Finally we can begin to use sequence data to explore these networks.”

The research was supported in part by the National Institutes of Health (Grant R01-GM082938) and the National Science Foundation (Grant PHY–1305525).


The paper, “Inferring interaction partners from protein sequences,” by Anne-Florence Bitbol, Robert S. Dwyer, Lucy J. Colwell and Ned S. Wingreen, was published in the Early Edition of the journal Proceedings of the National Academy of Sciences on September 23, 2016.
doi: 10.1073/pnas.1606762113

Role for enhancers in bursts of gene activity (Cell)


By Marisa Sanders for the Office of the Dean for Research

A new study by researchers at Princeton University suggests that sporadic bursts of gene activity may be important features of genetic regulation rather than just occasional mishaps. The researchers found that snippets of DNA called enhancers can boost the frequency of bursts, suggesting that these bursts play a role in gene control.

The researchers analyzed videos of Drosophila fly embryos undergoing DNA transcription, the first step in the activation of genes to make proteins. In a study published on July 14 in the journal Cell, the researchers found that placing enhancers in different positions relative to their target genes resulted in dramatic changes in the frequency of the bursts.

“The importance of transcriptional bursts is controversial,” said Michael Levine, Princeton’s Anthony B. Evnin ’62 Professor in Genomics and director of the Lewis-Sigler Institute for Integrative Genomics. “While our study doesn’t prove that all genes undergo transcriptional bursting, we did find that every gene we looked at showed bursting, and these are the critical genes that define what the embryo is going to become. If we see bursting here, the odds are we are going to see it elsewhere.”

The transcription of DNA occurs when an enzyme known as RNA polymerase converts the DNA code into a corresponding RNA code, which is later translated into a protein. About ten years ago, researchers were puzzled to find that transcription can be sporadic and variable rather than smooth and continuous.

In the current study, Takashi Fukaya, a postdoctoral research fellow, and Bomyi Lim, a postdoctoral research associate, both working with Levine, explored the role of enhancers in transcriptional bursting. Enhancers are DNA sequences that are recognized by DNA-binding proteins, which augment or diminish transcription rates, but the exact mechanisms are poorly understood.

Until recently, visualizing transcription in living embryos was impossible due to limits in the sensitivity and resolution of light microscopes. A new method developed three years ago has now made that possible. The technique, developed by two separate research groups, one at Princeton led by Thomas Gregor, associate professor of physics and the Lewis-Sigler Institute for Integrative Genomics, and the other led by Nathalie Dostatni at the Curie Institute in Paris, involves placing fluorescent tags on RNA molecules to make them visible under the microscope.

The researchers used this live-imaging technique to study fly embryos at a key stage in their development, approximately two hours after the onset of embryonic life, when the genes undergo fast and furious transcription for about one hour. During this period, the researchers observed a significant ramping up of bursting, in which the RNA polymerase enzymes cranked out a newly transcribed segment of RNA every 10 or 15 seconds over a period of perhaps 4 or 5 minutes per burst. The genes then relaxed for a few minutes, followed by another episode of bursting.
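The on/off dynamics described above resemble a simple two-state ("telegraph") model of a promoter, which can be sketched as follows. The rate constants are rough assumptions chosen to match the timescales quoted in the article, not values taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed rates, loosely matched to the timescales described in the article:
k_on  = 1 / 180.0   # promoter switches on after ~3 minutes off  (per second)
k_off = 1 / 270.0   # bursts last ~4.5 minutes on average        (per second)
k_tx  = 1 / 12.0    # one transcript every ~12 s while on        (per second)

T, dt = 3600.0, 1.0          # simulate one hour in 1-second steps
on, transcripts, trace = False, 0, []
for _ in range(int(T / dt)):
    if on:
        if rng.random() < k_tx * dt:    # polymerase fires during a burst
            transcripts += 1
        if rng.random() < k_off * dt:   # burst ends
            on = False
    elif rng.random() < k_on * dt:      # quiet gene re-enters a burst
        on = True
    trace.append(on)

frac_on = np.mean(trace)
print(f"{transcripts} transcripts; promoter active {frac_on:.0%} of the hour")
```

Running the sketch yields output that is sporadic in exactly the way described: stretches of steady transcript production separated by silent intervals, with the mean active fraction set by the ratio of the switching rates.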

The team then looked at whether the location of the enhancer – either upstream from the gene or downstream – influenced the amount of bursting. In two different experiments, Fukaya placed the enhancer either upstream of the gene’s promoter or downstream of the gene, and saw that the different positions produced distinct responses. When the researchers positioned the enhancer downstream of the gene, they observed periodic bursts of transcription. However, when they positioned the enhancer upstream of the gene, they saw some fluctuations but no discrete bursts. They also found that the closer the enhancer is to the promoter, the more frequent the bursting.

To confirm their observations, Lim applied further data analysis methods to tally the amount of bursting that they saw in the videos. The team found that the frequency of the bursts was related to the strength of the enhancer in upregulating gene expression. Strong enhancers produced more bursts than weak enhancers. The team also showed that inserting a segment of DNA called an insulator reduced the number of bursts and dampened gene expression.

In a second series of experiments, Fukaya showed that a single enhancer can simultaneously activate two genes that are located some distance apart on the genome and have separate promoters. It was originally thought that such an enhancer would facilitate bursting at one promoter at a time: it would arrive at a promoter, linger, produce a burst, and come off, then randomly select one of the two genes for another round of bursting. However, the researchers instead observed bursting occurring simultaneously at both genes.

“We were surprised by this result,” Levine said. “Back to the drawing board! This means that traditional models for enhancer-promoter looping interactions are just not quite correct. It may be that the promoters can move to the enhancer due to the formation of chromosomal loops. That is the next area to explore.”

The study was funded by grants from the National Institutes of Health (U01EB021239 and GM46638).

Access the paper here:

Takashi Fukaya, Bomyi Lim & Michael Levine. Enhancer Control of Transcriptional Bursting. Cell (2016), published July 14; Epub ahead of print June 9.

Revisiting the mechanics of the action potential (Nature Communications)

By Staff

The action potential (AP) and the accompanying action wave (AW) constitute an electromechanical pulse traveling along the axon.

The action potential is widely understood as an electrical phenomenon. However, a long experimental history has documented the existence of co-propagating mechanical signatures.

In a new paper in the journal Nature Communications, two Princeton University researchers have proposed a theoretical model to explain these mechanical signatures, which they term “action waves.” The research was conducted by Ahmed El Hady, a visiting postdoctoral research associate at the Princeton Neuroscience Institute and a postdoctoral fellow at the Howard Hughes Medical Institute, and Benjamin Machta, an associate research scholar at the Lewis-Sigler Institute for Integrative Genomics and a lecturer in physics.

In the model, the co-propagating waves are driven by changes in charge separation across the axonal membrane, just as a speaker uses charge separation to drive sound waves through the air. The researchers argue that these forces drive surface waves involving both the axonal membrane and cytoskeleton as well as the surrounding fluid. Their model may help shed light on the functional role of the surprisingly structured axonal cytoskeleton that recent super-resolution techniques have uncovered, and suggests a wider role for mechanics in neuronal function.


Ahmed El Hady & Benjamin B. Machta. Mechanical surface waves accompany action potential propagation. Nature Communications 6, Article No. 6697. doi:10.1038/ncomms7697