Mysterious force harnessed in a silicon chip

By Catherine Zandonella for the Office of the Dean for Research

Getting something from nothing sounds like a good deal, so for years scientists have been trying to exploit the tiny amount of energy found in nearly empty space. This energy is so counterintuitive that early claims to harness it were derided as schemes for “perpetual motion.” Now, a research team including Princeton scientists has found a way to harness it using a silicon-chip device, potentially opening the way to practical applications in micromachined devices.

This energy, predicted seven decades ago by the Dutch scientist Hendrik Casimir, arises from quantum effects and can be seen experimentally by placing two opposing plates very close to each other in a vacuum. At close range, the plates attract each other. Until recently, however, harnessing this “Casimir force” to do anything useful seemed impossible.

A new silicon chip built by researchers at Hong Kong University of Science and Technology and Princeton University is a step toward harnessing the Casimir force. Using a clever assembly of micron-sized shapes etched into the plates, the researchers demonstrated that the plates can instead repel each other as they are brought close together. Constructing this device entirely out of a single silicon chip could open the way to using the Casimir force for practical applications such as keeping tiny machine parts from sticking to each other. The work was published in the February issue of the journal Nature Photonics.

Energy of a vacuum

Researchers created a silicon device that enabled them to observe the Casimir force. (Image credit: Nature Photonics)

“This is among the first experimental verifications of the Casimir effect on a silicon chip,” said Alejandro Rodriguez, an assistant professor of electrical engineering at Princeton University, who provided theoretical calculations for the device, which was built by a team led by Ho Bun Chan at Hong Kong University of Science and Technology. “And it also allows you to make measurements of forces in very nontrivial structures like these that cause repulsion. It is a double-whammy.”

The silicon structure looks like two plates lined with teeth that face each other across a tiny gap only about 100 nanometers wide. (A human hair is 60,000 to 80,000 nanometers wide.) As the two plates are pushed closer together, the Casimir force comes into play and pushes them apart.
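For context, the textbook Casimir result for two ideal, perfectly conducting parallel plates in vacuum – a far simpler geometry than the toothed structures studied here – gives an attractive pressure that grows steeply as the gap a narrows:

```latex
P(a) = -\,\frac{\pi^{2}\hbar c}{240\,a^{4}}
```

At a 100-nanometer separation this works out to roughly 10 pascals, and the 1/a⁴ scaling explains why the force matters only at nanometer-to-micron distances. In the team’s structured geometry the interaction deviates from this simple law, becoming repulsive at short range.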

This repulsive effect happens without any input of energy and, to all appearances, in a vacuum. These characteristics led the energy to be called “zero-point energy.” They also fueled earlier claims that the Casimir force could not exist, because its existence would seem to imply some sort of perpetual motion, which would be impossible under the laws of physics.

The force, which has since been experimentally confirmed, arises from quantum fluctuations of the electromagnetic field that persist in the gap even after all the air has been evacuated.

The team demonstrated that it is possible to build a device in silicon to control the Casimir force.

“Our paper shows that it is possible to control the Casimir force using structures of complex, tailor-made shapes,” said Ho Bun Chan, senior author on the paper and a scientist at the Hong Kong University of Science and Technology. His team drew on earlier work by Rodriguez published in 2008 that proposed shapes that would be expected to yield a Casimir force that could both attract and repel. “This paper is the experimental realization using a structure inspired by Rodriguez’s design,” Chan said.

Rodriguez and his team at Princeton developed techniques that allowed the researchers to compute interactions between two parallel plates as they approach each other. With these tools, they were then able to explore what would happen if more complex geometries were used. This led to some of the first predictions of a repulsive Casimir force in 2008.

The Rodriguez group used nanophotonic techniques, which involve computing how light interacts with the structures, to solve the complex equations describing how the force arises from the interaction of the two plates.

The silicon device included a small mechanical spring that the researchers used to measure the force between the two plates, and to verify that the quantum force can be repulsive. The roughly T-shaped silicon teeth are what allow the repulsive force to form. The repulsion comes from how different parts of the surface interact with the opposite surface.

“We tried to think about what kind of shapes Chan’s group would have to fabricate to lead to a significant repulsive force, so we did some background studies and calculations to make sure they would see enough non-monotonicity as to be measurable,” Rodriguez said.

Going forward, the researchers plan to explore other configurations that may give rise to even larger repulsive forces and better-defined repulsion at larger separations.

Funding for the study came from the Research Grants Council of Hong Kong and the National Science Foundation (grant no. DMR-1454836).

The paper, “Measurement of non-monotonic Casimir forces between silicon nanostructures,” by L. Tang, M. Wang, C. Y. Ng, M. Nikolic, C. T. Chan, A. W. Rodriguez and H. B. Chan, was published online Jan. 9, 2017, and in the February 2017 issue of the journal Nature Photonics. Nature Photonics 11, 97–101 (2017). doi:10.1038/nphoton.2016.254

Artificial topological matter opens new research directions

By Catherine Zandonella, Office of the Dean for Research

An international team of researchers has created a new structure in which topological properties can be tuned, turning the unique behaviors of topological matter on or off. The structure could open up possibilities for new explorations into the properties of topological states of matter.

“This is an exciting new direction in topological matter research,” said M. Zahid Hasan, professor of physics at Princeton University and an investigator at Lawrence Berkeley National Laboratory in California who led the study, which was published March 24 in the journal Science Advances. “We are engineering new topological states that do not occur naturally, opening up numerous exotic possibilities for controlling the behaviors of these materials.”

The new structure consists of alternating layers of topological and normal, or trivial, insulators, an architecture that allows the researchers to turn on or off the flow of current through the structure. The ability to control the current suggests possibilities for circuits based on topological behaviors, but perhaps more importantly presents a new artificial crystal lattice structure for studying quantum behaviors.

Theories behind the topological properties of matter were the subject of the 2016 Nobel Prize in Physics, awarded to Princeton University’s F. Duncan Haldane and two other scientists. One such class of matter comprises topological insulators, which insulate in their interiors but allow current to flow without resistance on their surfaces.

In the new structure, interfaces between the layers create a one-dimensional lattice in which topological states can exist. To picture the lattice’s one-dimensional nature, imagine cutting a very thin slice from the material and looking along the edge of the slice: the lattice resembles a chain of artificial atoms. This behavior is emergent because it arises only when many layers are stacked together.

The researchers made different samples where they could control how the electrons tunnel from interface to interface through alternating layers of trivial and topological insulators, forming an emergent, tunable one-dimensional quantum lattice. The top panel (A, B, C, and D) shows a structure where the trivial layer is relatively thin, enabling electron-like particles to tunnel through the layers (topological phase). The bottom panel (G, H, I, and J) shows a structure where the trivial insulator is relatively thick and blocks tunneling (trivial phase). (Image courtesy of Science/AAAS)

By changing the composition of the layers, the researchers can control the hopping of electron-like particles, called Dirac fermions, through the material. For example, if the trivial-insulator layer is relatively thick – still only about four nanometers – the Dirac fermions cannot travel through it, making the entire structure effectively a trivial insulator. If the trivial-insulator layer is thin – about one nanometer – the Dirac fermions can tunnel from one topological layer to the next.

To fashion the two materials, the Princeton team worked with researchers at Rutgers University led by Seongshik Oh, associate professor of physics, who, in collaboration with Hasan and others, showed in 2012 that adding indium to the topological insulator bismuth selenide causes it to become a trivial insulator. Prior to that, bismuth selenide (Bi2Se3) had been theoretically and experimentally identified as a topological insulator by Hasan’s team, a finding published in the journal Nature in 2009.

“We had shown that, depending on how much indium you add, the resulting material had this nice tunable property from trivial to topological insulator,” Oh said, referring to the work published in Physical Review Letters in 2012.

Graduate students Ilya Belopolski of Princeton and Nikesh Koirala of Rutgers combined two state-of-the-art techniques, along with new instrumentation, to layer the two materials, bismuth selenide and indium bismuth selenide, into the optimal structure. One of the challenges was getting the lattice structures of the two materials to match up so that the Dirac fermions could hop from one layer to the next. Belopolski and Suyang Xu worked with colleagues at Princeton University, Lawrence Berkeley National Laboratory and multiple other institutions to optimize the behavior of the Dirac fermions using high-resolution angle-resolved photoemission spectroscopy in a growth-to-measurement feedback loop.

Princeton research team from L to R: Guang Bian, M. Zahid Hasan, Nasser Alidoust, Hao Zheng, Daniel Sanchez, Suyang Xu and Ilya Belopolski (Image credit: Princeton University)

Although no topologically similar states exist naturally, the researchers note that analogous behavior can be found in a chain of polyacetylene, the organic polymer described by the Su-Schrieffer-Heeger model of 1979, a canonical model of one-dimensional topological behavior.
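That textbook chain can be captured in a few lines of code. The following minimal sketch – with illustrative hopping values, not parameters from the paper – diagonalizes an open Su-Schrieffer-Heeger chain, the weak-versus-strong alternation of hoppings loosely playing the role of the thick-versus-thin trivial spacer layers:

```python
import numpy as np

def ssh_chain(n_cells, t_intra, t_inter):
    """Open Su-Schrieffer-Heeger chain: two sites per cell, alternating hoppings."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = t_intra if i % 2 == 0 else t_inter
    return H

# Weak intracell, strong intercell hopping: topological phase with two
# zero-energy end modes (cf. a thin trivial spacer that permits tunneling).
E_topo = np.linalg.eigvalsh(ssh_chain(40, 0.4, 1.0))

# Strong intracell, weak intercell hopping: trivial, fully gapped phase
# (cf. a thick trivial spacer that blocks tunneling).
E_triv = np.linalg.eigvalsh(ssh_chain(40, 1.0, 0.4))

print("smallest |E|, topological:", np.sort(np.abs(E_topo))[:2])  # ~0, ~0
print("smallest |E|, trivial:    ", np.sort(np.abs(E_triv))[:2])  # gapped
```

In the topological setting the two eigenvalues nearest zero energy sit essentially at zero – the protected end states – while the trivial setting shows a full gap.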

The research presents a foray into making artificial topological materials, Hasan said. “In nature, whatever a material is, topological insulator or not, you are stuck with that,” Hasan said. “Here we are tuning the system in a way that we can decide in which phase it should exist; we can design the topological behavior.”

The ability to control the travel of light-like Dirac fermions could eventually lead future researchers to harness the resistance-less flow of current seen in topological materials. “These types of topologically tunable heterostructures are a step toward applications, making devices where topological effects can be utilized,” Hasan said.

The Hasan group plans to further explore ways to tune the thickness and explore the topological states in connection to the quantum Hall effect, superconductivity, magnetism, and Majorana and Weyl fermion states of matter.

In addition to work done at Princeton and Rutgers, the research featured contributions from the following institutions: South University of Science and Technology of China; Swiss Light Source, Paul Scherrer Institute; National University of Singapore; University of Central Florida; Universität Würzburg; Diamond Light Source, Didcot, U.K.; and Synchrotron SOLEIL, Saint-Aubin, France.

Work at Princeton University and synchrotron-based ARPES measurements led by Princeton researchers were supported by the U.S. Department of Energy under Basic Energy Sciences grant no. DE-FG-02-05ER46200 (to M.Z.H.). I.B. was supported by an NSF Graduate Research Fellowship. N.K., M.B., and S.O. were supported by the Emergent Phenomena in Quantum Systems Initiative of the Gordon and Betty Moore Foundation under grant no. GBMF4418 and by the NSF under grant no. NSF-EFMA-1542798. H.L. acknowledges support from the Singapore National Research Foundation under award no. NRF-NRFF2013-03. M.N. was supported by start-up funds from the University of Central Florida. The work acknowledges support of Diamond Light Source, Didcot, U.K., for time on beamline I05 under proposal SI11742-1. Some measurements were carried out at the ADRESS beamline of the Swiss Light Source, Paul Scherrer Institute, Switzerland. This study was in part supported by grant no. 11504159 of the National Natural Science Foundation of China (NSFC), grant no. 2016A030313650 of NSFC Guangdong, and project no. JCY20150630145302240 of the Shenzhen Science and Technology Innovations Committee.

The paper, “A novel artificial condensed matter lattice and a new platform for one-dimensional topological phases,” by Ilya Belopolski, Su-Yang Xu, Nikesh Koirala, Chang Liu, Guang Bian, Vladimir Strocov, Guoqing Chang, Madhab Neupane, Nasser Alidoust, Daniel Sanchez, Hao Zheng, Matthew Brahlek, Victor Rogalev, Timur Kim, Nicholas C. Plumb, Chaoyu Chen, François Bertran, Patrick Le Fèvre, Amina Taleb-Ibrahimi, Maria-Carmen Asensio, Ming Shi, Hsin Lin, Moritz Hoesch, Seongshik Oh and M. Zahid Hasan, was published in the journal Science Advances on March 24, 2017. (Belopolski et al., Sci. Adv. 3, e1501692, 2017)

 

Study reveals the multitasking secrets of an RNA-binding protein

Two views of one of Glo’s RNA-binding domains highlight the amino acids required for binding G-tract RNA (left) and U-A stem structures (right). Courtesy of Cell Reports.

By Staff, Department of Molecular Biology

Researchers from Princeton University and the National Institute of Environmental Health Sciences have discovered how a fruit fly protein binds and regulates two different types of RNA target sequence. The study, published April 4 in the journal Cell Reports, may help explain how various RNA-binding proteins, many of which are implicated in cancer and neurodegenerative disease, perform so many different functions in the cell.

The human genome encodes hundreds of RNA-binding proteins, which together regulate the processing, turnover and localization of the many thousands of RNA molecules expressed in cells. These proteins also control the translation of RNA into proteins. RNA-binding proteins are crucial for maintaining normal cellular function, and defects in this family of proteins can lead to disease. For example, RNA-binding proteins are overexpressed in many human cancers, and mutations in some of these proteins have been linked to neurological and neurodegenerative disorders such as amyotrophic lateral sclerosis. “Understanding the fundamental properties of this class of proteins is very relevant,” said Elizabeth Gavis, the Damon B. Pfeiffer Professor in the Life Sciences and a professor of molecular biology.

Gavis and colleagues are particularly interested in a protein called Glorund (Glo), a type of RNA-binding protein that performs several functions in fruit fly development. The protein was originally identified through its ability to repress the translation into protein of an RNA molecule called nanos in fly eggs. By binding to a stem structure formed by uracil and adenine nucleotides in the nanos RNA, Glo prevents the production of Nanos protein at the front of the embryo, a step that enables the fly’s head to form properly.

Like many other RNA-binding proteins, however, Glo is multifunctional. It regulates several other steps in fly development, apparently by binding to RNAs other than nanos. The mammalian counterparts of Glo, known as heterogeneous nuclear ribonucleoprotein (hnRNP) F/H proteins, bind to RNAs containing stretches of guanine nucleotides known as G-tracts; rather than repressing translation, they regulate processes such as RNA splicing, in which RNAs are rearranged to produce alternative versions of the proteins they encode.

To understand how Glo might bind to diverse RNAs and regulate them in different ways, Gavis and graduate student Joel Tamayo collaborated with Traci Tanaka Hall and Takamasa Teramoto from the National Institute of Environmental Health Sciences to generate X-ray crystallographic structures of Glo’s three RNA-binding domains. As expected, the three domains were almost identical to the corresponding domains of mammalian hnRNP F/H proteins. They retained, for example, the amino acid residues that bind to G-tract RNA, and the researchers confirmed that, like their mammalian counterparts, each RNA-binding domain of Glo can bind to this type of RNA sequence.

However, the researchers also saw something new. “When we looked at the structures, we realized that there were also some basic amino acids that projected from a different part of the RNA-binding domains that could be involved in contacting RNA,” Gavis explained.

The researchers found that these basic amino acids mediate binding to uracil-adenine (U-A) stem structures like the one found in nanos RNA. Each of Glo’s RNA-binding domains therefore contains two distinct binding surfaces that interact with different types of RNA target sequence. “While there have been examples previously of RNA-binding proteins that carry more than one binding domain, each with a different specificity, this represents the first example of a single domain harboring two different specificities,” said Howard Lipshitz, a professor of molecular genetics at the University of Toronto who was not involved in the study.

To investigate which of Glo’s two RNA-binding modes was required for its different functions in flies, Gavis and colleagues generated flies carrying mutant versions of the protein. The researchers discovered that, as well as binding the U-A stem in the nanos RNA, Glo also recognizes a nearby G-tract sequence, and that Glo’s ability to repress nanos translation during egg development required both of the protein’s RNA-binding modes. In contrast, Glo’s ability to regulate other RNAs at different developmental stages depended only on the protein’s capacity to bind G-tracts.

“We think that the binding mode may correlate with Glo’s activity towards a particular RNA,” said Gavis. “If it binds to a G-tract, Glo might promote RNA splicing. If it simultaneously binds to both a G-tract and a U-A stem, Glo acts as a translational repressor.”

The RNA-binding domains of mammalian hnRNP F/H proteins probably have a similar ability to bind two different types of RNA, allowing them to regulate diverse target RNAs within the cell. “This paper represents an exciting advance in a field that has become increasingly important with the discovery that defects in RNA-binding proteins contribute to human diseases such as metabolic disorders, cancer and neurodegeneration,” Lipshitz said. “Since these proteins are evolutionarily conserved from fruit flies to humans, experiments of this type tell us a lot about how their human versions normally work or can go wrong.”

The research was supported in part by a National Science Foundation Graduate Research Fellowship (DGE 1148900), a Japan Society for the Promotion of Science fellowship, the National Institutes of Health (R01 GM061107) and the Intramural Research Program of the National Institute of Environmental Health Sciences. The Advanced Photon Source used for this study is supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, under contract W-31-109-Eng-38.

The study, “The Drosophila hnRNP F/H Homolog Glorund Uses Two Distinct RNA-binding Modes to Diversify Target Recognition,” by Joel Tamayo, Takamasa Teramoto, Seema Chatterjee, Traci Tanaka Hall, and Elizabeth Gavis, was published in the journal Cell Reports on April 4, 2017.  http://dx.doi.org/10.1016/j.celrep.2017.03.022

Princeton-led team produces 3-D map of Earth’s interior

Magma plumes and hotspots, such as the one below Yellowstone, are visible in this subterranean simulation. (Credit: David Pugmire, ORNL)

By Jonathan Hines, Oak Ridge National Laboratory

Because of Earth’s layered composition, scientists have often compared the basic arrangement of its interior to that of an onion. There’s the familiar thin crust of continents and ocean floors; the thick mantle of hot, semisolid rock; the molten metal outer core; and the solid iron inner core.

But unlike an onion, peeling back Earth’s layers to better explore planetary dynamics isn’t an option, forcing scientists to make educated guesses about our planet’s inner life based on surface-level observations. Clever imaging techniques devised by computational scientists, however, offer the promise of illuminating Earth’s subterranean secrets.

Using advanced modeling and simulation, seismic data generated by earthquakes, and one of the world’s fastest supercomputers, a team led by Jeroen Tromp of Princeton University is creating a detailed 3-D picture of Earth’s interior. Currently, the team is focused on imaging the entire globe from the surface to the core–mantle boundary, a depth of 1,800 miles.

These high-fidelity simulations add context to ongoing debates related to Earth’s geologic history and dynamics, bringing prominent features like tectonic plates, magma plumes, and hotspots into view. In September 2016, the team published a paper in Geophysical Journal International on its first-generation global model. Created using data from 253 earthquakes recorded by seismographic stations scattered around the world, the team’s model is notable for its global scope and high scalability.

“This is the first global seismic model where no approximations—other than the chosen numerical method—were used to simulate how seismic waves travel through the Earth and how they sense heterogeneities,” said Ebru Bozdag, a coprincipal investigator of the project and an assistant professor of geophysics at the University of Nice Sophia Antipolis. “That’s a milestone for the seismology community. For the first time, we showed people the value and feasibility of running these kinds of tools for global seismic imaging.”

The project’s genesis can be traced to a seismic imaging theory first proposed in the 1980s. To fill in gaps within seismic data maps, the theory posited a method called adjoint tomography, an iterative full-waveform inversion technique. This technique leverages more information than competing methods, using forward waves that travel from the quake’s origin to the seismic receiver and adjoint waves, which are mathematically derived waves that travel from the receiver to the quake.

The problem with testing this theory? “You need really big computers to do this,” Bozdag said, “because both forward and adjoint wave simulations are performed in 3-D numerically.”

In 2012, just such a machine arrived in the form of the Titan supercomputer, a 27-petaflop Cray XK7 managed by the US Department of Energy’s (DOE’s) Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at DOE’s Oak Ridge National Laboratory. After trying out its method on smaller machines, Tromp, who is Princeton’s Blair Professor of Geology and a professor of geosciences and applied and computational mathematics, and his team gained access to Titan in 2013 through the Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program.

Working with OLCF staff, the team continues to push the limits of computational seismology to greater depths.

Stitching together seismic slices

When an earthquake strikes, the release of energy creates seismic waves that often wreak havoc for life at the surface. Those same waves, however, present an opportunity for scientists to peer into the subsurface by measuring vibrations passing through the Earth.

As seismic waves travel, seismograms record variations in their speed. These variations provide clues about the composition, density, and temperature of the medium the waves pass through. For example, waves move more slowly when passing through hot magma, such as mantle plumes and hotspots, than they do when passing through colder subduction zones, locations where one tectonic plate slides beneath another.

Each seismogram represents a narrow slice of the planet’s interior. By stitching many seismograms together, researchers can produce a 3-D global image, capturing everything from magma plumes feeding the Ring of Fire, to Yellowstone’s hotspots, to subducted plates under New Zealand.

This process, called seismic tomography, works in a manner similar to imaging techniques employed in medicine, where 2-D x-ray images taken from many perspectives are combined to create 3-D images of areas inside the body.

In the past, seismic tomography techniques have been limited in the amount of seismic data they can use. Traditional methods forced researchers to make approximations in their wave simulations and restrict observational data to major seismic phases only. Adjoint tomography based on 3-D numerical simulations employed by Tromp’s team isn’t constrained in this way. “We can use the entire data—anything and everything,” Bozdag said.

Running the GPU version of its SPECFEM3D_GLOBE code on Titan, Tromp’s team applied full-waveform inversion at a global scale, simulating seismic waves through its model Earth to produce “synthetic seismograms.” The team then compared these synthetic seismograms with observed seismic data supplied by the Incorporated Research Institutions for Seismology (IRIS), calculating the difference and feeding that information back into the model for further optimization. Each repetition of this process improves global models.

“This is what we call the adjoint tomography workflow, and at a global scale it requires a supercomputer like Titan to be executed in reasonable timeframe,” Bozdag said. “For our first-generation model, we completed 15 iterations, which is actually a small number for these kinds of problems. Despite the small number of iterations, our enhanced global model shows the power of our approach. This is just the beginning, however.”
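In outline, each iteration pairs a forward simulation with an adjoint simulation of the data misfit. The following deliberately miniature, runnable analogue uses a fixed random matrix as a stand-in for the wave solver – every name and number here is illustrative, and in the real workflow the two matrix products correspond to 3-D forward and adjoint simulations run on Titan:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "solver": G maps a model vector (e.g., wave-speed perturbations)
# to synthetic seismograms. Applying G.T plays the role of the adjoint
# (time-reversed) simulation in real adjoint tomography.
n_model, n_data = 50, 200
G = rng.normal(size=(n_data, n_model))
true_model = rng.normal(size=n_model)
observed = G @ true_model                # stands in for recorded data

model = np.zeros(n_model)                # starting Earth model
for iteration in range(15):              # first-generation model: 15 iterations
    synthetic = G @ model                # forward simulation
    residual = synthetic - observed      # misfit with observations
    gradient = G.T @ residual            # adjoint simulation of the misfit
    Gg = G @ gradient
    step = (residual @ Gg) / (Gg @ Gg)   # exact line search (a toy-only luxury)
    model = model - step * gradient      # model update
    print(iteration, np.linalg.norm(residual))
```

The printed misfit shrinks at every step, mirroring how each iteration of the real workflow nudges the global model toward agreement with the recorded seismograms.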

Automating to augment

For its initial global model, Tromp’s team selected earthquake events that registered between 5.8 and 7 on the Richter scale—a standard for measuring earthquake strength. That range can be extended slightly to include more than 6,000 earthquakes in the IRIS database—about 20 times the amount of data used in the original model.

Getting the most out of all the available data requires a robust automated workflow capable of accelerating the team’s iterative process. Collaborating with OLCF staff, Tromp’s team has made progress toward this goal.

For the team’s first-generation model, Bozdag carried out each step of the workflow manually, taking about a month to complete one model update. Team members Matthieu Lefebvre, Wenjie Lei, and Youyi Ruan of Princeton University and the OLCF’s Judy Hill developed new automated workflow processes that hold the promise of reducing that cycle to a matter of days.

“Automation will really make it more efficient, and it will also reduce human error, which is pretty easy to introduce,” Bozdag said.

Additional support from OLCF staff has contributed to the efficient use and accessibility of project data. Early in the project’s life, Tromp’s team worked with the OLCF’s Norbert Podhorszki to improve data movement and flexibility. The end result, called Adaptable Seismic Data Format (ASDF), leverages the Adaptable I/O System (ADIOS) parallel library and gives Tromp’s team a superior file format to record, reproduce, and analyze data on large-scale parallel computing resources.

In addition, the OLCF’s David Pugmire helped the team implement in situ visualization tools. These tools enabled team members to check their work more easily from local workstations by allowing visualizations to be produced in conjunction with simulation on Titan, eliminating the need for costly file transfers.

“Sometimes the devil is in the details, so you really need to be careful and know what you’re looking at,” Bozdag said. “David’s visualization tools help us to investigate our models and see what is there and what is not.”

With visualization, the magnitude of the team’s project comes to light. The billion-year cycle of molten rock rising from the core–mantle boundary and falling from the crust—not unlike the motion of globules in a lava lamp—takes form, as do other geologic features of interest.

At this stage, the resolution of the team’s global model is becoming advanced enough to inform continental studies, particularly in regions with dense data coverage. Making it useful at the regional level or smaller, such as the mantle activity beneath Southern California or the earthquake-prone crust of Istanbul, will require additional work.

“Most global models in seismology agree at large scales but differ from each other significantly at the smaller scales,” Bozdag said. “That’s why it’s crucial to have a more accurate image of Earth’s interior. Creating high-resolution images of the mantle will allow us to contribute to these discussions.”

Digging deeper

To improve accuracy and resolution further, Tromp’s team is experimenting with model parameters under its most recent INCITE allocation. For example, the team’s second-generation model will introduce anisotropic inversions, which are calculations that better capture the differing orientations and movement of rock in the mantle. This new information should give scientists a clearer picture of mantle flow, composition, and crust–mantle interactions.

Additionally, team members Dimitri Komatitsch of Aix-Marseille University in France and Daniel Peter of King Abdullah University in Saudi Arabia are leading efforts to update SPECFEM3D_GLOBE to incorporate capabilities such as the simulation of higher-frequency seismic waves. The frequency of a seismic wave, measured in hertz, is the number of wave cycles passing a fixed point in one second. For instance, the highest frequency currently used in the team’s simulations is about 0.05 hertz (1 wave per 20 seconds), but Bozdag said the team would also like to incorporate seismic waves of up to 1 hertz (1 wave per second). This would allow the team to model finer details in the Earth’s mantle and even begin mapping the Earth’s core.

To make this leap, Tromp’s team is preparing for Summit, the OLCF’s next-generation supercomputer. Set to arrive in 2018, Summit will provide at least five times the computing power of Titan. As part of the OLCF’s Center for Accelerated Application Readiness, Tromp’s team is working with OLCF staff to take advantage of Summit’s computing power upon arrival.

“With Summit, we will be able to image the entire globe from crust all the way down to Earth’s center, including the core,” Bozdag said. “Our methods are expensive—we need a supercomputer to carry them out—but our results show that these expenses are justified, even necessary.”

“Global Adjoint Tomography: First-Generation Model,” Ebru Bozdag, Daniel Peter, Matthieu Lefebvre, Dimitri Komatitsch, Jeroen Tromp, Judith Hill, Norbert Podhorszki, and David Pugmire, Geophysical Journal International 207, no. 3 (2016): 1739–1766.

This article appears courtesy of Oak Ridge National Laboratory. See the original article.

Oak Ridge National Laboratory is supported by the US Department of Energy’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Flexibility is key in mechanism of biological self-assembly

By Catherine Zandonella, Office of the Dean for Research

A new study has modeled a crucial first step in the self-assembly of cellular structures such as drug receptors and other protein complexes, and found that the flexibility of the structures has a dramatic impact on how fast they join together.

The study, published this week in the journal Proceedings of the National Academy of Sciences, explored what happens when two water-repelling surfaces connect to build more complex structures. Using molecular simulations, researchers at Princeton University illustrated the mechanism by which the process occurs and explored factors that favor self-assembly.

A surprise finding was how sensitively the rate at which the surfaces came together depended on their flexibility, with more flexible surfaces joining faster. “Flexibility is like a knob that nature can tune to control the self-assembly of molecules,” said Pablo Debenedetti, senior author on the study and Princeton’s Dean for Research. Debenedetti is the Class of 1950 Professor in Engineering and Applied Science and a professor of chemical and biological engineering.

Researchers have long been interested in how biological structures can self-assemble according to physical laws. Tapping the secrets of self-assembly could, for example, lead to new methods of building nanomaterials for future electronic devices. Self-assembled protein complexes are the basis not only of drug receptors but also many other cellular structures, including ion channels that facilitate the transmission of signals in the brain.

The study illustrated the process by which two water-repelling, or hydrophobic, structures come together. At the start of the simulation, the two surfaces were separated by a watery environment. Researchers knew from previous studies that these surfaces, due to their hydrophobic nature, will push water molecules away until only a very few water molecules remain in the gap. The evaporation of these last few molecules allows the two surfaces to snap together.

The new molecular simulation conducted at Princeton yielded a more detailed look at the mechanism behind this process. In the simulation, when the surfaces are sufficiently close to each other, their hydrophobic nature triggered fluctuations in the number of water molecules in the gap, causing the liquid water to evaporate and form bubbles on the surfaces. The bubbles grew as more water molecules evaporated. Eventually two bubbles on either surface connected to form a gap-spanning tube, which expanded and pushed away any remaining water until the two surfaces collided.

Biological surfaces, such as cellular membranes, are flexible, so the researchers explored how the surfaces’ flexibility affected the process. The researchers tuned the flexibility of the surfaces by varying the strength of the coupling between the surface atoms. The stronger the coupling, the less each atom can wiggle relative to its neighbors.

The researchers found that the speed at which the two surfaces snap together depended greatly on flexibility. Small changes in flexibility led to large changes in the rate at which the surfaces stuck together. For example, two very flexible surfaces adhered in just nanoseconds, whereas two inflexible surfaces fused far more slowly, on the order of seconds.
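For a sense of scale, a standard transition-state estimate (an illustration, not a calculation from the paper) relates the joining rate to the free-energy barrier for evaporation:

```latex
k \propto e^{-\Delta F^{\ddagger}/k_{B}T},
\qquad
\frac{k_{\mathrm{fast}}}{k_{\mathrm{slow}}} \sim \frac{1\ \mathrm{s}}{1\ \mathrm{ns}} = 10^{9}
\quad\Longrightarrow\quad
\Delta\Delta F^{\ddagger} = k_{B}T\,\ln 10^{9} \approx 21\,k_{B}T
```

A barrier shift of only about 21 kBT, modest on molecular energy scales, is enough to stretch the joining time from nanoseconds to seconds, which is why small changes in flexibility can have such dramatic kinetic consequences.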

Another finding was that the last step in the process, where the vapor tube expands, was critical for ensuring that the surfaces came together. In simulations where the tube failed to expand, the surfaces never joined. Flexibility was key to ensuring that the tube expanded, the researchers found. Making the material more flexible lowered the barriers to evaporation and stabilized the vapor tube, increasing the chances that the tube would expand.

The molecular simulation provides a foundation for understanding how biological structures assemble and function, according to Elia Altabet, a graduate student in Debenedetti’s group, and first author on the study. “A deeper understanding of the formation and function of protein assemblies such as drug receptors and ion channels could inform the design of new drugs to treat diseases,” he said.

Funding for this study was provided by National Science Foundation grants CHE-1213343 and CBET-1263565. Computations were performed at the Terascale Infrastructure for Groundbreaking Research in Engineering and Science (TIGRESS) at Princeton University.

The study, “Effect of material flexibility on the thermodynamics and kinetics of hydrophobically induced evaporation of water,” by Y. Elia Altabet, Amir Haji-Akbari and Pablo Debenedetti, was published online in the journal Proceedings of the National Academy of Sciences the week of March 13, 2017. doi: 10.1073/pnas.1620335114

Deep-sea corals reveal why atmospheric carbon was lower during the ice ages

Deep-sea corals reveal that efficient nutrient consumption by plankton drove carbon sequestration in the deep ocean during the ice ages. Photo courtesy of Caltech.

By Robert Perkins, Caltech

We know a lot about how carbon dioxide (CO2) levels can drive climate change, but how about the way that climate change can cause fluctuations in CO2 levels? New research from an international team of scientists reveals one of the mechanisms by which a colder climate was accompanied by depleted atmospheric CO2 during past ice ages.

The overall goal of the work is to better understand how and why the earth goes through periodic climate change, which could shed light on how man-made factors affect the global climate.

Now, an international team of scientists has shown that periods of colder climates are associated with higher phytoplankton efficiency and a reduction in nutrients in the surface of the Southern Ocean (the ocean surrounding the Antarctic), which is related to an increase in carbon sequestration in the deep ocean. A paper about their research appears this week in the online edition of the Proceedings of the National Academy of Sciences.

“It is critical to understand why atmospheric CO2 concentration was lower during the ice ages. This will help us understand how the ocean will respond to ongoing anthropogenic CO2 emissions,” says Xingchen (Tony) Wang, lead author of the study. Wang was a graduate student at Princeton University while conducting the research in the lab of Daniel Sigman, the Dusenbury Professor of Geological and Geophysical Sciences. Wang is now a Simons Foundation Postdoctoral Fellow on the Origins of Life at Caltech. The study used a library of 10,000 deep-sea corals collected by Caltech’s Jess Adkins.

Xingchen (Tony) Wang and Jess Adkins. Photo courtesy of Caltech

Earth’s average temperature has naturally fluctuated by about 4 to 5 degrees Celsius over the course of the past million years as the planet has cycled in and out of glacial periods. During that time, the earth’s atmospheric CO2 levels have fluctuated between roughly 180 and 280 parts per million (ppm) every 100,000 years or so. (In recent years, man-made carbon emissions have boosted that concentration up to over 400 ppm.)

About 10 years ago, researchers noticed a close correspondence between the fluctuations in CO2 levels and in temperature over the last million years. When the earth is at its coldest, the amount of CO2 in the atmosphere is also at its lowest. During the most recent ice age, which ended about 11,000 years ago, global temperatures were 5 degrees Celsius lower than they are today, and atmospheric CO2 concentrations were at 180 ppm.

There is 60 times more carbon in the ocean than in the atmosphere—partly because the ocean is so big. The mass of the world’s oceans is roughly 270 times greater than that of the atmosphere. As such, the ocean is the greatest regulator of carbon in the atmosphere, acting as both a sink and a source for atmospheric CO2.

Biological processes are the main driver of CO2 absorption from the atmosphere to the ocean. Just like photosynthesizing trees and plants on land, plankton at the surface of the sea turn CO2 into sugars that are eventually consumed by other creatures. As the sea creatures who consume those sugars—and the carbon they contain—die, they sink to the deep ocean, where the carbon is locked away from the atmosphere for a long time. This process is called the “biological pump.”

A healthy population of phytoplankton helps lock away carbon from the atmosphere. In order to thrive, phytoplankton need nutrients—notably, nitrogen, phosphorus, and iron. In most parts of the modern ocean, phytoplankton deplete all of the available nutrients in the surface ocean, and the biological pump operates at maximum efficiency.

However, in the modern Southern Ocean, there is a limited amount of iron—which means that there are not enough phytoplankton to fully consume the nitrogen and phosphorus in the surface waters. When there is less living biomass, there is also less that can die and sink to the bottom—which results in a decrease in carbon sequestration. The biological pump is not currently operating as efficiently as it theoretically could.

To track the efficiency of the biological pump over the span of the past 40,000 years, Adkins and his colleagues collected more than 10,000 fossils of the coral Desmophyllum dianthus.

Why coral? Two reasons: first, as it grows, coral accretes a skeleton around itself, precipitating calcium carbonate (CaCO3) and incorporating trace constituents (including nitrogen) from the water around it. That process creates a rocky record of the chemistry of the ocean. Second, coral can be precisely dated using a combination of radiocarbon and uranium dating.
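Radiocarbon dating, one of the two clocks mentioned above, rests on simple exponential decay. The relations below are the textbook form, not the paper’s full procedure, which pairs radiocarbon with uranium-series measurements:

```latex
N(t) = N_{0}\,e^{-\lambda t},
\qquad
t = \frac{t_{1/2}}{\ln 2}\,\ln\frac{N_{0}}{N(t)},
\qquad
t_{1/2}\!\left(^{14}\mathrm{C}\right) \approx 5{,}730\ \text{years}
```

Here N0 is the initial carbon-14 content, N(t) is the amount measured today, and the 5,730-year half-life sets the pace of the clock.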

“Finding a few centimeter-tall fossil corals 2,000 meters deep in the ocean is no trivial task,” says Adkins, the Smits Family Professor of Geochemistry and Global Environmental Science at Caltech.

Adkins and his colleagues collected coral from the relatively narrow (500-mile) gap known as the Drake Passage between South America and Antarctica (among other places). Because the Southern Ocean flows around Antarctica, all of its waters funnel through that gap—making the samples Adkins collected a robust record of the water throughout the Southern Ocean.

Coauthors include scientists from Caltech, Princeton University, Pomona College, the Max Planck Institute for Chemistry in Germany, University of Bristol, and ETH Zurich in Switzerland.

Wang analyzed the ratios of two isotopes of nitrogen atoms in these corals – nitrogen-14 (14N, the most common variety of the atom, with seven protons and seven neutrons in its nucleus) and nitrogen-15 (15N, which has an extra neutron). When phytoplankton consume nitrogen, they prefer 14N to 15N. As a result, there is a correlation between the ratio of nitrogen isotopes in sinking organic matter (which the corals then eat as it falls to the seafloor) and how much nitrogen is being consumed in the surface ocean—and, by extension, the efficiency of the biological pump.
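Isotope geochemists conventionally report such measurements in delta notation, relative to the 15N/14N ratio of atmospheric N2; the definition below is that standard convention rather than a formula quoted from the study:

```latex
\delta^{15}\mathrm{N} =
\left(
\frac{\left(^{15}\mathrm{N}/^{14}\mathrm{N}\right)_{\mathrm{sample}}}
     {\left(^{15}\mathrm{N}/^{14}\mathrm{N}\right)_{\mathrm{air}}} - 1
\right) \times 1000\ \text{per mil}
```

Because phytoplankton preferentially consume 14N, the organic matter left behind – and ultimately locked into coral skeletons – carries a higher δ15N when more of the surface nitrogen pool has been used up.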

A higher amount of 15N in the fossils indicates that the biological pump was operating more efficiently at that time. An analogy would be monitoring what a person eats in their home. If they are eating more of their less-liked foods, then one could assume that the amount of food in their pantry is running low.

Indeed, Wang found that higher amounts of 15N were present in fossils corresponding to the last ice age, indicating that the biological pump was operating more efficiently during that time. As such, the evidence suggests that colder climates allow more biomass to grow in the surface Southern Ocean—likely because colder climates experience stronger winds, which can blow more iron into the Southern Ocean from the continents. That biomass consumes carbon, then dies and sinks, locking it away from the atmosphere.

Adkins and his colleagues plan to continue probing the coral library for further details about the cycles of ocean chemistry changes over the past several hundred thousand years.

The research was funded by the National Science Foundation, Princeton University, the European Research Council, and the Natural Environment Research Council.

The study, “Deep-sea coral evidence for lower Southern Ocean surface nitrate concentrations during the last ice age,” Xingchen Tony Wang, Daniel M. Sigman, Maria G. Prokopenko, Jess F. Adkins, Laura F. Robinson, Sophia K. Hines, Junyi Chai, Anja S. Studer, Alfredo Martínez-García, Tianyu Chen, and Gerald H. Haug, was published in the journal Proceedings of the National Academy of Sciences early edition the week of March 13, 2017. doi: 10.1073/pnas.1615718114

Article provided courtesy of Caltech

Researchers develop technique to track yellow fever virus replication

Infection with a strain of yellow fever virus (YFV-17D) in mouse liver. The liver of a mouse whose immune cells lack the immune signaling component known as STAT1 shows severe lymphocyte infiltration and inflammation, as well as necrosis, after infection with YFV-17D. Credit: Florian Douam and Alexander Ploss

By Staff, Department of Molecular Biology

Researchers from Princeton University’s Department of Molecular Biology have developed a new method that can precisely track the replication of yellow fever virus in individual host immune cells. The technique, which is described in a paper published March 14 in the journal Nature Communications, could aid the development of new vaccines against a range of viruses, including Dengue and Zika.

Yellow fever virus (YFV) is a member of the flavivirus family that also includes Dengue and Zika virus. The virus, which is thought to infect a variety of cell types in the body, causes up to 200,000 cases of yellow fever every year, despite the widespread use of a highly effective vaccine. The vaccine consists of a live, attenuated form of the virus called YFV-17D, whose RNA genome is more than 99 percent identical to that of the virulent strain. This less-than-one-percent difference in the attenuated virus’ genome may subtly alter interactions with the host immune system so that the vaccine induces a protective immune response without causing disease.

To explore how viruses interact with their hosts, and how these processes lead to virulence and disease, Alexander Ploss, assistant professor of molecular biology, and colleagues at Princeton University adapted a technique — called RNA Prime flow — that can detect RNA molecules within individual cells. They used the technique to track the presence of replicating viral particles in various immune cells circulating in the blood of infected mice. Mice are usually resistant to YFV, but Ploss and colleagues found that even the attenuated YFV-17D strain was lethal if the transcription factor STAT1, part of the antiviral interferon signaling pathway, was removed from mouse immune cells. The finding suggests that interferon signaling within immune cells protects mice from YFV, and that species-specific differences in this pathway allow the virus to replicate in humans and certain other primates but not mice.

Accordingly, YFV-17D was able to replicate efficiently in mice whose immune systems had been replaced with human immune cells capable of activating interferon signaling. However, just like humans immunized with the attenuated YFV vaccine, these “humanized” mice didn’t develop disease symptoms when infected with YFV-17D, allowing Ploss and colleagues to study how the attenuated virus interacts with the human immune system. Using their viral RNA flow technique, the researchers determined that the virus can replicate inside certain human immune cell types, including B lymphocytes and natural killer cells, in which the virus has not been detected previously. The researchers found that the panel of human cell types targeted by the virus changes over the course of infection in both the blood and the spleen of the animals, highlighting the distinct dynamics of YFV-17D replication in the human immune system.

The next step, said Florian Douam, a postdoctoral research associate in the Department of Molecular Biology and first author on the study, is to confirm YFV replication in these subsets of immune cells in YFV-infected patients and in recipients of the YFV-17D vaccine. Viral RNA flow now provides the means to perform such analyses, Douam said.

The researchers also plan to study whether the virulent and attenuated strains of yellow fever virus infect different host immune cells. The approach may help explain why some people infected with the virus die while others develop only the mildest of symptoms, as well as which changes in the YFV-17D genome weaken the virus’ ability to cause disease. “This could guide the rational design of vaccines against related pathogens, such as Zika and Dengue virus,” Ploss said.

This work was supported by a grant from the Health Grand Challenge program from Princeton University, the New Jersey Commission on Cancer Research (Grant No. DHFS16PPC007), the Genentech Foundation and Princeton University’s Anthony Evnin ’62 Senior Thesis Fund.

Florian Douam, Gabriela Hrebikova, Yentli E. Soto Albrecht, Julie Sellau, Yael Sharon, Qiang Ding and Alexander Ploss. Single-cell tracking of flavivirus RNA uncovers species-specific interactions with the immune system dictating disease outcome. Nature Communications. 8: 14781. (2017). doi: 10.1038/ncomms14781

A new cosmic survey offers unprecedented view of galaxies

A color composite image in the green, red and infrared bands of a patch of the sky known as the COSMOS field, as imaged by the Subaru Telescope in Hawaii. The galaxies are seen at such large distances that the light from them has taken billions of years to reach Earth. The light from the faintest galaxies in this image was emitted when the universe was less than 10 percent of its present age. (Credit: Princeton University/HSC Project)

By the Office of the Dean for Research

The universe has come into sharper focus with the release this week of new images from one of the largest telescopes in the world. A multinational collaboration led by the National Astronomical Observatory of Japan that includes Princeton University scientists has published a “cosmic census” of a large swath of the night sky containing roughly 100 million stars and galaxies, including some of the most distant objects in the universe. These high-quality images allow an unprecedented view into the nature and evolution of galaxies and dark matter.

The images and accompanying data were collected using a digital optical-imaging camera on the Subaru Telescope, located at the Mauna Kea Observatory in Hawaii. The camera, known as Hyper Suprime-Cam, is mounted directly in the optical path, at the “prime focus,” of the Subaru Telescope. A single image from the camera captures an amount of sky equal to the area of about nine full moons.

The project, known as the Hyper Suprime-Cam Subaru Strategic Program, is led by the National Astronomical Observatory of Japan (NAOJ) in collaboration with the Kavli Institute for the Physics and Mathematics of the Universe in Japan, the Academia Sinica Institute of Astronomy and Astrophysics in Taiwan, and Princeton University.

The release includes data from the first one-and-a-half years of the project, consisting of 61.5 nights of observations beginning in 2014. The project will take 300 nights over five to six years.

The data will allow researchers to look for previously undiscovered galaxies and to search for dark matter, which is matter that neither emits nor absorbs light but which can be detected via its effects on gravity. A 2015 study using Hyper Suprime-Cam surveyed 2.3 square degrees of sky and found gravitational signatures of nine clumps of dark matter, each weighing as much as a galaxy cluster (Miyazaki et al., 2015). The current data release covers about 50 times more sky than was used in that study, showing the potential of these data to reveal the statistical properties of dark matter.

The survey consists of three layers: a Wide survey that will eventually cover an area equal to 7000 full moons, or 1400 square degrees; a Deep survey that will look farther into the universe and encompass 26 square degrees; and an UltraDeep survey that will cover 3.5 square degrees and penetrate deep into space, allowing observations of some of the most distant galaxies in the universe. The surveys use optical and near-infrared wavelengths in five broad wavelength bands (green, red, infrared, z, and y) and four narrow-band filters. The images are extremely sharp across the bands, with star images only 0.6 to 0.8 arcsecond across. (One arcsecond is 1/3,600th of a degree.)
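The full-moon comparisons here are easy to verify with a quick back-of-the-envelope script, assuming an apparent lunar diameter of about 0.52 degrees (an approximation for illustration, not a figure from the release):

```python
import math

moon_diameter = 0.52                             # apparent full-moon diameter, degrees (approx.)
moon_area = math.pi * (moon_diameter / 2) ** 2   # ~0.21 square degrees

print(1400 / moon_area)   # Wide survey: ~6,600 moon-areas, i.e. roughly 7,000 full moons
print(9 * moon_area)      # one Hyper Suprime-Cam exposure (~9 moons): ~1.9 square degrees
```

Both numbers land close to the figures quoted above, with the small differences reflecting rounding in the release.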

Figure 2: Cluster of galaxies
An image of a massive cluster of galaxies in the Virgo constellation showing numerous strong gravitational lenses. The distance to the central galaxy is 5.3 billion light years, while the lensed galaxies, apparent as the arcs around the cluster, are much more distant. This is a composite image in the green, red, and infrared band, and has a spatial resolution of about 0.6 arcsecond. (Credit: NAOJ/HSC Project)

The ability to capture images from deep in space is made possible by the light-collecting power of the Subaru Telescope’s mirror, which has an aperture of 8.2 meters, as well as by the image exposure time. The depth to which a survey can see is measured by the magnitude, a logarithmic measure of the brightness of objects as seen from Earth in a given wavelength band. The depths of the three surveys correspond to magnitudes in the red band of 26.4, 26.6 and 27.3 in the Wide, Deep and UltraDeep data, respectively. As the survey continues, the Deep and UltraDeep surveys will be able to image fainter objects.
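On the standard astronomical scale – a convention, not something defined in the release – magnitude is set by the ratio of an object’s observed flux f to a reference flux f0:

```latex
m = -2.5\,\log_{10}\!\left(\frac{f}{f_{0}}\right)
```

Each additional magnitude therefore corresponds to a factor of 10^0.4, or about 2.5, in faintness, so the UltraDeep limit of 27.3 reaches objects roughly 2.3 times fainter than the Wide limit of 26.4.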

The Hyper Suprime-Cam contains 104 scientific charge-coupled devices (CCDs) for a total of 870 million pixels. The total amount of data taken so far comprises 80 terabytes, which is comparable to the size of about 10 million images by a typical digital camera, and covers 108 square degrees. Because it is difficult to search such a huge dataset with standard tools, NAOJ has developed a dedicated database and interface for ease of access and use of the data.

Figure 3: Interaction between galaxies
A color composite image in the green, red and infrared bands of UGC 10214, known as the Tadpole Galaxy in the ELAIS-N1 region. The distance to this galaxy is about 400 million light years. The long tail of stars is due to gravitational interaction between two galaxies. (Credit: NAOJ/HSC Project)

“Since 2014, we have been observing the sky with HSC, which can capture a wide-field image with high resolution,” said Satoshi Miyazaki, the leader of the project and a scientist at NAOJ. “We believe the data release will lead to many exciting astronomical results, from exploring the nature of dark matter and dark energy, as well as asteroids in our own solar system and galaxies in the early universe. The team members are now preparing a number of scientific papers based on these data. We plan to publish them in a special issue of the Publications of the Astronomical Society of Japan. Moreover, we hope that interested members of the public will also access the data and enjoy the real universe imaged by the Subaru telescope, one of the largest in the world.”

At Princeton, the project is co-led by Michael Strauss and Robert Lupton of the Department of Astrophysical Sciences. “The HSC data are really beautiful,” Strauss said. “Princeton scientists are using these data to explore the nature of merging galaxies, to search for the most distant quasars in the universe, to map the outer reaches of the Milky Way Galaxy, and for many other projects. We are delighted to make these wonderful images available to the world-wide astronomical community.”

Funding for the HSC Project was provided in part by the following grants: Grant-in-Aid for Scientific Research (B) JP15340065; Grant-in-Aid for Scientific Research on Priority Areas JP18072003; and the Funding Program for World-Leading Innovative R&D on Science and Technology (FIRST) entitled, “Uncovering the origin and future of the Universe-ultra-wide-field imaging and spectroscopy reveal the nature of dark matter and dark energy.” Funding was also provided by Princeton University.

This article was adapted from a press release from the National Astronomical Observatory of Japan.

Come together: Nucleolus forms via combination of active and passive processes

Movie caption: Researchers at Princeton studied the temperature dependence of the formation of the nucleolus, a cellular organelle. The movie shows the nuclei of intact fly cells as they are subjected to temperature changes in the surrounding fluid. As the temperature is shifted from low to high, the spontaneously assembled proteins dissolve, as can be seen in the disappearance of the bright spots.

By Catherine Zandonella, Office of the Dean for Research

Researchers at Princeton found that the nucleolus, a cellular organelle involved in RNA synthesis, assembles in part through the passive process of phase separation – the same type of process that causes oil to separate from water. The study, published in the journal Proceedings of the National Academy of Sciences, is the first to show that this happens in living, intact cells.

Understanding how cellular structures form could help explain how organelles change in response to diseases. For example, a hallmark of cancer cells is the swelling of the nucleolus.

To explore the role of passive processes – as opposed to active processes that involve energy consumption – in nucleolus formation, Hanieh Falahati, a graduate student in Princeton’s Lewis-Sigler Institute for Integrative Genomics, looked at the behavior of six nucleolus proteins under different temperature conditions. Phase separation is enhanced at lower temperatures, which is why salad dressing containing oil and vinegar separates when stored in the refrigerator. If phase separation were driving the assembly of these proteins, the researchers should see assembly enhanced at low temperatures.
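
Why cooling favors demixing can be seen in a minimal thermodynamic sketch (a textbook relation, not taken from the study). Mixing is governed by the free energy

$$\Delta G_{\mathrm{mix}} = \Delta H_{\mathrm{mix}} - T\,\Delta S_{\mathrm{mix}},$$

and a mixture separates when $\Delta G_{\mathrm{mix}} > 0$. If the components mix unfavorably ($\Delta H_{\mathrm{mix}} > 0$) while mixing still gains entropy ($\Delta S_{\mathrm{mix}} > 0$), lowering the temperature $T$ shrinks the entropic term until separation wins.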

Falahati showed that four of the six proteins condensed and assembled into the nucleolus at low temperatures and dissolved again when the temperature rose, indicating that the passive process of phase separation was at work. However, the assembly of the other two proteins was irreversible, indicating that active processes were in play.

“It was kind of a surprising result, and it shows that cells can take advantage of spontaneous processes for some functions, but for other things, active processes may give the cell more control,” said Falahati, whose adviser is Eric Wieschaus, Princeton’s Squibb Professor in Molecular Biology and a professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics, and a Howard Hughes Medical Institute researcher.

The research was funded in part by grant 5R37HD15587 from the National Institute of Child Health and Human Development (NICHD), and by the Howard Hughes Medical Institute.

The study, “Independent active and thermodynamic processes govern the nucleolus assembly in vivo,” by Hanieh Falahati and Eric Wieschaus, was published online ahead of print in the journal Proceedings of the National Academy of Sciences on January 23, 2017, doi: 10.1073/pnas.1615395114.


Theorists propose new class of topological metals with exotic electronic properties (Physical Review X)

Band structure spectral function
A new theory explains the behavior of a class of metals with exotic electronic properties. (Credit: Muechler et al., Physical Review X)

By Tien Nguyen, Department of Chemistry

Researchers at Princeton, Yale and the University of Zurich have proposed a theory-based approach to characterizing a class of metals with exotic electronic properties, an approach that could help scientists find other, similarly endowed materials.

Published in the journal Physical Review X, the study described a new class of metals based on their symmetry and a mathematical classification known as a topological number, which is predictive of special electronic properties. Topological materials have drawn intense research interest since the early 2000s, culminating in last year’s Nobel Prize in Physics, awarded to three physicists, including F. Duncan Haldane, Princeton’s Eugene Higgins Professor of Physics, for theoretical discoveries in this area.

“Topological classification is a very general way of looking at the properties of materials,” said Lukas Muechler, a Princeton graduate student in the laboratory of Roberto Car, Princeton’s Ralph W. *31 Dornte Professor in Chemistry, and lead author on the article.

A popular way of explaining this abstract mathematical classification involves breakfast items. In topological classification, a donut and a coffee cup are equivalent because each has one hole and one can be smoothly deformed into the other. A donut, however, cannot be deformed into a muffin, which makes the two inequivalent. The number of holes is an example of a topological invariant: it is the same for the donut and the coffee cup, but distinguishes both from the muffin.
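
In this example, the topological invariant is the genus $g$, the number of holes. For a closed orientable surface it is tied to the Euler characteristic by the standard relation

$$\chi = 2 - 2g,$$

so the donut and the coffee cup share $g = 1$, while the muffin has $g = 0$.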

“The idea is that you don’t really care about the details. As long as two materials have the same topological invariants, we can say they are topologically equivalent,” he said.

Muechler and his colleagues’ interest in the topological classification of this new class of metals was sparked by a peculiar discovery in the neighboring laboratory of Robert Cava, Princeton’s Russell Wellman Moore Professor of Chemistry. While searching for superconductivity in a crystal called tungsten telluride (WTe2), the Cava lab instead found that the material could continually increase its resistance in response to ever stronger magnetic fields – a property that might be used to build a sensor of magnetic fields.

The origin of this property was, however, mysterious. “This material has very interesting properties, but there had been no theory around it,” Muechler said.

The researchers first considered the arrangement of the atoms in the WTe2 crystal. Patterns in the arrangement of atoms are known as symmetries, and they fall into two fundamentally different classes – symmorphic and nonsymmorphic – which lead to profound differences in electronic properties, such as the transport of current in an electromagnetic field.

a) Symmorphic symmetry. b) Nonsymmorphic symmetry. (Credit: Lukas Muechler)

While WTe2 is composed of many layers of atoms stacked upon each other, Car’s team found that a single layer of atoms has a particular nonsymmorphic symmetry, where the atomic arrangement is unchanged overall if it is first rotated and then translated by a fraction of the lattice period (see figure).
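
Schematically, a nonsymmorphic operation pairs a point-group element with a fractional lattice translation. A twofold screw axis, for example, can be written as

$$\{C_2 \mid \tfrac{1}{2}\mathbf{a}\},$$

a 180-degree rotation followed by a translation of half a lattice vector $\mathbf{a}$; applied twice, it gives a pure lattice translation, and neither the rotation nor the half-translation is a symmetry on its own. (The choice of a twofold axis here is illustrative; the specific operation in the WTe2 monolayer is described in the paper.)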

Having established the symmetry, the researchers mathematically characterized all possible electronic states having this symmetry and classified as topologically equivalent those states that can be smoothly deformed into each other, just as a donut can be deformed into a coffee cup. From this classification, they found that WTe2 belongs to a new class of metals, which they named nonsymmorphic topological metals. These metals are characterized by a different electron number than the nonsymmorphic metals studied previously.

In nonsymmorphic topological metals, the current-carrying electrons behave like relativistic particles, in other words, like particles traveling at nearly the speed of light. This behavior makes their conduction less susceptible to impurities and defects than that of ordinary metals, making these materials attractive candidates for electronic devices.
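
Loosely speaking (a standard comparison, not taken from the paper), carriers in an ordinary metal have an energy that grows quadratically with momentum, while these relativistic-like carriers have a linear relation,

$$E_{\mathrm{ordinary}}(k) = \frac{\hbar^2 k^2}{2m^*}, \qquad E_{\mathrm{Dirac}}(k) = \pm\,\hbar v_F\,|k|,$$

with the Fermi velocity $v_F$ playing the role that the speed of light plays for true relativistic particles.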

The abstract topological classification also led the researchers to suggest explanations for some of the outstanding electronic properties of bulk WTe2, most importantly its perfect compensation, meaning that it has equal numbers of holes and electrons. Through theoretical simulations, the researchers found that this property could be achieved in the three-dimensional crystalline stacking of the WTe2 monolayers, which was a surprising result, Muechler said.

“Usually in theory research there isn’t much that’s unexpected, but this just popped out,” he said. “This abstract classification directly led us to explaining this property. In this sense, it’s a very elegant way of looking at this compound and now you can actually understand or design new compounds with similar properties.”
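
The connection between perfect compensation and the ever-growing resistance in a magnetic field mentioned earlier can be made concrete in the standard two-band picture (a textbook result, not a derivation from this paper): when the electron and hole densities are exactly equal, the magnetoresistance grows without saturating,

$$\frac{\Delta\rho}{\rho_0} = \mu_e\,\mu_h\,B^2,$$

where $\mu_e$ and $\mu_h$ are the electron and hole mobilities and $B$ is the magnetic field; any imbalance between the two densities eventually caps this growth.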

Recent photoemission experiments have also shown that the electrons in WTe2 absorb right-handed photons differently than left-handed ones. The theory formulated by the researchers showed that these photoemission experiments on WTe2 can be understood based on the topological properties of this new class of metals.

In future studies, the theorists want to test whether these topological properties are also present in atomically thin layers of these metals, which could be exfoliated from a larger crystal to make electronic devices. “The study of this phenomenon has big implications for the electronics industry, but it’s still in its infancy,” Muechler said.

This work was supported by the U.S. Department of Energy (DE-FG02-05ER46201), the Yale Postdoctoral Prize Fellowship, the National Science Foundation (NSF CAREER DMR-095242 and NSF-MRSEC DMR-0819860), the Office of Naval Research (ONR-N00014-11-1-0635), the U.S. Department of Defense (MURI-130-6082), the David and Lucile Packard Foundation, the W. M. Keck Foundation, and the Eric and Wendy Schmidt Transformative Technology Fund.