At OIT’s Lunch ‘n Learn presentation on October 11, three faculty members who were instrumental in architecting the new high-performance facility – Bill Tang (Chief Scientist at PPPL and Associate Director of PICSciE), Jim Stone (Astrophysical Sciences, with a joint appointment in PACM), and Mikko Haataja (Assistant Professor in the Materials Group in Mechanical and Aerospace Engineering) – discussed their use of the University’s new centrally available high-performance computing facilities, recently featured in a Princeton Weekly Bulletin article.
Curt Hillegas, OIT’s Manager of Computational Science and Engineering Support, began by reviewing the University’s recent progress in high-performance computing. He stressed the partnerships that made these advances possible, with significant contributions from PICSciE (the Princeton Institute for Computational Science and Engineering), OIT, SEAS (the School of Engineering and Applied Science), the Lewis-Sigler Institute for Integrative Genomics, Astrophysical Sciences, and the Princeton Plasma Physics Laboratory. Individual faculty members have also contributed significant research funding. Hillegas revealed the name recently chosen for the infrastructure: TIGRESS, or Terascale Infrastructure for Groundbreaking Research in Engineering and Science.
Hillegas also described a new large data storage system, with 38 TB of capacity, that goes online at the end of the month. The system supports a data-access speed of approximately 200 MB/second to each of the three supercomputer systems. A fee of $2K per TB per year will be charged to recover half the cost.
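For a sense of scale, a rough back-of-envelope calculation (assuming decimal units, 1 TB = 1,000,000 MB, and the sustained 200 MB/second figure quoted above) shows how long reading the entire store over a single link would take:

```python
# Rough scale check on the storage figures quoted above.
# Assumes decimal units (1 TB = 1e6 MB) and a sustained 200 MB/s link.
storage_tb = 38
link_mb_per_s = 200

seconds = storage_tb * 1_000_000 / link_mb_per_s
days = seconds / 86_400
print(f"Reading all {storage_tb} TB over one link: {days:.1f} days")
```

On these assumptions a full scan takes roughly two days per link, which is why each of the three supercomputers gets its own 200 MB/second path to the store.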
Those interested in gaining access to these supercomputer systems can submit a 1-3 page proposal to firstname.lastname@example.org describing the scientific background and merit of the work, a summary of resource requirements, and a few references. A faculty committee will review the proposals submitted.
Mikko Haataja highlighted two projects, in structural dynamics and the self-assembly of materials, that are taking full advantage of the high-performance computing facilities here at Princeton. Self-assembly is the aggregation of molecules into structures that then act as larger units; it occurs at every possible scale in nature, with many examples inside living cells. Computer simulations help explain where and how these structures form. The two topics being investigated involve the molecular dynamics of soft, “squishy” materials as well as deformation in hard materials, both on microsecond timescales.
High-performance facilities have proved essential for these molecular-scale simulations. Without the supercomputers, examining even a modest system of approximately 50,000 atoms, such as soap molecules aggregating in water, would have been impossible. Running the code on 32 of Orangena’s processors, every 24 hours of computing time advances the system approximately 1.5 nanoseconds, roughly half the lifetime of such a system. Researchers can now start the system in a completely random state and watch the aggregation unfold. Also under investigation are the dynamics, orientation, and interaction of individual molecules on a surface.
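The throughput Haataja described can be turned into a quick estimate. A minimal sketch, assuming the figures above (1.5 simulated nanoseconds per 24-hour run on 32 processors, and taking "half a lifetime" to mean the full lifetime is about 3 ns):

```python
# Back-of-envelope estimate from the throughput figures quoted above.
# Assumed: 1.5 ns of simulated time per 24-hour run, and a full
# system "lifetime" of ~3 ns (since 1.5 ns is described as half).
ns_per_day = 1.5
system_lifetime_ns = 3.0

days_needed = system_lifetime_ns / ns_per_day
print(f"Computing days to simulate one full lifetime: {days_needed:.0f}")
```

In other words, on these assumptions a single aggregation event can be followed from a random initial state to completion in about two days of wall-clock time on 32 processors.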
Another aspect of the research is understanding how materials deform. Most crystalline materials deform through the motion of dislocations, defects in the crystalline lattice. The supercomputers have allowed researchers to model dislocations, a difficult task because their behavior grows far more complicated once dislocations begin to cross one another. Many such interesting questions in materials science can now be posed because the computational resources are available.
Jim Stone (Astrophysical Sciences) began by noting how much has changed in just the past three years: a substantial ten-fold increase in the University’s high-performance computing capability now permits researchers to tackle problems ten times larger.
Stone’s interest is primarily in understanding the magnetohydrodynamics (MHD) of plasmas in physical systems. One example is how plasma is transferred from stars onto compact objects such as white dwarfs, neutron stars, and black holes in close binary systems, some 20 orders of magnitude larger than the systems in Mikko Haataja’s research. In a close binary, plasma is stripped off the surface of the nearby star; because the matter carries angular momentum, it does not fall directly onto the compact object but spirals inward.
Stone explained that the University’s computational facilities have been extremely important in tackling long-standing questions associated with magnetic fields, viscosity, turbulence, and the orbital evolution of the plasma.
Bill Tang (Chief Scientist at PPPL and Associate Director of PICSciE) also investigates plasma turbulence but on a different scale and with a different purpose, attempting to control and harness the energies inherent in fusion reactions.
Plasmas, often called the fourth state of matter, are essentially very hot gases that make up 99% of the visible universe. The main mission of the Plasma Physics Laboratory is to understand what it takes to harness the fusion reaction and take advantage of this environmentally attractive source of the power the world demands.
The key issue in the research is finding an efficient way to keep the plasma confined long enough for the fusion reaction to take place. In keeping with thermodynamic principles, the plasma tends to escape, and a myriad of instabilities can arise within a closed system. Magnetic trapping confines the particles at very high temperatures and low densities.
With these powerful computational capabilities, PPPL is able to perform realistic simulations that study plasma structures with and without flow. Tang noted that the lab is delighted with the University’s progress on these computational facilities and with their availability to both students and faculty.
A podcast of the presentation is available.