Few people know that Princeton University’s association with computers and computing predates the ENIAC. Jon goes back to the days of John von Neumann, Oswald Veblen, Alan Turing, and John Tukey, and winds his way forward through the memorable days of the mainframes to 1985, when Ira Fuchs arrived to create the University’s high-speed network and begin the drive toward ubiquity of access and use. His many stories all have one thing in common… they all used to be funny.
About the speaker:
Jon Edwards graduated from Princeton in 1975 with a degree in history and earned his Ph.D. in Ethiopian economic history from Michigan State University. After a three-year stint as Review Editor of Byte Magazine, he returned to Princeton in 1986 to serve as the Assistant to the VP for Computing and Information Technology. He served as the Coordinator of OIT Institutional Communications and Outreach until his retirement on November 11, 2010.
The last decade has witnessed a rapid emergence of larger and faster computing systems in the US. Massively parallel machines have gone mainstream and are now the tool of choice for large scientific simulations. Keeping up with the continuously evolving technology, however, is quite a challenge. Scientific applications need to be modified, adapted, and optimized for each new system introduced. In this talk, the evolution of a gyrokinetic particle-in-cell code developed at Princeton University’s Plasma Physics Laboratory is presented as it was adapted and improved to run on successively larger computing platforms.
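To give a sense of what a particle-in-cell (PIC) code does at its core, here is a minimal sketch of one step of a generic 1-D electrostatic PIC cycle: scatter particle charge to a grid, solve the field equation on the grid, gather the field back to the particles, and push them. This is an illustrative toy, not the PPPL gyrokinetic code itself (which works in 3-D toroidal geometry and runs across thousands of processors); all names and parameters here are hypothetical.

```python
import numpy as np

def pic_step(x, v, q_over_m, grid_n, box_len, dt):
    """One step of a toy 1-D electrostatic particle-in-cell cycle
    (illustrative only; a real gyrokinetic code is far more involved)."""
    dx = box_len / grid_n

    # 1) Scatter: deposit particle charge on the grid (nearest-grid-point).
    cells = (x / dx).astype(int) % grid_n
    rho = np.bincount(cells, minlength=grid_n).astype(float)
    rho -= rho.mean()                      # neutralizing background charge

    # 2) Field solve: d^2(phi)/dx^2 = -rho, solved spectrally with an FFT.
    k = 2 * np.pi * np.fft.fftfreq(grid_n, d=dx)
    k[0] = 1.0                             # avoid divide-by-zero; mean mode is zeroed below
    phi_hat = np.fft.fft(rho) / k**2
    phi_hat[0] = 0.0
    E = np.real(np.fft.ifft(-1j * k * phi_hat))   # E = -d(phi)/dx

    # 3) Gather: interpolate the grid field to each particle's cell.
    E_part = E[cells]

    # 4) Push: advance velocities and positions in a periodic box.
    v = v + q_over_m * E_part * dt
    x = (x + v * dt) % box_len
    return x, v
```

Each of the four phases stresses the hardware differently (the scatter/gather steps in particular are memory-bound and hard to parallelize), which is why such codes must be re-optimized for every new platform.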
About the speaker:
Dr. Stephane Ethier is a Computational Physicist in the Computational Plasma Physics Group at the Princeton Plasma Physics Laboratory (PPPL). He received a Ph.D. from the Department of Energy and Materials of the Institut National de la Recherche Scientifique (INRS) in Montreal, Canada. His current research involves large-scale gyrokinetic particle-in-cell simulations of microturbulence in magnetic confinement fusion devices as well as all aspects of high-performance computing on massively parallel systems.
In its youth, which seems only now to be ending, film-making and film-editing required an immense amount of expensive and specialized hardware and a hefty range of fine technical skills. Today, suggested Dave Hopkins and Jim Grassi at the October 27 Lunch ‘n Learn, even teenagers with affordable hand-held devices can shoot, edit, and distribute films for the mass market.
Be sure to run through their slides, which contain a range of clips that tell the story through film. There you can watch Francis Ford Coppola predicting in the 1970s that children would someday be able to make movies of quality. There too you can watch Gus Van Sant, a master film editor, splicing tape. Imagine the cumbersome task, when every scene and every sound involves a separate reel of 35 mm film stock. There are still editors who persist with such handiwork, manipulating bins of reels, but the immense power of new software, notably Final Cut Pro, has compelled most filmmakers to make the transition to digital. Films are now shot, edited, and delivered digitally. The films never touch tape.
And watch the simple film made by a father of his young son after a trip to the dentist. Meant to be shared with grandparents and close friends, the amusing clip has now been viewed by 70 million people on YouTube. An 8th grader named Brook Peters made a documentary about 9/11 that was so good that it is up for consideration at Tribeca. The point is, of course, that anyone with a camera, an idea, and some talent can now reach a very large audience. The barriers to entry have been drastically reduced.
Such technologies always trickle downward, suggests Hopkins. Quality no longer costs $15K. He showed a remarkable piece of footage shot with an iPhone. Because there is no tape to process, there is also an immediacy to the medium: there is no longer a need to wait for post-production, and efforts, good and bad, can be sent instantly to YouTube.
New light panels are not only less expensive, he adds, but they also do not overheat, and no filters are required for indoor shots.
Expect to see more use of the smaller technologies. The final episode of House this season was filmed on a very small camera, making it possible to shoot in very confined spaces.
Hopkins and Grassi suggest that, as a result of the new technologies, a new breed of producer has evolved: the videographer “preditor,” a one-person film shoot who handles everything from the idea to the writing, the shooting, the editing, and even the distribution.
Software certainly plays an important role in making the technology so accessible. With Apple iLife, users can easily locate related clips and produce compelling movie trailers.
Looking ahead, they suggest that we can expect better compression to cope with ever-larger video files, more video on walls, sidewalks, and streets, and 4-D TVs that will fill all the senses.
Wikipedia, said David Goodman at the October 13 Lunch ‘n Learn seminar, is by far the most used online encyclopedia, and the most referenced source in the world, with more than 338 million unique visitors a month. It contains articles in more than 260 languages, has an impressive geographic reach, and extensive coverage of topics, currently with more than 16 million articles and 5 million illustrations and media files.
It owes its success as a modern, comprehensive encyclopedia, and its challenges, to its five pillars: it is an encyclopedia designed for its online environment; it maintains a neutral point of view (which sometimes requires presenting multiple points of view); its content is free; and all involved should act in a respectful and civil manner. Beyond that, suggests the fifth pillar, Wikipedia does not have firm rules.
Number of articles on en.wikipedia.org [Source: Wikipedia]
The staggering and unexpected growth, surprising even to those close to the project, carries with it an inherent problem: the reliability of the information. Conventional methods of certifying information are not applicable: basic principles of the site are that anyone can edit, and that decisions on content are made by consensus among whoever wishes to participate, rather than by any form of centralized editorial control or peer review. There is therefore considerable resistance to its use for serious purposes. Nevertheless, it is inevitably being used for such purposes, including in the academic world. This imposes a responsibility on those working at the encyclopedia to try to upgrade and maintain the quality.
This responsibility has given rise to multiple layers of control for preventing the inclusion of improper material and for evaluating the accuracy of what is included. In his talk, Goodman explained some of these procedures and demonstrated them in action. Though they have an effect, he acknowledged that they work erratically and unsystematically.
Their effectiveness depends upon a sufficient number of suitably qualified people participating in writing, screening, and upgrading the articles. Therefore, there are organized efforts to recruit qualified users to work in a systematic way on content in specific areas. There are informal workgroups of skilled amateurs and professionals in some subject areas. And there are experiments in which some college faculty use Wikipedia writing assignments in their courses.
The most successful method, says Goodman, is the individual participation of knowledgeable people. Most who get involved encounter certain barriers: an anti-elitist lack of respect for formal qualifications, the somewhat artificial prevailing style, the peculiarities of the interface, the difficulty of writing simultaneously for readers with a wide range of backgrounds, the impossibility of getting one’s own way with an article, the impossibility of stabilizing a finished article, and the lack of personal authorship for completed work; in short, the crowd-sourcing environment. Goodman recognizes that Wikipedia will never be a medium for academic authorship. But it is an unmatchable medium for communicating knowledge to the widest possible audience. The barriers can be overcome with skill and patience, he insists, and the necessary abilities are the same as those for teaching a class of beginners.
Above all, he hopes that more will become involved with the writing projects. Some of you, he hopes, will also become addicted.
Speaker Bio: David Goodman is one of the volunteer administrators at Wikipedia, and Vice-President of the New York City chapter. David was previously Biological Sciences Bibliographer and Research Librarian at the Princeton University Library. He has a Ph.D. in Biology from the University of California at Berkeley, and an MLS from Rutgers University. Goodman’s Wikipedia page contains a link to the notes he presented at the Lunch ‘n Learn talk.
All who listen to Jerry Ostriker, Professor of Astrophysical Sciences at Princeton University, come to know that we live in profoundly exciting times. We have learned only recently the age and composition of the universe, and for the first time, we are coming to understand how the galactic structures we observe throughout the sky came to be. Simply put, where do they come from, and how could they form if the early universe was relatively uniform? And how can we use them as standard objects unless we understand how and when they formed and how they evolved?
One of the key findings, said Ostriker at the September 29 Lunch ‘n Learn seminar, came from the WMAP satellite. Its observations of the cosmic microwave background radiation show the beginnings of structure in the aftermath of the Big Bang.