Launched in 1998, Google stated its mission: “to organize the world’s information and make it universally accessible and useful.” And so it is. Today, everyone Googles – in the U.S., about 12 billion searches a month (a figure that includes searches on engines other than Google). We are mostly pleased with the results we get. How can an automated system take a couple of words and find reasonably relevant documents among one hundred billion or so possibilities? Will our satisfaction with these tools increase or decrease as the Web, and our expectations, grow?
At the March 4 Lunch ‘n Learn seminar, Computer Science Professor Andrea LaPaugh gave a peek “under the hood” of major search engines. Core techniques range from word occurrence analysis for text documents, which originated in the 1960s, to Web link analysis, pioneered by Google’s 1998 PageRank document-ranking method.
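To give a flavor of the link-analysis idea behind PageRank, here is a minimal sketch (not Google's actual implementation): a page's score is the probability that a "random surfer" lands on it, following links with probability d and jumping to a random page otherwise. The tiny three-page web at the end is an invented example.

```python
def pagerank(links, d=0.85, iterations=50):
    """Sketch of the PageRank power iteration.

    links: dict mapping each page to the list of pages it links to.
    d: damping factor (probability of following a link vs. jumping).
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform score
    for _ in range(iterations):
        # every page receives the "random jump" share (1 - d) / n
        new_rank = {p: (1 - d) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # a page splits its current score evenly among its outlinks
                share = d * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # dangling page with no outlinks: spread its score everywhere
                for p in pages:
                    new_rank[p] += d * rank[page] / n
        rank = new_rank
    return rank

# Hypothetical three-page web: A and B both link to C; C links back to A.
toy_web = {"A": ["C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(toy_web)
# C ends up ranked highest, since two pages link to it.
```

The key property, which made PageRank robust against simple keyword spamming, is that a page's score depends on the scores of the pages linking to it, not on its own content.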
In 1965, Intel co-founder Gordon Moore predicted that the number of transistors placed on an integrated circuit would double approximately every two years. That prediction, notes Bernard Chazelle, Professor of Computer Science at Princeton, has if anything underestimated progress over the past half century and should hold for at least another decade. Moore’s Law, he posits, is responsible for most of the desktop and hip-pocket wonders of the computer age, notably remarkable improvements in processing speed, memory capacity, and network bandwidth.
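The doubling rule compounds quickly, as a back-of-the-envelope calculation shows. This sketch uses the Intel 4004's roughly 2,300 transistors (1971) as an illustrative starting point; the function name and figures are our own, not from the talk.

```python
def projected_transistors(start_count, start_year, end_year, doubling_period=2):
    """Project a transistor count forward under Moore's Law:
    one doubling every `doubling_period` years."""
    doublings = (end_year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Forty years of doublings from the ~2,300-transistor Intel 4004:
count_2011 = projected_transistors(2_300, 1971, 2011)
# 20 doublings -> about 2.4 billion transistors, the right order of
# magnitude for high-end chips of that era.
```

Twenty doublings is a factor of about a million, which is why exponential growth over mere decades produces such dramatic change.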
Moore’s Law correctly predicted revolutionary technological and social change in the late 20th century. But by 2020 if not before, as transistor features shrink to just atoms in width, Moore’s Law will have run its course. New technologies may supplant integrated circuits and extend Moore’s Law for decades more; either way, Chazelle argues, the years ahead will usher in the era of the “Algorithm,” a notion which, he contends, may prove to be the most disruptive and revolutionary scientific paradigm since quantum mechanics.