Having highlighted his work in a previous post, we invited Professor Sam Wang to speak at Lunch ‘n Learn on February 11. He graciously forwarded the following thoughts:
It was great fun to be invited to give the first Lunch 'n Learn talk this spring. My topic was the ups and downs of Campaign 2008, and the understanding that statistical geeks can bring to the process. I also put on my neuroscience hat and got into how and why people form false beliefs, especially political ones. The audience was great and the questions were interesting.
The slides and my talk are available. In addition, some notes:
– I got into statistical analysis of Presidential polls in 2004. The Bush-Kerry race was close enough to make looking at state-level polls worthwhile. This required some math to boil down the complexity. A few of us did this and attracted hundreds of thousands of readers. I even ended up in The Wall Street Journal and on the BBC and Fox News. But really, it was a geeky cult activity. In 2008, I was back in the game with the Princeton Election Consortium. But the whole thing got really huge with the increasing popularity of sites such as Electoral-Vote, Pollster, RealClearPolitics, and FiveThirtyEight. It wasn't just for geeks anymore!
– My approach is to take a well-designed statistical snapshot of all the polls. A snapshot of the last 2-3 weeks of polls gave 364 electoral votes (EV) for Obama, within 1 EV of the final outcome of 365 EV. Even a one-week snapshot gave 353 EV, within 12 EV. Either number came closer than the other sites' estimates. Score one for meta-analysis!
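To give a flavor of how such a snapshot can work, here is a minimal sketch in Python. All the state names, margins, and the standard-error value are hypothetical, and the sketch computes an expected EV from per-state win probabilities; the full Princeton Election Consortium method builds the complete EV probability distribution, which this simplification does not attempt.

```python
import math
from statistics import median

# Toy poll meta-analysis "snapshot" (all numbers hypothetical).
# For each state: recent poll margins (candidate A minus candidate B,
# in percentage points) and that state's electoral votes.
states = {
    "State X": {"margins": [4, 6, 3, 5], "ev": 20},
    "State Y": {"margins": [-2, 1, -1], "ev": 15},
    "State Z": {"margins": [8, 7, 9], "ev": 10},
}
ASSUMED_SE = 3.0  # assumed standard error of the median margin, in points

def win_probability(margin, se=ASSUMED_SE):
    """P(A is truly ahead in the state), from a normal model of the margin."""
    return 0.5 * (1 + math.erf(margin / (se * math.sqrt(2))))

# Expected electoral votes for candidate A: sum of EV weighted by
# each state's win probability, using the median of its recent polls.
expected_ev = sum(
    win_probability(median(s["margins"])) * s["ev"] for s in states.values()
)
print(round(expected_ev, 1))
```

The median is the key robustness trick: a single outlier poll moves it far less than it would move an average.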
– Having an accurate snapshot allowed readers to see that only at a few times did the 2008 Presidential race shift. Those times included the McCain campaign’s attacks comparing Obama to “other celebrities” like Britney Spears and Paris Hilton, the rise and crash of Sarah Palin, the first debate – and of course the economic meltdown.
– There’s no evidence for a “Bradley effect” in which people fib about their feelings about black candidates, a “cell-phone effect” in which cell phone users are undersampled, or other biases. If you collect enough polls, they’re amazingly accurate.
– I mentioned the fact that 18% of U.S. citizens hold the false belief that the sun goes around the earth.
(1) In fact, one can tell the difference without much effort. For example, a sun-centered model accounts naturally for why planets sometimes appear to orbit backward: this happens when we "lap" them on the track, so that at certain angles they seem to reverse course.
(2) The persistence of false scientific belief is very damaging to the possibility of using science as a positive force in society. The 18% of people who have a pre-technological view of basic facts about the world are not in a good position to make informed decisions as citizens. The fact that so many people harbor this and other false beliefs says something quite bad about our educational system.
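The "lapping" point in (1) above can be checked with a short simulation. This is a rough sketch with circular, coplanar orbits and approximate radii and periods for Earth and Mars (a deliberate simplification, not real ephemeris data): it counts time steps during which Mars's apparent longitude, as seen from Earth, moves backward.

```python
import math

# Approximate circular, coplanar orbits (a simplification).
# Orbital radius in AU, orbital period in years.
R_EARTH, T_EARTH = 1.0, 1.0
R_MARS, T_MARS = 1.52, 1.88

def position(radius, period, t):
    """Heliocentric (x, y) at time t (in years) on a circular orbit."""
    angle = 2 * math.pi * t / period
    return radius * math.cos(angle), radius * math.sin(angle)

def geocentric_longitude(t):
    """Apparent ecliptic longitude of Mars as seen from Earth."""
    ex, ey = position(R_EARTH, T_EARTH, t)
    mx, my = position(R_MARS, T_MARS, t)
    return math.atan2(my - ey, mx - ex)

# Scan three years; Mars is "retrograde" whenever its apparent
# longitude decreases from one step to the next.
dt = 0.01
retrograde_steps = 0
prev = geocentric_longitude(0.0)
for step in range(1, 300):
    lon = geocentric_longitude(step * dt)
    # Unwrap the 2*pi jump before comparing successive longitudes.
    delta = (lon - prev + math.pi) % (2 * math.pi) - math.pi
    if delta < 0:
        retrograde_steps += 1
    prev = lon
print(retrograde_steps)
```

The retrograde episodes fall near opposition, exactly when the faster-moving Earth overtakes Mars on the inside track.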
– I did make one factual error. I said that if Candidate A is ahead in a single poll by an amount equal to the margin of error (a “z-score” equal to one), there’s a 5 in 6 chance that he/she is ahead in the population polled. That’s true. But then I said that a margin twice as large, a z-score of 2, implies a 95% chance that he/she is ahead. Actually, it’s over 97%. Sorry, fellow geeks!
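For fellow geeks who want to check the corrected numbers, the probability that a candidate with an observed z-score lead is truly ahead is the standard normal CDF evaluated at z, which can be computed directly from the error function:

```python
import math

def prob_ahead(z):
    """P(candidate is truly ahead) = standard normal CDF at z,
    where z is the observed lead divided by its standard error."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(prob_ahead(1), 3))  # 0.841, about a 5 in 6 chance
print(round(prob_ahead(2), 3))  # 0.977, i.e. over 97%, not 95%
```

The 95% figure is the common confusion: z = 2 marks the two-sided 95% confidence interval, but the one-sided question "who is ahead?" puts all the remaining probability in one tail.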