Even though I don’t consider my blog a nag, for some reason I feel guilty that I haven’t been blogging much. I guess I didn’t realize what the pressure of writing even a semi-regular column would do to my blogging. Anyway.
Over the holiday break I read a number of books dealing with science and pseudoscience, listed here in my order of preference: Pigliucci’s Nonsense on Stilts: How to Tell Science from Bunk, Shermer’s The Borderlands of Science: Where Sense Meets Nonsense, Wynn and Wiggins’ Quantum Leaps in the Wrong Direction: Where Real Science Ends and Pseudoscience Begins, and Grant’s Denying Science: Conspiracy Theories, Media Distortions, and the War Against Reality. I was on a kick, I guess. (I also read How to Live: A Life of Montaigne in One Question and Twenty Attempts at an Answer, unrelated but highly recommended.)
I started the reading out of curiosity after noticing volley after volley of patent nonsense coming out of last year’s political campaigns, especially regarding topics like evolution and climate science, and after recently reading a couple of popular books on evolution and watching an interesting if flawed documentary on the Intelligent Design attack on science education. I was sort of shocked by people who rely upon the methodological naturalism that drives science and technology but completely disregard it in very specific situations, as if picking and choosing beliefs about nature and the world were a matter of convenience. People who probably believe in germ theory and would want surgeons to wash their hands and sterilize their instruments before operating might also believe that dinosaurs and humans lived at the same time, even though those two beliefs are inconsistent according to the methods commonly accepted among scientists. They believe in their iPhones, but not in the science and technology that make them possible.
It was the documentary on ID that inadvertently led me to Nonsense on Stilts, because both address the Kitzmiller v. Dover case, where the efforts of some ID proponents trying to force ID into biology education in Dover, PA were so obviously motivated by religion that a conservative Christian judge appointed by George W. Bush had to rule against them. The ID crowd’s new motto is “teach the controversy,” which would be fine if it were taught in a politics or religion course, the only places the controversy actually exists. But there is no scientific controversy, and ID is so obviously not scientific (from its nonfalsifiability, to its lack of testable hypotheses, to its inability to explain why some things seem well designed while others obviously are not) that there’s no reason to waste what little time there is for science education teaching that particular controversy.
Nonsense on Stilts was the best of the bunch because it most clearly laid out the theoretical arguments involved, rather than just bringing up case after case of non- or pseudoscience posing as science (which is mostly what Denying Science did). It dealt with the “demarcation problem” between science and nonscience, and examined the difference between a hard science like physics, a soft science like psychology, an “almost science” like SETI, and a pseudoscience like Intelligent Design. Pigliucci argues that “the common thread in all science is the ability to produce and test hypotheses based on systematically collected empirical data (via experiments or observations).” He also distinguishes between the way the scientific method is applied in historical sciences like astronomy or evolutionary biology (where hypotheses are tested against observations) and in ahistorical sciences like chemistry or physics (where hypotheses are tested by experiment), arguing that “the more historical a discipline, the more its methods take advantage of the ‘smoking gun’ approach that we have seen working so well with the extinction of the dinosaurs and the beginning of the universe,” while “the more ahistorical a science, the more it can produce highly reliable predictions about the behavior of its objects of study” (23). There is also a rigorous debunking of the book The Skeptical Environmentalist that is a model of evaluating information, even though it does indulge in some humorous jabs while pointing to the discrepancy between the reviews of scientists and those of applauding conservative political pundits. (Quoting from a Scientific American review of the book: “[in his] preface, Lomborg admits, ‘I am not myself an expert as regards environmental problems’–truer words are not found in the rest of the book”).
Librarians usually don’t get to the evaluation of information itself, the third standard of the ACRL Information Literacy Standards. That might be where the real meat of information literacy is if we think of it as the benefit of a liberal education. When I taught writing, that was the bulk of what I did, leading students through evaluations of arguments and evidence and having them rigorously question their own attempts at argument, but as librarians we usually just give guidelines. Nonsense on Stilts provides some good case studies, but it also gives some general principles by which to judge information, besides the “common thread of science.” One set is a summary of Alvin Goldman’s “Experts: Which Ones Should You Trust?” (Philosophy and Phenomenological Research, 63: 85–110):
“The five kinds of evidence that a novice can use to determine whether someone is a trustworthy expert are:
• an examination of the argument presented by the expert and his rival(s);
• evidence of agreement by other experts;
• some independent evidence that the expert is, indeed, an expert;
• an investigation into what biases the expert may have concerning the question at hand;
• the track record of the expert.” (293)
The Borderlands of Science provides a similar checklist that Shermer calls the “Boundary Detection Kit,” as in the boundary between sense and nonsense (pp. 18-22):
1. How reliable is the source of the claim?
2. Does this source often make similar claims?
3. Have the claims been verified by another source?
4. How does this fit with what we know about the world and how it works?
5. Has anyone, including and especially the claimant, gone out of the way to disprove the claim, or has only confirmatory evidence been sought?
6. In the absence of clearly defined proof, does the preponderance of evidence converge to the claimant’s conclusion, or a different one?
7. Is the claimant employing the accepted rules of reason and tools of research, or have these been abandoned in favor of others that lead to the desired conclusion?
8. Has the claimant provided a different explanation for the observed phenomena, or is it strictly a process of denying the existing explanation?
9. If the claimant has proffered a new explanation, does it account for as many phenomena as the old explanation?
10. Do the claimant’s personal beliefs and biases drive the conclusions, or vice versa?
While not all of these questions are relevant to humanities fields, the general trend of scientific thinking is. Humanists tend to value the principle of noncontradiction, and they have standards for the presentation of argument and the interpretation of evidence, all the sorts of things that are systematically treated in textbooks on argumentation, rhetoric, informal logic, or critical thinking. Not everyone understands or accepts these norms of thought, of course. I recently read an essay on how the digital humanities are racist that was completely devoid of argument or evidence (and even included a footnote by the author explaining that people outside her narrow academic subfield often resisted the claims of the essay, which I found laughable). You can wade through a lot of nonsense that passes for postmodernism before finding anything worthwhile. But generally the rational values about argument, evidence, analysis, and interpretation taught in basic writing or philosophy classes find adherents in the bulk of academic work in the humanities.
Even though these books deal with science and pseudoscience, some of the questions could be useful for evaluating information in other fields. For the humanities, a good example of nonsense on stilts would be most of the anti-Stratfordians, those who ignore Occam’s Razor and any counterargument in their insistence that someone other than William Shakespeare of Stratford wrote Shakespeare’s plays. Consider Ignatius Donnelly’s The Great Cryptogram, which argues that Francis Bacon wrote the works of Shakespeare. Run Shermer’s Boundary Detection Kit against that one and it becomes clear that Donnelly isn’t particularly scientific, despite this being a question where one should theoretically be able to test hypotheses based on observation. Just answering question two (does this source often make similar claims?) starts to make Donnelly look suspect, since he claims that Bacon wrote not only the works of Shakespeare but also the works of Montaigne and Christopher Marlowe, as well as Burton’s Anatomy of Melancholy. That claim reminds me of a quote attributed to a prince when presented with another volume of Gibbon’s Decline and Fall of the Roman Empire: “Another damned thick book! Always scribble, scribble, scribble! Eh, Mr. Gibbon?” Always scribble, scribble, eh, Mr. Bacon! Rather than apply Occam’s Razor and consider the full range of evidence, fanatics and ideologues cling to their fantasies, gathering all the evidence they can for their point of view while ignoring all evidence to the contrary.
This is a problem for higher education, because the more people there are who can’t think clearly but can vote, the worse off funding for higher education and noncommercial scientific research will be. It becomes a problem for librarians in those situations where we are expected to teach something about evaluating information. How do we teach that? Do we have clear guidelines for every field? Could we, or do we ever, apply them in practice, especially in the classroom? Of the five criteria in Goldman’s summary, do we ever use any but the last three in practice, the ones that rely more on reputation than on substance? And even then, how often do we rely on proxies for expertise, like the place of publication or employment of an author, because we have to?
I have to admit, while I sometimes do this sort of analysis on the blog, I almost never get a chance to do it with students in my capacity as a librarian. Lately, I’ve been wondering if I should seek out the opportunity, or try to create the opportunity, but I’m not sure how I’d go about it, and so far haven’t seen any examples of librarians doing that sort of thing.