Evaluating Information

Even though I don’t think of my blog as something that nags at me, for some reason I feel bad that I haven’t been blogging much. I guess I didn’t realize what the pressure of writing even a semi-regular column would do to my blogging. Anyway.

Over the holiday break I read a number of books dealing with science and pseudoscience, listed here in my order of preference: Pigliucci’s Nonsense on Stilts: How to Tell Science from Bunk, Shermer’s The Borderlands of Science: Where Sense Meets Nonsense, Wynn and Wiggins’ Quantum Leaps in the Wrong Direction: Where Real Science Ends and Pseudoscience Begins, and Grant’s Denying Science: Conspiracy Theories, Media Distortions, and the War Against Reality. I was on a kick, I guess. (I also read How to Live: A Life of Montaigne in One Question and Twenty Attempts at an Answer, unrelated but highly recommended.)

I started the reading out of curiosity after noticing volley after volley of patent nonsense coming out of last year’s political campaigns, especially regarding topics like evolution and climate science, and after recently reading a couple of popular books on evolution and watching an interesting if flawed documentary on the Intelligent Design attack on science education. I was sort of shocked by people who rely upon the methodological naturalism that drives science and technology but completely disregard it in very specific situations, as if picking and choosing beliefs about nature and the world were a matter of convenience. People who probably believe in germ theory and would want surgeons to wash their hands and sterilize their instruments before operating might also believe that dinosaurs and humans lived at the same time, despite those two beliefs being inconsistent according to the methods commonly accepted among scientists. They believe in their iPhones, but not in the science and technology that makes them possible.

It was the documentary on ID that inadvertently led me to Nonsense on Stilts, because both address the Kitzmiller v. Dover case, where the efforts of some ID proponents trying to force ID into biology education in Dover, PA were so obviously motivated by religion that a conservative Christian judge appointed by George W. Bush had to rule against them. The ID crowd’s new motto is “teach the controversy,” which would be fine if it were taught in a politics or religion course, since that is where the only real controversy exists; but since there is no scientific controversy and ID is so obviously not scientific (from its nonfalsifiability to its lack of testable hypotheses to its inability to explain why some things seem well designed while others obviously do not), there’s no reason to waste what little time there is for science education teaching that particular controversy.

Nonsense on Stilts was the best of the bunch because it most clearly laid out the theoretical arguments involved, rather than just bringing up case after case of non- or pseudoscience posing as science (which is mostly what Denying Science did). It dealt with the “demarcation problem” between science and nonscience, and examined the difference between a hard science like physics, a soft science like psychology, an “almost science” like SETI, and a pseudoscience like Intelligent Design. Pigliucci argues that “the common thread in all science is the ability to produce and test hypotheses based on systematically collected empirical data (via experiments or observations),” and distinguishes between the way the scientific method is applied in historical sciences like astronomy or evolutionary biology (where hypotheses are tested against observations) and ahistorical sciences like chemistry or physics (where hypotheses are tested by experiment), arguing that “the more historical a discipline, the more its methods take advantage of the ‘smoking gun’ approach that we have seen working so well with the extinction of the dinosaurs and the beginning of the universe,” while “the more ahistorical a science, the more it can produce highly reliable predictions about the behavior of its objects of study” (23). There is also a rigorous debunking of the book The Skeptical Environmentalist that is a model of evaluating information, even though it indulges in some humorous jabs while pointing to the discrepancy between the reviews of scientists and those of applauding conservative political pundits. (Quoting from a Scientific American review of the book: “[in his] preface, Lomborg admits, ‘I am not myself an expert as regards environmental problems’–truer words are not found in the rest of the book”).

Librarians usually don’t get to the evaluation of information itself, the third standard of the ACRL Information Literacy Standards. That might be where the real meat of information literacy is, if we think of it as the benefit of a liberal education. When I taught writing, that was the bulk of what I did, leading students through evaluations of arguments and evidence and through rigorous questioning of their own attempts at argument, but as librarians we usually just give guidelines. Nonsense on Stilts provides some good case studies, but it also gives some general principles by which to judge information, besides the “common thread of science.” One set is a summary of Alvin Goldman’s “Experts: Which Ones Should You Trust?” (Philosophy and Phenomenological Research, 63: 85–110):

“The five kinds of evidence that a novice can use to determine whether someone is a trustworthy expert are:
• an examination of the argument presented by the expert and his rival(s);
• evidence of agreement by other experts;
• some independent evidence that the expert is, indeed, an expert;
• an investigation into what biases the expert may have concerning the question at hand;
• the track record of the expert.” (293)

The Borderlands of Science provides a similar checklist that Shermer calls the “Boundary Detection Kit,” as in the boundary between sense and nonsense (pp. 18-22):

1. How reliable is the source of the claim?
2. Does this source often make similar claims?
3. Have the claims been verified by another source?
4. How does this fit with what we know about the world and how it works?
5. Has anyone, including and especially the claimant, gone out of the way to disprove the claim, or has only confirmatory evidence been sought?
6. In the absence of clearly defined proof, does the preponderance of evidence converge to the claimant’s conclusion, or a different one?
7. Is the claimant employing the accepted rules of reason and tools of research, or have these been abandoned in favor of others that lead to the desired conclusion?
8. Has the claimant provided a different explanation for the observed phenomena, or is it strictly a process of denying the existing explanation?
9. If the claimant has proffered a new explanation, does it account for as many phenomena as the old explanation?
10. Do the claimants’ personal beliefs and biases drive the conclusions, or vice versa?

While not all the questions might be relevant for humanities fields, the general trend of scientific thinking is. Humanists tend to value the principle of noncontradiction, and they have standards for the presentation of argument and the interpretation of evidence, all the sorts of things that are treated systematically in textbooks on argumentation, rhetoric, informal logic, or critical thinking. Not everyone understands or accepts these norms of thought, of course. I recently read an essay on how the digital humanities are racist that was completely devoid of argument or evidence (and even included a footnote by the author explaining that people outside her narrow academic subfield often resisted the claims of the essay, which I found laughable). You can wade through a lot of nonsense that passes for postmodernism before finding anything worthwhile. But generally the rational values about argument, evidence, analysis, and interpretation taught in basic writing or philosophy classes find adherents in the bulk of academic work in the humanities.

Even though these books deal with science and pseudoscience, some of the questions could be useful for evaluating information in other fields. For the humanities, a good example of nonsense on stilts would be most of the anti-Stratfordians, those who ignore Occam’s Razor and any counterarguments against whoever it is they think wrote Shakespeare’s plays other than William Shakespeare of Stratford. Consider Ignatius Donnelly’s The Great Cryptogram, which argues that Francis Bacon wrote the works of Shakespeare. Run Shermer’s Boundary Detection Kit against that one and it becomes clear that Donnelly isn’t particularly scientific, despite this being a question where one should theoretically be able to test hypotheses against observation. Just answering question two (does this source often make similar claims?) starts to make Donnelly look suspect, since he claims that Bacon wrote not only the works of Shakespeare but also the works of Montaigne and Christopher Marlowe, as well as Burton’s Anatomy of Melancholy. That claim reminds me of a quote attributed to a prince when presented with another volume of Gibbon’s Decline and Fall of the Roman Empire: “Another damned thick book! Always scribble, scribble, scribble! Eh, Mr. Gibbon?” Always scribble, scribble, eh, Mr. Bacon! Rather than apply Occam’s Razor and consider a full range of evidence, fanatics and ideologues cling to their fantasies, gathering all the evidence they can for their point of view while ignoring all evidence to the contrary.

This is a problem for higher education, because the more people there are who can’t think clearly but can vote, the worse off funding for higher education and noncommercial scientific research will be. It becomes a problem for librarians in those situations where we are expected to teach something about evaluating information. How do we teach that? Do we have clear guidelines for every field? Could we, or do we ever, apply them in practice, especially in the classroom? Of the five criteria in Goldman’s summary, do we ever use any but the last three in practice, the ones relying more on reputation than substance? And even then, how often do we rely on proxies for expertise, like an author’s place of publication or employment, because we have to?

I have to admit, while I sometimes do this sort of analysis on the blog, I almost never get a chance to do it with students in my capacity as a librarian. Lately, I’ve been wondering if I should seek out the opportunity, or try to create the opportunity, but I’m not sure how I’d go about it, and so far I haven’t seen any examples of librarians doing that sort of thing.

4 thoughts on “Evaluating Information”

  1. Yes, seek out the opportunity to question the reliability of information! Add value to our profession and to your work! It seems to me (a public librarian) that you might have fewer chances than I do to point out to your patrons that some research or website or whatever may not be the final answer, since I would assume (?) that the college material is collected in a more rigorous way (as opposed to the popular materials emphasis taking over many public libraries – to the detriment of educating patrons there…). Can’t some of these methods be incorporated into the research instruction most academic librarians are required to do? You could always do an online pathfinder (do people still do these? Or is there a new term of art?) or info sheet of some kind if people still read those.

  2. You’re right, as far as I know, that the situation is a little different in a public library, although I would assume many of the questions we ask about evaluating information, especially info on the web, are pretty similar. The collection is somewhat of a protection, but it’s mostly that students are sent to us to find “scholarly” sources or peer-reviewed articles or something like that, and the discovery mechanisms tend to be subject-specific databases rather than the general web or big aggregators like ProQuest. (Although I do frequently discuss the value of things like Wikipedia for certain kinds of discovery.)

    So it would be rare for me to deal with someone with a general information need that wasn’t tied to a class or research project and that could be answered by websites alone, the kind of situation where we would have to dig a little deeper to teach people the difference between decent information and garbage.

    However, I rarely get beyond the most superficial stage of evaluation of even scholarly sources, and I suspect I’m typical. That’s a peer-reviewed journal and thus worthy of some attention. That’s a good scholarly press. That person is respected in the field. I might make distinctions among journals or presses in certain cases, but librarians rarely get to the point of evaluating a particular book or article, especially when doing something like first-year writing classes where the subjects are all over the place. There are academic areas I can speak confidently about, but no librarian can cover everything. But that’s the sort of evaluation I wonder whether I can do within a classroom setting as a librarian.

    What we mostly do is offer guidance about the sorts of questions to ask, and then leave the students to do it themselves. That’s a good thing to do, but I’m sort of tinkering with ideas that would let that go further.

    I go a little further when I talk to students about sources in general and tell them that there are no bad sources, only bad ways to use them, but that’s a slightly different topic (and one that I have in a draft of a post I can’t quite seem to finish to my satisfaction).

    • Librarians who are relying on more critical or constructivist pedagogies when they teach may be inadvertently moving into evaluation. Critical Library Instruction has examples of librarians having students compare and contrast Wikipedia and traditional encyclopedia articles and think about when each might be useful, rather than starting with a lecture on what an encyclopedia article is. The goal there isn’t to start blending library instruction with rhetoric classes; it’s to get students to understand and retain the information better. But some librarians taking that approach are definitely having the evaluation discussion.

      That said, plenty of faculty at my institution are not on board with the idea of librarians engaging students in evaluating sources, and mostly want us to get them to stop using Wikipedia and start relying on superficial markers like the publisher. They would not be happy if I used the “there is no such thing as a bad source, only a badly used source,” line even though I agree with it.

  3. That comparison is a good, focused example of the kind of thing I don’t often do, but that might fit in with what’s already being done on evaluating sources in classes. I should look into that and other possibilities. As for Wikipedia, I’ve had some luck persuading instructors of its value by telling an anecdote about a research consultation in which it was invaluable, not for the article so much as for the notes, which led to highly relevant sources that might not have been found otherwise. The last time I framed it like that, even the professor agreed that there aren’t bad sources, just bad ways of using them.
