A couple of posts ago I took a stance that was apparently controversial. That’s not like me. I usually save my controversial opinions for lunchtime conversation after making sure I’m not being recorded surreptitiously. After I criticized lies and deception in fake reference, someone very rightly asked whether I meant just the particular type of deception that particular library school student tried to use on me, which had nothing to do with assessment as such, or whether I instead meant to question the value of all so-called unobtrusive reference assessment that makes use of such deception. Just to clarify, I am definitely questioning the value of such assessment, and indeed I do not believe that the end (producing a research article that might or might not be useful) justifies the means (lying to and deceiving people). I believe such practices are ethically suspect, as should be clear by now.
The commenter, Steven Chabot, rightly notes that “unobtrusive evaluation of reference services is a generally accepted methodology when investigating questions of the quality of reference service. Are we then to say that all of these useful studies completed by actual librarians and scholars in the field are wasting librarians’ time?”
Such deception is indeed a generally accepted methodology, but I think it should not be. Fraud is fraud, and I don’t see how the end justifies the means here. If the end is vitally important and can be achieved by no other means, then just maybe, but such is not the case here. Such lies and deception are ethically unsound and are unnecessary to boot.
And yes, they are a waste of librarians’ time, which is why it doesn’t surprise me that every one of these unobtrusive studies that I’ve read has been conducted by non-librarians. Perhaps we should have librarians pose as fake students in library school courses to evaluate the teaching effectiveness and the feedback on assignments. Then we can all have a discussion on the ethics and effectiveness of deception.
Chabot apparently had a similar assignment in library school, and “had to cite relevant other unobtrusive studies, such as the classic by Hernon and McClure (1986) which posited the whole ’55 percent rule’: that only 55% of transactions are satisfying to the user. How are we to improve that statistic without precise measurement of it first?”
Here we get into tricky ground, indeed. I have to disagree on so many levels. Perhaps this is heresy among librarians, but I will boldly state first, that I don’t think the so-called “55% Rule” tells us much about the state of reference in any given library; second, that I don’t think such studies in general provide a “precise measurement” of anything useful; and third, that there are ways to assess reference without resorting to lies and deception.
What follows is primarily an excerpt from an annotated bibliography I wrote on reference assessment a couple of years ago. If you want to read the whole thing, you can find it here:
“Best of the Literature: Reference Assessment.” Public Services Quarterly 2, no. 2/3 (July 2006): 215-220.
Part of my opinion of the 55% Rule, which I never completely trusted, was formed by the following article:
Hubbertz, Andrew. “The Design and Interpretation of Unobtrusive Evaluations.” Reference & User Services Quarterly 44, no. 4 (Summer 2005): 327-35.
Hubbertz provides an excellent, sustained critique of the normal methods of unobtrusive evaluation of reference services, arguing that for the evaluations to be useful and meaningful the subjects need to be given uniform tests, that the results need to be interpreted as a comparison rather than as an overall assessment of reference service quality, and that the one area in which such observations may be useful is evaluating the ways libraries organize their collections and deliver services. His analysis of various published studies of unobtrusive evaluations shows them to be inconsistent and “for practical purposes, nearly worthless.” Not administering uniform tests “may be a principal culprit for these perplexing and disappointing results.” He criticizes in particular the domination of the “55 percent rule,” arguing clearly that such evaluations are designed specifically to generate middle-range results; indeed, test reference questions that almost no one or almost everyone can answer are excluded from the evaluations. Thus, the evaluations are designed to generate something like a 55% success rate. Hubbertz amusingly shows how we can design the tests to improve the rate of reference success. While middle-range results may be useful for comparing the services of different libraries or different ways of providing reference service in the same library, they are useless for determining the overall quality of reference service. He concludes that in the future unobtrusive evaluations may have some use, but they “must be properly implemented, with a uniform test and an adequate sample and [their] application must be limited to the assessment of how best to manage library resources.”
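To make Hubbertz’s point about test design concrete, here is a toy sketch of my own (not from his article, and with invented numbers): if you drop the test questions that almost everyone or almost no one can answer, the measured success rate gets pulled toward the middle, whatever the staff’s actual performance.

```python
# Toy illustration of the question-selection effect Hubbertz describes.
# Each number is the (hypothetical) probability that a typical librarian
# answers a candidate test question correctly.

def measured_success_rate(question_difficulties, keep=lambda p: 0.15 <= p <= 0.85):
    """Average answer probability over only the questions kept in the test.

    The keep rule drops questions nearly everyone or nearly no one gets right,
    mimicking how middle-range test questions are selected.
    """
    kept = [p for p in question_difficulties if keep(p)]
    return sum(kept) / len(kept)

# A hypothetical library whose staff handle most real questions very well:
candidate_questions = [0.98, 0.95, 0.95, 0.9, 0.9, 0.7, 0.6, 0.5, 0.4, 0.05]

print(sum(candidate_questions) / len(candidate_questions))  # about 0.69 over all candidate questions
print(measured_success_rate(candidate_questions))           # 0.55 over the questions the "test" keeps
```

The point of the sketch is only that the selection rule, not the librarians, does much of the work of producing a number near 55%.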
Another article questioning the use of deceptive (err, unobtrusive) evaluation is the following:
Jensen, Bruce. “The Case for Non-Intrusive Research: A Virtual Reference Librarian’s Perspective.” The Reference Librarian 85 (2004): 139-49.
Jensen argues against applying typical methods of unobtrusive reference evaluation to virtual reference services, because of both practical and ethical concerns. Practically, having pseudo-patrons ask fake questions online does not take advantage of the wealth of transcripts of virtual reference questions available to researchers. Ethically, such evaluation is “an irresponsible misuse of the time of librarians and research assistants” and can degrade the service, though, he notes, “there will always be researchers convinced that their own work somehow trumps the work and lives of the people under study.” This argument both develops and contrasts with that of Hubbertz: it extends the ethical critique of unobtrusive evaluation and applies it to virtual reference, but it does not consider the problems with typical unobtrusive evaluation of traditional reference services. He concludes with a call for more research on virtual reference that takes advantage of the wealth of transcripts available, shares the research findings with the objects of study, and does not attempt to deceive virtual reference librarians with pseudo-patrons and false questions.
Curiously, Jensen deems such methods acceptable for evaluating traditional reference services, as “the price that must be paid for an intimate view of the reference desk from the user’s side.” Only here do I disagree with Jensen, since I don’t believe that view is worth the price of deception and wasted time.
Arnold and Kaske give us an example of such a study based on transcripts:
Arnold, Julie, and Neal Kaske. “Evaluating the Quality of a Chat Service.” portal: Libraries and the Academy 5, no. 2 (2005): 177-193.
Arnold and Kaske establish a clear criterion by which to evaluate their chat reference service: providing correct answers. Using the categories of reference questions supplied by William Katz in his Introduction to Reference Work, the authors analyze 419 questions in 351 transcripts of chat reference transactions at the University of Maryland and provide a model for assessing the value of that service. After coding and classifying the questions, they studied what types of users (students, faculty, other campus persons, outsiders, etc.) asked which types of questions (directional, ready reference, specific search, research, policy and procedural, and holdings/do you own?) and how often those users got a correct answer. Policy and procedural questions topped the list of almost all user groups and represented 41.25% of the total, followed by “specific search (19.66 percent), holdings/do you own (15.59 percent), ready reference (14.15 percent), directional (6.24 percent), and research (3.12 percent).” “Students (41.3 percent), outsiders (25.1 percent), [and] other UM individuals (22.0 percent)” asked the bulk of the questions, and the librarians staffing the service answered the questions correctly 91.72% of the time. Different user groups tended to ask different types of questions. Since other studies of reference transactions have claimed that reference questions are correctly answered about 55% of the time, the authors conclude that future research should study this apparent discrepancy. However, in light of Hubbertz’s study the discrepancy may be less puzzling.
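As a rough illustration of the kind of transcript-based tabulation this study describes (this is my sketch, not the authors’ code, and the coded records below are invented), one might cross-tabulate question types and correct-answer rates like this:

```python
from collections import Counter, defaultdict

# Each coded chat transaction: (user group, Katz-style question type, answered correctly?)
# These records are hypothetical examples, not Arnold and Kaske's data.
coded_questions = [
    ("student", "policy and procedural", True),
    ("student", "specific search", True),
    ("outsider", "holdings/do you own", False),
    ("faculty", "ready reference", True),
    ("other UM individual", "directional", True),
]

# Share of each question type, like the study's breakdown of its 419 questions
by_type = Counter(q_type for _, q_type, _ in coded_questions)

# Correct-answer rate by user group
by_group = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, _, correct in coded_questions:
    by_group[group][0] += int(correct)
    by_group[group][1] += 1

total = len(coded_questions)
for q_type, n in by_type.most_common():
    print(f"{q_type}: {100 * n / total:.1f}% of questions")
for group, (correct, n) in by_group.items():
    print(f"{group}: {100 * correct / n:.0f}% answered correctly ({n} questions)")
```

Nothing here requires a pseudo-patron; the raw material is the transcripts the service already generates.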
Thus, it would seem that I’m certainly not the only one who believes that deception is neither necessary nor ethically tolerable for assessing chat reference. However, there’s still the reference desk. Is deception ethically tolerable there? Certainly not. But is it even necessary?
For an alternative to the deceptive model of reference desk assessment, see the following article:
Moysa, Susan. “Evaluation of Customer Service Behaviour at the Reference Desk in an Academic Library.” Feliciter 50, no. 2 (2004): 60-63.
In a concise and readable article, Moysa describes the process her library used to evaluate its librarians’ customer service behaviors. Basing its criteria upon the ALA Reference and User Services Association’s “Guidelines for Behavioral Performance of Reference and Information Services Professionals” (1996, revised in 2004), the reference department used a combination of self-assessment and observation. Moysa considers both the ethical problems of unobtrusive evaluation and the practical problem that ordinary observation affects behavior. She concludes that the literature indicates that observation over a sustained period eliminates many of the negative practical effects, and she notes that having the reference staff participate in creating the evaluation model from the beginning mitigates most of the ethical objections. Moysa describes a method of evaluation and assessment that deliberately avoids lies and deception, and for the reference desk at that, so it would seem that we both disagree with Jensen that deception is the price we pay for reference assessment.
Thus, there are other ways to assess reference. The question then becomes: how are we to improve the quality of reference? Rather than relying on (or at least in addition to) these sorts of ethically sound assessment tools, we should spend much more time thinking about the education, training, and culture of reference, and especially about the proper character required of a good reference librarian. If we have reference librarians with the proper ethos, the character appropriate to their profession (educated, intellectually curious, driven by a desire and equipped with a capacity to solve information problems, practiced in the appropriate ways to respond to various audiences, adaptable to changing circumstances), and a culture that supports them, then we won’t need such reference assessment, because good reference will take care of itself.