Libraries, Neoliberalism, and Oppression

I just read Beerbrarian’s post on libraries and neoliberalism, partly responding to this post on locating the library in institutionalized oppression by nina de jesus. I wanted to enter the discussion, but then realized I’ve already pretty much said what I have to say on the subject. I’ve addressed neoliberalism and libraries some before, particularly in a post on Libraries and the Commodification of Culture. I wanted to make that a research project a couple of years ago, but frankly after a lot of reading I found the topic too overwhelming. Nevertheless, the gist of that and other writings provides some view of where I think libraries are located in “institutionalized oppression.”

At the end of Libraries and the Enlightenment, I suggest that libraries are places “where values other than the strictly commercial survive and inspire, places people can go, physically or virtually, and emerge better people, their lives improved and through them perhaps our society improved.” The key is “values other than the strictly commercial,” because I think public and academic libraries are examples of public spaces where commercial values don’t dominate. They are public goods founded upon the values of democratic freedom and critical reason and provide a possible location within society to promote and protect anti-neoliberal values. Librarians in general are committed to open access to information and education. As Barbara Fister just wrote, they are gatekeepers who want to keep the gates open.

de jesus says that she has “seen very few people take a critical and sincere approach to analysing how the library, as institution, is actually oppressive and designed to create and perpetuate inequity.” The reason for that could be that the library, as an institution, isn’t that oppressive or designed to create and perpetuate inequity. That’s a strong and counterintuitive claim, and the burden of proof rests on de jesus. However, there have been two books arguing just that, both published in the 1970s and both still worth reading (although as you’ll see below I disagree with some of their conclusions). First is Michael Harris’ The Role of the Public Library in American Life, second is Rosemary DuMont’s Reform and Reaction: the Big City Public Library in American Life. Excerpted below are three pages from Libraries and the Enlightenment where I address Harris and DuMont and the possible counterargument to my claims that libraries are institutions philosophically founded upon Enlightenment values of freedom and reason, and are instead instruments of oppression.

From Libraries and the Enlightenment:

The taste elevation theory has also been criticized for its “elitism” and “authoritarianism.” In The Role of the Public Library in American Life, for example, Michael Harris argues that the entire democratic argument behind the founding of the Boston Public Library is flawed because of its elitist authoritarianism. By the eighteen forties, Boston had developed into a major destination for new immigrants, who in the opinion of the Standing Committee of the Boston Public Library thought “little of moral and intellectual culture.” George Ticknor believed the massive influx of immigrants could be a problem because, in Ticknor’s words, they “at no time, consisted of persons who, in general, were fitted to understand our free institutions or to be intrusted with the political power given by universal suffrage,” and thus the city needed to “assimilate their masses” and accommodate them to democratic institutions, primarily through education. Harris criticizes “Ticknor’s belief in the library’s potential as one means of restraining the ‘dangerous classes’ and inhibiting the chances of unscrupulous politicians who would lead the ignorant astray,” and claims this belief “explains his insistence that the public library be as popular in appeal as possible” (6). The most significant motivation behind the founding of the Boston Public Library and other libraries in the nineteenth century, Harris argues, was a fear that the masses would destabilize society, especially the immigrant masses unused to republican regimes. Any attempt to “Americanize” immigrants was “elitist” and “authoritarian,” a critique developed further in Rosemary DuMont’s Harris-inspired Reform and Reaction. On this reading, the desire to elevate the reading taste of the people is just a desire to control the lower orders and prevent radical social change.

I mention this revisionist history of the founding of public libraries because it calls into question my argument that such foundings were inspired by the Enlightenment goal to educate and improve the lot of everyone, rich and poor alike. For Harris and like-minded historians, such idealistic rhetoric always masks the ambitions of the powerful to control the powerless. However, one does not have to disagree with Harris’ account of George Ticknor—who did seem to be an authoritarian prig—to recognize that something as complex as the founding of a large public library could be motivated by multiple reasons, some of them perhaps contradictory. Though the 1852 “Report” goes out of its way to argue that, while good books should be supplied, no one should be forced to read them, one could still argue that even thinking some books were better than others and that people should read those books is “elitist,” etc. One question is whether such elitism and alleged authoritarianism are anti-democratic and potentially counter-Enlightenment. The revisionist critique seems to imply that to be democratic in relation to books and learning means to consider all books equally good and useful and to consider all political beliefs and values worth defending, even if they are hostile or foreign to the needs of a democratic republic.

These days we would say this is a question of the value, or perhaps even the meaning, of multiculturalism, and addressing this debate in depth is beyond our scope here. Harris and others (rightly in my opinion) would argue that the culture of the immigrants should be respected, but the question is, to what degree and in what areas? Let us assume that Ticknor and other upper-class Bostonians had a very conservative idea of what democracy should be; nevertheless, that does not show that they did not believe in democratic institutions. If we believe in the value of democratic institutions, then we must support those institutions, and what is more we must insist that everyone support those institutions publicly, regardless of their private beliefs. Groups in democracies might fervently believe in fascism, but a democratic society cannot allow them to act on those beliefs. We can have a reasonable pluralism in society, but only if everyone acknowledges the authority of the public democratic institutions. What democracies cannot allow is a mere “modus vivendi,” as the philosopher John Rawls argues, where groups abide by democratic institutions only until they can be overthrown. Carrying this argument back to Ticknor, why would he not believe that immigrants from countries without democracies would need some sort of education regarding democratic institutions? How could anyone possibly believe otherwise? Is there any difference in motivation between this belief and the practice we have in the United States of giving extensive tests on American democracy to naturalizing immigrants, tests which most natural-born Americans themselves cannot pass? While some supposedly democratic criticisms of practical educational institutions are no doubt valid, we must resist the tendency to believe that all educational efforts not derived from the group being educated are inherently undemocratic. Undemocratic groups require an education in democracy.

Harris and DuMont are quite critical of the admittedly stuffy movement in nineteenth century libraries to Americanize immigrants through education, arguing that Ticknor and others merely wanted to suppress dissent and the rising ideologies of socialism and communism. Even if Ticknor and other conservatives were motivated by a fear of, say, communist demagogues convincing the undemocratic masses to revolt, or whatever the fear was, this does not undercut the fact that they did indeed seek to educate people and to provide them with the means to educate themselves throughout their lives. That the founders of the Boston Public Library were not trying to educate revolutionaries does not take away from their accomplishment. We could just as easily interpret their actions as an early stage of progressivism. For example, Jane Addams and the settlement workers in the early twentieth century wanted to “’Americanize’ immigrants into the norms of their new society,” but they definitely improved the lives of urban immigrants (Flanagan 37). Indeed, by the standards of the anti-immigrant movements that gained control of the American government in the nineteen twenties, George Ticknor looks like a raging liberal. Citizens of a democracy must be acculturated into democratic institutions, and criticizing this necessity because the effort first arose from the conservative fear of uneducated immigrants ignores that fact. Even Harris is forced to admit the value public libraries had for everyone, including immigrants. “That the library’s services to the immigrant had definite positive values for those able to take advantage of them cannot be denied,” though he still claimed that librarians had little to do with that benefit, arguing that “these positive values were the result of the immigrant’s persistence and not the librarian’s conscious attitude” (14).
In his zeal to deny the beneficial accomplishments of anyone remotely conservative, Harris acts as if the libraries which benefitted the immigrants sprang into existence without influential citizens to found them and working librarians to run them. Regardless of whether the enlightened and democratic ideal was fully realized in practice, it is undeniable that the Trustees of the Boston Public Library wanted to found an educational institution to allow people access to useful knowledge and give them the opportunity to educate themselves for life and citizenship, and that the Boston Public Library became such an institution, whatever its flaws. It is also clear, from the founding of the Boston Public Library to the founding of libraries throughout the century, that the most important motivating reason was the link between the public library and public education. (pp. 110-14)

Smart People Doing Foolish Things

Many of you probably saw the article in Slate a couple of weeks ago arguing passionately that nobody should go to graduate school to study literature. The author’s experience is typical of most people who graduate with PhDs in literature in that she hasn’t gotten a tenure-track job. She earned her PhD in German literature in 2010, so she might someday find that elusive TT job, but it doesn’t sound like she’s planning to stick around academia working for below minimum wage as an adjunct instructor. And good for her. The week after brought this insightful analysis at Al Jazeera of the “adjunct crisis,” from another recent PhD who also can’t find a TT job. It’s much more analytical and less emotionally overwrought than the Slate article, including speculation (and that seems to be all that’s available on the subject) about why presumably intelligent and well educated people would submit themselves to adjunct conditions.

One political scientist argues that it’s “path dependence and sunk costs.” Once people have spent so much of their lives and money aiming for the TT job, it’s apparently hard to realize that you rolled the academic dice, came up craps, and should just move on. Indeed, that analogy is rather poor, because if the 6% chance of finding a TT job in literature that the Slate article estimates is correct, you have a far better chance of beating the house at craps than you do of getting that job.
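The arithmetic behind that comparison is easy to check. The standard pass-line bet at craps wins with probability 244/495, just under even odds, which dwarfs a 6% shot at a job. A quick sketch of the calculation (the 6% figure is the Slate article's estimate, not mine):

```python
from fractions import Fraction

# Number of ways two dice sum to each total from 2 through 12
ways = {t: sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == t)
        for t in range(2, 13)}

# Come-out roll: 7 or 11 wins the pass-line bet immediately
p_win = Fraction(ways[7] + ways[11], 36)

# Otherwise a point (4, 5, 6, 8, 9, or 10) is established; the bet wins
# if the point repeats before a 7 is rolled
for point in (4, 5, 6, 8, 9, 10):
    p_point = Fraction(ways[point], 36)                    # point established
    p_make = Fraction(ways[point], ways[point] + ways[7])  # point before 7
    p_win += p_point * p_make

print(p_win)         # 244/495
print(float(p_win))  # roughly 0.493, against the ~0.06 TT-job estimate
```

Roughly a 49.3% chance of winning at the table, versus 6% on the academic market: about eight times better odds of beating the house.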

The Slate author provides a psychologically devastating alternative to relying on statistics:

During graduate school, you will be broken down and reconfigured in the image of the academy. By the time you finish—if you even do—your academic self will be the culmination of your entire self, and thus you will believe, incomprehensibly, that not having a tenure-track job makes you worthless. You will believe this so strongly that when you do not land a job, it will destroy you, and nobody outside of academia will understand why.

My only criticism of the statement is that I believe she put in the second person what was obviously a personal experience. I’ve known people with PhDs and no TT jobs and none of them thought themselves worthless, regardless of whatever bitterness they might have had about the experience. Several of them were philosophers, so maybe that makes a difference.

Based on a lot of people I’ve met, it’s not that they view themselves as worthless; it’s that they view any work other than traditional professorial work as worthless, or at least beneath them. This attitude shows up occasionally in librarianship, where people with PhDs who will never get TT teaching jobs sometimes decide to “settle” for librarianship. One person told me to my face that with his PhD in philosophy he couldn’t get a decent teaching job, but since he was willing to settle for being a philosophy librarian he wanted my advice on getting one of those jobs. Talk about rhetorically challenged. I didn’t feel particularly resentful, because I have a great job and he doesn’t. I told him there really weren’t many jobs for philosophy librarians as such, and I probably should have added that with that attitude he probably wouldn’t get any available ones anyway. Tens of thousands of highly educated people with that attitude would rather work for low wages and no benefits than do anything else.

That attitude puzzles me, but then again I never had the sense of entitlement some people seem to have about graduate school. It’s that entitlement that provides me with brief moments of irritation in what is generally a sympathetic assessment of the plight of adjuncts and what their plight says about higher education, namely that it’s being priced out of the market for the vast majority of Americans while its quality is being reduced by reliance upon poorly paid contingent instructors the universities view as disposable. If there’s an economic term for something that’s increasing in price while decreasing in quality I’d use it, but I don’t know what it is, unless it’s “scam.” Or, more likely, “bubble.” Regardless, it’s hard to feel sympathy for someone so obviously intelligent and well educated who then whines and complains about how much worse her life is for pursuing that education.

It’s also difficult to understand how someone could have begun a PhD in 2005 without knowing what was going on in higher education, but that seems to have happened. It puzzles me that so many people finish humanities PhDs and only then realize they won’t get jobs, because people not getting jobs was the most obvious part of my graduate school experience. I started grad school at a top-20 English department in 1992. By 1994 two things were obvious to me: first, that I found the study of literature increasingly boring, and second, that even if I finished a PhD I almost certainly wouldn’t find a good TT job. I didn’t have William Pannapacker around to clue me in. All I had to do was look at the jobs people in my department were getting, or not getting. One year the best job someone acquired was in Arlington, Texas. There’s nothing inherently wrong with Arlington as far as I know, except that it’s hot as blazes down there in the summer and I’d left the south partly to get away from the excruciating summer heat. But when you’re on the academic job market, you don’t get to think about things like that. You go wherever you’re fortunate enough to land a job. Another person got a job teaching a 5/4 load at a regional university in a much cooler state. I wouldn’t have minded at all going to that university, but a 5/4 load? That’s brutal, especially when every class is going to have 25 or more students. No, thanks. And most of the people weren’t getting TT jobs at all.

This wasn’t some hidden conspiracy. Everyone knew about it early on in their graduate school career. Is that not the case now? Heck, my first year in grad school the department had a meeting of faculty and grad students just to talk about the problem. (Besides the general sense of malaise, the only thing I remember clearly about that meeting is that some sexagenarian associate professor hired in the 1960s complained that new assistant professors were making more than he was. He didn’t get a very sympathetic hearing.) Given that a lot of programs don’t publicly give out their placement statistics, it might be understandable that someone would start a program with a naive hope for the perfect TT job, but once you’re in a program all you have to do is look around. Are people getting jobs or not? It’s an easy question to answer, and your likely fate should be pretty clear. Someone should do a study on why so many people continue while knowing the odds are against them rather than just speculate.

It was very clear that my chances of getting a job I’d want in a place I wouldn’t mind living were almost nil. So I desultorily finished my MA work and started teaching rhetoric as an adjunct while also working halftime at the local public library as a circulation clerk. I didn’t feel bad about myself, or feel that it was somehow beneath me to have an MA and be checking out videos for $10/hour alongside people with high school educations. A job’s a job. I also didn’t resent the department I left. They let in a lot of grad students every year to teach first-year courses, many more than could ever find TT jobs. It was a bit of a racket. On the other hand, I got a lot of good teaching experience and a few years free to read a lot. I didn’t make much money, but then again I didn’t need much money. I’d never had any money anyway. And it certainly never occurred to me to be resentful of the system as such, even though it puzzled me why so many people stayed the course, finished their PhDs, and then stayed there teaching as adjuncts making the same thing I made teaching as an adjunct, all the while complaining about not getting a job.

Possibly there was no resentment because I didn’t bother finishing a PhD and didn’t “settle” on being a librarian. I just sort of stumbled into it, since Illinois’ library school seems to suck in a lot of humanities grad students looking for something to do. The years I spent teaching and studying have been highly useful for my library career, so it would be foolish to resent the fact that, while I at one point wanted to be a professor, and still think I would have made a pretty good one, academia didn’t owe me a TT job. Graduate school turned out rather well for me. I had no money when I graduated college, and neither did my parents. I was able to go to school for free, get some experience, find a wife, make some friends, and get paid $10K a year to teach four courses. It seems like a pittance, but 20 years later it’s still what a lot of adjuncts make, and they aren’t in their early twenties as I was. Because of that opportunity and the ways I’ve exploited it, I’m a first generation college student from a poor family in the south who works at an Ivy League university library. My wife, an ABD dropout from the same program, now works as a test developer for ETS. There are worse fates. Almost up until she died, my mother would ask me whether I thought grad school in English was a waste of time. My answer was always definitely not, even during the time I wasn’t quite sure what I was going to do with the rest of my life. Education is always good. You just have to know what to do with it.

So all this overheated rhetoric about how foolish it is to go to graduate school doesn’t do much for me. By smart people doing foolish things, I don’t mean that the foolish thing is to go to grad school or even earn a PhD in a field without jobs, but to feel sorry for yourself and complain about it afterward. To turn the historically rare privilege of advanced education into an excuse to complain shows a lot of arrogance but not much perspective. A couple of years ago someone was asking around for advice about her daughter going to grad school in some humanities field. My advice: if she’s interested in the subject, the school supports her with a stipend or assistantship, and she can get a degree without going into debt, go, but assume that a tenure-track teaching job is not going to happen and plan accordingly. Graduate school is only a negative experience if your expectations for where it leads differ from the well known statistical likelihood that you won’t get a TT job, and even humanities grad students should have a basic grasp of statistics. There might be social, ethical, and political issues with the increasing use of contingent adjuncts in higher education, but seeing grad school education itself as the problem is a personal issue. I never thought I’d say this, but going to grad school in English was one of the best decisions I ever made.

Signs Taken for Wonders

Reading through some of the commentary on the Mellen/Askey case, I ran across a comment on the ACRL Board of Directors’ statement of support for Askey:

I find this whole debate to be nuts. Every book is a unique product. Some are good and some are poor. The actual publisher is no indication of quality. Every book needs to be judged on its individual merits. I know of some excellent books published by EMP which have had excellent reviews in leading scholarly journals.

The person who left it obviously wanted the point more broadly known, because he left the same comment at Slaw and Annoyed Librarian. In response to a critical comment on the latter post, the person claims to be an academic who has published with Edwin Mellen, which would make his sensitivity to Askey’s criticisms and librarian support for Askey understandable.

Regardless of who this person is, we can look past the biography and examine the claim on its own merits, just as he would have us do with books. That “every book needs to be judged on its individual merits” seems so obvious as not to need defending. Just as we say one shouldn’t judge a book by its cover, we shouldn’t judge a book by its publisher, and in an ideal world we might not. Ideally, we wouldn’t take the signs of quality for the wonder of true quality.

However, to say that “the actual publisher is no indication of quality” requires some argument, because anyone who knows how academia and scholarly publishing work would be unlikely to agree with this immediately. The actual publisher might not be proof of quality, but it is certainly an indication of the quality we are likely to expect from the book, and everyone in academia, from graduate students to faculty to librarians, knows it. If you tell an academic you published a book, the first question is often, “which press?” It matters, and everyone knows it matters. At a university like mine, filled with top scholars in every field, the expectation is that they will publish with the top presses. We see evidence of this in the Leiter Reports post that inadvertently led to the now viral campaign to free Dale Askey. That post reports the results of a survey among academic philosophers as to how they would rank scholarly presses. Oxford is first by a wide margin. In the full survey, Edwin Mellen Press is last by a similarly wide margin: “34. Edwin Mellen Press loses to Oxford University Press by 407–1, loses to Peter Lang by 73–39.” In academic philosophy, there is no doubt that a book from Oxford or Cambridge would automatically get more respect than a book from Lang or Mellen.

There are numerous reasons for this expectation, perhaps not all of them fair. Over time one can see that the recognized top scholars in a field tend to publish at the top-ranked presses. Also over time, the quality of the books generally coming out of a press builds the expectation that if a book comes from OUP, it’s probably good of its kind. That could be an unfair assumption, and I can think of one recent philosophy book from OUP that has come in for some serious criticism from numerous reviewers. That book, though, was written by someone who is outstanding in his field and has published numerous high-quality works in the past, so even if it isn’t good (and I haven’t read it, so I have no opinion), people would expect it to be of high quality.

Which brings us to another sign of possible quality: the reputation of the scholar in addition to the reputation of the press. The top scholars and researchers in any field generally gravitate to the top-ranked presses and journals for their field, but they might very well publish with a less respected or even unknown publisher, and their name would still be an indicator of what to expect. What’s more, there are sometimes good reasons for scholars to do this. An argument I’ve read regarding publishers like Mellen, and that I have no reason to disbelieve, is that they might be more willing to accept work that pushes the boundaries of the discipline in ways that make mainstream scholars uncomfortable, and thus makes publication with the top publishers in the field less likely.

The reputations of presses, journals, and scholars, developed over time, are signs of quality, and it might be unfair to take them as wonders of genuine worth. That reputations are indeed developed over time is a good reason to take the signs for wonders, though, even if the signs sometimes mislead. We see the process at work very concretely with scientific journals as well, where instead of informal polls or blog posts, we have things like impact factors that are supposed to judge the relative impact of journals, judgments that librarians and researchers take seriously when deciding what to purchase, where to publish, or what counts for tenure. How often things are cited is another sign of their relative quality, and one that it makes sense to take seriously, even if “high impact” journals might occasionally publish awful articles and even if journals no one reads or cites publish the occasional gem. And researchers who publish lots of articles in high-impact journals are more likely to get tenure than those who publish in low-impact ones.

That’s the argument for why it makes sense to take signs for wonders, even if the signs are sometimes wrong. It’s not perfect, and it’s not always fair, but generally it works.

However, it doesn’t really matter if it works, because it’s what all academics do anyway. Academia fetishizes signs and takes them for wonders. We’ve seen how it works with presses and journals, but it works with everything. Consider the rankings of universities and colleges, or the academic programs within those colleges. The US News and World Report rankings are notoriously used as signs of relative quality among schools, with thousands of students applying to schools merely because of their high rank. The lower-ranked schools sometimes complain about the rankings and their flaws, and they’re right. But that’s the way it works.

The same philosopher who conducted the survey of philosophy publishers also surveys philosophers on philosophical graduate programs for the Philosophical Gourmet. If you got a PhD from one of the programs at the top of that list, you’d be more likely to get a tenure-track job at a good college or university than from programs at the bottom, or that didn’t make the list at all. Why? For one thing, when search committees are looking through huge stacks of applications, where candidates got their graduate degrees is going to be a way of weeding them out. Is that fair to the brilliant candidate from the University of Nebraska who is competing against candidates from NYU, Rutgers, Princeton, and Harvard? For that matter, is it fair that New York investment bankers would rather have graduates from Princeton than the College of New Jersey? No. But that’s the way it works, and everyone knows it.

Or consider the very existence of the PhD. The PhD is a research degree that over the decades has become a prerequisite for academic positions for which little to no research is expected, from teaching at small colleges to academic administration positions. PhDs usually aren’t required for librarian positions, but they’re often still considered a sign of some kind of quality, and candidates with them will have a leg up even if they are otherwise thoroughly mediocre. For the non-research positions, the reputation of the graduate program often doesn’t even matter. The PhD from anywhere is a sign.

So there are good reasons why we might take signs for wonders, and the practical reality is that we do in fact do this all the time in academia. For libraries in particular, there might not be anything else we can do. Tenure and search committees might be able to read all the publications of a candidate up for review, even though they might also just rely on the reputations of the publishers and journals as a sign of quality. But librarians can’t read all the books they buy, especially in larger libraries. I might firm order several hundred philosophy and religion books a year, with hundreds or even thousands more coming in on approval. Other than by direct request, there’s no way other than signs of possible quality for me to set up approval profiles or firm order books en masse. To say that presses can’t be judged on their reputations or that each book should be judged on its own merits is, from the standpoint of library collection development, naive, just as it is from the standpoint of who gets hired, promoted, and tenured.

The unpleasant truth is that the phenomenon I’ve been describing isn’t just how academia works, it’s how everything works. People want themselves and their publications to be judged on their inherent qualities, but the overwhelming amount of judgment people receive is based on external factors. Where you live, where you work, what you do, where or if you went to school, how you dress, how you talk, what kind of car you drive, and where or if you publish: the majority of people judge you by these signs regardless of what they reveal about your “true” self and its quality. Sometimes that’s the only thing they can do.

[Update: a Postscript to this post.]

Debating the Liberal Arts

I haven’t been blogging much because I’m on leave trying to finish the manuscript of my book on Libraries and the Enlightenment, but today I ran across something tangential to the book, though related to various themes of the blog.

If you keep up with academic news, as you should, you’re no doubt aware of the siege against the liberal arts from so many quarters. A liberal education–and by that I mean an education not just in the humanities, but in the sciences and mathematics as well–is seen as an extravagance, a waste of time and money that should be devoted to graduating more folks in vocational fields or majors like business, even if business majors don’t work very hard.

Last fall, I demonstrated that the humanities, at least, have been under attack for decades, so any talk of a “crisis in the humanities” was misguided. The liberal arts education has been under attack for even longer. The 1828 Yale Report on the curriculum was a vigorous and influential defense of liberal education.

What then is the appropriate object of a college? …. its object is to lay the foundation of a superior education. The ground work of a thorough education, must be broad, and deep, and solid. For a partial or superficial education, the support may be of looser materials, and more hastily laid.

The two great points to be gained in intellectual culture, are the discipline and the furniture of the mind; expanding its powers, and storing it with knowledge. The former of these is, perhaps, the more important of the two. A commanding object, therefore, in a collegiate course, should be, to call into daily and vigorous exercise the faculties of the student. Those branches of study should be prescribed, and those modes of instruction adopted, which are best calculated to teach the art of fixing the attention, directing the train of thought, analyzing a subject proposed for investigation; following, with accurate discrimination, the course of argument; balancing nicely the evidence presented to the judgment; awakening, elevating, and controlling the imagination; arranging, with skill, the treasures which memory gathers; rousing and guiding the powers of genius. All this is not to be effected by a light and hasty course of study; by reading a few books, hearing a few lectures, and spending some months at a literary institution. The habits of thinking are to be formed, by long continued and close application. The mines of science must be penetrated far below the surface, before they will disclose their treasures. If a dexterous performance of the manual operations, in many of the mechanical arts, requires an apprenticeship, with diligent attention for years; much more does the training of the powers of the mind demand vigorous, and steady, and systematic effort….

In the course of instruction in this college, it has been an object to maintain such a proportion between the different branches of literature and science, as to form in the student a proper balance of character. From the pure mathematics, he learns the art of demonstrative reasoning. In attending to the physical sciences, he becomes familiar with facts, with the process of induction, and the varieties of probable evidence. In ancient literature, he finds some of the most finished models of taste. By English reading, he learns the powers of the language in which he is to speak and write. By logic and mental philosophy, he is taught the art of thinking; by rhetoric and oratory, the art of speaking. By frequent exercise on written composition, he acquires copiousness and accuracy of expression. By extemporaneous discussion, he becomes prompt, and fluent, and animated. (7-8)

Contrast this with the purely utilitarian critics who believed only vocational training should be provided, at least at public expense. Here’s a quote from Christopher Lucas’ American Higher Education: a History:

Critics nonetheless continued with a barrage of attacks upon those colleges slow to adjust their programs. In California in 1858, the state’s superintendent of public instruction demanded to know, ‘For what useful occupation are the graduates of most of our old colleges fit?’ In Georgia the year before, a newspaper editorial criticized the professorate for its alleged intransigence in the face of social change and wondered aloud why its members deserved access to public funds. ‘We are now living in a different age, an age of practical utility,’ the paper announced, ‘one in which the State University does not, and cannot supply the demands of the state. The times require practical men, civil engineers, to take charge of public roads, railroads, mines, scientific agriculture.’ Rejecting claims that institutions of higher learning were never intended to supply the technical skills needed for the practice of any occupation whatsoever, the writer went on to argue that ‘practicality’ and ‘utility’ should become the watchwords of any public academic agency. (135)

Eventually, the question was settled with liberal arts colleges providing a more liberal education and universities providing both liberal education and vocational training, which has persisted until today, though that’s increasingly under attack as well. What it shows is that a lot of Americans have never responded too well to the higher learning. School is supposed to prepare people to enter “useful occupations,” but not to be reflective human beings or thoughtful citizens. The problem is, it’s difficult to explain the value of a liberal education to people who lack such an education. It’s like going back into the cave and explaining that the shadows on the wall aren’t real.

Things I Did and Did Not Learn in Library School

Within the past week, I’ve had both a personal visit and an email from people considering library school and most likely interested in the sort of work I do. Among other things, both asked whether the choice of school mattered, and my tentative answer was that for the most part it didn’t, because librarians are practical and generally more interested in skills, abilities, and results than academic pedigrees. I have colleagues from all ranks of library schools who somehow wound up at Princeton. It also seems to me that library school is a very short beginning to what is a lifetime of continuing education for successful librarians. One of the joys of the profession, for me, is the constant learning. Since I spent a year and a half in library school, and now am in the midst of my second semester teaching in one, the questions got me thinking about just what I did and did not learn while there. [Very quick update, after writing this, I read through the comments at this post at Agnostic, Maybe, which I hadn't clicked out of Google Reader to do before today. Teaching LIS now, and reading stuff like this, makes me realize how easy it is to forget about library school once you've graduated.]

The skills/attributes/traits/knowledge that I consider most valuable for my work would include:

  • Communication skills, in writing and in person
  • Presentation skills
  • Critical thinking skills
  • Problem solving ability
  • Intellectual curiosity
  • Autodidacticism
  • Knowledge of academic subjects/scholarly communication

None of these did I acquire in library school, though some of them I certainly could have, because library school offered the motive and opportunity to learn. Most of these are so-called soft skills that are very transferable. The intellectual curiosity has been a personality trait at least since early high school, and there is a whole series of fields I’ve achieved some small mastery of just because I like to learn, and I read and assimilate information quickly. Instead of being a trait I developed as a librarian, I suspect it’s a trait that led me to become a librarian. Academic librarianship is a field where being an enthusiastic dilettante is a positive thing. Similar intellectual traits such as critical thinking or problem solving ability were honed during years of college and graduate school. The knowledge I have about academic subjects and everything else has increased since library school, but the foundation and development were independent of library school itself.

Aside from my own intellectual passions, my most formative practical experience has been teaching writing. Hundreds of hours in the classroom developed whatever presentation skills I have. Working with students on their own writing requires critical thinking about writing, which improved my own writing significantly as well as my interpersonal skills. It’s hard to imagine how painfully shy I was when I started teaching. I don’t do much project planning as such, but I’ve planned many syllabuses, and people who have never created a syllabus might not realize how much thought and planning goes into a good one. The pressure of facing the same students every week for a semester compelled me to work hard to get better.

Considering I’m a reference librarian, it might seem that I learned reference skills in library school. Well, yes and no. The general reference course I took was dated even as I was taking it, but I didn’t know that because I hadn’t done reference yet. (I was working at a circulation desk at a public library at the time.) We spent a lot of time with DIALOG blue books learning to create queries, and the rest of the time answering factual questions with traditional print sources. I never saw DIALOG again, and in my twelve and a half years of reference work, I’ve encountered very few factual, ready reference type of questions. I don’t recall if we covered the reference interview, which in my opinion is the most important thing about reference. The most useful thing was visiting campus libraries and talking to the librarians there. It was during the meeting with the English literature librarian that I got the first inkling of what I might do in libraries.

My reference class was a mixed bag, but library school gave me the opportunity to work at the information desk in the Main Library at Illinois, where I was trained by a master reference librarian while fielding a wide range of questions at a busy desk. That’s where I learned to be a reference librarian, and it’s the contrast between my own reference class and my experience in the library that led to my later argument that phronesis, or practical wisdom, is the chief virtue of reference work. Good training is a good foundation, but only practice makes a good reference librarian. The coordinator for the graduate assistants was not only a great reference librarian, but a great trainer. What she did for training was what my own reference teacher should have been doing, which leads me to the conclusion that done well, reference courses can be a very useful preparation to begin reference work, and that I just had a weak reference class. It also led me to think carefully about my own teaching now that I teach a course I once took. When I took it, it was taught mostly as a humanities ready reference course, but I’m teaching it as more of a humanities librarianship course.

But what did I learn in library school? As I noted above, there were things I learned before library school, but that I could have learned there. I had to write various papers, give presentations, plan projects, etc., just like everyone else. The fact that I learned a lot about writing or presenting before library school doesn’t mean that others didn’t benefit. I also learned enough about cataloging to know I didn’t want to be a cataloger. I learned enough about technical services to know I didn’t want to work in other areas there. I learned enough about government documents to know I didn’t want to handle many of them. I learned enough about library management to know that it wasn’t a short term goal. I learned enough about social science reference and research to know I didn’t want to do them. And I learned enough about library buildings to realize how badly designed most of them are.

These might sound like bad lessons, but really they were negative lessons, which isn’t the same. I entered library school without much of an idea about the wide variety of things librarians actually do. I just knew I wanted to work at an academic library doing something, possibly rare books, and I explored a lot of areas to figure out what. That’s a benefit to library school that might go unnoticed amidst complaints that library schools don’t train people with the right skills to become librarians. Nobody who hasn’t worked in a library is going to leave library school able to do traditional library work well from day one. Library school is about exploring and eliminating possibilities, not advanced training in one particular area. It gives you a short introduction to a lot of different areas, but only practice in those areas makes one good. In the meantime, I learned a lot about how libraries work, even in areas I didn’t want to work in.

This is also related to the oft heard complaint that library school is boring. Parts of library school are boring, but different people find different parts boring, so it’s hard to generalize. I found cataloging boring, and liked working with students at the reference desk. Some of my friends thought working with students tedious, but loved cataloging. This was similar to my English graduate school experience, though. I found reading Charles Dickens incredibly boring. My wife loved it, which is why the seminar we were both taking when we met evoked such different responses from us. Much of the work at the master’s level and even above in any field will be boring, because you’re still exploring to see what you like. I briefly flirted with a PhD program in LIS this year, but the thought of too many required courses in areas I wasn’t interested in deterred me. Most likely, library school students are going to take some courses they end up disliking, and others they end up liking, which makes it just like any other type of schooling. The trick is not to let your schooling get in the way of your education.

The germ of my information technology education started in library school, and was the most radical change I had during that time. I went in uninterested and came out enthusiastic. I’ve had computers since 1985, but mostly used the word processor. I was never an early adopter of technology, because I was too busy reading and the technology was clunky. It was around the time I started library school that online content started to become much more robust and there were more reasons for me to go online. Google started the year I started library school, which made things easier. It was in school that I began my active learning of new information technologies, which has become ever more important. For that matter, had I gotten one of the other jobs I interviewed for in the summer of 2001, I would have ended up a digital services librarian instead of a humanities selector.

The practical training isn’t the only thing important in library school. Library school is not just about training for a job, but entering a profession, which is why so much time is spent on theoretical discussions and readings that some people want to dismiss as impractical. It’s this part of school that I didn’t think much about then, but which led over time to my current interest in the history and philosophy of libraries. I remember fellow students, especially those working in libraries, complaining about the theory, but I’ve come to believe that it’s only with a philosophical theory about why libraries exist and the purposes they serve that you can make a case for what libraries should be doing as the world around us changes.

For others, the experience might have been different, which is why I also suggest to any prospective librarians that they talk to many others. My advice, based on my relatively limited experience and the experiences I’ve learned of from reading and talking to numerous librarians, is that the choice of school doesn’t matter much, that there are a lot of transferable skills that can be useful as a librarian, that academic knowledge acquired elsewhere is always helpful, and that library school is more about exploring options and understanding librarianship as a whole than advanced training for a particular job. Both during and after library school, creating or exploiting opportunities to learn and develop is the most important thing.

Notes on Truth and Librarianship

In a blog post at Sense and Reference, Lane Wilkinson asks whether misinformation is information, and proposes a project over the next few weeks that shows “how and why a realist approach to truth and information is the only way to meet” Standard Three of the ACRL Information Literacy Competency Standards for Higher Education. I look forward to following the progress of the argument. If, like me, your recall of the Information Literacy Standards is fuzzy, I should remind you that according to Standard Three, “The information literate student evaluates information and its sources critically and incorporates selected information into his or her knowledge base and value system.” According to Wilkinson, though the ACRL Information Literacy Standards don’t mention truth, Standard Three requires an account of truth. (One might add that the Information Literacy Standards require a missing account of information as well.) Librarians sometimes have the oddest beliefs about truth, as Wilkinson shows in this excellent pair of posts on Wikipedia and truth.

The post also references an article on truth in librarianship that Wilkinson finds less than compelling, to put it mildly: “The Philosophical Problem of Truth in Librarianship,” by Labaree and Scimeca. He promises to dissect it for fact-value conflations and anti-realisms, which I also look forward to. In that article, the authors evaluate three traditional theories of truth–the correspondence, coherence, and pragmatic theories–and conclude that since none of them are adequate for a conception of libraries as a collection of the historical record, they must introduce a supposedly new theory of truth, the “historicist” theory, inspired by the historicism of Herder. I’m assuming it’s this new theory, or perhaps the belief that this is a theory of truth at all, that Wilkinson finds ridiculous, which makes sense when we see that the historicist theory of truth is merely the suspension of belief in truth, supposedly because a belief in truth might cause us to eliminate parts of the historical record that we consider untrue. From the article:

Our suspension of truth value does not arrive at epistemological certainty about the propositions contained in the many volumes housed in a library but rather at certainty that the historical record has not been compromised by the elimination of any of these volumes. In other words, librarians must suspend the truth value of singular items and artifacts in the historical record in order that the whole truth of any given period of history be accurately analyzed and understood. As Herder states: “If history in its simplest sense were nothing but a description of an occurrence, of a production, then the first requirement is that the description be whole, exhaust the subject, show it to us from all sides”…. Totalitarianism is the opposite of what Herder intended in his philosophical reflections on the history of mankind. Only in a free and open society could Herder’s historicism become possible for scholars to use.

One might be tempted to read this as blatant, though well intentioned, nonsense. One should not resist that temptation. This “theory” of truth is not only incompatible with the ACRL Information Literacy Standards (no great sin there), but with any intellectual standards at all. It asserts that for librarians to do their job well, they must cease to believe in the truth or falsity of anything in their collection. The published results of a falsifiable and replicable astronomical experiment have the same truth value as a Renaissance book of astrology, or rather, if we believe that one is in fact truer than the other then we can’t responsibly build library collections. The problem is that the authors of this paper don’t provide much of an argument for our suspension of belief.

As I said, this is well intentioned. Their claim is that if we believe that X book is true and Y book is false, then we might be tempted not to collect Y, or not to keep it, which would in essence be to destroy it for future generations to study, just as, for example, medieval scribes would scrape classical texts from vellum to give themselves a clean surface to make another copy of the Bible, because the Bible was true and valuable, while Cicero or Aristotle were not. Or like the legend that Caliph Umar destroyed the Library at Alexandria, because if the books agreed with the Koran they were unnecessary, and if they disagreed they were heretical. Thus, it is only by suspending our belief in truth of individual items in the library collection that we escape the desire to destroy falsehoods.

This assumes that such a cavalier attitude to library collections was motivated by a theory of truth as such, which isn’t the case. Totalitarians don’t burn books simply because they believe those books are false. They burn books because they are motivated by ideologies that require the destruction of any alternative points of view. They don’t burn outdated works of science that have been superseded by more modern studies; they burn books containing worldviews antithetical to their own. Medieval scribes scraped classical works from their vellum not just because they believed them to be false, but because they believed them to be unimportant, the way we throw away takeout menus when maybe we should be collecting them.

What’s different for us isn’t that we don’t believe some works are true and others false, even in areas that lend themselves to easy dispute such as politics or religion. Religious non-believers consider the truth value of the Bible or the Koran to be nil, but in the liberal Enlightenment worldview that provides the framework for modern libraries, that consideration is unimportant. Our “historicism” doesn’t dictate that we don’t believe in truth, but that we believe we want to understand the past, and we believe the way to do so is studying as many documents as possible to come to a true understanding. We attempt to comprehensively collect the historical record in ways that previous eras didn’t, but it’s not necessarily because we have different theories of truth, it’s because we believe different things are true, which isn’t the same thing.

Modern scholars and academic librarians tend to believe that the following statement is true: “Understanding the past in as objective a way as possible is valuable for us in some way, and understanding that past requires saving all the documentary traces it leaves behind.” Totalitarians, book burners, and the like believe this statement to be false. Thus, when we build library collections, we don’t suspend our belief in truth; we just believe that untrue documents can also give us a sort of truth. It should be clear that what I’m objecting to isn’t so much the spirit of this article as its letter. I agree that building comprehensive library collections is important, and even for the same reasons, but I don’t believe it’s true that we need a new theory of truth to justify this. We don’t really need a theory of truth at all. We just need to collect.

Which brings us back to the Sense and Reference post. Wilkinson believes that Standard Three requires a theory of truth, in particular a realist theory of some kind. That sounds plausible to me, at least for parts of Standard Three. We can’t really evaluate the reliability or accuracy of information without some standard against which to judge it. Nevertheless, I wonder whether truth is really the business we’re in, even when we’re working with students and helping them evaluate sources. By inculcating standards of information literacy, are we concerned with truth? Or rather, do we get to the level where a concern with truth is appropriate?

With students, we’re often helping them to find and evaluate scholarly sources, not assessing the factual accuracy of a statement. When doing this, is truth our standard? Is truth the standard of scholarship at all, especially in the humanities? Or is it something else? Maybe I’m not putting this right. Truth might be the ultimate standard, but how far along that path would we ever go with students? Even assuming information literacy is a meaningful goal for everyone to achieve and that it requires a theory of truth, how far towards information literacy do librarians ever take students? And if we don’t take them very far, do we need a theory of truth?

Librarians are typically there for the initial stages of research, when it really is a search for information. For students in the humanities, I suggest finding a good recent scholarly book or article on the topic and chasing footnotes. “Good” would typically mean an article from a good press or journal by a reputable scholar. Would such a book or article be “true”? Almost certainly not in its entirety, because there is bound to be a similarly reputable work that will disagree with the interpretation of various facts, if not the facts themselves. If this is the case, we find ourselves in the situation that Labaree and Scimeca find themselves with true and false documents in a library. When evaluating a single scholarly source at the level we do with students, we’re not dealing with truth or falsity. We’re concerned with whether the work meets certain standards of scholarship, which are designed ultimately to discover truth, but which never guarantee the truthfulness of any given work of scholarship.

Despite recent claims that American college students don’t learn much, what “information literacy” they do learn takes place outside the library for the most part, in classrooms, dorm rooms, coffeehouses. And the part that takes place in libraries takes place without librarians. All that reading, interpreting, analyzing, synthesizing necessary for understanding and knowledge is far beyond what librarians see.

Or so one might argue. If that’s the case, if the bulk of our jobs is to build collections and give some initial guidance on search and evaluation, then it’s possible that “truth” isn’t a direct professional concern of ours, that while the ACRL Standards as a whole do require a theory of truth, the relationship of academic librarians to information literacy does not.

Or maybe not. I’m still working my way through this one.

The “Crisis” in the Humanities

As a humanities librarian and liberal humanist, I have both a professional and personal interest in the fate of the humanities, especially the professional study of the humanities. Thus, it is sometimes distressing to hear about the crisis in the humanities, especially the heated rhetoric of late. The “scenarios” from ARL threw a few sops to the humanities, but the general assumption seemed to be they would disappear from research universities within 20 years. The president of Cornell just issued a call to defend the humanities. The pages of the Chronicle of Higher Education and Inside Higher Education bring us frequent laments for the state of the humanities. Martha Nussbaum has a new book out about the humanities crisis. I haven’t read it yet, but according to this review it opens: “we are in the midst of a crisis of massive proportions and grave global significance.” The reviewer thinks she overstates the case, but she’s not alone in using such apocalyptic rhetoric. By now, most of you probably know that, despite still calling itself a university, SUNY Albany is planning to cut several of its foreign language departments, with the foreign-language classes to be replaced by talking-very-slowly classes.

Here’s one scholar on the crisis in the humanities, especially for foreign-language study: “in our days the field of modern languages is undergoing a severe crisis….There is a general crisis in the humanities, there is a particular and more acute crisis in modern foreign languages.” That sounds ominous, and given the current crisis it is prescient indeed. It’s from the introductory paragraph of an essay by Hans W. Rosenhaupt, “Modern Foreign Language Study and the Needs of Our Times,” published in the journal Monatshefte für deutschen Unterricht in 1940. And Rosenhaupt was right to be concerned, because SUNY Albany, which in 1940 was the New York State College for Teachers, would within seventy years slowly expand into a research university before beginning the gradual slide backward. Germaine Brée, writing in the Modern Language Journal, is just as concerned about this crisis. “For our literary heritage has come to seem more and more overwhelming in its mass, burdensome and without significance. We have tended to lose the sense of delight and newness all good literature gives. This, I would say, is one aspect of the crisis in the humanities.” That was in 1949.

In the South Atlantic Bulletin, you can read about the twelfth meeting of the Southern Humanities Conference: “The Crisis of the Humanities in the South” was the theme. “The participants seemed to agree that a real crisis does exist. But, as one panelist put it, the crisis is neither ‘new nor localized.’” The conference was in April 1959. Given the turmoil of the times, such as the Montgomery bus boycott in 1955 and the Little Rock Nine in 1957, I think there were bigger crises in the South to worry about, but fretting humanists often look inward in times of social unrest.

Throughout the 1960s the humanities stayed in crisis. In 1965, Penguin published the widely read book Crisis in the Humanities, edited by J.H. Plumb. That work analyzed the crisis in depth in art, philosophy, literary studies, and history. In 1964, the American Council of Learned Societies published the Report of the Commission on the Humanities. It’s a pessimistic report, in which we find that “the humanities in the age of super-science and supertechnology have an increasingly difficult struggle for existence,” and that “Today, more than ever, those concerns which nourish personality, and are at the heart of individual freedom, are being neglected in our free society. Those studies which refine the values and feed the very soul of a culture are increasingly starved of support.” I found out about this study through W. David Maxwell’s essay on “The Plight of the Humanities” (Journal of Aesthetic Education, April 1969), in which he argues that the humanities are in crisis because of a gap between their methods and their goals. In the same journal issue, Stuart A. Selby thinks the crisis results “from the fantastic specialization and fragmentation of scholarship which is incapable of presenting to the students a comprehensive enough view of the world.” It’s always something.

Unfortunately, the 1970s didn’t relieve the crisis in the humanities, either, maybe because of stagflation or Watergate or pet rocks. It was an acknowledged crisis that seemed to be spreading. In his essay “Should Religious Studies Develop a Method?” Richard E. Wentz warns that, “If religious study does not find a method appropriate to itself, it may fall victim to the crisis in the humanizing arts and to the crisis in theology” (Journal of Higher Education, June 1970). I think theology has been in crisis since the Origin of Species was published, but it seems to keep on going. According to a professional note in the October 1975 PMLA, the School of Criticism and Theory Program at Irvine was created in 1976 “in the belief that a unifying conception of the humanities and humanistic discourse can be grounded in literary theory,” and that “a major reason for the crisis in the humanities” was that this belief didn’t “flourish in our intellectual communities.” Wolfgang Iser, in “The Current Situation of Literary Theory,” posits much the same development, and says that “As a reaction to the crisis in the humanities, literary theory became increasingly dependent on the relationship between literature and society–a relationship which stood in urgent need of clarification” (New Literary History, Autumn 1979). Literary theory certainly took off in the next couple of decades, but it still didn’t fix the humanities, darn it.

In “The NEH and the Crisis in the Humanities,” Mel A. Topf tells us, “That the humanities are in trouble is no secret. Current discussion revolves around declining public support, declining enrollments as students turn away from the liberal arts to professional studies, and overproduction of Ph.D.’s.” As timely as today’s headlines! Except that was from the November 1975 issue of College English. Not everyone was convinced, though. In “Much Ado about Little? The Crisis in the Humanities,” Byrum E. Carter opens, “The humanities, if we are to trust their academic spokesmen, are in trouble. They are plagued by declining student enrollments, a surplus of PhDs, a skeptical public, a sense of uncertainty as to mission, and a decline in available money. Dire predictions are made as to their future and cries arise for assistance in meeting the ‘crisis’ that confronts humanistic scholarship” (Change, March 1978), but he doesn’t believe the situation is so dire, and predicts that the humanities will be around for a long time. It’s 32 years and counting so far.

In “Legacies of May,” Christopher I. Fynsk writes of economically driven education reform in France that is removing philosophy and the other humanities disciplines from the high place they traditionally held in the academy (MLN: Comparative Literature, Dec. 1978). He warns that “some of the social forces that have made this reform possible in France are functioning similarly in the United States to create a situation of crisis in the humanities.” Apparently nobody told him the humanities had already been in crisis for 40 years. But again, as timely as today’s headlines, as philosophy departments are threatened with closure in several universities. Ellen Ashdown opens her essay “Humanities on the Front Lines” with an acknowledgement of the tenor of the times:

The threat to the humanities in colleges is now a common theme. Worried scholars and teachers face with dismay the public demand for “accountability” and its inappropriate consequences when applied to disciplines dealing unapologetically with questions of value. Those who feel the threat most deeply have responded with eloquence and passion that the traditional arts and letters are not antagonistic to scientific and practical studies, are not dispensable, are, in fact, central to education and life. (Change, March 1979)

I could go on, and on, and on. Search JSTOR for the phrase “crisis in the humanities.” Starting with the oldest articles first, I stopped reading at record 69 out of 217. The phrase first appears in a JSTOR journal in 1922, and from 1940 on becomes a steady stream of complaints. I think this is enough evidence to suggest that there has been a sense of crisis in the humanities almost as long as there have been departments of humanities. The organization of modern universities seems timeless, but the development of departments and disciplines as we know them now is a product of the late 19th century. Not only is the sense of crisis decades old and persistent, but for the most part the causes are as well. Students are choosing professional programs over the humanities; the sciences have the most authority and get the most funding; there are too many humanities PhDs; they’re evaluated by standards appropriate to the sciences but not the humanities. Every generation of scholars wakes up afresh, looks about, and thinks the sky is falling.

The sky might indeed be falling, but if it is, it seems to be falling very slowly. It could also be that the sky is not so much falling, as readjusting itself, if that makes any sense. The story at SUNY Albany exemplifies my scenario for the future of research universities and their libraries. After World War II, college enrollments and higher education funding swelled enormously, and the humanities benefited from the largesse heaped upon the universities to pursue scientific research. I knew a professor of English who claimed the Defense Department paid off the student loans he had taken out to fund his English PhD in the 1960s. Money was flowing, enrollments were up, and every teacher’s college wanted to become a university, and every research university was molded on a model of research appropriate for scientific investigation but inappropriate for the humanities. However, that level of support was not sustainable. The New York College for Teachers became the University at Albany, and it may become the New York College for Teachers again. Or, more likely, it may shed its humanistic programs and devolve into a technical and scientific research center and undergraduate vocational training school rather than a research university as such, dedicated to creating and disseminating new knowledge in all disciplines. Such may be life. But that doesn’t mean that Cornell and Columbia and NYU will undergo similar changes. “The humanities” will survive just fine, only they’re likely to survive at a research level at considerably fewer universities. Maybe there’s only so much new knowledge that can be created in the humanities.

The unfortunate thing is that state governments seem to think that higher education isn’t sustainable, but that’s not the case. What isn’t sustainable is the current number of research universities, with thousands of humanities professors teaching light loads and doing research that requires expensive libraries. The country just doesn’t need as many PhD programs in the humanities as it has, and research universities are going to start eliminating them as state funding dries up. My worry is that entire departments will be cut instead. It would be much worse for future generations if only the elite could study foreign languages or philosophy than if the number of PhD programs and research-intensive programs were reduced. That’s going to happen at any university that demands immediate profitability from every department.

The humanities were from the beginning about creating free, well-rounded people who could think clearly and communicate at a high level. In the middle ages, what we would now associate with the humanities (the trivium–rhetoric, grammar, and logic) was part of the “School of Arts” and taught to undergraduates, who then went on to the advanced schools for master’s degrees and doctorates in theology or the professions. In Renaissance Italy, the literae humaniores–rhetoric, grammar, poetry, history, and moral philosophy–were studied by the sons of the elite so they could advance themselves in a world that required abundant knowledge, critical thought, and clear communication to succeed. Today is no different. Every university should have people teaching literature and history and philosophy to undergraduates, but not every university needs literature and history and philosophy graduate programs. Their emergence and growth were the result of historical forces unrelated to the need for the number of such programs we now have.

The sense of crisis as a lack of historical memory affects librarianship as well. My friend Kathleen Kern at the University of Illinois is working on a project related to the “serials crisis.” It seems the phrase first pops up in the library literature in the mid-70s, but she found discussions of similar issues going back much further. I’ve been doing some research related to “information overload,” and have found evidence of a “crisis” as far back as the 16th century. By definition, a crisis requires a period of normalcy by which to define itself. I argue that we don’t really have a “serials crisis” or a “crisis in the humanities,” because the state in which we find ourselves has been the normal state for decades. Humanists, like librarians, always think people are out to get them (which is true), but they also think that the situation is new (which isn’t true). If we’re always in crisis, then we’re never in crisis.

The existence of patterns like this is why I’m so skeptical about hyperbolic or apocalyptic rhetoric in general. People who say “X is the future!” with such boundless optimism usually have a very short historical memory, and they don’t realize that the majority of predictions about such and such being the future were just plain wrong, and even the most accurate ones were partially true at best. The same goes for the overly pessimistic predictions of decline. They’ve been with us at least since Plato. The humanities as a profession, like librarianship as a profession, always faces challenges, but constant challenges don’t a crisis make. They are the normal state of affairs. The appropriate action isn’t to jump for joy that we’re saved by some hot trend or panic because we’re supposedly in the midst of crisis, but to face the challenges soberly, make our case, and do the best we can to create the future we want. I find it more comforting to realize we’re not in a state of unprecedented crisis. Plus ça change, plus c’est la même chose.

The Counter-Enlightenment in Our Midst

I’ve been vacationing for a couple of weeks on a Great Lake, swimming, sailing, hitting the local tourist attractions, and reading books on the Enlightenment. On vacation I deliberately try to avoid the news (so I don’t spoil it playing tiny violins after reading sad tales like this one), but somehow I ended up reading a summary account of rabble-rousers and their roused rabble at town hall meetings about health care reform, and the contrast between that and my reading left me feeling depressed.

It was Voltaire, I think (or perhaps Diderot), who wrote that violent resistance to arguments just meant you were too stupid to form arguments. We have seen this playing out around the country, with right-wing professional idiots (leaders?) encouraging their followers to shout, disrupt proceedings, deliberately avoid debate, and all the other tactics of the stupid and inarticulate in the face of calm reason. The irony is that these leaders and their followers seem to think of themselves as "conservatives" of some kind, but it’s not at all clear what they want to conserve other than the wealth and power of private insurance companies. They certainly don’t seek the ordered liberty so beloved of some who deem themselves conservatives. I’ve long speculated that there aren’t really any conservatives in America anyway. There are only varieties of reaction against the Enlightenment ideals of the founding.

Historians of conservatism–e.g., Russell Kirk, Robert Nisbet, Jerry Muller–often trace the beginnings of conservatism in the English-speaking world to Edmund Burke and his Reflections on the Revolution in France (though Anthony Quinton goes further back to Bolingbroke, if I remember correctly). Burke himself, though, was a beacon of tolerance and reason compared to aggressive soldiers of the Counter-Enlightenment like Joseph de Maistre. A clubbable man, a friend of Adam Smith, and a supporter of the American War of Independence, Burke couldn’t have been otherwise. As the name and movement of conservatism were born and spread through Europe, conservatism made some sense. The conservatives were trying to conserve, or at least to resurrect, an older regime of authoritarian political and religious order that was actively under assault from Enlightenment values such as liberty, equality, toleration, reason, education, and individual rights against the state.

In America, such a tradition makes little sense, despite Kirk’s heroic efforts to give American reactionaries an historical tradition. America was the first country founded upon Enlightenment values. Granted, Americans themselves have rarely in the mass lived up to those values, and the history of America is to some extent the story of these enlightened values slowly prevailing over the darker forces of our nature for two hundred years. No one with eyes to see could say that America is a perfectly enlightened or tolerant country, but without a doubt the enlightened values of the founding have slowly found favor with a greater percentage of the population. Those Americans resisting the ideals of reasoned discussion and debate, toleration for the Other, individual rights, liberty, equality, and education are thus not conservatives, but reactionaries. They don’t wish to conserve or even resurrect a fallen order, but to impose darkness on the land.

To give some substance to these musings, let’s briefly examine two figures of the Enlightenment who are in stark contrast to the shouting rabble and their beloved leaders in the recent meetings: Immanuel Kant and Adam Smith.

Kant wrote a late essay called "What is Enlightenment?" that summarized some of his views. For Kant, enlightenment meant throwing off the self-imposed shackles of leaders and having the courage to use your own reason to make decisions. The motto is sapere aude, or "dare to know." Enlightened people educate themselves, use their reason, and challenge irrational authority. They are not looking to be led. The unenlightened desire to be led. They want people to tell them what to believe about important issues–about God, religion, ethics, politics. The unenlightened take on faith, for example, the literal truths of religious texts because they have been told to do so and have more faith in the word of another than in their own capacity for reason. This is not to say the unenlightened are stupid, though sometimes they are. This is merely to say they are unreasonable. Many of them wouldn’t object to this at all. Recall Tertullian’s famous defense of his Christian belief: Credo quia absurdum est–I believe because it is absurd. De Maistre and other figures of the Counter-Enlightenment were no different. For them, reason is not a primary value.

In the current debates, as in so many others in the country, we see this playing out. We see people who want to be led, who take their marching orders from radio and television entertainers like Glenn Beck or Rush Limbaugh, or from others hidden inside various advocacy groups. They don’t reason, they don’t dare to know. They certainly don’t balk at the irrational and foolish. They’re encouraged to become part of a mob, and they do it in an attempt to forestall any rational debate by any side in the discussion. I heard one woman interviewed on the radio who claimed that she opposed a public health plan because she didn’t want her health care decisions made by "some bureaucrat." Regardless of one’s position in this debate, this response–no doubt fed to her by someone leading her on–is absurd. If she has health insurance now, who does she think is making decisions about her coverage but some bureaucrat, and, what’s more, a bureaucrat with an eye on the profit margin of her insurance company rather than on her health? An enlightened person would say, oppose or defend whatever you wish, but at least have intelligent reasons for doing so.

The contrast with Adam Smith, a mainstay of the Scottish Enlightenment and one of the most misunderstood writers of modern times, is even more curious. In this country, Adam Smith has the reputation of being an absolutely laissez-faire economist, totally dedicated to the "invisible hand," opposed to government, a friend of the capitalist class, and an implied enemy of those who find themselves losers in a perfectly free market. Both right and left have this illusion of Smith. Rich financiers in the Reagan years supposedly sported ties with Adam Smith’s image, thinking he was one of their kind. Leftists are seldom any better. I once had a strange interaction with a fellow library school student, a socialist of sorts with an M.A. in history, who saw me reading The Wealth of Nations. The student refused to read Smith "because he was a capitalist," thus demonstrating his own lack of enlightenment. He’d been told all he needed to know by some professor or pundit, and relinquished faith in his own power to educate himself and make reasonable judgments based on his own knowledge.

Adam Smith was a defender of what he called the "system of natural liberty," and he did indeed describe and defend the division of labor and free trade that undeniably builds wealth in nations. However, he was not necessarily a friend of the capitalist or an opponent of government, as anyone who has ever bothered to read Smith would know. Does this quote from the Wealth of Nations surprise you?

People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices. It is impossible indeed to prevent such meetings, by any law which either could be executed, or would be consistent with liberty and justice. But though the law cannot hinder people of the same trade from sometimes assembling together, it ought to do nothing to facilitate such assemblies; much less to render them necessary.

Does this sound like a friend of the rapacious capitalist? What else are lobbyists and business interest groups but conspiracies against the public? Cabals dedicated to their own interest at the expense of the common good? Or this argument against mercantilism:

Consumption is the sole end and purpose of all production; and the interest of the producer ought to be attended to only so far as it may be necessary for promoting that of the consumer. The maxim is so perfectly self-evident that it would be absurd to attempt to prove it. But in the mercantile system the interest of the consumer is almost constantly sacrificed to that of the producer; and it seems to consider production, and not consumption, as the ultimate end and object of all industry and commerce.

How many of our laws, regulations, and subsidies are truly dedicated to protection of the individual and unorganized consumers, and how many to the protection of organized business interests, i.e., the producers? Whose interests are at stake in the current debate, and whose interests are getting the most attention in the media–the consumers of health care or the producers of it? What would Adam Smith the consumer advocate have to say about the shenanigans of the insurance industry?

Despite my commentary on the health care debate here, it’s not health care or the debate as such that interests me so much as the mob tactics associated with it. We have right-wing pundits and entertainers calling President Obama a Nazi while encouraging the sort of mob politics the Nazis themselves used to such great effect. In this case, the end of enlightenment is the rise of ochlocracy, or "rule of the mob." We’ve had people who might otherwise be intelligent and productive citizens showing up at meetings shouting so that others might not be heard. They’ve been acting like Yahoos, another creation of an eighteenth-century writer. In Gulliver’s voyage to the land of the Houyhnhnms, he encounters creatures he takes to be humans by their appearance, but after watching them he finds they’re little more than bestial savages. Watching roused rabble scream and shout affirms Jonathan Swift’s belief that humans aren’t rational animals, but only animals capable of reason.

This disturbs me as a human and as a citizen, but also professionally. American reactionaries, wherever they have power, try to defund education and any other public good. They would rather send a harmless pot-smoker to prison than a smart poor person to college. With no responsible voices on the political right speaking out against the disruptive mobs, does this mean they support the rise of ochlocracy?

There are mobs of every political stripe, as history has shown, but I’m more concerned professionally by right-wing than left-wing mobs. Left-wing mobs have a tendency to destroy commercial property (as in the WTO protests in Seattle a decade ago) or else just appropriate it (as with most left-wing revolutions). I don’t have any commercial property, and am unlikely to acquire any, so that doesn’t affect me as directly. Right-wing mobs have a tendency to attack institutions of education rather than of commerce. They don’t like book-learning, but they do like book-burning.

The Right has been working hard for a couple of decades to reduce the funding of higher education, and thus make it more difficult for the poor, or even the middle class, to afford college. This is an insidious destruction of a society of educated and thus often critical citizens. With the active encouragement of people to join mobs and shout down opponents, and the lack of right-wing opposition to demagogic voices, how big a leap is it to imagine mobs being encouraged not just to shout down politicians they don’t like, but to start burning books and such at public rallies? If the reactionary leaders don’t like reasoned debate, how long before they direct the mobs against the institutions most dedicated to reason and debate–our colleges and universities?

Does this seem far-fetched? Perhaps. On the other hand, one right-wing entertainer with millions of followers is ignorant or stupid enough to compare those who believe in equal rights for women to Nazis. It’s not like we aren’t living amidst millions of loud, ignorant bigots. I see no difference in principle between demagogues encouraging their followers to disrupt peaceful meetings and demagogues encouraging them to besiege libraries or disrupt the activities of teaching and learning at institutions of higher education. Both involve resistance to enlightenment, the denial of reason, and the embrace of dark, unruly passions.

The Usefulness of the Liberal Arts

There’s an interesting article in today’s Inside Higher Education arguing that while business people sometimes make the case for the usefulness of a liberal arts education in business, humanities professors rarely do. It contrasts the views of management guru Peter Drucker with those of English professor Stanley Fish. Drucker values the usefulness of the liberal arts (and, though the article doesn’t mention it, wrote an essay on management as a liberal art) for life and work, while Fish claims they serve no purpose and do nothing to improve people.

In a way, I can see Fish’s point. The liberal arts are so designated because they are the arts appropriate to free persons, that is, persons who do not have to work for a living and have the leisure to pursue their interests in literature or philosophy. At the very least, it gives those people something to talk about. People with jobs can always talk about their work, but people without jobs need something to occupy their time and have conversations about.

However, things have been different since the Renaissance. Rhetoric and other liberal arts began to flourish anew in the republics of Renaissance Italy because they were useful. Rhetoric is necessary to persuade, and persuasion is an important component of politics and law, as well as business. People were seeking out teachers of rhetoric and other liberal arts because they were strivers who wanted to improve themselves to get ahead, not because they were layabouts who needed to find enjoyable ways to kill time before the advent of television. This model of the liberal arts has just as much relevance today as it did then.

I suspect there are two main reasons humanities professors don’t play up the usefulness of the liberal arts for business. First, anything that smells of trade is looked down upon. We all know what shallow money-grubbers business people are. After all, we’ve been shown in numerous novels for the past two hundred years how awful they are, novels all written by people unsuccessful in the business world. Second, humanities professors rarely have much knowledge of what is necessary to succeed in the business world, because they’ve rarely spent much time outside academia. It’s rare to find a humanities professor who has worked in the business world in any but the lowest positions, for brief times, long ago. It’s hard to say whether the liberal arts are useful for some profession if you’ve never worked in that profession.

Librarianship sometimes seems like an in-between world. It’s not quite academic in the way that teaching is academic, and parts are much more administrative than most professors would like. Even in non-managerial jobs, there’s a lot of paperwork and administrivia. Whereas I value the academic in academic librarianship, there are also plenty of librarians who thrill at the parallels between libraries and businesses and look to the business literature for inspiration. Regardless, what we do is more like what might be done in a business than what most professors do.

Even with that, it’s hard sometimes to articulate the usefulness of a liberal arts education for some library jobs. Because my job is working with humanities faculty, students, and collections, it’s obvious the knowledge and acclimatization gained through such an education is useful. Rhetoric is probably the most practical, and I get the same sense from non-academic friends. Whether you’re building a case for a budget increase or trying to sell someone a widget, the ability to construct persuasive arguments is important.

I’m less sure about the immediate usefulness of having read a lot of literature or viewed a lot of art, though, because such things seem to be most useful when the literature or art provides a shared context for people and allows them to communicate more effectively because they have something in common. In a discussion with a librarian once, I said the only function of the human appendix was to serve as a memento mori, but the joke lost its point when I had to explain what a memento mori was. Because of the various backgrounds of academic librarians, I’m already careful about making certain cultural allusions in conversation or assuming the shared values a mutual liberal education might provide. Out in the world it’s even harder to make such allusions or count on a shared culture created through education.

Thus it would seem that it is the skills, and not the content, of a liberal education that are most valuable for business, which might be another reason it’s harder for academics to make arguments for the usefulness of the liberal arts. In the humanities the emphasis isn’t so much on skills as on content. It’s not, "Professor X sure is good at putting together a persuasive PowerPoint presentation," but "Professor X is a leading authority on topic Y, and she knows a lot about topic Z as well." Mastering a subject, or many subjects, is valued for its own sake, and not just because it’s good for sales. Mastering a subject is also a synecdoche for something larger. Mastery of a subject is also mastery of the self, of achieving or striving to achieve a kind of perfection, of overcoming the shallowness of popular culture and ignorant opinion and seeking to know and understand.

The article finds it surprising that business people are better at defending the liberal arts to business people than academics are, but this shouldn’t be surprising at all. Without shared values and a shared culture, communication is difficult. For better and worse, the cultures are too far apart to communicate well.

Student Expectations

An article in the New York Times this week reported on a study of student expectations that claimed they were a significant factor in grade complaints.  Students, it seems, have different expectations about what they should have to do to earn good grades. Some of the students quoted, for example, seemed to think that they should receive good grades based on their effort. One student said, "I think putting in a lot of effort should merit a high grade…. What else is there really than the effort that you put in?” Truly an illuminating comment, I’m sure you’ll agree. To most of us the answer is obvious.

The article mentioned various efforts around the country to deal with these unwarranted expectations about grades. Apparently, at Wisconsin the professors tell students they need to “read for knowledge and write with the goal of exploring ideas.” They have seminars to reinforce this idea and teach students what education is supposed to be about.

This last quote gets at a much more fundamental question about student expectations than whether they should be graded for effort. Namely, what is education for in the first place? What value does it have? What is college for? Many students value education only instrumentally. They think rightly or wrongly that a college education is a means to getting a job. Education in itself is valued only insofar as it leads to gainful employment. As a student once said to me years ago, "I’m going to be a farmer. Why do I need to take classes like this?" (The class in question was introductory rhetoric, in which the student was faring poorly.) Any response I could have given would have been lost on this particular student, because the student had such a drastically different understanding of what the purpose of college is than I did. He was going to get some practical agricultural training and maybe enough accounting skills to run the family farm. There’s nothing wrong with that, but it meant that he denigrated anything that didn’t lead to his instrumental purpose. For him, the purpose of a college "education" was to help him be a farmer.

Students like this must be truly bewildered when they enter almost any traditional college and they’re taught by people for whom knowledge is valued for itself and not for any instrumental purpose. This is true even in fields with practical applications, and not just in the liberal arts. Professors are professors because they like to learn. They are the types Aristotle was talking about when he said that man by nature desires to know. Philosophers by nature desire to know. Farmers desire to know how to run a farm. This is a huge and crucial difference. Students who seek only instrumental learning can’t even understand the love of liberal learning that motivates their teachers. Learning is valued for its own sake, and not for the sake of some practical goal.

This difference appears even more starkly in the humanities, which tend to have no instrumental value. If we study seventeenth-century Dutch trading patterns or ancient philosophy or French poetry, we don’t do so for pragmatic results. The result is understanding or knowledge, but not understanding or knowledge that we can apply to getting a job. Why study rhetoric or poetry or history or philosophy? Ultimately, the only reason can be the desire to know, and in this knowing to participate in a larger culture than we encounter in our daily lives. We may understand more about our world, we may even become more fully human in certain ways, but rarely are we going to be able to take this knowledge and go run a business.

One irony is that such a disinterested pursuit of knowledge can lead to practical results. Consider the study of philosophy. Studying philosophy developed my analytical skills in ways that other study wouldn’t have, and these skills have been useful for many things, including my job, but I wouldn’t have pursued the study and thus developed the skills were I not interested in the subject for itself. Studying history can develop in us an understanding of other people and other cultures and perhaps lead to sympathy with those unlike ourselves which might reduce tensions and increase world peace, but it doesn’t necessarily do this. This would be an unrealistic reason to read a history book.

Another irony is that the mistaken expectation of the student quoted above, who believed he should get an ‘A’ for effort, is one expectation that has little to do with learning for its own sake or the non-instrumental value of humanistic study, but is instead an expectation completely at odds with the practical world he will encounter when leaving school. Imagine a performance appraisal for any job where it would be appropriate to ask, "What is there other than the effort you put in?" One of the most realistic and practical portions of higher education is the ultimate expectation of results–just like in the real world. Whether you’re repairing an automobile or preparing a sales presentation, no one cares about the effort you put in. People care about the finished product. The one way in which higher education indisputably prepares one for the demands of the workaday world is the one this student finds the least understandable.