Description Dilemma: Processing Mathieu-Guillaume-Thérèse Villenave’s Collection on Alina d’Eldir

By Alice Griffin

Piece of paper with script: “La Sultane Ch. Mercier D’Eldir, femme auteur,” undated.

How does one describe a collection with multiple and unknown creators and a subject with an inconsistent biography?

As part of my summer Archival Fellowship for Manuscripts Division Collections, I processed a collection of papers and correspondence relating to Alina d’Eldir (?-1851), author of Méditations en prose, par une dame indienne (1828), which was described as “le premier ouvrage qui était composé en français, et publié à Paris, par une princesse de l’Hindoustan” (the first work to be written in French, and published in Paris, by an Indian princess). This collection documents d’Eldir’s advocacy for magnetism as a cure for illness and her role as the founder of the Ordre Asiatique de Morale Universelle, a religious organization.

When researching this collection, I found that biographies of d’Eldir present conflicting information and are a bit fuzzy on the details. However, there is a general consensus that she was taken from India when she was young and converted to Christianity. Sources also say she was treated like an adopted daughter by Empress Josephine, Napoleon Bonaparte’s first wife. From the mid-1820s on, she provided magnetism treatments from her residence in Paris. While several sources do report these elements of her life, I don’t present this information as fact, but rather as an illustration of what has been written about her. Very few recent works discuss her; the most recent publication I have found about d’Eldir is an article in Chambers’s Journal of Popular Literature, Science and Arts, published in 1894.

Portrait of Alina d’Eldir with some of the unknown handwriting in the collection, undated.

This is a small collection (less than 0.2 linear feet), but do not be fooled! The size of a collection does not necessarily match the complexity of description and processing efforts. Part of the challenge was sorting out who created the collection versus who its subject was. Although the description that arrived with the collection highlighted d’Eldir as its focal point, it did indicate that Mathieu-Guillaume-Thérèse Villenave (1762-1846) was the main creator and collector of the group of documents. Villenave was involved in the Ordre Asiatique de Morale Universelle, and edited a book about magnetism and Alina d’Eldir (La vérité du magnétisme, prouvée par les faits, 1829). The collection includes documents related to Villenave’s work with d’Eldir: correspondence from Alina d’Eldir (written on her behalf by her husband, and secretary, Charles Mercier) to Villenave, certificates from the Ordre Asiatique de Morale Universelle, and Villenave’s handwritten copies of testimonials describing d’Eldir’s magnetism treatments. However, other documents in the collection (other copies of testimonials, a portrait of d’Eldir) feature different handwriting. In addition, a typewritten bibliography about d’Eldir and her contemporaries includes a book published in 1912, well after the deaths of Villenave and d’Eldir. Two indications of other previous owners include a scrap of paper with the name Léon Féer (possibly Léon Féer, 1830-1902, the French linguist) and a book plate with the name Hans Fellner (possibly Hans Fellner, 1925-1996, the bookseller).

The dealer’s description also created some confusion about the creator versus the subject of the collection because of how it concentrates on d’Eldir’s biography. This makes sense, of course: d’Eldir is an influential figure and the focus of the collection, but her voice and work are presented secondhand through materials created by others.

With all of this in mind, my supervisors on the Manuscripts Processing Team and I had several discussions about whether we should include a creator at all. If we listed Villenave as the creator, would that overlook the other creators involved in the collection? And would that overshadow d’Eldir as the center of the collection? In the end, we decided to include Alina d’Eldir’s name as a subject in the controlled access headings and to list Villenave as a creator. This makes it clear that Villenave is the main creator, while also indicating that the materials are about Alina d’Eldir rather than created by her. We also decided that transparency was the best approach to describing how much (or how little) is known about the collection’s history. We provided this information, including more recent creators/collectors of the papers, via the Custodial History note. Although the information I was able to provide was somewhat limited, it should prove useful to researchers, public services, and future processing archivists.

D’Eldir being a subject, as opposed to a creator, does not mean that the collection is less valuable, but it does mean I should be mindful in my description. I didn’t want to contribute to under-describing d’Eldir, nor did I want to mischaracterize her; and I didn’t want to perpetuate the descriptions of d’Eldir that exoticize her. When describing d’Eldir in the finding aid, I decided to stick with simple biographical statements supported by materials in the collection (e.g., her position in the Ordre Asiatique de Morale Universelle and her connection to magnetism). In the interest of transparency, I used the Works Cited note to list the sources I consulted when writing Description notes.

As an archivist, it is important to provide description that is useful and as accurate as possible. At the same time, however, exhaustive, infallible description isn’t how researchers generally navigate to a collection. I think this is where keywords, subject headings, and name authorities come in. With thoughtful, accurate subject headings, I feel confident that researchers will find this collection. And those researchers may be able to reveal further details about the collection that we will want to add to the finding aid in the future. 

Ultimately, the beauty of processing is that it is iterative; my description is not set in stone, nor should it be. Future archivists can edit and expand it as researcher needs and best practices dictate.

Mathieu-Guillaume-Thérèse Villenave’s Collection on Alina d’Eldir, circa 1829-1950s, is currently discoverable via the Princeton University Library Finding Aids site, and open for research. For more information on how to visit and conduct research at Princeton’s Department of Rare Books and Special Collections, please consult the Visit Us page on our website.

Letter to Villenave from Charles Mercier d’Eldir on behalf of Alina d’Eldir, 1839.

Meet the 2019 Manuscripts Division Archival Summer Fellow

Alice Griffin

Name: Alice Griffin

Educational Background: I recently graduated from Pratt Institute in New York City with a Master’s degree in Library and Information Science, with a personal focus in archives and information technology. For my undergraduate degree, I studied anthropology and French at Barnard College in New York City.

Previous Experience: During college, I worked in the Barnard College Archives and Special Collections where I developed an intense appreciation of archival collections, archival work, and archivists themselves! For two summers during college I interned in the Archives and Modern Manuscripts Division at the National Library of Medicine.

After college, I taught English in French public schools, then returned to the U.S. and interned at the National Anthropological Archives, working in the reference room and on processing projects. In late 2016, I began working as the Metadata/Digitization Assistant at the Archives of La MaMa Experimental Theatre Club, where I continued working through graduate school.

Why I like Archives/Professional Interests: I like archives because it’s a field where continual learning and critical inquiry are encouraged. I also enjoy how archival work facilitates new research and scholarship. Generally, I am interested in how archival description can best facilitate access for different communities of users.

Other interests: I love live music in small venues and have a soft spot for good bar trivia.

Looking forward to working on the following project(s) while at Princeton: I’m looking forward to the many projects I’ll be working on this summer! I’ll be starting with processing the Peter Bunnell Papers; then, I’ll be working with some born-digital materials and French language collections. Also looking forward to responding to remote reference with public services.

Now Accepting Applications for the 2019 Archival Fellowship for Manuscripts Division Collections

Princeton University Library’s Department of Rare Books and Special Collections (RBSC) is excited to offer the Archival Fellowship for Manuscripts Division Collections again this year. The fellowship provides a summer of paid work experience for a current or recent graduate student interested in pursuing an archival career.

Fellowship Description: The 2019 Fellow will primarily gain experience in technical services, with a focus this year on arrangement and description of manuscript collections, including hybrid collections with born-digital and audiovisual materials. Additional projects may include assisting with reference and other public services tasks. The Fellow will work under the guidance of the team of processing staff responsible for collections within RBSC’s Manuscripts Division, including the Lead Processing Archivist, Project Archivist for Americana Manuscripts Collections, Processing Archivist for General Collections, and the Latin American Processing Archivist.

The Manuscripts Division of Rare Books and Special Collections is located in Firestone Library, Princeton University’s main library, and holds over 14,000 linear feet of materials covering five thousand years of recorded history and all parts of the world, with collecting strengths in Western Europe, the Near East, the United States, and Latin America. The Fellow will primarily work with the Division’s expansive literary collections, the papers of former Princeton faculty, and collections relating to the history of the United States during the 18th and 19th centuries.

The ten- to twelve-week fellowship program, which can begin as early as May, provides a stipend of $950 per week. In addition, travel, registration, and hotel costs to the Society of American Archivists’ annual meeting in August will be covered by Princeton.

Requirements: This fellowship is open to current graduate students or recent graduates (within one year of graduation). Applicants must have successfully completed at least twelve graduate semester hours (or the equivalent) applied toward an advanced degree in archives, library or information management, literature, American history/studies, or other humanities discipline, public history, or museum studies; a demonstrated interest in the archival profession; good organizational and communication skills; and the ability to manage multiple projects. At least twelve undergraduate semester hours (or the equivalent) in a humanities discipline and/or foreign language skills (particularly Spanish-language reading skills) are preferred.

The Library strongly encourages candidates from under-represented communities to apply.

To apply: Applicants should submit a cover letter, resume, and two letters of recommendation addressed to the processing team. Applications must be received by Monday, March 4, 2019. Video interviews will be conducted with the top candidates, and the successful candidate will be notified by April 5th.

Please note: University housing will not be available to the successful candidate. Interested applicants should consider their housing options carefully and may wish to consult the online campus bulletin board for more information on this topic.


Meet the 2018 Summer Archival Fellow

Under the supervision of the processing team for Manuscripts Division collections, the summer fellow will be assisting staff with various projects, particularly processing projects that will include working with paper-based, born-digital, and audiovisual content.

Name: Sara Rogers

Educational background:  I just graduated from The University of Texas at Austin, where I obtained my Master’s degree in Information Studies at the School of Information. As an undergraduate I studied History and English at the University of Denver.

Previous experience:  After graduating from college, I worked for several years in the Records department of a financial institution in Denver, Colorado. While I enjoyed the kind of work I was doing, I knew I really wanted to work with special collections and materials that could be shared with the public.

In Austin, I had the opportunity to work for the Briscoe Center for American History as the Archives Intern and as a Graduate Research Assistant at the Alexander Architectural Archives. I also worked on a digitization project for a production company and an audiovisual project for South by Southwest.

Why I like archives/Professional interests:  I feel lucky to have been exposed to archives early on. When I first started college I immediately went to the main library on campus to fulfill my lifelong dream of being paid to read books all day. Instead, the hiring manager heard I was planning on studying history and wisely assigned me to work in Special Collections and Archives.

Other interests:  I love traveling! I grew up an Army Brat, so I’ve been fortunate enough to have had some amazing opportunities to live and study abroad. However, despite being fairly well traveled, this is my first time spending a significant amount of time on the East Coast. So I’m excited to spend my weekends exploring the area and visiting nearby cities. If anyone has any travel tips/suggestions let me know!

Looking forward to working on the following project(s) while at Princeton: This summer I will be processing legacy collections, testing and documenting born-digital workflows, learning how to use the 3D printer, working with public services and more! While I have spent the past year working with born-digital materials and creating documentation for digital preservation, I am excited to have the opportunity to process paper collections again and to work with public services.

Job Posting: Processing Archivist for Latin American Manuscripts Collections

Processing Archivist for Latin American Manuscripts Collections

Department: Rare Books and Special Collections
Requisition #: D-18-LIB-00024

Position Summary

Princeton University Library seeks an energetic, collaborative, forward-thinking archival description professional to create, manage, and enhance data for Manuscripts Division collections. Personal papers of Latin American literary, cultural, and political figures will constitute the majority of the position’s workload, though work in other collection areas may be assigned. Primary duties include processing and cataloging new acquisitions along with revising legacy finding aid data and catalog records as required by current practices and user needs. Management of audio-visual resources and digital assets is included in the position’s responsibilities. This position will supervise student workers. Archivists participate in committee work relating to policies, workflow, and system development and may contribute to digital humanities projects.

This position is available immediately.  Applications received within 1 month of posting are guaranteed consideration.

Princeton is especially interested in qualified candidates who can contribute, through their commitment to the Library’s mission and vision, to the diversity and excellence of our academic community.

Essential Qualifications

  • Master’s degree from an ALA-accredited program, or equivalent combination of other advanced degree and professional-level experience in a research library or archival setting.
  • Fluent reading knowledge of Spanish, either in connection with modern or contemporary literature, or demonstrated application of the language in a library, archives, or other research setting.
  • Hands-on manuscripts processing experience with collections varying in size and scope.
  • Familiarity with current developments in processing procedures.
  • Application of standards for manuscript and archival description such as DACS, EAD, and MARC, and facility with managing the resulting descriptive data.
  • Ability to work both independently and collaboratively in a team setting.
  • Excellent communication and interpersonal skills.
  • Ability to work effectively in a dynamic environment and with a diverse group of staff and patrons.

Preferred Qualifications

  • Experience with collection management tools such as Archivists’ Toolkit, Archon, ArchivesSpace, or similar system.
  • Knowledge of procedures for accessioning and describing born-digital materials and audiovisual media, and understanding of related preservation concerns.
  • Understanding of EAC-CPF.
  • Processing experience with handwritten materials.
  • Proficiency with XSLT, XQuery or other such computing tools relevant to the management of archival descriptive data.
  • Experience with bibliographic MARC-format cataloging using RDA, AMREMM, or AACR2.
  • Knowledge of non-English languages that are significant to the position’s scope such as French or Portuguese.

The successful candidate will be appointed to an appropriate Librarian rank depending upon qualifications and experience.

Applications will be accepted only through the AHire website and must include a resume, cover letter, and a list of three references with full contact information. This position is subject to the University’s background check policy.

Princeton University Library is one of the world’s leading research libraries. It employs a dedicated and knowledgeable staff of more than 300 professional and support staff working in a large central library, 9 specialized branches, and 3 storage facilities. The Library supports a diverse community of 5,200 undergraduates, 2,700 graduate students, 1,200 faculty members, and many visiting scholars. Its holdings include more than 10 million printed volumes, 5 million manuscripts, 2 million non-print items, and extensive collections of digital text, data, and images. More information: 

Transformed: From the Visuals database to MARC in 4 (depending on how you count them) easy (depending on what you consider easy) steps

In technical services we emphasize our ability to query, manage, and transform data. At any given moment, one or more of us will be using XSLT, XQuery, Microsoft Access, OpenRefine, Python, or some other such tool to analyze or edit data in our various systems. This post highlights one such project: moving data from the Visuals database for graphic arts to the Voyager catalog and the Blacklight discovery system.

The Visuals database was around for a long time. Its scope was ambitious: prints, drawings, photographs, paintings, sculpture, and other non-book objects in the Princeton University Library collections.  However, its chief content was records for holdings of the Graphic Arts collection in RBSC.  There was no declared descriptive standard, and field content was 100% free text.  Inevitably data problems occurred.  Only the iron discipline of Vicki Principi of RBSC brought some semblance of order to the data starting in 2004.

Visuals started out as a SPIRES database. Eventually it was migrated to SQL Server.  At migration time the inconsistent data could not be normalized as usual for relational databases, so Stan Yates of the Library’s Systems Office (now Information Technology) created a very simple and flexible three-part data structure as shown below.  Here is what one of over 750,000 Visuals database entries looked like via Microsoft Access:

idRecord   Element   Value
10         ARTIST    Cruikshank, George, 1792-1878 [etcher]

These bits and pieces of data were assembled into complete records in forms and reports for presentation. The public interface, though greatly improved in recent years by Gary Buser of Information Technology, was difficult to use for those not already familiar with the structure and data conventions of Visuals. Also, because of its unorthodox internal structure, the Visuals catalog was not connected to Aeon requesting functions. In 2014, Visuals records were added to Primo, our discovery system at the time. When Blacklight replaced Primo, Visuals records made the transition. The results were easily foreseeable: users could find and request materials in a familiar local searching environment. The transformation for Primo and Blacklight employed a local XML format called “Generic” that mimicked the structure of a MARC record, since Voyager MARC was the model the discovery systems were based on. The Visuals-to-Generic stylesheet became the basis of the one used to transform Visuals to MARC.

The decision to retire Visuals in favor of MARC was based on the specific need to facilitate the transition from the original Princeton University Digital Library (PUDL) to Digital PUL, or DPUL. DPUL has 2 “canonical” systems as data sources: Voyager (MARC) and the finding aids XML database (EAD).  PUDL data based on Visuals constituted a large block of non-MARC, non-EAD records.  Those records would not be migrated in their current custom VRA-encoded state.  The simplest and most forward-looking solution is to transform Visuals as a whole to Voyager in MARC.  Voyager/MARC is our best system for rich item-level bibliographic descriptions such as those in Visuals.  Voyager/MARC provides an easy metadata path for future digitization projects.  At the same time the shift to Voyager eliminates the need to maintain an isolated standalone system and provides a more functional and sustainable environment for cataloging and data management in RBSC.

The first step in converting Visuals to MARC was to output data in a workable format, by extracting it to XML via the Access front end. Queries produced 10 files based on the last digit of the Visuals record numbers.  (Division into multiple files was done simply to provide files of manageable size.)  Each resulting XML file was over 9 MB.  The structure was the very opposite of complex: a <dataroot> element wrapping tens of thousands of elements like this from the file of records with ID numbers ending in 0:

<idRecord>10</idRecord>
<Element>ARTIST</Element>
<Value>Cruikshank, George, 1792-1878 [etcher]</Value>
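The split by last digit can be sketched in Python (a toy illustration with made-up record IDs, not the actual Access queries):

```python
from collections import defaultdict

# Hypothetical Visuals record IDs standing in for the real extract
record_ids = [10, 11, 204, 1987, 350, 62, 73]

# Partition IDs by their last digit, mirroring the division of the
# extract into 10 XML files of manageable size
files = defaultdict(list)
for rid in record_ids:
    files[rid % 10].append(rid)

print(sorted(files[0]))  # the "ending in 0" file holds record 10
```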

The XSL stylesheet begins the transformation by gathering “records” together in variables (by grouping the various Visuals elements that have idRecord in common), and then processes each resulting variable as a unit to produce a MARC XML record. For instance, the Cruikshank “Value” in the database illustration above, a single line of text, turns into the following MARC field 100 with three subfields, as part of bibliographic record number 10.  The field even gets added punctuation per MARC conventions.  If there are additional ARTIST elements, they turn into 700 fields.

<marc:controlfield tag="001">10</marc:controlfield>

<marc:datafield ind1="1" ind2=" " tag="100">
<marc:subfield code="a">Cruikshank, George,</marc:subfield>
<marc:subfield code="d">1792-1878,</marc:subfield>
<marc:subfield code="e">etcher.</marc:subfield>
</marc:datafield>
Of necessity, given the structure and content of Visuals, MARC leader and 008 values are chiefly arbitrary, with the exception of 008/07-10 (Date 1), which in many cases could be parsed out of the Visuals DATE field. Other infelicities exist where Visuals data varied from the major patterns that could be coded in XSL, though valuable pointers from Nikitas Tampakis of Information Technology and Joyce Bell of Cataloging and Metadata Services brought the encoding much closer to standard MARC.  Joyce’s thoroughgoing review prompted many stylesheet changes to make the records similar to current ones created according to RDA.  A great number of the exceptions are being dealt with in bulk during post-processing, now that the records have been loaded into Voyager.
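The Date 1 parsing might look something like this in Python; the regex and the blank fallback are assumptions for illustration, not the stylesheet’s actual rules:

```python
import re

def date1_from_visuals(date_field):
    """Pull a four-digit year out of a free-text Visuals DATE value for
    MARC 008 positions 07-10; fall back to blanks when none is found."""
    match = re.search(r"\b(\d{4})\b", date_field)
    return match.group(1) if match else "    "

print(date1_from_visuals("ca. 1832"))        # a parseable free-text date
print(repr(date1_from_visuals("[n.d.]")))    # no date: four blanks
```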

Transformation of the Visuals XML output to MARC XML was a matter of minutes. After validation against the MARC XML format, the files were ready for conversion to MARC21 via Terry Reese’s MARCEdit conversion utility.  This remarkable tool took only a few seconds to produce records that could easily be handed off for loading to Voyager.

In MARC 21 the field looks like this, as one of many fields in the record with 001 “10” and 003 “Visuals”:

100  1 ‡a Cruikshank, George, ‡d 1792-1878, ‡e etcher.

As part of a MARC record it can be validated, indexed, displayed, and communicated in this form.

The MARC files were then bulk-loaded into Voyager by Kambiz Eslami of Information Technology and became available to users of Blacklight. Anyone can see Visuals record #10 in the catalog, identified by its new Voyager bibliographic record ID and with the original Visuals record ID now in an 035 field.  If we so choose, the Voyager records can be exported to OCLC for inclusion in WorldCat.

Finished! (Except for the post-processing clean-up.)


The migration from Visuals to MARC is attended by a degree of irony. MARC is on the way out, according to many observers.  What will take its place?  BIBFRAME, or some other data format based on RDF (Resource Description Framework).  What is the key structural feature of RDF?  The “triple”: a combination of subject, predicate, and object, with the predicate describing the connection between the subject and the object.  Seen any triples lately?  Why, yes: Visuals!  Let’s apply RDF terms from Dublin Core (a simpler alternative to BIBFRAME) to our three-part Visuals statement, with a little help from VIAF, the Virtual International Authority File.

This Visuals “triple”:

<idRecord>10</idRecord>
<Element>ARTIST</Element>
<Value>Cruikshank, George, 1792-1878 [etcher]</Value>

turns into this RDF triple in Dublin Core:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dcterms="http://purl.org/dc/terms/">
<rdf:Description rdf:about="">
<dcterms:creator rdf:resource=""/>
</rdf:Description>
</rdf:RDF>

Or, as a human would read it: The work described in Princeton Visuals record #10 has creator Cruikshank, George, 1792-1878.  “What’s a creator?” you might ask.  The namespace prefix “dcterms” tells you that you can find out at the address indicated.  (Cruikshank is labelled as an “etcher” in Visuals and in MARC, but Dublin Core does not go into specific function terms like that.  Cruikshank is in the MARC 100 field in this record.  The closest Dublin Core term representing the 100 “Main Entry” field is “creator” and that will serve to get users to the resource description.)  In our Dublin Core statement, everything’s a URI, and life is sweet.
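To make the parallel concrete, here is a minimal Python sketch reading a Visuals row as a subject-predicate-object triple; the Element-to-predicate mapping and the subject identifier scheme are assumptions for demonstration only:

```python
# Assumed mapping from Visuals Element names to RDF predicates
ELEMENT_TO_PREDICATE = {"ARTIST": "dcterms:creator"}

def visuals_to_triple(id_record, element, value):
    """Read a Visuals (idRecord, Element, Value) row as an RDF-style triple."""
    subject = f"visuals:record/{id_record}"   # hypothetical identifier scheme
    predicate = ELEMENT_TO_PREDICATE[element]
    return (subject, predicate, value)

print(visuals_to_triple(10, "ARTIST", "Cruikshank, George, 1792-1878 [etcher]"))
```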

So, just in this brief note the same functional text has been shown encoded in 5 different ways: Visuals database format, XML extract from the database, MARC XML, MARC21, and RDF. Such multiple-identity situations are familiar to us.  Many of the technical services staff are adept at understanding, manipulating, and when necessary actually inventing data encoding schemes and at moving data from one to another (and adapting the data as required to the new encoding environment).  These skills have grown to be just as significant in our work as creating the data (metadata) in the first place.  As the examples show, the significant content lives on no matter what form of coding is wrapped around it, and whether it is represented by text or by a URI.  MARC is by no means the end of the road for Visuals or anything else.  Conversion of MARC fields to RDF triples would mean a round trip for Visuals data.  Was Visuals futuristic, structurally speaking?  It’s a question for the historians.  For now, we are going to move ahead one step at a time.

Manuscripts Division Offers 2018 Archival Fellowship

The Manuscripts Division, a unit of Princeton University Library’s Department of Rare Books and Special Collections, is proud to offer the 2018 Manuscripts Division Archival Fellowship. This fellowship provides a summer of paid work experience for a current or recent graduate student interested in pursuing an archival career. For more information about the Manuscripts Division visit:

Fellowship Description: The 2018 Fellow will gain experience in technical services, with a focus this year on arrangement and description of manuscript collections, including hybrid collections with born-digital and audiovisual materials. Additional projects may include assisting with reference and imaging services work. The Fellow will work primarily under the guidance of the Manuscripts Division processing team, which includes the Lead Processing Archivist and Project Archivist for Americana Manuscript Collections.

The Manuscripts Division of Rare Books and Special Collections is located in Firestone Library, Princeton University’s main library, and holds over 14,000 linear feet of materials covering five thousand years of recorded history and all parts of the world, with collecting strengths in Western Europe, the Near East, the United States, and Latin America. The Fellow will primarily work with the Division’s expansive literary collections, the papers of former Princeton faculty, and collections relating to the history of the United States during the 18th and 19th centuries.

The ten- to twelve-week fellowship program, which may be started as early as May, provides a stipend of $950 per week. In addition, travel, registration, and hotel costs to the Society of American Archivists’ annual meeting in August will be covered by Princeton.

Requirements: This fellowship is open to current graduate students or recent graduates (within one year of graduation). Applicants must have successfully completed at least twelve graduate semester hours (or the equivalent) applied toward an advanced degree in archives, library or information management, literature, American history/studies, or other humanities discipline, public history, or museum studies, and must have a demonstrated interest in the archival profession and good organizational and communication skills. At least twelve undergraduate semester hours (or the equivalent) in a humanities discipline and/or foreign language skills are preferred.

The Library strongly encourages candidates from under-represented communities to apply.

To apply: Applicants should submit a cover letter, resume, and two letters of recommendation. Applications must be received by Monday, March 12, 2018. Video interviews will be conducted with the top candidates, and the successful candidate will be notified by April 20th.

Please note: University housing will not be available to the successful candidate. Interested applicants should consider their housing options carefully and may wish to consult the online campus bulletin board for more information on this topic.


Digital Archives Workstation Update: KryoFlux, FRED, and BitCurator Walk into a Bar…

The Manuscripts Division processing team’s new digital archives workstation.

Over the past year and a half, the Manuscripts Division processing team has made two significant additions to our digital archives workstation. The first, mentioned briefly in our July 2016 post, was a KryoFlux forensic floppy controller, which allows archivists to create disk images from a variety of obsolete floppy disk formats. The second, more recent addition was a forensic computer workstation called the Forensic Recovery of Evidence Device (FRED), which arrived this May (1). We now use the FRED in a native BitCurator environment as our primary workstation, along with the KryoFlux and a growing collection of external drives. While this streamlined setup has vastly improved our born-digital processing workflow, we would be lying if we didn’t admit that getting to this point has involved back-and-forth discussions with IT colleagues, FRED troubleshooting, and a fair share of headaches along the way. This post will describe how we got these new tools, FRED and KryoFlux, to work together in BitCurator.

Before we had the FRED, we operated the KryoFlux from the Windows partition of our digital processing laptop (which is partitioned to dual boot BitCurator/Ubuntu Linux and Windows 7). To get to this point, however, we had to jump over some hurdles, including confusion over the orientation of drives on our data cable, which differed from the diagram in the official KryoFlux manual; finicky USB ports on our laptop; and the fact that the laptop didn’t seem to remember that it had the KryoFlux drivers installed between uses (2). While operating the KryoFlux this way meant some extra finagling with each use, it nonetheless allowed us to image floppies we couldn’t image with our previous tools.

In addition to hardware components such as the controller board, floppy drive, and associated cables, the KryoFlux “package” also includes a piece of software called DiskTool Console (DTC), which can be run directly in the terminal as a command-line tool or through a more human-friendly graphical user interface (GUI). The KryoFlux software is compatible with Windows, Mac, and Linux; however, we initially went with a Windows install after hearing a few horror stories about failed attempts to use the KryoFlux with Linux. Though operational, this set-up quickly became unsustainable due to the laptop’s tendency to crash when we switched over from disk imaging in Windows to complete later processing steps in BitCurator. Whenever this happened, we had to completely reinstall the BitCurator partition and start from scratch, sometimes losing our working files in the process. Compounding this problem was our quickly dwindling hard drive space. To sidestep this mess, we needed to install the KryoFlux on the FRED. Since we planned to have the FRED run the BitCurator environment as its only operating system to avoid any future partitioning issues, this meant we would have to attempt the dreaded Linux install.
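For readers who haven’t seen DTC in action, a typical terminal invocation looks something like the sketch below. The file names and disk geometry are purely illustrative (not from our actual workflow); the `-f` flag names an output file and the `-i` flag that follows it selects the image type.

```shell
# Illustrative DTC command (file names are hypothetical):
# capture raw KryoFlux stream files (-i0) for preservation AND
# an MFM sector image (-i4, e.g. for PC-formatted floppies) in one pass.
dtc -fstreams/disk01/disk01_ -i0 -fdisk01.img -i4
```

Capturing the stream files alongside a sector image is a common archival practice, since the streams can be reprocessed later into other image formats without re-reading the physical disk.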

Our feelings about Linux before the Archivist’s Guide to KryoFlux.

Luckily, the arrival of our FRED in May 2017 coincided with the advent of the Archivist’s Guide to KryoFlux. Although the KryoFlux is gaining popularity with archivists, it was originally marketed towards tech-savvy computer enthusiasts and gamers with a predilection for vintage video games. The documentation that came with it was, to put it nicely, lacking. That’s where an awesome group of archivists, spearheaded by Dorothy Waugh (Emory), Shira Peltzman (UCLA), Alice Prael (Yale), Jennifer Allen (UT Austin), and Matthew Farrell (Duke) stepped in. They compiled the first draft (3) of the Archivist’s Guide to KryoFlux, a collaborative, user-friendly manual intended to address the need for clearer documentation written by archivists for archivists. Thanks to the confidence inspired by this guide, our dark days of Linux-fearing were over. We did encounter some additional hiccups on our way to a successful Linux install on the FRED, but nothing we couldn’t handle with the tips and tricks found in the guide. The following are some words of wisdom we would offer to other archivists who want to use KryoFlux in conjunction with the FRED and/or in a natively installed BitCurator environment.

First, when installing KryoFlux on a Linux machine, there are a few extra steps you need to take to ensure that the software will run smoothly. These include installing dependencies (libusb and the JDK Java Runtime Platform) and creating a udev rule that will prevent future permissions issues. If the previous sentence is meaningless to you, that’s ok because the Archivist’s Guide to KryoFlux explains exactly how to do both of these steps here.
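For the curious, a udev rule for this purpose is a one-line text file. The sketch below is illustrative: the filename is an assumption, and the vendor/product IDs shown are those commonly reported for the KryoFlux board, so verify them against `lsusb` output on your own machine before relying on them.

```shell
# /etc/udev/rules.d/80-kryoflux.rules  (filename is an assumption)
# Grant non-root users read/write access to the KryoFlux USB device
# so DTC can talk to the board without sudo.
# Check the idVendor/idProduct values with `lsusb` on your machine.
SUBSYSTEM=="usb", ATTRS{idVendor}=="03eb", ATTRS{idProduct}=="6124", MODE="0666"
```

After saving the rule, reloading udev (`sudo udevadm control --reload-rules`) or replugging the device makes it take effect.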

A second problem we ran into was that, even though we had Java installed, our computer wasn’t invoking Java correctly when we launched the KryoFlux GUI; the GUI would appear to open, but important functionality would be missing (the settings window, for example, was completely blank). A tip for bypassing this problem can be found several paragraphs into the README.linux file that comes with the KryoFlux software download; these instructions indicate that the command java -jar kryoflux_ui.jar makes Java available when running the GUI. To avoid having to run this command in the terminal every single time we use the GUI, we dropped it into a very short bash script. We keep this script on the FRED’s desktop and click on it to start up the GUI in place of a desktop icon. There are likely other solutions to this problem out there, but this is the first one that worked consistently for us.
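For anyone who wants to replicate this workaround, the wrapper script can be as short as the following sketch. The install path is an assumption; substitute wherever your copy of kryoflux_ui.jar actually lives.

```shell
#!/bin/bash
# Launch the KryoFlux GUI with Java invoked explicitly,
# per the workaround described in the README.linux file.
cd ~/kryoflux/dtc    # assumed install location of kryoflux_ui.jar
java -jar kryoflux_ui.jar
```

Marking the file executable (`chmod +x`) and placing it on the desktop lets it stand in for a proper launcher icon.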

Annotated section of the README.linux file from the KryoFlux software for Linux (which you can download from this page).

One particularity of the FRED to keep in mind when working with the KryoFlux, or any external floppy controller or drive, is the FRED’s internal Tableau write blocker (UltraBay). Since the KryoFlux employs hardware write-blocking (after you remove a certain jumper block (4)), the FRED’s internal hardware write blocker is unnecessary and will create problems when interfacing with external floppy drives. To bypass the FRED’s Tableau write blocker, make sure to plug the KryoFlux USB data cable into one of the USB ports along the very top of the FRED or those on the back, not the port in the UltraBay.

Plug the KryoFlux data cable into the USB ports that are not connected to the internal write blocker in the FRED’s UltraBay. Like so.

Technical woes aside, the best part about our new FRED/KryoFlux/BitCurator set-up is that it allows us to access data from floppy disks that were previously inaccessible due to damage or obscure formatting. Just this summer, our inaugural Manuscripts Division Archival Fellow, Kat Antonelli, used this workstation to successfully image additional disks from the Toni Morrison Papers. Kat was also able to use Dr. Gough Lui’s excellent six-part blog series on the KryoFlux to interpret some of the visualizations that the KryoFlux GUI provides. From these visualizations, she was able to glean that several of the disks that even the KryoFlux couldn’t image were most likely blank. While the Archivist’s Guide to KryoFlux provides a great way to get started with installation, disk imaging, and basic interpretation of the KryoFlux GUI’s various graphs, navigating these visualizations beyond the basics is still somewhat murky territory. As archivists continue to gain experience working with the KryoFlux, it will be interesting to see how much of this visual information proves useful for archival workflows (and of course, we’ll document what we learn as we go!).

What does it all mean? (The left panel shows us how successful our disk image was. The right panel contains more detailed information about the pattern of data on the disk.)

(1) The processing team drafted a successful proposal to purchase the FRED based on the results of a survey we conducted asking 20 peer institutions about their digital archives workstations. More to come on this project, including the survey results, in a future post!

(2) You can read more about our troubleshooting process for these issues in the “Tale of Woe” we contributed to the Archivist’s Guide to KryoFlux. More on this resource later in this post.

(3) The guide is still in draft form and open for comments until November 1, 2017. The creators encourage feedback from other practitioners!

(4) See page 3 of the official KryoFlux manual for instructions on enabling the write blocker on the KryoFlux. (3.5” floppies can also be write-blocked mechanically.)

Meet the Manuscripts Division 2017 Summer Fellow

Under the supervision of the Manuscripts Division processing team, the summer fellow will be assisting with several key projects, including “traditional” paper-based processing, processing born-digital media, inventorying AV materials, and researching access options for born-digital and digitized AV content.

Kat at Prambanan, a Hindu temple in Indonesia

Name: Kathryn Antonelli (but feel free to call me Kat!)

Educational background: I received my undergraduate education from Temple University. My degree was in Media Studies and Production, with a minor in French. I’m now about halfway through my Master’s program in Library and Information Science through the University of South Carolina’s distributed education option. This summer, I’m conducting an independent study on the ethics of archiving audiovisual materials (especially within collections of indigenous and minority cultural groups), so if you have any leads on interesting articles to read please do let me know. 🙂

Previous experience: Before finding my interest in archiving, I worked in event production at the Barnes Foundation in Philadelphia. More recently, after moving to Chicago and starting my MLIS, I’ve had the opportunity to intern at the Gerber/Hart Library, the Chicago Symphony Orchestra, the Oriental Institute, and the Newberry Library.

Why I like archives: I like archives for two reasons: the stories they tell, and the mysteries they solve. I do truly enjoy working with paper-based collections, but after my undergraduate program I became much more aware of how audiovisual media presents—or omits—information, which made those materials and the ways we can use them even more interesting to me. And, after a childhood full of Nancy Drew novels, I’ll count anything from puzzling out the (accurate!) birth date of a well-known dancer to identifying people in a photograph as a type of mystery solving.

Other interests: While baseball season is a lot of fun, and the weather is much nicer, I’m rarely sad for summer to end because it means college football is about to start. I’m an ardent Temple fan, of course, but I also watch every other game I can. My friends are always entertained by the irony, since outside of watching sports I am not a competitive person at all.

Projects this summer: I’m excited that my first task at Princeton is to process the Albert Bensoussan papers. The collection is in both French and Spanish and I love working with foreign language materials. Later this summer, I’ll be taking on more tasks with our born-digital holdings, so I’m also looking forward to learning how to use the new FRED machine to work with files in the Toni Morrison collection.

Moving Beyond the Lone Digital Archivist Model Through Collaboration and Living Documentation

Click here to view slides.

Below is the text of a presentation Elvia Arroyo-Ramirez, Kelly Bolding, and Faith Charlton gave earlier this month at the 2017 Code4Lib conference in Los Angeles, CA. The talk focused on the Manuscripts Division Team’s efforts to manage born-digital materials and the challenges of doing this work as processing archivists without “digital” in their titles. 

Hello everyone, welcome to the last session of the last day of code4lib. Thank you for sticking around.

What we want to talk about in the next 10 minutes are the numerous challenges traditional processing archivists can face when integrating digital processing into their daily archival labor. Shout out to UCSB, NCSU, and RAC for presenting on similar topics. Knowledge, skills, and institutional culture about who is responsible for the management of born-digital materials can all be barriers for those that do not have the word “digital” in their job titles.

Our talk will discuss steps the Manuscripts Division at Princeton University has taken to manage its born-digital materials through collaboration, horizontal learning, and living documentation.

But first, we’ll introduce ourselves: Hi, I am:

  • Elvia – I am the Processing Archivist for Latin American Collections
  • Kelly – I am a Manuscripts Processor
  • Faith – I am the Lead Processing Archivist for Manuscripts

We, along with two other team members, Allison Hughes and Chloe Pfendler, who both contributed to the efforts we will discuss here, form part of the Manuscripts Division in our department. And though we are all “traditional” processing archivists who do not have the word “digital” in our titles, we’ve increasingly encountered digital assets in the collections we are responsible for processing.

First we wanted to give everyone a breakdown of our department. Princeton’s archival repositories are actually physically split between two libraries with three main divisions. The Manuscripts Division (where we are located) is in Firestone Library; Public Policy and the University Archives are located several blocks away at Mudd Library. The library currently employs one dedicated Digital Archivist for the University Archives, Jarrett Drake, without whose expert guidance and skill sharing we wouldn’t be giving this presentation. Jarrett has really set the tone for horizontal learning, opening opportunities for skill building and sharing across the divisions of the department and empowering his colleagues to take on digital processing work.

With that said, the Manuscripts Division has no digital archivist, so digital processing responsibilities are distributed across the team, which initially left us feeling like [gif of Ghostbusters team at the onset of meeting a ghostbusting challenge].

To dive into this type of work we needed to take some first steps. 

We literally jumped at the chance to begin managing our digital backlog by participating in SAA’s 2015 Jump In 3 initiative, which allowed us to gain intellectual control over legacy media within the division’s 1600 or so manuscript collections. We also began updating pertinent documentation, such as our deed of gift, and drafting new guidelines for donors with born-digital materials. We also began assembling our first digital processing workstation: a dual-booting BitCurator and Windows 7 laptop connected to various external drives, along with a KryoFlux for imaging problematic floppy disks.

With Jarrett’s assistance we began processing born-digital materials using the workflows he and his predecessors had developed for University Archives. We’ve also experimented with new tools and technologies; for example, setting up and using the KryoFlux and creating bash scripts to reconcile data discrepancies. Our work continues to be an ongoing process of trial, error, and, most importantly, open discussion.
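To give a concrete sense of the kind of reconciliation script we mean, here is a minimal sketch (the file names and sample inventories are invented for illustration) that compares a transfer manifest against a listing of files actually received:

```shell
#!/bin/bash
# Hypothetical sketch: reconcile two file inventories, e.g. a donor's
# transfer manifest against a listing of the files actually received.
set -e

# Sample inventories; in practice these might come from `find` or a
# checksum tool run over the transferred directory.
printf 'disk01/a.doc\ndisk01/b.doc\ndisk02/c.doc\n' > manifest.txt
printf 'disk01/b.doc\ndisk02/c.doc\ndisk02/extra.doc\n' > received.txt

# comm requires sorted input.
sort manifest.txt -o manifest.sorted
sort received.txt -o received.sorted

# comm -23: lines only in the manifest (expected but never received)
comm -23 manifest.sorted received.sorted > missing.txt
# comm -13: lines only in the received listing (unexpected extras)
comm -13 manifest.sorted received.sorted > extra.txt

echo "missing: $(wc -l < missing.txt), unexpected: $(wc -l < extra.txt)"
```

The two output files then become a punch list for follow-up with the donor or for documenting gaps in the accession record.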

Okay, let’s talk about horizontal learning. As an increasing number of archivists in our department were gaining the skills necessary to handle digital processing, the opportunity to share our expertise and experiences across divisions materialized. The following are two examples of how we’ve built this collaborative approach.

Over the last year a group of archivists from across the department, including the Digital Archivist, came together to form DABDAC, the Description and Access to Born-Digital Archival Collections working group, as a means of maintaining an open forum for discussing born-digital description and access issues.

Members meet biweekly to discuss case studies, fails, potential new tools they are interested in experimenting with, readings, additions to workflows, etc. The workgroup follows a “workshop” model; whoever has a current description or access issue can bring it to the meeting’s agenda and ask the collective for advice.

Creating a horizontal skill-sharing environment has boosted our confidence as nascent digital archivists. Now with a baseline understanding of digital processing and the tools we need to do this type of labor, we sought the advice of our peers within the profession to help inform the development of our very own digital archives workstation. The team developed a survey asking 20 peer institutions about their local setups, which ultimately informed our decision to purchase a FRED machine. Thanks to those who responded and provided us with in-depth and extremely helpful responses.

Another key theme that has emerged from our experiences is the importance of living documentation. By this, we mean workflow documentation that is:

  • collaboratively created;
  • openly accessible and transparent;
  • extensible enough to adapt to frequent changes; and  
  • flexible enough to use across multiple divisions.

Managing living documentation like our Digital Records Processing Guide on Google Drive allows us to maintain tried-and-true guidelines vetted by the Digital Archivist, and supplemented by other archivists who work with digital materials.

We currently use the Comments feature to link from specific steps in the workflow to separate Google Docs, or other online resources that can inform decision-making or provide working alternatives to specific steps. We also write and link to documents we call “reflections.” These reflection documents detail improvised solutions to problems encountered during processing so that others can reuse them. By expanding our workflows this way, we extend the value of time dedicated to experimentation by documenting it for future repurposing.

Digital processing also presents opportunities for archivists to develop workflows collaboratively across institutions, especially since archivists often adopt digital tools developed for other fields like forensics. These tools often come poorly documented or with documentation intended for users with very different goals. One example is the KryoFlux, pictured here, a forensic floppy controller that many archivists have adopted. While our KryoFlux arrived from Germany with a few packages of gummy bears, the setup instructions were not so friendly. Luckily, we have benefitted tremendously from documentation that other repositories have generously shared online, particularly guides created by Alice Prael and others at Yale. UCLA’s Digital Archivist Shira Peltzman also recently asked us to contribute our “Tale of Woe” to a collaborative KryoFlux User Guide currently being drafted.

Before we conclude, we want to acknowledge both the particular institutional privileges that allow us to conduct this work as well as the broader structural challenges that complicate it. We are fortunate that the structure of our department affords processing archivists the time necessary to collaborate and experiment, as well as the material resources to purchase tools.

At the same time, while archivists are shifting functionally into more technical roles, institutional structures do not always acknowledge this shift. In our collective experience as an all-female team, we’ve faced challenges due to gendered divisions of labor. Even though the library and archives profession swings heavily female, technical positions in libraries still remain predominantly male. When these gender-coded realities are not acknowledged or challenged, undue and sometimes stubborn expectations can be placed on those who are expected to do the “digital” work and those who are not. For those in “traditional” processing roles with technical responsibilities that now fall within their domain, their labor can often be underappreciated or unacknowledged.

To wrap up, the realities of contemporary manuscript collections have made it clear that the lone digital archivist model no longer works for some institutions, particularly larger ones. As a team, we have met the challenge of integrating digital processing into our regular work by focusing on collaboration, horizontal learning, and living documentation. Although digital processing is new for us, we’ve been able to apply many skills we’ve already developed through prior work with metadata management, and we encourage our fellow archivists to find confidence in these skills when jumping into this work. We wanted to share with you the work we’ve done locally in hopes that our case study may empower anyone in a “traditional” processing role to take on the work that’s often been confined to that of the “digital archivist,” particularly by reaching out to others, whether they be in a different division, department, or institution.

We look forward to further collaboration with other colleagues at our home base and hope to continue building relationships and collaborating with others in the profession at large.
We leave you with a bibliography of additional resources and our contact information, and some gummy bears. Thank you.