As I’ve put this project together over the past several months, I’ve spent a great deal of time thinking about the ideal shape of a digital archive. The more time I spend with the audio reels that form the base of my archive, listening to them over and over as I digitize and transcribe them, the more I feel they deserve to be presented with as little intervention as possible. To this end, my website has evolved into two fairly separate entities. On one side is the archive, where the collection of audio reels will eventually be reproduced in full with as little narrative intervention as possible. On the other is the mediated environment that I have been referring to as the “galleries,” where I will publish short interpretive essays that situate the audio reels within their various historical contexts.
Creating an archive that is simultaneously unmediated and useful is no small task. A set of .wav files would of course offer the most “pure” digital presentation of the content, but would also prove generally useless for the purposes of research and general consumption. The Oral History Metadata Synchronizer (OHMS; http://www.oralhistoryonline.org), developed by the Louie B. Nunn Center for Oral History at the University of Kentucky Libraries, has been a great solution to this problem, as it allows for the presentation of immersive and highly accessible audio files without extensive description or exhibit manipulation. The interactive links between audio and text that OHMS provides create a searchable transcript that encourages visitors to the archive to actually listen to the audio files, rather than simply reading through transcript derivations. For an example, see: http://newsreeldetroit.matrix.msu.edu/blackstarproductions/items/show/4.
Providing this particular physical archive with a digital embodiment makes a great deal of sense. The magnetic reels in this collection are extremely fragile—many of them have in fact already experienced significant print-through or breakage—and are not currently easily accessible to researchers in their full archival context. These reels were created and preserved as a collection, and their historical value is only fully captured by an archive that makes them available as a unified body of work. Some of the reels are out of sync or printed over in ways that are easily corrected with digital audio software, but I have chosen not to take this step. Excessively cleaning this material degrades its historical context, as it erases the evidence left by twenty years spun on a storage reel. Similarly, I have chosen not to edit down inaudible portions of these audio tapes, as this again undermines their historical context as coherent physical artifacts.
More than anything, the process of building this admittedly tiny audio archive has taught me that this kind of careful archival construction is a much larger project than one person can manage alone. Significant labor is involved in digitizing the content, as well as in preparing it for exhibit online. These reels were recorded at various speeds on increasingly obsolete equipment. With the goal of assisting anyone doing similar work, I have documented my workflow below. My gallery is built in a hosted version of Omeka 2.4 (http://omeka.org/download/) with the MultimediaDisplay plugin installed (https://github.com/UCSCLibrary/MultimediaDisplay).
- Initial digitization is performed using a reel-to-reel player loaded in a MOTU cabinet. The audio itself is recorded with Audacity and then exported as 32-bit float PCM WAV files.
- My reel-to-reel player is capable of 7 1/2 IPS and 15 IPS playback only. I digitally adjust the speed of reels recorded at 3 3/4 IPS by playing them back at 7 1/2 IPS during capture and then applying the “Change Speed” effect at 50% in Audacity (a sketch of this correction appears after this list).
- For barely audible recordings, I have increased the amplification to a minimum audible level using the “Amplify” effect in Audacity. These are the extent of my digital manipulations.
- The WAV file is then converted to an MP3, with its metadata fields populated with all information written on the physical reel box (see the conversion sketch after this list).
- That MP3 is then loaded into Express Scribe, with TextWrangler running alongside the program rather than transcribing internally. (Transcription within Express Scribe is not stable, and is strongly NOT RECOMMENDED.)
- The TXT transcription file is then cleaned for OHMS and saved as a UTF-8 TXT file with CRLF line endings so that the OHMS software can read it (see the line-ending sketch after this list). If OHMS encounters problems reading the file, I run it through the Word macro provided by the Louie B. Nunn Center for Oral History.
- The MP3 file is then uploaded to the Omeka gallery where this archive is hosted, and all metadata is manually copied from the MP3 container into the Omeka interface (the tag-printing sketch after this list makes that copying easier). Before leaving Omeka, I copy the direct path to the MP3 so that I can point OHMS to the file.
- I then open the OHMS interface (https://ohms.uky.edu, subscription required) and create a new interview, populating the metadata fields with text identical to the corresponding fields in Omeka. The direct URL from Omeka is pasted into the appropriate OHMS metadata field. Importantly, the field that gives the XML file a short name must be filled in, as the OHMS plugin in Omeka requires it. I then upload the appropriate TXT file and sync the MP3 to the transcript. Eventually, I would like to add indexing to these files, but at this point indexing does not function in the OHMS plugin for Omeka; restoring that functionality will require further development by a programmer.
- Once the sync is complete, I export the XML file from OHMS. I then go back into Omeka, upload the XML file, and link it by its short name into the file structure. If everything goes well, the interactive transcript and audio file should load on the Omeka repository page!
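The 3 3/4 IPS correction can be reproduced outside Audacity as well. Here is a minimal sketch, assuming the Python soundfile library and hypothetical file names: a 3 3/4 IPS reel captured at 7 1/2 IPS contains audio running at exactly twice its original speed, so rewriting the same samples at half the capture sample rate restores both the speed and the pitch, which is what the “Change Speed” effect at 50% does.

```python
# Half-speed correction sketch (hypothetical file names).
# Playing the same samples at half the sample rate halves both
# speed and pitch, undoing playback of a 3 3/4 IPS reel at 7 1/2 IPS.
import soundfile as sf

data, rate = sf.read("reel_07_capture.wav")  # e.g. captured at 96000 Hz
sf.write("reel_07_corrected.wav", data, rate // 2, subtype="FLOAT")
```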
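For the WAV-to-MP3 step, a command-line encoder can handle the conversion and the tagging in one pass. This is a hedged sketch that calls ffmpeg (assuming it is installed) from Python; every tag value shown is a hypothetical stand-in for whatever is actually written on a given reel box.

```python
# WAV-to-MP3 conversion with embedded metadata, via ffmpeg.
import subprocess

box_metadata = {
    "title": "Reel 7, Side A",     # hypothetical example values,
    "artist": "Unknown speaker",   # transcribed from the reel box
    "date": "1972",
}

cmd = ["ffmpeg", "-i", "reel_07_corrected.wav",
       "-codec:a", "libmp3lame", "-qscale:a", "2"]  # high-quality VBR
for key, value in box_metadata.items():
    cmd += ["-metadata", f"{key}={value}"]
cmd.append("reel_07.mp3")

subprocess.run(cmd, check=True)
```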
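The line-ending conversion is also easy to script. This minimal sketch, again with hypothetical file names, reads whatever the transcription editor produced, splits on any style of line ending, and writes the transcript back out as UTF-8 with CRLF terminators.

```python
# Normalize a transcript for OHMS: UTF-8 encoding, CRLF line endings.
with open("reel_07_transcript.txt", "r", encoding="utf-8") as f:
    text = f.read()

# splitlines() handles \n, \r\n, and stray \r alike
lines = text.splitlines()

# newline="\r\n" makes Python write every "\n" as CRLF
with open("reel_07_for_ohms.txt", "w", encoding="utf-8", newline="\r\n") as f:
    f.write("\n".join(lines) + "\n")
```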
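Finally, re-typing tags into Omeka is less error-prone with the values printed in front of you. This small sketch assumes the mutagen library and a hypothetical file name; it simply dumps the MP3’s ID3 fields so they can be copied into the corresponding Omeka elements.

```python
# Print the ID3 tags from an MP3 for manual copying into Omeka.
from mutagen.easyid3 import EasyID3

tags = EasyID3("reel_07.mp3")  # behaves like a dict of tag lists
for key in sorted(tags.keys()):
    print(f"{key}: {tags[key][0]}")
```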