Imagine you wanted to find books or journal articles on a particular subject. Or find manuscripts by a particular author. Or locate serials, music or maps. You would use a library catalogue that includes facts – like title, author, publication date, subject headings and genre.
That information and more is stored in the treasure trove of library catalogues.
It is hard to overstate how important this library catalogue information is, particularly as the amount of information expands every day. With this information, scholars and librarians are able to find things in a predictable way. That’s because of the descriptive facts presented in a systematic way in catalogue records.
But what if you could also experiment with the data in those records to explore other kinds of research questions – like trends in subject matter, semantics in titles or patterns in the geographic source of works on a given topic?
Now it is possible. The Library of Congress has made 25 million digital catalogue records available for anyone to use at no charge. The free data set includes records from 1968 to 2014.
This is the largest release of digital catalogue records in history. These records are part of a data ecosystem that crosses decades and parallels the evolution of information technology.
In my research about copyright and library collections, I rely on these kinds of records for information that can help determine the copyright status of works. The data in these records are already embodied in library catalogues. What’s new is the free accessibility of this organised data set for new kinds of inquiry.
The decision reflects a fresh attitude toward shared data by the Library of Congress. It is a symbolic and practical manifestation of the library’s leadership aligned with its mission of public service.
To understand the implications of this news, it helps to know a bit about the history of library catalogue records.
Today, search engines let us easily find books we want to borrow from libraries or purchase from any number of sources. Not long ago, this would have seemed magical. Search engines use data about books – like the title, author, publisher, publication date and subject matter – to identify particular books. That descriptive information was gathered over the years in library catalogue records by librarians.
The library’s action sheds light on a critical network that is invisible to most of us as we use libraries, buy books or use search engines.
For many, the idea of a library catalogue conjures up the image of card catalogues. The descriptions contained in catalogue records are “metadata” – information about information. Early catalogue records date back to 1791, just after the French Revolution. The revolutionary government used playing cards to document property seized from the church. The idea was to make a national bibliography of library holdings confiscated during the Revolution.
For many years, library collections were organised individually. As the number of books and libraries grew, the increased complexity demanded a more consistent approach. For example, when the Library of Congress purchased Thomas Jefferson’s personal library in 1815, it arranged its collections according to Jefferson’s personal system, organised around the themes of memory, reason and imagination. (Jefferson based this on Francis Bacon’s own model.) The library continued to arrange its collections on that model into the 19th century.
As the number of books and libraries grew, a more systematic approach was needed. The Dewey Decimal System appeared in 1876 to tackle this challenge. It combined consistent numbers (“classes”) with particular topics. Each class can be further divided for more specific descriptions.
In the 1890s, the library developed the Library of Congress Classification System. It is still used today to predictably manage millions of items in libraries worldwide.
Catalogues, cards and computers
By the 1960s, systematic descriptions made the transition from analog cards to online catalogue systems a natural step. Machine-Readable Cataloging (or MARC) records were developed so that computers could read and interpret the data in bibliographic catalogue records. The structured categorisation lent itself naturally to the use of computers.
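To make that structure concrete, here is a simplified, illustrative sketch of how a MARC-style record organises bibliographic metadata. The three-digit tags (100 for main author, 245 for title, 260 for publication details) are standard MARC tags, but the Python dict below is a hypothetical stand-in for illustration, not the actual binary MARC format or any particular library’s API:

```python
# Simplified stand-in for a MARC bibliographic record.
# Real MARC records tag each field with a three-digit number:
#   100 = main author, 245 = title, 260 = publication information.
# Subfield codes (a, b, c, ...) subdivide each field.
record = {
    "100": {"a": "Melville, Herman."},
    "245": {"a": "Moby-Dick;", "b": "or, The whale."},
    "260": {"a": "New York :", "b": "Harper & Brothers,", "c": "1851."},
}

def field(rec, tag, subfield):
    """Look up one subfield by MARC tag, returning None if absent."""
    return rec.get(tag, {}).get(subfield)

title = field(record, "245", "a")   # "Moby-Dick;"
year = field(record, "260", "c")    # "1851."
```

Because every record uses the same tags for the same kinds of facts, a program can pull the title or publication date out of millions of records the same way it pulls them out of one.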
The Library of Congress remains a primary – but not the only – source for catalogue records. Individual libraries produce catalogue records that are compiled and circulated through organisations like OCLC, which connects libraries around the globe. Its WorldCat service coordinates catalogue records from many libraries into a cohesive online resource. Groups like these charge libraries membership fees for access to the compiled data. Libraries, though, typically do not charge for the catalogue records they produce, instead working cooperatively through organisations like OCLC. This may evolve as shared effort and crowdsourced resources – such as SHARE and Wikipedia – are combined with the library’s data in ways that improve search and inquiry.
One month later
In the short time since the Library of Congress’s data release, we see inklings of what may come. At a Hack-to-Learn event in May, researchers showed off early experiments with the data, including a zoomable list of nine million unique titles and a natural language interface with the data.
For my part, I am considering how to use the library’s data to learn more about the history of publishing. For example, it might be possible to see if there are trends in dates of publication, locations of publishers and patterns in subject matter. It would be fruitful to correlate this data with copyright records retained by the US Copyright Office, to see if one could associate particular works with copyright information like registration, renewal and ownership changes. However, those records are still in formats that are difficult to search or manipulate, and records prior to 1978 are not yet available online from the US Copyright Office at all.
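A trend analysis of the kind described above could start very simply: tally publications by decade and by place. The sketch below uses a handful of hypothetical (year, place) pairs standing in for values extracted from the released catalogue records; real catalogue dates are often messier (e.g. “c1968” or “[1970?]”) and would need cleaning first:

```python
from collections import Counter

# Hypothetical (publication year, publisher location) pairs,
# standing in for fields pulled from catalogue records.
records = [
    (1968, "New York"), (1972, "London"), (1975, "New York"),
    (1983, "Chicago"), (1989, "London"), (2001, "New York"),
]

# Bucket years into decades (1972 -> 1970) and count each bucket.
by_decade = Counter(year // 10 * 10 for year, _ in records)

# Count how often each publisher location appears.
by_place = Counter(place for _, place in records)
```

With 25 million real records in place of this toy list, the same two counters would already show whether, say, output on a given subject clustered in particular decades or cities.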
Colleagues at the University of Michigan Library are studying the recently released records as a way to practice map-making and explore geographic patterns with visualizations based on the data. They are thinking about gleaning locations from subject metadata and then mapping how those locations shift through time.
There’s a growing expectation that this kind of data should be freely available. This is evidenced by the expanding number of open data initiatives, from institutional repositories such as Deep Blue Data here at the University of Michigan Library to the U.S. government’s data.gov. The UK’s Open Research Data Task Force just released a report discussing technical, infrastructure, policy and cultural matters to be addressed to support open data.
The Library of Congress’s action demonstrates a broader shift in how technology is used to meet historical research missions and advance beyond them. Because the data are freely available, anyone can experiment with them.
Melissa Levine, Lead Copyright Officer, Librarian, University of Michigan.
This article first appeared on The Conversation.