Database Trials: A How-To

Librarians at this week’s Electronic Resources and Libraries conference in Austin, TX (as well as those who subscribed to the portions of the conference offered online) got a handy primer on collecting feedback about databases their institutions are considering adding. "Trials by Juries: Suggested Practices for Database Trials" was presented by Jon Ritterbush of the University of Nebraska, Kearney (UNK); Annis Lee Adams of Golden Gate University, CA; and Christine E. Ryan of Clemson University, SC.

Ritterbush explained that the genesis of the program was a September 2011 posting to the ERIL-L listserv in which he asked, "What tools or techniques have worked for you in gathering feedback on database trials, whether from librarians or library users?" The five responses revealed an array of approaches at different institutions and prompted Adams to convene this panel.

The three librarians face fairly similar issues with database trials: each works with finite funds and fields many demands from faculty and students, not to mention vendors, to buy products from the ever-expanding, and often bewildering, menu of reference databases.

Initial questions to ask
Adams explained that she sets up all trials at Golden Gate University’s library, but requests can come through multiple channels: herself, other librarians, faculty, and sometimes even students. The first question to ask, she says, is whether the material in the database matches the school’s curriculum; if not, there is no trial (a sentiment echoed by the other presenters). Second, she checks whether the database will work on a technical level in an academic environment. To be suitable for the school, the site must offer access via IP authentication and remote access via a proxy server. Some databases, notably those created for corporate libraries, are accessible only through individual usernames and passwords, and these will not work where there are many thousands of potential users, all of whom need on- and off-site access. The vendor must also be able to provide usage statistics.
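As a rough illustration of what that IP-authentication requirement entails (this sketch is not from the presentation), the check behind on-site access amounts to asking whether a visitor’s address falls within the campus ranges registered with the vendor. The Python below uses hypothetical address ranges; real deployments delegate this to the vendor’s platform or to proxy software.

    from ipaddress import ip_address, ip_network

    # Hypothetical campus networks registered with the vendor.
    CAMPUS_NETWORKS = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/24")]

    def on_campus(client_ip: str) -> bool:
        """Return True if the client address falls in a registered campus range."""
        addr = ip_address(client_ip)
        return any(addr in net for net in CAMPUS_NETWORKS)

    print(on_campus("192.0.2.17"))   # True: on-site user gets direct access
    print(on_campus("203.0.113.9"))  # False: off-site user must come through the proxy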

At Golden Gate, Adams told the audience, the ideal time for database trials is the fall, so that the library can be ready with a decision when the budget is due in the spring. In practice, however, the institution runs three to four trials at a time, all year. At a minimum, products are evaluated by the collection development committee, which consists of the subject liaison librarians and the library director, and by the faculty member who requested the product, if any. Databases may also be appraised by other faculty members, but only if the product is something the library is likely to buy.

Evaluation at Golden Gate University examines four areas:

  • Content
  • Ease of use (one database was rejected recently because not even the librarians could figure out how to use it)
  • Cost
  • Whether the item has been requested by faculty, in which case it is given greater consideration

All of this is discussed on the library’s internal blog, formatted as one post per trial; those involved leave comments on that post. Once a decision has been reached, Adams reverts the post to draft status so that it is no longer open to staff but is preserved. She adds notes on any in-person discussion (committee deliberations, for example) to the post, so that all information behind a decision on the product is saved in one place.

Tracking multiple trials
One of the library’s biggest challenges is keeping track of all the trials and their verdicts. Especially when the same item is requested repeatedly, complete documentation is needed of which product was tried, when, what the purchase decision was, and why that decision was made. Adams has created a spreadsheet that documents everything from full trials to cases in which the library received only a price quote from a vendor and did not proceed. The spreadsheet’s columns list vendor name, database name, trial dates, evaluation dates, decision date, decision, notes, and who requested that the product be considered.
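As a sketch only (Adams’s actual spreadsheet was not shared beyond its column list), those fields could be captured in a simple CSV file; the sample row below is invented for illustration.

    import csv

    # Column headings taken from Adams's description; the sample row is hypothetical.
    COLUMNS = ["Vendor", "Database", "Trial dates", "Evaluation dates",
               "Decision date", "Decision", "Notes", "Requested by"]
    rows = [
        ["Example Vendor", "Example Database", "Sept. 1-30", "Oct. 1-15",
         "Nov. 1", "Declined", "Price quote only; no trial run", "Subject liaison"],
    ]

    # Write the tracking sheet so every request, trial, and verdict lives in one file.
    with open("database_trials.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(COLUMNS)
        writer.writerows(rows)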

Next was Ritterbush, who echoed many of the ideas expressed by Adams but added institution-specific practices of his own. He emphasized that if his library doesn’t have the money for a particular product, it may postpone the trial; if it proceeds anyway, it does so with full disclosure to the vendor that a purchase isn’t likely this year. As well as being the right thing to do, Ritterbush explains, this sometimes results in the vendor offering a discount so that the product can be bought after all.

At UNK, faculty requests for new library resources go through the relevant subject liaison, who must be willing to act as sponsor for the database. This accomplishes three goals, says Ritterbush: it keeps the liaison in the loop on what might be bought in their area; it increases the number of librarians who see the product; and it spreads out the responsibility for purchases, so that no single librarian is always under pressure.

To share or not to share
The institution runs most database trials during the fall and spring semesters because that’s when faculty will respond. The library trials no more than three products at a time, which limits confusion as well as what Ritterbush termed "trial fatigue." Also, only trials of three months or longer are featured on the institution’s website; others are shared within the library or emailed to selected faculty (about five respondents are considered optimal). And don’t rely on email for feedback, he says, as it can be mediocre; instead, the UNK library uses a short web form to get answers, asking for Likert-scale ratings as well as open-ended text responses. Feedback forms can be housed on, for example, SurveyMonkey or LibGuides, explains Ritterbush.
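For illustration only (the presenters used hosted tools such as SurveyMonkey or LibGuides rather than custom scripts), tallying the Likert-scale portion of a feedback export might look like the Python below; the file name and column label are hypothetical.

    import csv
    from collections import Counter

    def summarize(path: str, question: str) -> Counter:
        """Count the ratings given for one Likert-scale question in a CSV export."""
        counts = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                counts[row[question]] += 1
        return counts

    # e.g., summarize("trial_feedback.csv", "Ease of use (1=poor, 5=excellent)")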

Ryan, the final presenter, explained that as the electronic resources (ER) librarian at Clemson University, she is not central to the database trial process. That’s because the ER librarian position is new there, and also because new products under consideration are very subject specific, so it is appropriate that subject librarians set up and monitor the trials. Ryan was very clear that trials shouldn’t be opened to the public (a contention that was challenged in the question-and-answer session after the presentation), maintaining that that is a decision only the vendor should make. Guidelines for trials, she explained, should be created using the library’s collection development policy as a roadmap and should state who is responsible for each part of the trial process, explain how to get the most from the trial, and list what to ask the vendor. Like Ritterbush, she recommends posting documentation to LibGuides.

Advertising
Ryan added some important points to the other presenters’ comments. It is imperative to see the potential license agreement before the trial, she says: if it doesn’t meet state requirements, the product cannot be bought anyway. Use all means at your disposal to advertise: student listservs are a good venue, and don’t forget that digital resources can be promoted with print vehicles, such as easily created table tents telling students they can go to the reference desk to try the product.

The library’s feedback form (Ryan showed an online questionnaire) should ask specific questions rather than solicit general impressions: for example, it should ask which search terms the user employed, so that remarks based on irrelevant searches can be set aside. Don’t require a response to every question, she says, or users will give up. And when providing the feedback to others (include the vendor as well as the local participants), create an executive summary and define library statistics jargon such as "download" and "session."

The question-and-answer session addressed several topics, showing that libraries need guidance in this area. Most useful was an answer provided to Bob Scheier of the College of the Holy Cross library, MA, who asked the panelists how they know whether funding for a database will be available, given that they run three to four trials at a time. Ryan suggested that the library make clear to requestors and vendors that it is working from a prioritized wish list. Managing expectations about new purchases, she explained, is key to the whole process.

See Library Journal’s Best Databases 2011

About Henrietta Verma

Henrietta Verma (hverma@mediasourceinc.com, @ettaverma) is reviews editor at Library Journal, edits Library Journal and School Library Journal's reference review columns, and covers ereference and digital databases for both magazines. Before joining LJ's staff, Etta was reference editor at SLJ for five years and edited that magazine's Series Made Simple supplement. Etta, who is from Ireland, has also been a reference librarian and a library director and is the mom of two avid readers.
