Summary of ULS online discussion: Surveys in Libraries

Author: Jennifer Lee, University of Calgary

On December 2, 2013, ULS held its first members-only online discussion on surveys in libraries.

ULS hopes to hold these discussions each fall and spring to highlight ULS member work and to extend conversations beyond ALA Annual and Midwinter. ULS also plans to briefly summarize each discussion as an added member benefit; this is the first summary.
In this session, four members of the Evidence-based Practice Discussion Group gave tips on surveys and discussed survey creation and implementation. Participants used the chat window to interact with the speakers and each other.

Wendy Begay urged us to gather data that is actionable; that is, we should ask questions that give us information we can act on. She also suggested reporting results back to survey participants (student groups can be useful here) and, more importantly, telling participants what we changed as a result of the information they gave us. Wendy also advised against basing big changes on a single survey; instead, consult other (possibly existing) data first.

Jason Martin talked about survey design and different question types. A theoretical framework or model, such as the information literacy standards, or aspects of a service, keeps surveys focused and on topic. For example, choose 2-3 information literacy standards, and then design your questions based on those. This also avoids having too many questions addressing one area and too few in another. Jason then gave tips on designing questions, such as: avoiding the overuse of open-ended questions, avoiding the use of negatives in questions (which can be confusing), and avoiding jumping from question type to question type (which can place a cognitive strain on those surveyed).

Rick Stoddart talked about Counting Opinions’ LibSat tool for measuring user satisfaction (note he is not necessarily endorsing it). Compared to a homegrown survey, LibSat has more analysis and reporting functions and validity, but less flexibility and control of the survey questions. Compared to LibQual, LibSat provides continuous data, rather than a snapshot; unlike LibQual, LibSat results cannot be used in peer comparisons.

Lisa Horowitz presented her experiences adding a question to a university-wide survey. She worked with a team that included library administration as well as someone from the local Office of Institutional Research (OIR) to design the question. The OIR staff member added it to the next five years of enrolled student, senior, and alumni surveys. Issues included balancing the needs of other areas on campus that also wanted to add questions, justifying the need for an additional library question when the survey already included one, and being able to compare responses across different surveys.

More information on the speakers can be found at

The next members-only discussion will be on Academic Library Outreach, on Wednesday, April 23 from noon-1 pm EDT. To register, go to:
A recording will be sent to all who register, so sign up even if you can’t participate!

Interested in university libraries? Join your colleagues at ACRL ULS, where you can find opportunities to participate in continuous learning activities like our lively and engaging online discussions, to volunteer on professional committees, to make connections with a great network of university librarians, and more! For more information on ULS, including an archive of past events and discussion forums, see also our blog ( and Section website ( To become a member, simply update your ACRL division memberships at and select the University Libraries Section under ACRL. Membership is free if you are not already enrolled in more than 2 sections, and only an additional $5.00 if you are. We look forward to welcoming you as a member!

“See” you at the next discussion!

Archived Recording

Archived Slides
