Rolling out Campus Guides

We began implementing Campus Guides (LibGuides CMS) this week. Two projects will kick off the new platform for us:

  1. Building out new informational pages that target specific campus groups (faculty will come first)
  2. Redesigning the “LibGuides” workflow for guide publication by using groups for admin purposes

The informational pages have been a usability disaster for some time, but we had delayed doing anything about them because our current CMS is its own kind of disaster. Now that we have Campus Guides, though, we can implement some information architecture changes and interface enhancements (using jQuery) to vastly improve the utility of these kinds of pages.
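To give a flavor of the kind of enhancement I have in mind, here’s a minimal jQuery sketch that collapses long resource lists into click-to-expand sections. The selectors are hypothetical placeholders, not actual LibGuides class names:

    // Hypothetical sketch: collapse long lists on an informational page
    // into click-to-expand sections. ".info-section" and its children
    // are placeholder selectors, not real LibGuides markup.
    $(document).ready(function () {
      $('.info-section ul').hide();               // start lists collapsed
      $('.info-section h3').click(function () {
        $(this).next('ul').slideToggle('fast');   // expand/collapse on click
      });
    });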

An admin group, which requires admin access to edit content, will serve as the content management system for things like navigation boxes, jQuery features and other code-intensive objects that non-technical staff should not be touching. Another group will house the “guides” themselves, where librarians can access templates embedded with the admin group’s content.

We’ll be doing something similar with our traditional LibGuides content: creating an admin group that contains the “steal this guide” content, with things like canned search boxes, book feeds, etc. We’ll also be creating a Test group for these LibGuides, where interesting features can be explored without allowing others to copy such content. Once these tests are approved as public content, we can simply place that new content into the “steal this guide” group for use across the system.
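For the curious, a canned search box is just a small snippet kept in the admin group so that the catalog URL and parameters live in exactly one place. A sketch, with a made-up catalog URL, parameter name and element ids:

    // Assumed markup in the box's rich-text HTML (hypothetical ids):
    //   <form id="catalog-search"><input id="catalog-q" type="text"></form>
    // On submit, send the query to the catalog. The URL and "q"
    // parameter are placeholders for your discovery layer's syntax.
    $('#catalog-search').submit(function (e) {
      e.preventDefault();
      var q = $.trim($('#catalog-q').val());
      if (q) {
        window.location = 'http://catalog.example.edu/search?q=' +
                          encodeURIComponent(q);
      }
    });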

I’m curious if others out there have tried this approach. On paper it looks great. We shall see…


UnLibGuiding LibGuides

Our library website re-created using Campus Guides as a WCMS by overriding the default CSS and creating an administrative group to protect HTML and JavaScript code.

When my library launched LibGuides last year, we spent some time tweaking the CSS to streamline the LibGuides interface. As any user of this very fine product knows, LibGuides displays a lot of very task-oriented information, but in my opinion it can overwhelm the page to the point where all that info gets in the way of users’ primary task: finding library resources pertinent to their research topic.

During this exercise of turning off features, it became quite obvious that you could do much more than apply “display: none” to several print and tagging features. One could, in fact, redesign the entire interface.
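The mechanics are nothing exotic. Whether done in the custom CSS or, equivalently, with jQuery’s hide() (which simply applies display: none inline), it’s a line or two per feature. The selectors below are hypothetical stand-ins, not actual LibGuides class names:

    // Hypothetical sketch of hiding LibGuides chrome. jQuery's hide()
    // sets display:none on every matched element; the selectors are
    // placeholders for the real class names in your instance.
    $(document).ready(function () {
      $('.print-link, .tag-list, .rss-icon').hide();  // task-oriented chrome
      $('#email-guide-box').hide();                   // more page clutter
    });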

Fast forward a year and a half, and along comes SpringShare’s Campus Guides, which is essentially LibGuides but with the ability to run multiple instances (Groups) of LibGuides, each with its own unique customizations.

Immediately, I saw an opportunity to create a “Group” of guides that could serve as an experiment without affecting our live LibGuides service. The end result was turning Campus Guides into a true Web Content Management System (WCMS).

To be sure, I was going to override a lot of CSS, largely by restyling the look and feel, but also by applying “display: none” to many parts of the site. But another aim was to continue using the CMS features of LibGuides, so that any group of pages could still be managed stably for the long term and leverage all the power that makes LibGuides so great to work with.

To accomplish this, I created one Campus Guides Group to serve as the back-end content management area and another for creating the public-facing guides. The CMS group was called Library Administration and contained image collateral, rich text boxes with HTML and JavaScript, jQuery libraries and menu systems that could be used in templates by page managers in the public-facing group.

That public-facing group, which I dubbed Spork after our sandbox project, would have the public-facing CSS overrides and all the regular librarian-generated pages that would make up our LibGuides replicant of our website.

Taken together, here’s how these two pieces completed the full Campus Guides WCMS:

  1. When a new page is created, the author decides which section of the site it will live in. For example, if the page is part of the About section, they start by making a new guide from the About Template.
  2. The About Template contains all of the content boxes that every About page should have, including the sub-navigation menu for the About area of the site, which is duplicated from a box in the Library Administration group. This means the page creator cannot alter that navigation’s content, and any change made in Library Administration cascades down to every page using that sub-navigation box.
  3. It’s also important to note that the Library Administration group can be password protected so that non-admin librarians cannot inadvertently destroy carefully constructed HTML content that is meant to “UnLibGuide” LibGuides.
  4. Once librarians have created their page from the About Template, they can add any box content they wish in the usual way. Importantly, our custom CSS overrides ensure that whatever they add continues to look like the polished website we designed.

Using this strategy, we were able to do many things without fear that non-HTML-savvy staff could undo them. We could embed forms, jQuery features and even remote database content into our Spork site. In fact, we were able to replicate just about every feature of our live library site within Campus Guides…and make it look like a professionally designed portal.
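The remote database content, for instance, amounted to little more than an AJAX call dropped into a protected box. A minimal sketch, assuming a made-up JSONP endpoint, field names and placeholder div:

    // Hypothetical sketch: pull remote database content into a box.
    // The endpoint URL and JSON fields are invented for illustration;
    // "#new-titles-box" is a placeholder div inside the protected box.
    $(document).ready(function () {
      $.getJSON('http://www.example.edu/api/new-titles?callback=?',
        function (data) {
          var list = $('<ul/>');
          $.each(data.titles, function (i, t) {
            list.append($('<li/>').text(t.title + ' (' + t.year + ')'));
          });
          $('#new-titles-box').append(list);
        });
    });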

Parts of the site are not yet filled out, but to see examples of some of what we’ve done, I’ve included the following links:

The lesson from our experiment is that libraries with technically literate staff could take LibGuides to a rather impressive level. In fact, this whole project came out of our frustration with doing something similar in Drupal. Point being: Campus Guides is far easier than Drupal for creating basic websites (although admittedly not as powerful a tool…yet!).

I should also point out that this is strictly an experiment. My library is moving to SharePoint next winter.

Library IT Project Management Application Now Available

Howdy all! Sorry for the hiatus…just looking at my last post, it’s been 4 months since I posted anything. I suppose that gets pretty close to a closed blog. 🙂

Anyway, things have been extremely busy here, with multiple major projects ranging from migrating our website and launching WorldCat Local to lots and lots of experiments with Campus Guides, Drupal, SharePoint and beyond.

But I wanted to let everyone know that I published our Library IT Project Management Application on Zoho yesterday, making it free to the library community, and anyone else out there.

The application is available in the Zoho Marketplace for free!

This ticketing application was developed by my Web Services Team for project management. The application has the following features:

  • Task Request form for external teams to submit to Library IT
  • Task Assignment for the Library IT lead to assign work to staff
  • Views of tasks and projects by status
  • Email notifications to initiators as task status changes throughout the task life cycle
  • Request-for-review action to allow the Library IT lead to review draft work before committing a project to production
  • Rich reports and statistics on tasks

The application uses system-generated emails (like the emails task requesters receive from the Zoho administrator), so there’s no real configuration required. And there are lots of ways you could develop this further…so have at it…and let me know how you like it!

PS: My presentation on this application, which includes more details on its functionality, was given at Code4Lib Midwest 2012. You can view the presentation on my Selected Works site.

Off to LITA

I’m sitting on the Lincoln Line, train #303, en route from Chicago to St. Louis, headed for the LITA National Forum.

It’s been a year since I last came back to “my people” at the 2010 Forum in Atlanta. And what a year it’s been!

Back then I was just a few months into my new job as Web Services Coordinator, still learning the various platforms at my institution, laying out my strategy for the great work ahead and learning (sometimes the hard way) the political landscape of my new workplace.

At that time, the agenda for the coming year included preparing our library portal for migration to SharePoint and all the changes that would bring. It also included establishing a Web Services team, a core set of metrics, a content management strategy and a documentation practice, none of which was yet in place. Who knew, as the year unfolded, that WorldCat Local, LibGuides and a trip to Adamson University in the Philippines would be in store?

As I reflect on the accomplishments since the last LITA Forum, it’s amazing how much actually got done.

The portal is not yet in SharePoint (that’s slated to occur by January), but I did succeed in building consensus in the library around streamlining the portal architecture, along with a major makeover of the Special Collections section. In July, my team and I launched this new architecture, reducing the number of pages by about 40%. It also appears that our efforts to build understanding within campus IT about our unique requirements have been fruitful. We are now ready for the intriguing adventure that will be SharePoint.

Another accomplishment: my team is now in place to carry out our future work. Last winter, I hired a graduate developer to help carry out the coding (largely in C#). And this past summer, I filled the Web Applications Librarian position to assist in bringing our library closer to our cutting-edge aspirations.

And all of our processes for collecting data and documenting our work are now formalized and ready for the future. The remaining piece, the content management plan, is largely complete and awaits final approval from library administration (and, of course, a content management system like SharePoint).

So, as I head out of Chicago toward the annual gathering of many other techbrarians, I go very content and optimistic about the road ahead. Here’s to a week of inspiring ideas in St. Louis and to months afterward for implementing those ideas.

You’re Gonna Need A Bigger Boat: Redesigning a Special Collections Site

I just exploded a killer shark with a single rifle shot…well, actually, two shots, since I missed the first time.

In gearing up for the migration of my library’s web portal to SharePoint, I began some overdue conversations with our Special Collections and Archives staff about their section of the library site. The result was a massive overhaul of their site architecture, features and design that led to a much more efficient site.

Original Special Collections and Archives Site

At the time that I began looking at their site, it was the largest section of the entire library portal, with dozens of pages arranged in no particular hierarchy, speaking the language of few visitors and serving up little in terms of actual content. In general, our Special Collections and Archives pages had the same kinds of problems that many other similar sites have (Dowell, 2008; Stevenson, 2008).

Clearly, there had to be a better way.

The work to fix this began about four months ago and was initially scoped by me (incorrectly) as a fairly straightforward redesign proposal. My first plan was to have the SpCA staff come to my office and discuss options for bringing all their content (mostly PDF finding aids) into a database and wrapping a site around that:

Step 1: Discuss typical users
Step 2: Examine the current architecture
Step 3: Rationalize a new architecture by marrying lessons from steps 1 & 2.

Of course, this turned out to be ridiculously naive. And, in fact, the outcome of those first meetings was a realization that brought me back to that line in Jaws, when Chief Brody mutters in horror: “You’re gonna need a bigger boat.”

I had failed on a few fronts here, all related to time:

  1. I didn’t really appreciate the time it would take to get the SpCA staff on the same page as me, in terms of understanding the usability issues my solutions aimed to solve.
  2. I hadn’t allowed enough time to do my homework, analyzing the content on the site at the level required to truly improve it.
  3. Most importantly, I let timelines from other projects get in the way of doing what was necessary to get the job done right.

And so, after my first redesign proposal went down in flames, I stepped back and started building that bigger boat. Step One would be to return to the information architecture methodology script straight out of the “Polar Bear Book” (Information Architecture for the World Wide Web by Morville and Rosenfeld).

My new methodology to make the special collections waters safe for swimmers was to:

  1. Conduct a very informal heuristic evaluation and link problems to usability issues
  2. Discuss these issues with staff
  3. Conduct 1-on-1 interviews with staff about the site history and its future
  4. Sit down with everyone and design user personas
  5. Conduct an in-depth content analysis
  6. Synthesize the findings and build a proposal from that

Suddenly, the shark was manageable. My meetings with staff began to be far more productive, largely because they learned more about the issues involved and also because I had gained a much deeper knowledge of them, their users and their site.

New Special Collections Site

By the time it came to the second proposal, everyone was pretty much on the same page. And, in fact, the second iteration was light-years better than the first. Some of the changes we implemented include:

  • Major consolidation of the site pages and simplification of the architecture
  • Use of images and exemplars to guide users
  • Use of FAQs (within LibGuides) to help novice users understand how Special Collections works
  • Greater use of appropriate language to suit the majority of users, while also allowing for different kinds of users
  • Linkages to contact info, a database-driven finding aid collection and image search (currently on hold pending improvements in the image collection)

Put another way, I made success out of the initial failure. And, this has already helped with the larger project now at hand: moving the entire site to SharePoint, which is due to happen in the next few months. Yes, yes, I know I’ve been promising an update on that…

Better Living Through Good Data, Part 2

Crazy Egg confetti view with cookie variable filters

A few months ago, I blogged about a cookie-based filter I deployed to screen out librarian machines from my library’s Google Analytics and Crazy Egg data. Well, after running this experiment for a while, it looks like our IP data was actually good enough after all.

The original problem that sparked this experiment was that the computers in our library were all on dynamic IPs, and thus we could not reliably screen those machines out in order to get a pure glimpse into what non-librarians thought was important. A few options were considered, such as putting all library machines on static IPs. The university IT department didn’t like that idea, so I had to go back to the drawing table…or baking table, if you will.

The solution was to deploy a browser cookie on each machine used by staff, which Google and Crazy Egg would then use to identify the librarians visiting various library pages. The added bonus was that we could actually see which web elements and pages were mostly used by staff and which ones were mostly used by others.

Problem is, deploying the cookie across the staff computers was a real hassle. Each computer might have multiple users (like the Reference computers), which meant going around chasing down librarians (not an easy task!) and having them log into each computer, launch each browser and change their default homepage to our cookie page, which sets the cookie and then redirects to the library homepage. In some areas, there might be ten or more people using a single computer.
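The cookie page itself was the simple part; the hassle was everything around it. Here’s a sketch of the approach from the ga.js era. The cookie name, custom variable slot and URLs are our own choices, not requirements, and the standard Google Analytics async snippet is assumed to already be on every page:

    // -- Cookie page: set a long-lived staff cookie, then bounce to
    //    the homepage so staff barely notice. Name and date are arbitrary.
    document.cookie = 'library_staff=1; path=/; ' +
                      'expires=Fri, 31 Dec 2021 23:59:59 GMT';
    window.location = 'http://library.example.edu/';

    // -- Sitewide GA snippet: tag cookie-holders with a visitor-level
    //    custom variable (slot 1) so a profile filter can exclude them.
    var _gaq = _gaq || [];
    if (document.cookie.indexOf('library_staff=1') !== -1) {
      _gaq.push(['_setCustomVar', 1, 'visitor_type', 'staff', 1]);
    }
    _gaq.push(['_trackPageview']);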

Needless to say, the payoff for all this work had to be really good.

And as it turned out, the IP range filters were not all that different from the cookie-filtered data.

In Google Analytics, we set up multiple views, one with no filters, one with a cookie filter and one with an IP range filter. We let the data come in for two months and then checked the results to see what the discrepancies were.

What we found was actually surprising. The IP filters were doing a pretty good job…in fact, they were doing too good of a job. It appears that the IP range covers the librarians, but also many of the public machines in our computer labs and elsewhere. This isn’t too bad: the extra machines being screened out are on campus, and the users left in the data, those working from home, are very likely not librarians…so we essentially have our librarians out of our data using the IP ranges alone.

And statistically, the IP ranges were only a few percentage points off the cookie filters, so the differences were fairly benign.

For example, on Saturday, March 5, 2011, we had a total of 6,751 people come to our library homepage. Of those, 49 were librarians according to our pretty-much-perfect cookie-filtered data. The IP filters identified 131 users as librarians because they also counted some computers used by students in the labs. Doing a little math: the two filters disagreed on 82 of 6,751 visitors that day, about 1.2%, and over the full two months the average difference between the two filters was about 1.5%. Not significant enough to worry about…especially given how onerous deploying and monitoring the cookie was.