Separate Beds for ContentDM

I tried to make things work, but in the end, short of a divorce, I told ContentDM that if things were going to work out between us, we had to sleep in separate beds.

There’s been a lot of talk about “breaking up with ContentDM,” but for a library without the tech staff to develop its own digital library platform, calling it quits isn’t in the cards… no matter how abusive ContentDM is to us.

Abusive? Well, let’s list it here to be on record:

  • As of this writing, core functionality like the image viewer and search does not work in IE10 due to Compatibility Mode (but then again, IE10 users are just asking for it… time to move on, folks!)
  • Phrase search doesn’t work well
  • Stop words are not stopped, which is especially bad since phrase searching doesn’t work around it
  • Commonly used jQuery UI features cannot be used in custom pages without conflicting with the Advanced Search drop-down
  • Worst of all, once you upload a JS or CSS file, it’s in there for good… no deletions are possible!
  • Objects composed of an image layer and an OCR text layer do not display correctly in Firefox (but that’s probably more on Mozilla than OCLC)

So, I knew it was time to draw a line in the bedroom when our attempts at customizing the user experience within the ContentDM web configuration toolset went terribly wrong.

Our jQuery almost always caused conflicts, our attempts at responsive design went horribly wrong within ContentDM’s very unresponsive framework, and the way ContentDM is structured (with separate CSS/JS uploads for each customized collection) spelled long-term disaster for managing change.

Then came the latest release update when even more went wrong (largely in IE10).

In the end, I couldn’t take it anymore. I called up OCLC and begged them to reset the whole site to default configurations so we could at least start fresh, without concerns that legacy JS and CSS were going to cause problems (as I believe they were). They were very helpful and, within two hours, had all our collections reset.

We’re doing it differently now as we roll out phased customizations.

Here are our hard-learned best practices:

  • Never upload any custom CSS or JS to ContentDM… at least until OCLC creates a way to delete these. Instead, where you need such things, upload a single reference file that points to externally hosted files, which you can edit or delete as needed
  • For the system header, upload your banner image and resist the urge to include any navigation HTML. Instead, use the system menu creation tool. You can use your externally hosted CSS file (referenced globally) to style these links (though if you want drop-downs, you’ll have to build them yourself on top of this method)
  • Use a meta tag redirect to force ContentDM to send traffic to your externally hosted home page, since ContentDM doesn’t let you replace the home page with an external page without resorting to this trick. Probably not great for SEO, but it avoids the aggravations we endured for so long
  • Use the custom page tools for your collection pages, which allow you to replace the whole page (including the header and footer) with an externally hosted page. In our case, we do this for the really important collections; the rest we manage directly within ContentDM.
  • Put any custom interface features into your externally hosted pages and develop to your heart’s content
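
To make the first practice concrete, here is a minimal sketch of what that single uploaded reference file might look like: a small JavaScript loader that injects your externally hosted CSS and JS. This is an illustration, not our production file — the example.edu URLs and file names are placeholders for your own server.

```javascript
// Hypothetical "reference file" — the only custom file ever uploaded to ContentDM.
// It injects externally hosted CSS/JS, which you can edit or delete at will.
// The example.edu URLs below are placeholders for your own web server.

function externalAssets(baseUrl) {
  // Describe each asset as data so the list is easy to test and extend.
  return [
    { tag: 'link',   attrs: { rel: 'stylesheet', href: baseUrl + '/cdm-custom.css' } },
    { tag: 'script', attrs: { src: baseUrl + '/cdm-custom.js' } }
  ];
}

// Only touch the DOM when actually running in a browser.
if (typeof document !== 'undefined') {
  externalAssets('https://www.example.edu/cdm').forEach(function (asset) {
    var el = document.createElement(asset.tag);
    for (var name in asset.attrs) {
      el.setAttribute(name, asset.attrs[name]);
    }
    document.head.appendChild(el);
  });
}
```

Since ContentDM never lets you delete an upload, keeping this file as dumb as possible means all real change happens on your own server, where you stay in control.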

The result: users can now enjoy jQuery-powered interface features and more responsive designs, from the home page down to the individual collection pages. If you want to add proven page-turning or timeline technologies to your collection pages, you can now do so without worry. Users only deal with ContentDM once they reach search results or image viewer pages.

To help with the browser issues, we will be deploying browser checks that will deliver messages to users coming to our site with IE or Firefox, hoping to head off bad user experiences with one-time, cookie-based messages. In other words, the first time you come to our site with one of these known problem browsers (for ContentDM), you’ll be urged to use Safari or Chrome.
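
A minimal sketch of that cookie-based, one-time warning — the browser checks, cookie name and message are assumptions for illustration, not our production code:

```javascript
// Hypothetical one-time browser warning for known ContentDM trouble spots.
// Pure helpers are kept separate from DOM glue so the logic is easy to test.

var WARNED_COOKIE = 'cdmBrowserWarned';

// Is this one of the browsers we know ContentDM struggles with?
function isProblemBrowser(userAgent) {
  return /MSIE|Trident|Firefox/.test(userAgent);
}

// Warn only problem browsers we haven't already warned (per the cookie).
function shouldWarn(userAgent, cookieString) {
  return isProblemBrowser(userAgent) &&
         cookieString.indexOf(WARNED_COOKIE + '=1') === -1;
}

// Browser-only glue: show the message once, then set the cookie for a year.
if (typeof document !== 'undefined') {
  if (shouldWarn(navigator.userAgent, document.cookie)) {
    alert('Parts of this site work best in Chrome or Safari.');
    document.cookie = WARNED_COOKIE + '=1; path=/; max-age=31536000';
  }
}
```

In practice you would swap the `alert` for a dismissable banner, but the cookie check is what keeps the nag to a single appearance.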

Conceivably, you could use a CMS like WordPress or Drupal to manage your custom pages and start adding timeline, mapping and other plugins as you like. We’ll probably work toward this in 2014 or 2015.

Speaking of user disruption, the other cool thing about separating most of your digital library GUI from ContentDM is that you can work in test environments, out of sight, and only update the public pages once you’ve thoroughly tested them. This was impossible when we tried to work within ContentDM itself; when things went wrong, our DL users saw everything in plain sight.

View the current production version of our Digital Collections.

Yeah, separate beds are just what the doctor ordered. Post any questions in the comments as I’m sure I raced through many of the details…

WorldCat Quality Assurance Testing

Over the past year-plus, my library has been prepping a launch of WorldCat Local to serve as our new single-search tool for library resources. Now that my colleagues in Metadata, Serials and E-Resources have done their work to prep WCL’s back end, my team and I have launched a one-month Quality Assurance test.

Using Zoho as our testing platform, my team created four modules that aimed to:

  • test the out-of-the-box features of WorldCat Local
  • test the accuracy of our work in the Knowledge Base (to bring up full-text access, etc.)
  • provide an introduction to WCL features to staff and launch an internal conversation about WCL impact on our work
  • identify short-term and long-term fixes that will be carried out on our current configuration and metadata

The data can be analyzed from a few perspectives:

  • Format-specific failure rates
  • Basic vs. Advanced search failure rates
  • Database-specific failure rates
  • Feature-specific failure rates
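
Each of those cuts amounts to the same operation: group the test records by one dimension and compute a failure rate per value. A sketch in JavaScript, with made-up field names and records for illustration (our actual Zoho export looks different):

```javascript
// Hypothetical QA records exported from our testing platform.
// Field names and values are illustrative, not the real Zoho schema.
var records = [
  { format: 'book',    searchMode: 'basic',    passed: true  },
  { format: 'book',    searchMode: 'advanced', passed: false },
  { format: 'article', searchMode: 'basic',    passed: false },
  { format: 'article', searchMode: 'basic',    passed: true  }
];

// Failure rate per distinct value of one dimension (e.g. 'format').
function failureRateBy(rows, dimension) {
  var tally = {};
  rows.forEach(function (row) {
    var key = row[dimension];
    tally[key] = tally[key] || { failed: 0, total: 0 };
    tally[key].total += 1;
    if (!row.passed) tally[key].failed += 1;
  });
  var rates = {};
  for (var key in tally) {
    rates[key] = tally[key].failed / tally[key].total;
  }
  return rates;
}

// failureRateBy(records, 'format') → { book: 0.5, article: 0.5 }
```

Re-running the same function over pre- and post-tweak test rounds is what makes the before/after comparison straightforward.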

We’re currently in the fourth week of our testing, but we’ve already identified a number of issues, some of which we’ve resolved and some of which we’ll take care of right after testing ends. In fact, I’m already seeing value in running a follow-up test after we make some tweaks to the WCL service configuration that will likely fix the issues staff have seen. We can then compare the same searches pre- and post-tweak.

I’m personally excited about the move away from our ridiculously siloed approach to resource discovery (you know, the way libraries traditionally have had a bazillion different platforms that boggle all but the most savvy reference librarians). Our new vision is to provide a single search box for our resources that will be approachable and logical to most users. Yes, certain researchers will still need a specific database tool, but for 90-95% of our users, WCL will probably suffice.

Actually, I think this will only get better over time. I was at an OCLC meeting in Chicago yesterday and was happy to hear how libraries currently using WCL are using its features to improve the relevancy and value of the library catalog… and getting up to speed with contemporary user expectations (that is, the Google way).

We hope to put together a formal report by the end of March and then run a few more tests before launching WCL with our new portal next Fall.


Library Who Dunnit

The joke lately at my library is that our website just doesn’t matter anymore. Today, I saw this sentiment come alive in our analytics data.

As you can see from the graph below, we have a typical academic usage pattern, with activity building up during a quarter, peaking just before finals and then crashing as students rush off on their inter-quarterly wanderings.

What was interesting in this last quarter’s data was that this activity failed to build to its usual tempo. In fact, it appeared to build and then flatten out.

The culprit? I think it’s our hosted research help portal, LibGuides, which we soft-launched in the middle of the quarter… about the very time student research needs start to draw them to the library. Pre-LibGuides, we would see a lot of traffic to our locally hosted Subject Guides, which showed up in our analytics data. But LibGuides, which is replacing these Subject Guides, is off the radar as far as Google Analytics is concerned (for now), so this may be why we saw such low numbers.
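
Getting LibGuides traffic back on the radar would mean pasting the standard asynchronous Google Analytics (ga.js) snippet into whatever custom header/JS hook the platform exposes — whether LibGuides offers one is an assumption on my part, and UA-XXXXXXX-X below is a placeholder property ID:

```javascript
// Standard asynchronous Google Analytics (ga.js) snippet.
// UA-XXXXXXX-X is a placeholder — substitute your own property ID.
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXXX-X']);
_gaq.push(['_trackPageview']);

// Inject the tracker script itself (browser-only).
if (typeof document !== 'undefined') {
  (function () {
    var ga = document.createElement('script');
    ga.type = 'text/javascript';
    ga.async = true;
    ga.src = ('https:' === document.location.protocol ? 'https://ssl' : 'http://www') +
             '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(ga, s);
  })();
}
```

Once both sites report into the same account, the flattened curve should turn back into a full picture of student research activity.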

Anyway, with WorldCat Local configuration almost complete and LibGuides readying for a hard launch in Fall, the library website will likely see even further erosion in usage. And this is a very good thing, because it means we’ve gotten all the other stuff out of the way, improved the signal-to-noise ratio and simplified the act of accessing our resources (which are also all off-site, out in the cloud).

That’s what they call usability, folks!

Your Library: A Disruptive Publishing Platform

Followers of the Internet’s short but remarkable history will have to admit that all the early talk in the 1990s of the web as an agent of social disruption is beginning to cascade into reality. Hyperlinks are finally reaching into corporate email caches and government communication logs. The crumbling of the music industry is now spreading to print publishing and film. And corrupt regimes from Tunisia to Iran to China are struggling to grapple with the consequences of a connected citizenry.

These stories get lots of press. But a different kind of online revolution is sweeping through the library world, one you may have heard of but that gets much less fanfare: libraries are becoming not just storehouses for books, but publishers as well.

A good chunk of this publishing power is owed to the proliferation of cheap, simple-but-powerful, cloud-based publishing solutions.

Since 2008, the technology world has been abuzz about cloud computing, and now that mantra is at the top of the agenda at nearly every kind of government, business or educational conference.

Librarians will recognize this topic well, since it has been at the top of the agenda at our conferences for some time now. In fact, we kind of led the hype cycle years ago. Our shared catalogs, journal database services and even, in some cases, shared online reference services have been distributed in cloud-like fashion for decades. At a recent OCLC preso, the librarians in attendance were reminded that OCLC founder Frederick G. Kilgour was talking up the cloud concept back in the 1960s.

My own library is engaged in cloud-based publishing already. The university’s institutional repository, which I administer, is one such hosted service, with content archived, backed up and made available online via the Berkeley Electronic Press (bepress) Digital Commons platform. On it, you’ll find full-text faculty articles and eBooks, student dissertations and theses, conference proceedings and online journals, all published by the university community. I’m even working with a faculty member on setting up an art gallery for her oral history project. What’s more, it’s fully indexed by the major search engines, so it’s a fantastic way to bring attention to the university’s intellectual legacy.

The technology behind Digital Commons is a simple but powerful web publishing platform, and a good example of the benefits of the cloud. When the university first started this journey, we did not have to build anything. Once we decided who to go with, we just licensed the platform and turned it on. All the maintenance, security and development is handled by bepress. What’s really nice is that because online publishing is their business, they have the incentive and focus to do a good job. And if they ever don’t satisfy, we can just move shop to another provider… with a little effort.

Of course, as is often the case with such hosted products, you have to give up some design and development control. So far, this has been the one true wrinkle: a common complaint I get concerns layout limitations. But to bepress’ credit, their response time to our requests is superb.

But back to disruption and revolution… Digital Commons is a good example of how cloud computing can be an agent of change… you know, like Twitter and Facebook, which I like to call the Molotov cocktails of the 21st century. What else would you expect from a product out of Berkeley, anyway?

To be sure, traditional publishers see platforms like Digital Commons as a threat. For an industry famously desperate and broke, the online publishing model strikes right at the bottom line. That’s because platforms like Digital Commons remove the middleman from the equation and give full control to the author and the author’s institution. For faculty and students whose primary motivation is not to enrich a publisher but to have their work noticed in a timely manner befitting our Internet age, this is a real coup.

And as I said, such cloud-based publishing platforms are also a revolution for libraries, which have traditionally been in the business of cataloging and sharing content, rather than producing it. But that’s just the crazy world we live in now.

Of course, in my opinion, the real disruptive publishers of our day are blogging platforms like WordPress, which truly turn publishing loose to anyone… at no cost. And like other instances of the cloud, you can just turn them on and off at will. Nobody gets hurt. No one goes broke. But creative energies are let loose on the world, unfiltered and unabridged.

But if you’re talking about trying to bridge the old world with the hyper-atomized future that blogs foretell, well, my library can take authors there as soon as they say “go.”

Who’s Afraid of the Big, Bad OCLC? Not Me.

At ALA Midwinter, I was sitting down with my library’s rep from OCLC and I asked a naive question: “Are you guys public…or planning on going public?”

Some librarians may be surprised by my question, since non-profit OCLC is only one of the most important players in the library vendor world; that is, few librarians would not know the OCLC story. Actually, this has a lot to do with my fairly non-librarian experience at Adobe Systems, which hired me out of library school… and the fact that I had never worked in a library before committing to risking it all on a library profession.

Of course, once the rep began discussing OCLC’s history, it all came back to me from that foggy nether region that was my library school experience, where a blizzard of unfamiliar acronyms, standards and issues was dumped into my brain.

But there was a reason for my question: these guys don’t act like a non-profit… more like a hungry startup.

  • Exhibit A: They are aggressively working on being the Google of library search with WorldCat Local.
  • Exhibit B: They are now building a very intriguing, extensible backend library system, Web Scale Management.
  • Exhibit C: They are looking to channel all that dynamic energy that swirls within the world’s pool of librarian developers by building a community around a library “Apps Store.”
  • Exhibit D: All the above is in the cloud, integrated and scalable…just like the business model of any other technology startup set to disrupt the world as we know it.

What I walked away with, after a few days of discussions, presos and visits to the OCLC booth, was the sense that the Google analogy is actually quite fitting. The fact that this kind of energy and determination is present at a non-profit makes it all the more appealing.

Now, many librarians worry about OCLC, arguing that their trajectory is following not Google’s path but Microsoft’s.

Here’s why I’m not afraid…in fact, here’s why I embrace a Big, Bad OCLC:

First, Microsoft did set technology back with its sub-standard products and near-monopoly for almost 15 years, but that company is now rapidly losing market share for both of those reasons. In fact, it has already been dethroned by Apple as the world’s largest tech company, and will probably soon be passed by Google too. OCLC, on the other hand, is too innovative (for now) to be a slothful titan like Microsoft.

Second, libraries are already suffering under other monopolists: the publishers and their database brethren, which have created extremely user-unfriendly information silos in the areas of eBooks, journals, etc. A Big, Bad OCLC will have the leverage to push these folks to open up their indexes, so that the long, sad saga of multiple siloed databases can finally come to an end. When there is a Big, Bad OCLC, you either add your index to WorldCat, or you suffer a lonely, very unprofitable obscurity.

Finally, there is a movement to extend OCLC’s benefits across the library world. Making OCLC a platform designed for others to build on top of is just the kind of environment libraries need to compete with the likes of Google and other information providers. It will keep us as fresh as Firefox, as up to date as Twitter, as connected as Facebook… and, more importantly, perhaps even more relevant than Google.

Yeah, I can’t wait for the Big, Bad OCLC.