Luddites, Trumpism and Change: A crossroads for libraries

“Globalization is a proxy for technology-powered capitalism, which tends to reward fewer and fewer members of society.”
– Om Malik

Corner someone and they will react. We may be seeing this across the world as change, globalization, technology and economic dislocation force more and more people into the corner of benefit-nots. They are reacting out of desperation. It’s not rational. It’s not pretty. But it shouldn’t be surprising.

Years ago at a library conference, one of the keynote speakers forecast that there would be a return to the analog (sorry, my Twitter-based memory does not identify the person). The rapidity of digitization would be met by a reaction. People would scurry back to the familiar, he said. They always do.

Fast forward to 2016, where the decades-long trends toward globalization, borderless labor markets, denationalization, exponential technological change and corresponding social revolutions have hit the wall of public reaction. Brexit. Global Trumpism. Call it what you will. We’re in a change moment. The reaction is here.

Reacting to the Reaction

People in the Blue Zones, the Technorati, the beneficiaries of cheap foreign labor, free trade and technological innovation are scratching their heads. For all their algorithms and AI, they didn’t see this coming. Everything looked good on their feeds. No danger could possibly burst their self-assured bubble of inevitability. All was quiet. It was like a clear blue September morning in New York City in 2001. It was like the boardroom of the Federal Reserve in 2006. The serenity was over in an instant.

Since Brexit, and then Trump’s election, the Glittery Digitarians have initiated a period of introspection. They’re looking up from their stock tickers and gold-plated smart watches to find a grim reality: the world is crowded with people who have lost ground to the same global maelstrom that elevated a very small, lucky few to greatness. They are now seeing, as if for the first time, the shuttered towns. The empty retail stores. The displaced and homeless.

Suddenly their confident talk of personal AI assistants has turned from technolust to terror. Their success now looks short-sighted.

Om Malik wrote in his recent New Yorker op-ed that Silicon Valley may soon find itself equated with the supervillains of Wall Street. He posits that a new business model needs to account for the public good…or else.

I recently read Throwing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity by Douglas Rushkoff. If you haven’t read it, now would be a good time. Like Bernie Sanders and others, Rushkoff has been warning of this kind of reaction for a while. The system is not designed for the public good, but only around a narrow set of shareholder requirements. All other considerations do not compute.

My Reaction

Let me put this in personal perspective.

In my work, I engage the public in “the heart of Silicon Valley” on what they want from their community and what’s missing. What I hear is concern about the loss of quiet, of connection to others, of a pace of life that isn’t 24/7 and always a click away. This is consistent. People feel overwhelmed.

As one of the chief technologists for my library, this puts me in a strange place. And I’ve been grappling with it for the past few months.

On the one hand, people are curious; they’re happy to try the next big thing. On the other, you hear the frustration.

Meanwhile, the burden imposed by the Tech Industry goes beyond inflated rents and traffic. There’s a very obvious divide between long-time residents and newcomers. There’s a sense that something has been lost. There’s anger too, even here in the shadow of Google and Facebook.

The Library as a Philosophy

The other day, I was visited by a European Library Director who wanted to talk about VR. He asked me where I thought we’d be in ten years.

I hesitated. My thoughts immediately went back to the words of despair that I’d been hearing from the public lately.

Of course, the genie’s out of the bottle. We can’t stop the digital era. VR interface revolutions will likely emerge. The robots will come.

But we can harness this change to our benefit. We can write rules that bend it to our collective needs.

This is where the Library comes in. We have a sharing culture. A model that values bridging divides, pooling resources and re-distributing knowledge. It’s a model that is practically unique to the library if you think about it.

As I read Rushkoff, I kept coming back to the Librarian’s philosophy on sharing. In his book, he contends that we need to re-imagine (re-code) our economy to work for people. He recalls technologies like HTTP and RSS, which were invented and then given away to the world to share and re-use. This sounded very ‘librarian’ to me.

We share knowledge in the form of access to technology, after all. We host training on new maker gear, coding, robotics, virtual reality.

Perhaps we need to double-down on this philosophy. Perhaps we can be more than just a bridge. Maybe we can be the engine driving our communities to the other side. We can not just advocate, but do. Hold a hackathon. Build a public alternative to the Airbnb app for the people in your town.

Know the Future

In the end, libraries, technologists and digitarians need to tell a better story. We need to get outside our bubbles and tell that story with words that resonate with the benefit-nots. What’s more, we need that story to be backed up with real-world benefits.

It starts with asking the community what kind of world they want to live in. What obstacles keep them from living that way? And then, how can the library and technology help make change?

We have the philosophy, we have the spaces and we have public permission. Let’s get to work.

Is 3D Printing Dying?

Inc.’s John Brandon recently wrote about The Slow, Sad, and Ultimately Predictable Decline of 3D Printing. Uh, not so fast.

3D Printing is just getting started. For libraries whose adopted mission is to introduce people to emerging technologies, this is a fantastic opportunity to do so. But it has to be done right.

Another dead end?

Brandon cites a few reasons for his pessimism:

  • 3D printed objects are low quality and the printers are finicky
  • 3D printing growth is falling behind initial estimates
  • people in manufacturing are not impressed
  • and the costs are too high

I won’t get into all that’s wrong with this analysis; most of it strikes me as incorrect, or at the very least as describing the temporary problems typical of any new technology. Instead, I’d like to discuss this in the library maker context. And in fact, you can apply these ideas to any tech project.

How to make failure a win—no matter what

Libraries are quick to jump on tech. Remember those QR Codes that would revolutionize mobile access? Did your library consider a Second Life branch? How about those Chromebooks!

Inevitably, these experiments are going to fail. But that’s okay.

As this blog often suggests, failure is a win when doing so teaches you something. Experimenting is the first step in the process of discovery. And that’s really what all these kinds of projects need to be.

In the case of a 3D Printing project at your library, it’s important to keep this notion front and center. A 3D Printing pilot with the goal of introducing the public to the technology can be successful if people simply try it out. That seems easy enough. But to be really successful, even this kind of basic 3D Printing project needs to have a fair amount of up-front planning attached to it.

Chicago Public Library created a successful Maker Lab. Their program was pretty simple: hold regular classes showing people how to use the 3D printers, then allow those who completed the introductory course to use the printers during open studio lab times. When I tried this out at CPL, it was quite difficult to get a spot in a class due to the program’s popularity. The grant-funded project was so successful, based on the number of attendees, that it was extended and continues to this day.

Since the Maker Lab was grant-funded, CPL likely wrote out the specifics before any money was handed over. But even an internally-funded project should do this. Keep the goals simple and clear so expectations on the front line match those up the chain of command. Figure out what your measurements of success are before you even purchase the first printer. Be realistic. Always document everything. And return to that documentation throughout the project’s timeline.

Taking it to the next level

San Diego Public Library offers an example of a maker project that went to the next level. Uyen Tran saw an opportunity to merge startup seminars with the library’s maker tools. She brought aspiring entrepreneurs in for a Startup Weekend event where budding innovators learned how the library could be a resource for them as they launched their companies. 3D printers were part of this successful program.

It’s important to note that Uyen already had the maker lab in place before she launched this project. It would be risky for a library to skip establishing a rudimentary 3D printing program before attempting something this ambitious.

It could be done if that library were well organized, with solid project managers and deep roots in the target community. But that’s a tall order to fill.

What’s the worst thing that could go wrong?

The worst thing that could go wrong is doubling down on failure: repeating one failed project after another without changing the flawed approach behind it.

I’d also note that libraries are often out ahead of the public on these technologies, so dead ends are inevitable. To address this, add one more tactic to your tech projects: listening.

The public has lots of concerns about a variety of things, and if you ask them, they’ll tell you all about them. Many of their concerns are not directly related to libraries, but we can often help anyway. We have permission to do so. People trust us. It’s a great position to be in.

But we have to ask them to tell us what’s on their mind. We have to listen. And then we need to think creatively.

Listening and thinking outside the box was how San Diego took their 3D Printers to the next level.

The Long Future of 3D Printing

The Wright brothers’ first flight covered only 120 feet. Two years later, they flew 24 miles. These initial attempts looked nothing like the jet age, and yet the technology of flight was born from these humble experiments.

Already, 3D printing is being adopted in multiple industries. Artists are using it to prototype their designs. Astronauts are using it to print parts aboard the International Space Station. Bio-engineers are now looking at printing stem-cell structures to replace organs and bones. We’re decades away from the jet age of 3D printing, but this tech is here to stay.

John Brandon’s read is incorrect simply because he’s looking at the current state and not seeing the long-term promise. When he asks a Ford engineer for his take on 3D printing in the assembly process, he gets a smirk. A legacy assembly line is not exactly a hotbed of innovation. What kind of reaction would he have gotten from an engineer at Tesla? At Apple? Fundamentally, he’s approaching 3D printers from the wrong perspective, and this is why the technology looks doomed to him.

Libraries should not make this mistake. The world is changing ever more quickly and the public needs us to help them navigate the new frontier. We need to do this methodically, with careful planning and a good dose of optimism.

Virtual Reality is Getting Real in the Library

My library just received three Samsung S7 devices with Gear VR goggles. We put them to work right away.

The first thought I had was: Wow, this will change everything. My second thought was: Wow, I can’t wait for Apple to make a VR device!

The Samsung Gear VR experience is grainy and fraught with limitations, but you can see the potential right away. The virtual reality is, after all, working off a smartphone. There is no high-end graphics card working under the hood. Really, the goggles are just a plastic case holding the phone up to your eyes. But still, despite all this, it’s amazing.

Within twenty-four hours, I’d surfed beside the world’s top surfers on giant waves off Hawaii, hung out with the Masai in Africa and shared an intimate moment with a pianist and his dog in their (New York?) apartment. It was all beautiful.

We’ve Been Here Before

Remember when the Internet came online? If you’re old enough, you’ll recall the crude attempts to chat on digital bulletin board systems (BBS) or, much later, the publication of the first colorful (often jarringly so) HTML pages.

It’s the Hello World! moment for VR now. People are just getting started. You can tell the content currently available is just scratching the surface of this medium’s possibilities. But once you try VR and consider the ways it can be used, you start to realize nothing will be the same again.

The Internet Will Disappear

So said Google executive chairman Eric Schmidt in 2015. He was talking about the rise of AI, wearable tech and many other emerging technologies that will transform how we access data. For Schmidt, the Internet will simply fade into these technologies to the point that it will be unrecognizable.

I agree. But being primarily a web librarian, I’m mostly concerned with how new technologies will translate in the library context. What will VR mean for library websites, online catalogs, eBooks, databases and the social networking aspects of libraries?

So after trying out VR, I was already thinking about all this. Here are some brief thoughts:

  • Visiting the library stacks in VR could transform the online catalog experience
  • Library programming could break out of the physical world (virtual speakers, virtual locations)
  • VR book discussions could incorporate virtual tours of topics/locations touched on in books
  • Collections of VR experiences could become a new source for local collections
  • VR maker spaces and tools for creatives to create VR experiences/objects

Year Zero?

Still, VR makes your eyes tired. It’s not perfect. It has a long way to go.

But based on my experience sharing this technology with others, it’s addictive. People love trying it. They can’t stop talking about it afterward.

So, while it may be some time before the VR revolution disrupts the Internet (and virtual library services with it), it sure feels imminent.

Three Emerging Digital Platforms for 2015

‘Twas a world of limited options for digital libraries just a few short years back. Nowadays, however, there are many more options, and the features and functionalities are truly groundbreaking.

Before I dive into some of the latest whizzbang technologies that have caught my eye, let me lay out the platforms we currently use and why we use them.

  • Digital Commons for our institutional repository. This is a simple yet powerful hosted repository service. It has customizable workflows built into it for managing and publishing online journals, conferences, e-books, media galleries and much more. And I’d emphasize the “service” aspect: the subscription includes notable SEO power, robust publishing tools, reporting and stellar customer service, and, of course, you don’t have to worry about the technical upkeep of the platform.
  • CONTENTdm for our digital collections. There was a time when OCLC’s digital collections platform appeared to be on a development trajectory that would take it out of the clunky mire it was in circa 2010. They’ve made strides, but the platform has not kept up.
  • LUNA for restricted image reserve services. You and your faculty can build collections in this system popular with museums and libraries alike. Your collection also sits within the LUNA Commons, which means users of LUNA can take advantage of collections outside their institutions.
  • Omeka.net for online exhibits and digital humanities projects. The limited cousin to the self-hosted Omeka, this version is an easy way to launch multiple sites for your campus without having to administer multiple installs. But it has a limited number of plugins and options, so your users will quickly grow out of it.

The Movers and Shakers of 2015

There are some very interesting developments out there, so here is a brief overview of the three most groundbreaking, in my opinion.

PressForward

If you took blog DNA and spliced it with journal publishing, you’d get a critter called PressForward: a WordPress plugin that allows users to launch publications built around a contemporary, web-native approach to publishing.

There are a number of ways you can use PressForward, but the most basic publishing model it’s intended for starts with treating other online publications (RSS feeds from individuals, organizations, other journals) as sources of submissions. Editors add external content feeds to their submission stream, which brings that content into their PressForward queue for consideration. They can then go through everything that flows in automatically from outside and decide whether to include it in their publication. And of course, locally produced content can be included as well, if you’re so inclined.
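
To make the model concrete, here is a minimal sketch of that aggregation workflow in Python using the feedparser library. This is purely illustrative: PressForward itself is a WordPress/PHP plugin, and the feed URLs here are invented.

```python
# Illustrative sketch of a PressForward-style editorial queue.
# Assumes `pip install feedparser`; feed URLs are hypothetical.
import feedparser

# External publications treated as sources of "submissions"
SOURCE_FEEDS = [
    "https://example.org/history-blog/feed",
    "https://example.org/open-journal/rss",
]

queue = []
for url in SOURCE_FEEDS:
    parsed = feedparser.parse(url)
    for entry in parsed.entries:
        # Everything flows into a single queue for editorial consideration
        queue.append({
            "title": entry.get("title", ""),
            "link": entry.get("link", ""),
            "source": url,
        })

# Editors would review the queue and nominate items for publication;
# here the "review" is just printing what came in.
for item in queue:
    print(f"{item['title']} ({item['source']})")
```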

Examples of PressForward in action include Digital Humanities Now, the publication from the Roy Rosenzweig Center for History and New Media that the plugin was built around.

Islandora

Built on Fedora Commons with a Drupal front-end layer, Islandora is a truly remarkable platform that is growing in popularity at a good clip. A few years back, I worked with a local consortium examining various platforms, and we looked at Islandora. At the time, there were no examples of the platform being put into use, and it felt more like an interesting concept than a tool we should recommend for our needs. Had we been looking at it today, I think it would have been our number one choice.

Part of the magic with Islandora is that it uses RDF triples to flatten your collections and items into a simple pool of objects that can have unlimited relationships to each other. In other words, a single image can be grouped with other objects into a compound object (say, a book of images), and that book object can be part of a collection of books, or, in fact, be connected to multiple other collections. This is a technical way of saying that it’s hyper-flexible and yet very simple.
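
Here is a toy illustration of that flattening in Python with rdflib. The namespace and predicate names are invented for the example (not Islandora’s actual ontology); the point is simply that structure lives in the triples rather than in a rigid hierarchy.

```python
# Toy example: objects are a flat pool; relationships are just triples.
# Assumes `pip install rdflib`; the names below are hypothetical.
from rdflib import Graph, Namespace

OBJ = Namespace("http://example.org/objects/")
REL = Namespace("http://example.org/relations#")

g = Graph()

# A single image is a member of a compound "book" object...
g.add((OBJ.page_001, REL.isMemberOf, OBJ.scrapbook_1915))

# ...and that book object can belong to several collections at once.
g.add((OBJ.scrapbook_1915, REL.isMemberOf, OBJ.local_history))
g.add((OBJ.scrapbook_1915, REL.isMemberOf, OBJ.wwi_collection))

# Ask the graph for every membership relationship it knows about.
for subject, _, target in g.triples((None, REL.isMemberOf, None)):
    print(f"{subject} is a member of {target}")
```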

And because Islandora is built on two widely used open source platforms, finding tech staff to help manage it is easy.

But if you don’t have the staff to run a Fedora-Drupal server, Lyrasis now offers hosted options that are just as powerful. In fact, one subscription model they offer allows you complete access to the Drupal back end if customization and development are important to you but you don’t want to waste staff time on updates and monitoring/testing server performance.

Either way, this looks like a major player in this space, and I expect it to continue to grow exponentially. That’s a good thing too, because some aspects of the platform feel a little “not ready for prime time.” The Newspaper solution pack, for example, while okay, is nowhere near as capable as what Veridian can currently do.

ArtStor’s SharedShelf

Rapid development has taken this digital image collection platform to a new level, with promises of more to come. SharedShelf integrates the open web, including DPLA and Google Images, with its proprietary image database in novel ways that I think put LUNA on notice.

Like LUNA, SharedShelf allows institutions to build local collections that can contain copyrighted works to be used in classroom and research environments. But what sets it apart is that it allows users to build beyond their institutions and push that content to the open web (or not, depending on the rights to the images they are publishing).

SharedShelf also integrates with other ArtStor services such as their Curriculum Guides that allow faculty to create instructional narratives using all the resources available from ArtStor.

The management layer is pretty nice and works well with a host of schemas.

And, oh, apparently audio and video support is on the way.

Your Job Has Been Robot-sourced

“People are racing against the machine, and many of them are losing that race…Instead of racing against the machine, we need to learn to race with the machine.”

– Erik Brynjolfsson, Innovation Researcher

Libraries are busy making lots of metadata and data networks. But who are we making this for anyway? Answer: The Machines

I spent the last week catching up on what the TED Conference has to say on robots, artificial intelligence and what these portend for the future of humans…all with an eye on the impact on my own profession: librarians.

A digest of the various talks would go as follows:

    • Machine learning and AI capabilities are advancing at an exponential rate, just as forecast
    • Robots are getting smarter and more ubiquitous by the year (Roomba, Siri, Google self-driving cars, drone strikes)
    • Machines are replacing humans at an increasing rate, impacting unemployment rates

The experts are personally torn on the rise of the machines, noting that there are huge benefits to society, but that we are facing a future where almost every job will be at risk of being taken by a machine. Jeremy Howard used words like “wonderful” and “terrifying” in his talk about how quickly machines are getting smarter (quicker than you think!). Erik Brynjolfsson (quoted above) shared a mixed optimism about the prospects this robotification holds for us, saying that a major retooling of the workforce and even the way society shares wealth is inevitable.

Personally, I’m thinking this is going to be more disruptive than the Industrial Revolution, which stirred up some serious feelings, as you may recall: unionization, urbanization, anarchism, Bolshevism…but also some nice stuff (once we got through the riots, revolutions and Pinkertons), like the majority of the world not having to shovel animal manure and live in sod houses on the prairie. But what a ride!

This got me thinking about the end game the speakers were loosely describing and how it relates to libraries. In their estimation, we will see many, many jobs disappear in our lifetimes, including lots of knowledge-worker jobs. Brynjolfsson says the way we need to react is to integrate new human roles into the work of the machines. For example, having AI partners that act as consultants to human workers. In this scenario (already happening in healthcare with IBM Watson), machines scour huge datasets and then give their advice/prognosis to a human, who still gets to make the final call. That might work for some jobs, but it’s not hard to imagine the human becoming redundant at some point, especially when the machine may be smarter than its human partner.

But still, let’s take the typical public-facing librarian, already under threat from the likes of an ever-improving Google. As I discussed briefly in Rise of the Machines, services like Google, IBM Watson, Siri and the like are only getting better and will likely, possibly very soon, put the reference aspect of librarianship out of business altogether. In fact, because these automated information services live in mobile and online environments with no library required, they will likely exacerbate the library relevance issue, at least as far as traditional library models are concerned.

Of course, we’re quickly re-inventing ourselves (read how in my post Tomorrow’s Tool Library on Steroids), but one thing is clear: the library as the community’s warehouse and service center for information will be replaced by machines. A more likely model would be one where libraries pool community resources to provide access to cutting-edge AI services with access to expensive data resources, if proprietary data even exists in the future (a big if, IMO).

What is ironic is that technical service librarians are actually laying the groundwork for this transformation of the library profession. Every time technical service librarians work out a new metadata schema, mark up digital content with micro-data, write a line of RDF, enhance the SEO of their collections or connect a record to linked data, they are setting the stage for machines to not only index knowledge, but understand its semantic and ontological relationships. That is, they’re building the infrastructure for the robot-infused future. Funny, that.
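
As a small, hypothetical example of what that groundwork looks like, here is a schema.org description of a catalog record serialized as JSON-LD in Python. Every identifier and value below is invented, but markup of this kind is exactly what lets a machine parse a record’s semantics instead of scraping text.

```python
# Hypothetical JSON-LD (schema.org) markup for a catalog record.
# All identifiers and values are invented for illustration.
import json

record = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "A Local History of the Valley",
    "author": {"@type": "Person", "name": "Jane Example"},
    "datePublished": "1921",
    # Linked-data hook: point at an external authority for the subject
    "about": {"@id": "https://example.org/authorities/local-history"},
}

# Embedded in a catalog page inside a <script type="application/ld+json">
# tag, this hands crawlers and AI services the record's semantics directly.
print(json.dumps(record, indent=2))
```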

As Brynjolfsson suggests, we will have to create new roles where we work side-by-side with the machines, if we are to stay employed.

On this point, I’d add that we very well could see that human creativity still trumps machine logic. It might be that this particular aspect of humanity doesn’t translate into code all that well. So maybe the robots will be a great liberation and we all get to be artists and designers!

Or maybe we’ll all lose our jobs, unite in anguish with the rest of the unemployed 99% and decide it’s time the other 1% share the wealth so we can all live off the work of our robots, bliss out in virtual reality and plan our next vacations to Mars.

Or, as Ray Kurzweil would say, we’ll just merge with the machines and trump the whole question of unemployment, let alone mortality.

Or we could just outlaw AI altogether and hold back the tide permanently, like they did in Dune. Somehow that doesn’t seem likely…and the machines probably won’t allow it. LOL

Anyway, food for thought. As Yoda said: “Difficult to see. Always in motion is the future.”

Meanwhile, speaking of movies…

If this subject intrigues you, Hollywood is also jumping into this intellectual meme, pushing out several robot and AI films over the last couple years. If you’re interested, here’s my list of the ones I’ve watched, ordered by my rating (good to less good).

  1. Her: Wow! Spike Jonze gives his quirky, moody, emotion-driven interpretation of the AI question. Thought provoking and compelling in every regard.
  2. Black Mirror, S02E01 – Be Right Back: Creepy to the max and coming to a bedroom near you soon!
  3. Automata: Bleak but interesting. Be sure NOT to read the expository intro text at the beginning. I kept thinking this was unnecessary to the film and ruined the mystery of the story. But still pretty good.
  4. Transcendence: A play on Ray Kurzweil’s singularity concept, but done with explosions and Hollywood formulas.
  5. The Machine: You can skip it.

Two more are on my must watch list: Chappie and Ex Machina, both of which look like they’ll be quality films that explore human-robot relations. They may be machines, but I love when we dress them up with emotions…I guess that’s what you should expect from a human being. 🙂

The People Wide Web

The debate around Net Neutrality has taken an interesting spin of late. Just as foes to Net Neutrality have gotten closer to their goal of setting up tollways and traffic controls on the information superhighway, some drivers are beginning to build their own transportation system altogether.

Net Neutrality is a concept that has been the norm on the Internet since its inception: the idea that every website gets equal treatment by Internet Service Providers (ISPs). But of course, media companies and the ISPs could conceivably benefit greatly if surcharges for access to higher bandwidth were allowed on the Net. For example, let’s say that Cable Company A offers priority bandwidth to Media Company X, allowing it to serve super high-def streaming video to users at lightning speed. However, Startup Company Z will then be obligated to compete against Media Company X for that bandwidth in order to provide the same quality service. Same goes for Blogger Y.

Fat chance of that. Indeed, given the pace at which media consolidation continues to go unchecked by regulators, were Net Neutrality abandoned, the Internet would quickly come to resemble the way Network Television dominated communication in the years before high-speed Internet arrived.

And this is what concerns many people, since a free, open web has so clearly promoted innovation. So far, the battle is not lost and Net Neutrality is still the norm. Nevertheless, some are creating backup plans.

This past week, BitTorrent, the people behind the popular torrent app uTorrent, announced they are exploring the creation of a new Internet that takes back control of the web and distributes access to websites across peer-to-peer networks.

Called Project Maelstrom, this torrent-based Internet would be powered by a new browser which would effectively rework the Internet into a much freer network with pretty much no gatekeepers.

Details are sparse at the moment, but essentially access to websites would be served as torrents, and thus not served from a single server. Instead, sites would exist across the peer-to-peer network in small, redundant bits living on people’s computers. Essentially, it’s the same technique used for torrent-based file sharing. When you try to access a site, your computer queries the torrent network and dozens of computers begin sending you the packets you need to rebuild the web page in question in your browser. And even as the web page is partially assembled, your computer begins sharing what it already has with other people trying to access the site.
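
Here is a toy sketch of the underlying idea in Python: split a page into chunks, name each chunk by its hash, and rebuild the page from whichever peers hold the pieces. This is purely conceptual and is not Project Maelstrom’s actual protocol.

```python
# Conceptual demo of content-addressed page distribution (not the real
# Maelstrom protocol): chunks are keyed by hash, so any peer can serve them.
import hashlib

CHUNK_SIZE = 16  # tiny, for demonstration purposes

def make_chunks(data: bytes) -> list[bytes]:
    """Split data into fixed-size pieces."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

page = b"<html><body><h1>Hello, people-wide web!</h1></body></html>"
chunks = make_chunks(page)

# The "swarm": every chunk stored under its SHA-256 digest, as if spread
# redundantly across many peers' machines.
swarm = {hashlib.sha256(c).hexdigest(): c for c in chunks}

# A manifest (the ordered list of hashes) is all a browser would need.
manifest = [hashlib.sha256(c).hexdigest() for c in chunks]

# Rebuild the page from whichever peers answer; no origin server involved.
rebuilt = b"".join(swarm[digest] for digest in manifest)
assert rebuilt == page
print(rebuilt.decode())
```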

The result could be a much faster Internet with much greater assurances of privacy. Technical questions remain, and this does sound like it could take some time. But wow, what a revolution it would be.

Of course, this could get tricky to pull off. As you may have heard, the infamous torrent website Pirate Bay was taken down by authorities in Sweden this week. Pirate Bay serves up links to torrents that allow people to download everything from freeware applications to Hollywood movies that haven’t even been released yet, and so it has been targeted by law enforcement for years now. Even on today’s Internet, Pirate Bay could conceivably come back online at any time. But if BitTorrent’s peer-to-peer Internet were realized, Pirate Bay would be back up instantaneously. Indeed, it would probably never go down in the first place. Same goes for Dark Net sites that sell everything from drugs to human beings, which have also recently been taken offline.

Bottom line is: Project Maelstrom is another example of how a free and open Internet is unlikely to ever go away. Question is, how much freedom is a good thing?

My personal take is that taking back control of the Internet from media companies and ISPs would, on balance, be a great thing. Bad people do bad things in the physical world, and we have never defeated crime 100% there either. As long as there is an Internet, there will be those who abuse it.

But even more importantly, innovation, freedom of speech and freedom to access information are core to advancing society. So I welcome Project Maelstrom.

So here’s a toast to the People-wide Web!

Responsive Design Now Ordinary

I had a great time at Matthew Reidsma’s talk on Responsive Design last week here in Chicago. But as I explored the concepts on a MAMP install of WordPress, I was startled to see just how ordinary Responsive has become. That’s because the default themes of WordPress are now responsive (and have been since last year’s Twenty Eleven Theme). Talk about “un-sexying” a technology!

It’s actually quite funny (and yet not funny), because I know many people (not using WordPress) who are working really hard to create responsive CSS with media queries from scratch. And this can be quite a job, because you need to think differently about content, styling, design and even HTML. In fact, the whole enterprise of building a website is turned upside down…assuming you believe (as I do) that the simplest approach to building a responsive site is a Mobile First strategy.
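
For anyone who hasn’t seen it in practice, here is a minimal mobile-first sketch (the class names and breakpoint are made up for illustration): the base styles serve small screens, and media queries layer on enhancements as the viewport grows.

```css
/* Mobile first: the defaults target small screens. */
.content {
  width: 100%;   /* single, full-width column */
}

.sidebar {
  display: none; /* hide secondary content on phones */
}

/* Larger screens are the enhancement, not the starting point. */
@media (min-width: 768px) {
  .content { width: 70%; float: left; }
  .sidebar { display: block; width: 30%; float: right; }
}
```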

WordPress (and some Drupal themes) just took all the mystery away, I suppose. If you’re fortunate enough to be able to use WordPress, responsive is just baked into the system, and you can instantly see how your site will look on different screen sizes just by dragging your browser window in and out.

Once again, this CMS impresses me for its elegance at solving the user experience issues of our day. Hats off to the WP community.