Much of our current model of network information and communication behavior sits in this middle ground: it is based on the personal computer (desktop or laptop) as a consumption target and the organizational website as the delivery target. In other words, the library website is the front door to network library services consumed on the user’s computer.
However, this model is increasingly at odds with users’ diffuse network experience, in an environment of mobile communications, multiple devices, and various integration environments (social networking, RSS aggregators, iGoogle, etc.) on the one hand, and cloud-based services which concentrate data, user experience, and communications and computational capacity on the other. And we do indeed see libraries moving in directions which play better in this context.
Think of two challenges that arise when the library website is the focus of provision in this environment.
One, as I have discussed here before, is that it is often difficult for libraries to provide a well-seamed, smooth experience at that website. This is for well-known reasons: libraries are providing a thin layer over two heterogeneous sets of resources. The first is a set of legacy and emerging systems, developed independently rather than as parts of an overall library experience, with different fulfillment options, different metadata models, and so on. Typically, a library will manage an ILS, a set of services around licensed materials (knowledge base, resolver, metasearch, …), a repository or two, and so on. The second is the content itself: the library cannot choose not to provide certain resources simply because they pose integration challenges; it needs to provide what is required to support the community it serves. So it has to manage a set of legacy database boundaries that map more to historically evolved publisher configurations and business decisions than to user needs or behaviors. Additionally, resources may have different technical or business terms attached to their use. This fragmentation – of systems and of content – is certainly an ongoing challenge.
The second is more subtle and maybe more profound. Remember what I claimed: our model of provision is based on the desktop as a consumption target and the organizational website as a delivery target.
However, it appears that the desktop and the website are progressively less the sole focus of attention, and a model which focuses almost exclusively on them looks increasingly partial in a world of diffuse network presence and cloud-based resources. This is what I mean by being ‘stuck in the middle’, while more – though by no means all – of the action moves elsewhere.
So, what do I mean by a diffuse network presence? We increasingly have a ‘mesh’ of entry points: PC and phone, yes, but also DVRs, cameras, navigation systems, consoles, and so on. This is not a simple distinction between mobile and fixed: think of the iPhone or the Asus Eee, which close that gap somewhat. And we increasingly have a range of ways in which services are diffused through the network into different user environments: widgets of various sorts, RSS, toolbars, and so on. In other words, in an age of increasingly pervasive connectivity we have a variety of grades of experience available to us in how we connect up. Users are constructing their own digital workflows and identities out of a variety of network services.
A natural complement of this diffusion is a growing reliance on cloud-based services, which can be available to all the ways in which I engage with the network. If my network engagement is increasingly spread across different devices or applications, then I need to have access to my stuff from different places. We increasingly use a range of shared, network-level services: for search, for social networking, for managing content and information, for communication, for processing, for bookmarking, and so on. Think of Google Docs, or Flickr, or Facebook. Some of these concentrate data, communications, and computational capacity. Others may provide ‘portalization’ services as these are separated from the device or desktop (think Bloglines and Google Reader versus a locally installed aggregator).
What this means is that resources are increasingly shared, synchronized, and syndicated across a diffuse network of devices and services, and often between those and several concentrated cloud-based services. This is not to suggest that people don’t use a desktop or, in some cases, use it as a principal place to interact with their stuff and with stuff on the network. But this is alongside a growing range of activity which has been untethered from that local desktop. The local desktop, and all that it implies (web browser, local storage, reasonable screen real estate, sufficient attention to manage complex chained transactions or multiple choices), cannot be taken for granted as the primary consumption target.
Now, these two issues are related, of course. The library is managing monolithic fragments: the catalog, metasearch, repositories, A-Z lists, etc. It is difficult to integrate these in the library website, and it is difficult to push resources from them into diffuse environments of use and into cloud services. As I have suggested before, we spend too much time getting our systems to work, and not enough putting them to work. Additionally, the interface between the local presence and the consortial and other arrangements that are increasingly important for meeting local needs may be more or less well-seamed.
Of course, a major focus of current efforts is to try to transcend the limitations of this environment. Libraries are working on this, as are the organizations which supply them (think, for example, of Ex Libris’s discussion of integrated discovery and uniform resource management, Evergreen’s discussion of the ‘big resource rich library’, shared arrangements like DEFF in Denmark or the Scholars Portal in Ontario, or OCLC’s WorldCat Local and WorldCat.org).
We can see two important directions: one towards concentration and one towards diffusion.
First, concentration, which we see on at least three levels. At the institutional level, there is a strong push towards overcoming fragmentation by moving to new institutional discovery layers (Primo, Encore, WorldCat Local). At the group level, we see the emergence of more state or national systems which pull together resources in user-facing services; these are attractive because they present more resources to the user. And at the global level, we see library resources being represented – through linking or syndication strategies – in search engines, Flickr, Google Scholar, WorldCat, and other network-level resources.
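To make the global-level point concrete: one common tactic for getting library resources represented in search engines is simply to expose persistent, crawlable links to catalog records through a sitemap that crawlers can harvest. The following is a minimal sketch of that idea, not a description of any particular system; the base URL pattern and record identifiers are hypothetical placeholders.

```python
# Minimal sketch: publish persistent catalog record links in a sitemap
# so that search engines can crawl them. BASE_URL and the record ids
# are hypothetical placeholders, not a real catalog's link pattern.
from xml.sax.saxutils import escape

BASE_URL = "https://catalog.example.edu/record/"  # hypothetical persistent-link pattern


def build_sitemap(record_ids):
    """Return a sitemap.xml string with one <url> entry per catalog record."""
    entries = "\n".join(
        f"  <url><loc>{escape(BASE_URL + rid)}</loc></url>" for rid in record_ids
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>\n"
    )


if __name__ == "__main__":
    # Example run with two made-up record identifiers.
    print(build_sitemap(["b1234567", "b2345678"]))
```

The design point is the persistent link itself: once a record has a stable, crawlable URL, it can be syndicated to search engines, linked from WorldCat or Google Scholar, or bookmarked, rather than being reachable only through a session-bound search interface.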
The second is atomization of content and services so that they can be better integrated into diffuse environments of networked devices and applications. Here, think of RSS/AtomPub, mobile interfaces, APIs, alerting services, portlets and widgetization, persistent links to library services and content, and so on. The issues here are both technical and licensing-related. Users increasingly value convenience and relevance, and packaging materials in ways that make most sense for them is not always straightforward.
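As a small illustration of atomization, here is a minimal sketch of how a library might expose, say, new acquisitions as an Atom feed so they can flow into aggregators, widgets, or mobile applications. The feed title, identifier, and record URLs are hypothetical placeholders, and any real service would add authentication and licensing checks where the content requires them.

```python
# Minimal sketch: expose a list of new acquisitions as an Atom 1.0 feed so it
# can be pulled into aggregators, widgets, or mobile apps. Feed metadata and
# item URLs below are hypothetical placeholders.
from datetime import datetime, timezone
from xml.sax.saxutils import escape, quoteattr


def build_atom_feed(feed_title, feed_id, items):
    """items: list of (title, link) tuples; returns an Atom 1.0 document string."""
    updated = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    entries = "".join(
        "  <entry>\n"
        f"    <title>{escape(title)}</title>\n"
        f"    <link href={quoteattr(link)}/>\n"
        f"    <id>{escape(link)}</id>\n"
        f"    <updated>{updated}</updated>\n"
        "  </entry>\n"
        for title, link in items
    )
    return (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<feed xmlns="http://www.w3.org/2005/Atom">\n'
        f"  <title>{escape(feed_title)}</title>\n"
        f"  <id>{escape(feed_id)}</id>\n"
        f"  <updated>{updated}</updated>\n"
        f"{entries}"
        "</feed>\n"
    )


if __name__ == "__main__":
    # Example run with one made-up acquisition.
    print(build_atom_feed(
        "New acquisitions",
        "https://library.example.edu/feeds/new-acquisitions",
        [("Example title", "https://catalog.example.edu/record/b1234567")],
    ))
```

The same content, once atomized like this, can be rendered in the library website, in a campus portal widget, in a user’s feed reader, or on a phone, without any of those consumers needing to know how the underlying systems are organized.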
Of course, the second of these can happen at any of the local, group or global levels, and one of the more interesting organizational questions for libraries in the near future is how much resource to invest at each of these levels.
So, synchronization and syndication become much more important. The institutional website is still important, but a service strategy which focuses on that alone will be increasingly partial.
Related entries to follow …