Search and find (more), or, What can libraries learn from Google?

In my last post I took an admittedly rather cursory look at the ways in which the quantities of data generated in the modern, networked society pose distinct challenges for library and information professionals. In this, my second reflection on the topic of ‘Digital Information Technologies and Architectures’, I want to take the next step and think a bit more about what this means for libraries and other cultural heritage institutions.

The question I want to ask is this: what exactly can libraries (and other cultural heritage institutions, for that matter) learn from Google?

The question might seem a little provocative, I admit. After all, the dichotomy between libraries and web-based search engines (other brands of which are of course available, but note that only Google is included as a verb in the OED) is one you hear talked about a lot, especially when questions of the relevance of library services (usually public or academic) are discussed.

For instance, when former Fox News presenter Greta Van Susteren took to Twitter earlier this month to criticise the building of expensive new library buildings on U.S. college campuses as “vanity projects”, arguing that the same information (and, indeed, much more) could instead be accessed via students’ smartphones, librarians and academics were quick to respond along the familiar lines of the debate. Libraries, they argued, offered their users immensely more in terms of value-added services than the search engines and the web – with all their potential for inadequacy and bias in the results they present – ever could.

As you may be relieved to hear, I don’t intend to delve much deeper into this particular debate in this post. For one thing, as has already been argued by Ned Potter in a piece on this topic, entitled (rather tellingly) ‘For The Last Time, Google Is Not Our Competition in Libraries’, the terms of the debate, which tend to place library services in direct rivalry with Google for the hearts and minds (or at least the reliance and curiosity) of those who seek information, are frequently overblown. In other words, we tend to use search engines and libraries for very different things; you wouldn’t expect, after all, to use a library to find out general information about the weather, or where to buy cheap cinema tickets.

Search engines such as Google are, in some ways, an extension into our everyday contexts – for all of us who live in the networked, data-rich information society – of one of the most basic functions of the library as a memory institution throughout its long history: information retrieval (IR). Along with the other technologies introduced over the past few weeks of my course at #citylis (metadata, relational databases, RDF, linked open data, APIs, etc.), they exist to help make sense of, describe, index, and provide access to data made available over the Internet.

Of course, the ways in which search engines locate and present their results may vary in terms of quality and trustworthiness. This is especially true when it comes to the problematic issues of differences in relevance ranking algorithms and the excessive personalisation of search results. Either of these can lead to a user simply gaining a less-than-accurate impression of the available information or – at the other end of the spectrum – to the formation of what Eli Pariser has termed ‘filter bubbles’: that is, the search engine will give us the results it thinks we want, without exposing us to different or alternative viewpoints. (Whether or not such ‘bubbles’ had anything to do with the outcome of certain recent political events is a question for another time…)

On the other hand, the speed and flexibility with which services like Google can obtain a large number of results of sufficient quality to satisfy most users have meant that, for some, Google has become nearly synonymous with the web. Language use is a case in point: when we say “I’ll just Google it”, do we really mean any more than simply “I’ll look it up (on the Internet)”? The sheer ubiquity of Google as the default search engine in most browsers has only added to this sense of obviousness.

Perhaps there is something to be said, then, for the idea that Google (and its competitors) have taught us how to search for information in a particular way. By allowing the user to enter free-text queries into a single white box, rather than having to construct a series of more complex (and, admittedly, more precise) commands in an SQL-based interface, these services have seemingly cornered the market when it comes to ease and convenience.
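The contrast can be sketched with a toy example (the catalogue schema here is entirely hypothetical, for illustration only): a structured SQL query requires the searcher to know both the schema and the syntax, while a Google-style free-text search accepts a single string and matches it loosely.

```python
import sqlite3

# Build a tiny in-memory catalogue (hypothetical schema, for illustration only)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE catalogue (title TEXT, author TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO catalogue VALUES (?, ?, ?)",
    [("The Filter Bubble", "Eli Pariser", 2011),
     ("Expert Internet Searching", "Phil Bradley", 2013)],
)

# A precise, structured query: the searcher must know field names and syntax
structured = conn.execute(
    "SELECT title FROM catalogue WHERE author = ? AND year > ?",
    ("Eli Pariser", 2010),
).fetchall()

# A single-box free-text search: one string, matched loosely across fields
# (SQLite's LIKE is case-insensitive for ASCII, so "pariser" still matches)
def free_text_search(term):
    pattern = f"%{term}%"
    return conn.execute(
        "SELECT title FROM catalogue WHERE title LIKE ? OR author LIKE ?",
        (pattern, pattern),
    ).fetchall()

print(structured)                   # [('The Filter Bubble',)]
print(free_text_search("pariser"))  # [('The Filter Bubble',)]
```

The free-text version trades precision for convenience: the user need not know that ‘Pariser’ is an author rather than a title word, which is exactly the ease that the single white box delivers.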

This in turn has had an impact on expectations. Indeed, the growing use of ‘discovery systems’ such as Serials Solutions’ Summon and Ex Libris’s Primo in academic library settings may be interpreted as a response to demand from students for access to all the various resources held by the library – eBooks, bibliographic databases, digital collections, online journals, and print holdings – through a single aggregated interface. What is more, the possibilities these systems offer to draw in metadata for resources beyond the library’s own holdings, with links to full-text publications readily available via the technologies that support linked open data (such as stable URLs, Crossref DOIs, and so on), mean that the perception of the library as a reliable gateway to information may be reinforced (Shapiro, 2013).

I have been playing around a lot recently with Europeana, an online collection of cultural heritage data provided by institutions – libraries, museums, galleries, and archives – from around Europe and the world. All of the content on the site is published as linked open data, with good-quality metadata provided for each item; the Europeana API also allows other web services to draw upon the data in the collection for their own purposes. In many ways, it represents my ideal of a digital library and, indeed, of a library discovery system in general. And it leads me to think: if libraries and other members of the GLAM sector are able to contribute to the standards on the web for resource description and indexing (metadata, data curation and networking, and so forth), perhaps what we really should learn from Google is how our users would prefer to search for the information on our systems?
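To give a flavour of how simple the Europeana API is to work with, here is a minimal sketch of building and issuing a search request. The endpoint and parameter names below follow the v2 Search API as I understand it, but should be checked against the current Europeana Labs documentation; an API key (the `wskey` parameter) is required, and `YOUR_KEY` is a placeholder.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Europeana v2 Search API endpoint (check Europeana Labs for the current version)
ENDPOINT = "https://api.europeana.eu/record/v2/search.json"

def build_search_url(query, api_key, rows=5):
    """Assemble a Europeana search URL from a free-text query."""
    params = {"wskey": api_key, "query": query, "rows": rows}
    return f"{ENDPOINT}?{urlencode(params)}"

def search(query, api_key, rows=5):
    """Perform the search and return the parsed JSON response."""
    with urlopen(build_search_url(query, api_key, rows)) as resp:
        return json.load(resp)

# With a real key, search("incunabula", api_key="...") would return linked
# open data records, each with its metadata, ready for reuse by other services.
url = build_search_url("incunabula", api_key="YOUR_KEY")
print(url)
```

Note that the user-facing part of this is still just a single free-text string – the Google lesson again – while the response carries the structured, well-described metadata that the GLAM sector is best placed to provide.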

Further Reading

Bradley, Phil (2013), Expert Internet Searching, 4th ed. (London: Facet)

Shapiro, Steven David (2013), “We are all aggregators (and publishers) now: how discovery tools empower libraries”, Library Hi Tech News, Vol. 30, Iss. 7, pp. 7–9

Europeana Labs – for Europeana APIs and datasets

Author: thulrbaker

Rare books cataloguer and current student in LIS at #citylis, London.

3 thoughts on “Search and find (more), or, What can libraries learn from Google?”

  1. This is an excellent piece of writing David, showing your understanding of the range of concepts which have been introduced over the last few weeks in INM348. Your post demonstrates too, that you have read around the issues, and considered, in your use of Europeana, how the theoretical constructs might apply in practice. The title of your text, ‘What can libraries learn from Google?’ shows you are aware of current thoughts and trends in LIS, and that you are able to make innovative connections between material and discussion in the classroom, and that from the wider, networked world.

    I notice also the carefully thought out design for your blog!

    Well done. 🙂

