
R-E-S-P-E-C-T

The Register has an article about Google's new beta for full-text search results, considered to be roughly analogous to Amazon's "Search Inside the Book" feature. The writer, Andrew Orlowski, goes from describing the potential pitfalls of Google Print, to why Google sucks, to why the Internet sucks.

It's a bit precipitous and, like most articles/op-eds in The Register, wildly confident in its own assumptions and conclusions. And the site's writers often like to take a contrarian view of IT and Internet issues. Nonetheless, Orlowski gives major shout-outs to librarians while puncturing the once and future hype of the Internet as a replacement for libraries:

  Taxonomies also have been proved to have value: archivists can justify a smirk as manual directory projects dmoz floundered - true archivists have a far better sense of meta-data than any computerized system can conjure. If you're in doubt, befriend a librarian, and from the resulting dialog, you'll learn to start asking good questions. Your results, we strongly suspect, will be much more fruitful than any iterative Google searches.   

What's written in the article is little more than what's been bandied about in the library press about how librarians need to fight the perception that they have been, or are being, rendered obsolete by various tools of information access. Orlowski's 'Quantum Theory' is just snarkier than what you usually get in American Libraries. I'm sure many people would disagree with his conclusion that the promise of the Internet (going back to 1994) is just as dubious and suspect as the Internet of 2003 ... there's a certain signal-to-noise ratio that begs at least a small amount of consideration. And that swipe at blogging was pretty unnecessary.

But in the end, he returns to the value of information professionals and the systems they design/build/utilize:

  The disappearance of "the Internet" - in its c.1994 incarnation - isn't too much to worry about. It never really existed, and what we must value is the information archives we have now. If in doubt - ask a librarian, while you can still find one.  

I doubt the worm (or the Zeitgeist) has turned ... libraries and librarians (and other information professionals) will continue to have to defend their collections, their best practices and themselves by any means necessary. But this does, I think, provide a little inspiration ... and optimism.

Comments

I read the article yesterday. In my opinion, the author's shrill tone and inaccurate points took away from the good points he makes. (For example, dmoz.org is volunteer-run, not run by computers. It is also widely used; it powers Google Groups.)

Since information is a human understanding of data, only humans can create and classify information.

I think a huge issue with the style of information on the Internet, and with what people thought it would become, is that aside from the .com bubble exploding in violent sprays of shrapnel, we have also lost the 'free for all' idealism.

Everyone thought the net was free and should stay free, not only in the sources it offers but in the services it brings us too. Many of us have seen services from 1994-1997 suddenly stop being free and start charging.

The issue with charging for information was that it became a necessity: as the Internet became more accessible to Joe Bloggs, it naturally filled with more and more personalised trivia and trash-can material from people simply wanting to voice themselves.

I'm no expert, I've just been using the net since it first came into being, but I do feel that what we are missing is brightly lit pointers to where the good resources are.

There are hundreds of lists and 'archives' of material, most of which are dead, irrelevant, or hopelessly outpaced.

People are given sources to use when searching for information, but as was stated, most of these methods return only the basics of the information required, from totally inadequate trackbacks to amateur works stuffed with tags and headers.

I personally did a Google search the other day for hotels, and would you believe it, the first four pages of results listed the same sites roughly thirty times?! That simply sums up the poor quality of what the general public has to work with.

However, as was also mentioned, do people want the 'real deal' archived material in the first place? The sort of stuff you normally need to go into a library, take a ticket to the librarian, and lug some dusty old volume over to a desk and pore over for hours on end?

Some do; the majority simply don't.

So what do we do? Continue to watch the Internet drown in its own refuse, or simply preserve the real information, in the hope that it will stand atop those that fell before it, and shine through like the leading light?

Time will tell, but so far, it's not looking promising.

Chip -- Did you actually read the article? I quote:

"Archivists can justify a smirk as manual directory projects dmoz floundered - true archivists have a far better sense of meta-data than any computerized system can conjure."

I take that to mean that he is criticising casual archivists in the first part of that sentence and computer-based taxonomies in the second. He certainly isn't saying that dmoz.org is computer-based.

It's Google 'Directory' that it powers, *not* Google Groups.

stavros.