The web is getting old: the end of keyword + reputation based search?

As the years go by, the web is getting older, and this has an increasing impact on search results. More and more pages returned by search engines are badly outdated. Currently, search engines work by finding pages (or links to those pages) that contain the keywords of the user’s query and ranking them according to an estimated reputation of the page.
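
To make the point concrete, here is a toy sketch (in Python, entirely hypothetical, not how any real engine is implemented) of that keyword-plus-reputation scheme. The `Page` class, the `score` function and the example pages are all made up for illustration.

```python
# Toy sketch of keyword + reputation ranking (hypothetical, for illustration only).
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    text: str
    reputation: float  # e.g. a link-based score, precomputed offline

def score(page: Page, query: str) -> float:
    # Fraction of query keywords that appear in the page text.
    keywords = query.lower().split()
    matches = sum(1 for kw in keywords if kw in page.text.lower())
    relevance = matches / len(keywords) if keywords else 0.0
    # Note: nothing here considers when the page was written or last updated.
    return relevance * page.reputation

pages = [
    Page("https://example.org/howto-2004", "compile the driver by hand", 0.9),
    Page("https://example.org/howto-2009", "install the packaged driver", 0.4),
]
results = sorted(pages, key=lambda p: score(p, "install driver"), reverse=True)
print(results[0].url)  # the old but highly reputed page still comes out on top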

However, neither of these two criteria takes into account the age of the page, or more precisely the page’s *current* reputation. Most searches (but not all) seek recent information about a topic. This is particularly true of technology-related searches. It has often happened to me to search for a way to make this or that work on my Linux box and find complex solutions requiring me to modify this or that file, or compile this or that program… These solutions often turned out to be outdated, and further searching (the old way, without necessarily relying on the search engine’s results) revealed that a very simple solution existed: installing a recent package. The same happens when searching for code examples or API references. I often end up with results for version 1.4 of the Java API when the one I am actually using is 1.6, which has often evolved to natively solve the problem I am facing. At the least, such outdated results make searching more time-consuming; more likely, they mislead readers toward outdated solutions.
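
What I would like to see is something along the lines of the following toy variant of the previous sketch, where a page’s reputation is discounted by its age. The exponential decay and the one-year half-life are purely illustrative assumptions on my part, not a claim about how this should actually be tuned.

```python
# Hedged sketch: discount reputation by page age with an exponential decay.
import math
from datetime import date

HALF_LIFE_DAYS = 365.0  # hypothetical: reputation halves every year

def freshness(last_updated: date, today: date) -> float:
    age_days = (today - last_updated).days
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def fresh_score(relevance: float, reputation: float,
                last_updated: date, today: date) -> float:
    # Same keyword relevance as before, but reputation decays with age.
    return relevance * reputation * freshness(last_updated, today)

# A highly reputed page from 2004 vs. a modest page updated a few months ago:
print(fresh_score(0.5, 0.9, date(2004, 6, 1), date(2009, 6, 1)))  # ~0.014
print(fresh_score(1.0, 0.4, date(2009, 3, 1), date(2009, 6, 1)))  # ~0.34
```

With the decay in place, the recently updated page wins even though its raw reputation is lower, which is much closer to what I actually want when I search.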

This leads me to think that the need for a new generation of search engines is growing fast, complemented by the growing maturity of the so-called “semantic web”.
