Adventures of a multidimensional freak

This is Juan Julián Merelo Guervós's English-language blog. He teaches computer science at the University of Granada, in southern Spain. Come back here to read about politics and technology, with a new twist.

    Creative Commons License
    This work is licensed under a Creative Commons License.


    The future of Google: irrelevancy?

    As early as 1999, Lawrence and Giles discovered that only a fraction of the whole WWW was indexed by the search engines. At that time, HotBot had the highest coverage, with barely a third of all web pages covered. Google might have improved the situation a bit, but it's bound to get worse.
    For starters, there are structural reasons. For Google to index a page, somebody must point to it: a standalone page, not linked from any other, will never be found by Google. Web pages change, and Google can't keep up with them. Some pages don't want to be indexed. And some are simply never found, for some other reason.
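The point about unlinked pages follows directly from how a crawler discovers the web: it can only reach what the link graph reaches. Here is a toy sketch (the web, URLs, and `fetch_links` function are all made up for illustration, not Google's actual crawler):

```python
from collections import deque

def crawl(seed_urls, fetch_links):
    """Breadth-first link-following discovery: only pages reachable
    from the seeds by following links are ever seen, hence indexed."""
    seen = set(seed_urls)
    queue = deque(seed_urls)
    discovered = []
    while queue:
        url = queue.popleft()
        discovered.append(url)
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return discovered

# Toy web: page "d" links out but nothing links to it,
# so the crawler never discovers it.
web = {"a": ["b", "c"], "b": ["a"], "c": [], "d": ["a"]}
print(crawl(["a"], lambda u: web.get(u, [])))  # ['a', 'b', 'c'] — "d" stays invisible
```

However clever the ranking, a page outside the reachable component of the link graph simply does not exist as far as the index is concerned.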
    Some pages are probably dropped from the database. Search time for Google is bound to increase with the number of pages indexed, and there will come a moment when the whole database can't be searched in real time (well, in a very small time, anyway). You can add more computers, do stuff in parallel, create specific processors, but that will increase the price of every single search, and it will become economically unfeasible to do so; even more so if search becomes more complex, through an improvement of the algorithm or through additional features. Then Google (or any other search engine, for that matter) will start to "not find" stuff. That's probably not too bad if it finds enough stuff to make you happy, but it will probably not please the merchant or commercial sites which depend on search engines for plying their wares.
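The economic argument above can be put in back-of-the-envelope form. This is a deliberately crude model with invented numbers (the throughput and cost figures are hypothetical, not real Google data): to keep query latency flat as the index grows, you must shard over proportionally more machines, so hardware cost per query grows with index size.

```python
def search_query(pages, machines, pages_per_sec_per_machine=1e6, cost_per_machine=1.0):
    """Toy cost model: latency of one query over a sharded index,
    and the hardware cost of the cluster serving it.
    All parameters are illustrative assumptions."""
    latency = pages / (machines * pages_per_sec_per_machine)
    cost = machines * cost_per_machine
    return latency, cost

# Index doubles: to hold latency constant, machines (and cost) must double too.
lat_small, cost_small = search_query(pages=4e9, machines=1000)
lat_big, cost_big = search_query(pages=8e9, machines=2000)
print(lat_small == lat_big, cost_big / cost_small)  # same latency, 2x the cost
```

A more expensive ranking algorithm only steepens the curve, which is the sense in which exhaustive real-time search of a growing web becomes economically unfeasible.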
    But that's not the only reason. The Internet is now almost synonymous with the web: HTTP traffic accounts for around half of all Internet traffic. But this percentage is also bound to decrease; it might be overtaken by, who knows, P2P traffic or instant messaging traffic. Even if HTTP prevails, XML might come to be used more than HTML, and even if that does not happen, dynamic web pages will surpass static web pages.
    There are so many things that can fail that, in my opinion, searching as a mass business has an expiry date.
    The Economist has a nice article along those lines (via Blogdex). The question is: where will Google be in a few years?

    2003-11-02 03:43 | 0 Comment(s)


    © 2002 - 2008 jmerelo
    Powered by Blogalia