Science has changed with the introduction of search engines: usually for the better (it's easier to find work related to what you're working on), but also in other ways that can't straightforwardly be labeled good or bad.
For instance, one way of keeping up with the state of the art is searching Google Scholar, but the criteria for ranking results are not always clear. Searching for evolutionary computation returns as the first hit a paper by Schwefel, an overview written in 1993. It is an excellent paper, but it has fewer citations than the next result, a book by Fogel, or the third, another edited book. Right now I would consider neither of those the best reference, or even a primer, for evolutionary computation, notwithstanding their citation counts.
Which brings me to the googlization of science: there is no established way of ranking the quality of papers (although citation count is a good enough approximation), which means that, in its absence, paper authors will pursue what search engine optimizers usually pursue: keyword density, keywords and meta-information, papers acting as "link farms", semantic markup for emphasis, and so on. It's easier to trick a machine than to trick a group of people, and if people used to publish just to add a line to their résumé, publishing to increase their papers' PageRank might now also become a concern.
Is a scientific PageRank possible? It might be, and it could help; meanwhile, there's a market niche waiting to be filled (scientific paper ranking optimization), and no doubt someone will fill it.
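For the curious, the idea behind such a ranking is straightforward: treat citations as links and let a paper's score depend on the scores of the papers citing it, as in classic PageRank. Below is a minimal sketch with power iteration over a hypothetical toy citation graph (the paper IDs and the graph itself are made up for illustration):

```python
def pagerank(citations, damping=0.85, iterations=50):
    """citations: dict mapping each paper to the list of papers it cites."""
    papers = list(citations)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        # Every paper keeps a baseline (1 - damping) / n of "free" rank.
        new_rank = {p: (1.0 - damping) / n for p in papers}
        for paper, cited in citations.items():
            if cited:
                # A paper's rank is split evenly among the papers it cites.
                share = damping * rank[paper] / len(cited)
                for c in cited:
                    new_rank[c] += share
            else:
                # Papers citing nothing spread their rank uniformly.
                for p in papers:
                    new_rank[p] += damping * rank[paper] / n
        rank = new_rank
    return rank

# Hypothetical graph: paper D is cited by A, B, and C.
graph = {"A": ["D"], "B": ["D"], "C": ["A", "D"], "D": []}
scores = pagerank(graph)
print(max(scores, key=scores.get))  # prints "D", the most-cited paper
```

A system like this is harder to game than raw term density, since boosting a paper requires citations from papers that are themselves well ranked; it is, of course, still gameable through citation rings, which is exactly the "link farm" problem transposed to science.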