BloJJ

Adventures of a multidimensional freak

This is Juan Julián Merelo Guervós's English-language blog. He teaches computer science at the University of Granada, in southern Spain. Come back here to read about politics and technology, with a new twist.

    Creative Commons License
    This work is licensed under a Creative Commons License.

    The debate about peer-review

Peer review is the system by which, when you submit a paper to a major scientific journal (and to most conferences), the editor selects a few reviewers, who (purportedly) read the paper and write a report on it; based on those reports, the editor decides to accept the paper, reject it, or accept it after it has undergone a series of changes.
As this recent article in the Guardian argues, it's not perfect, but it's the best we've got. Its main advantage is that it keeps kooky, false or misleading papers from being published; most awful papers are also weeded out before the end of the process, and many bad papers don't make it through either.
Having been on both ends of the process quite often (though I've never been an editor, I have assigned papers to referees as part of it, the best method being an evolutionary algorithm), the main problem I see is that the quality of a paper is judged more by proximity of research methods than by the bigger picture of how much the paper matters to the scientific community at large. The way I see it in computer science, small cliques of scientists, all devoted to a single method, examine it down to the last twist, nut and bolt, write papers on it, referee each other, approve grants submitted by the others, and cite each other; sometimes they even publish their own journal, which perpetuates their own parochial scientific subculture. When the same cliques open up to the wider community, their papers are usually rejected for lack of significance or plain shoddy work, which just reinforces their behavior.
That kind of thing could easily be spotted through coauthorship and citation network analysis of any newly submitted paper. How many citations has that line of research received? Where do they come from? In fact, the same methods used to detect whether the target of a link is a spam page can be applied here. Though one needn't go that far: it would be enough to make sure that editors are independent of the research area, or that at least one of the reviewers comes from "outside".
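The idea above can be sketched in a few lines of code. This is a minimal toy illustration, not anything the post describes in detail: it assumes hypothetical author and citation data, and it measures "insularity" simply as the fraction of citations to a submission's line of research that come from the submitting authors' own coauthorship circle (themselves plus their direct coauthors). A real analysis would work on a full citation graph, in the spirit of link-spam detection.

```python
from collections import defaultdict

# Hypothetical toy data: who has coauthored with whom.
coauthors = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"alice"},
    "dave": set(),
    "eve": set(),
}

# Hypothetical authors of papers citing the submission's line of research.
citing_authors = ["alice", "bob", "bob", "carol", "dave"]

def insularity(submitting_authors, coauthors, citing_authors):
    """Fraction of citations coming from the submitters' own circle:
    the submitters themselves plus their direct coauthors."""
    circle = set(submitting_authors)
    for author in submitting_authors:
        circle |= coauthors.get(author, set())
    if not citing_authors:
        return 0.0
    inside = sum(1 for c in citing_authors if c in circle)
    return inside / len(citing_authors)

score = insularity({"alice"}, coauthors, citing_authors)
print(f"insularity: {score:.2f}")  # 4 of 5 citations come from the clique
```

A high score would flag a paper whose supporting citations come almost entirely from the authors' own clique, suggesting the editor should pick at least one reviewer from outside it.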
    That's not going to happen, of course. But one keeps dreaming...

    2009-09-13 10:47 | 0 Comment(s) | Filed in Just_A_Scientist


    © 2002 - 2008 jmerelo
    Powered by Blogalia