Adventures of a multidimensional freak

This is Juan Julián Merelo Guervós's English-language blog. He teaches computer science at the University of Granada, in southern Spain. Come back here to read about politics and technology, with a new twist.

    This work is licensed under a Creative Commons License.


    Paper on Baldwin effect uploaded to Arxiv

    Yesterday, I uploaded to Arxiv a paper on Lamarckian evolution and the Baldwin effect. Here's the abstract:
    Hybrid neuro-evolutionary algorithms may be inspired by Darwinian or Lamarckian evolution. In the case of Darwinian evolution, the Baldwin effect, that is, the progressive incorporation of learned characteristics into the genotype, can be observed and leveraged to improve the search. The purpose of this paper is to carry out an experimental study of how learning can improve G-Prop genetic search. Two ways of combining learning and genetic search are explored: one exploits the Baldwin effect, while the other uses a Lamarckian strategy. Our experiments show that using a Lamarckian operator makes the algorithm find networks with a low error rate and the smallest size, while using the Baldwin effect obtains MLPs with the smallest error rate and a larger size, taking longer to reach a solution. Both approaches obtain a lower average error than other BP-based algorithms such as RPROP, other evolutionary methods, and fuzzy-logic-based methods.

    The paper is rather old; it was published at a Spanish conference with limited diffusion, and since the Baldwin effect is quite important in neuroevolutionary algorithms, I thought it better to make it widely available. As usual, comments and criticism are welcome.
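The distinction between the two strategies the abstract describes can be sketched in a few lines. This is only a toy illustration, not the G-Prop code: `error` stands in for the MLP's training error and `learn` for the local training phase, both hypothetical here.

```python
import random

def error(genome):
    # Stand-in for MLP training error: distance of the weights from zero.
    return sum(g * g for g in genome)

def learn(genome, steps=20):
    # Toy "learning" phase: hill-climb the weights, accepting only improvements.
    best = list(genome)
    for _ in range(steps):
        candidate = [g + random.uniform(-0.1, 0.1) for g in best]
        if error(candidate) < error(best):
            best = candidate
    return best

def evaluate(genome, strategy):
    learned = learn(genome)
    if strategy == "lamarckian":
        # Lamarckian: learned traits are written back into the genotype.
        return error(learned), learned
    # Baldwinian: fitness reflects what learning achieved,
    # but the genotype itself is left untouched.
    return error(learned), list(genome)
```

In a full evolutionary loop the returned genome replaces the individual's; under the Baldwinian scheme only the fitness changes, which is how learned behaviour can gradually shape selection without ever being inherited directly.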

    2006-03-02 13:03 | 0 Comment(s)


    © 2002 - 2008 jmerelo
    Powered by Blogalia