Adventures of a multidimensional freak

This is Juan Julián Merelo Guervós's English-language blog. He teaches computer science at the University of Granada, in southern Spain. Come back here to read about politics and technology, with a new twist.


    This work is licensed under a Creative Commons License.


    Paper on Baldwin effect uploaded to Arxiv

    Yesterday, I uploaded to Arxiv a paper on Lamarckian evolution and the Baldwin effect. Here's the abstract:
    Hybrid neuro-evolutionary algorithms may be inspired by Darwinian or Lamarckian evolution. In the case of Darwinian evolution, the Baldwin effect, that is, the progressive incorporation of learned characteristics into the genotype, can be observed and leveraged to improve the search. The purpose of this paper is to carry out an experimental study of how learning can improve G-Prop genetic search. Two ways of combining learning and genetic search are explored: one exploits the Baldwin effect, while the other uses a Lamarckian strategy. Our experiments show that using a Lamarckian operator makes the algorithm find networks with a low error rate and the smallest size, while using the Baldwin effect obtains MLPs with the smallest error rate and a larger size, taking longer to reach a solution. Both approaches obtain a lower average error than other BP-based algorithms such as RPROP, other evolutionary methods and fuzzy-logic-based methods.
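    For readers unfamiliar with the distinction, the two strategies differ only in what happens after an individual's lifetime learning phase. Here is a minimal Python sketch, a toy hill-climber on real-valued genotypes rather than the paper's G-Prop implementation; the fitness function and all names are illustrative assumptions:

```python
import random

def fitness(x):
    """Toy fitness to maximize: negated sphere function, best at the origin."""
    return -sum(v * v for v in x)

def learn(x, steps=20, rate=0.1):
    """The 'lifetime learning' phase: simple stochastic hill climbing."""
    best = list(x)
    for _ in range(steps):
        cand = [v + random.gauss(0, rate) for v in best]
        if fitness(cand) > fitness(best):  # accept only improvements
            best = cand
    return best

def evaluate(individual, strategy):
    """Evaluate one individual under a Lamarckian or Baldwinian scheme."""
    learned = learn(individual)
    if strategy == "lamarckian":
        # Learned traits are written back into the genotype itself.
        individual[:] = learned
        return fitness(individual)
    elif strategy == "baldwinian":
        # Genotype stays untouched; only the fitness reflects learning,
        # so selection favors genotypes that learn well (Baldwin effect).
        return fitness(learned)
    raise ValueError(f"unknown strategy: {strategy}")
```

    Under the Baldwinian scheme the genotype passed to the next generation is the unlearned one, which is why incorporating learned traits can take longer, while the Lamarckian operator propagates the improved weights directly.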

    The paper is rather old, but it was published in a Spanish conference with limited diffusion; and, since the Baldwin effect is quite an important issue in neuroevolutionary algorithms, I guess it's better to make it widely available. As usual, comments and criticism are welcome.

    2006-03-02 13:03 | 0 Comment(s)


    © 2002 - 2008 jmerelo