Yesterday, I uploaded to arXiv a paper on Lamarckian evolution and the Baldwin effect. Here's the abstract:
Hybrid neuro-evolutionary algorithms may be inspired by Darwinian or Lamarckian evolution. In the case of Darwinian evolution, the Baldwin effect, that is, the progressive incorporation of learned characteristics into the genotypes, can be observed and leveraged to improve the search. The purpose of this paper is to carry out an experimental study into how learning can improve G-Prop genetic search. Two ways of combining learning and genetic search are explored: one exploits the Baldwin effect, while the other uses a Lamarckian strategy. Our experiments show that using a Lamarckian operator makes the algorithm find networks with a low error rate and the smallest size, while using the Baldwin effect yields MLPs with the smallest error rate but a larger size, taking longer to reach a solution. Both approaches obtain a lower average error than other BP-based algorithms such as RPROP, other evolutionary methods, and fuzzy-logic-based methods.
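To make the distinction concrete, here is a minimal sketch of the two strategies on a toy problem. This is not the paper's G-Prop code: the individuals are plain weight vectors, the hypothetical `learn` function (a few gradient steps on a quadratic loss) stands in for backprop training, and `TARGET` is an invented optimum. The only point is where the learned weights end up: a Lamarckian step writes them back into the genotype, while a Baldwinian step only lets learning shape fitness and leaves the genotype untouched.

```python
import random

TARGET = [0.5, -1.2, 2.0]  # hypothetical optimum the "networks" must reach

def loss(w):
    return sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def learn(w, steps=5, lr=0.1):
    """Lifetime learning (stand-in for backprop): a few gradient steps."""
    w = list(w)
    for _ in range(steps):
        w = [wi - lr * 2 * (wi - ti) for wi, ti in zip(w, TARGET)]
    return w

def mutate(w, sigma=0.3):
    return [wi + random.gauss(0, sigma) for wi in w]

def evolve(strategy, pop_size=20, generations=30):
    pop = [[random.uniform(-3, 3) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        scored = []
        for geno in pop:
            trained = learn(geno)        # individual learns during its lifetime
            fitness = -loss(trained)     # fitness is measured AFTER learning
            # Lamarckian: the learned weights replace the genotype.
            # Baldwinian: learning only affects fitness; genotype unchanged.
            survivor = trained if strategy == "lamarckian" else geno
            scored.append((fitness, survivor))
        scored.sort(key=lambda s: s[0], reverse=True)
        parents = [g for _, g in scored[: pop_size // 2]]
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return min(loss(learn(g)) for g in pop)

if __name__ == "__main__":
    random.seed(0)
    print("Lamarckian best loss:", evolve("lamarckian"))
    print("Baldwinian best loss:", evolve("baldwinian"))
```

On this trivial landscape both variants converge; the interesting differences the paper reports (network size, error rate, time to solution) only show up on real MLP training problems.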
The paper is rather old, but it was published at a Spanish conference with limited diffusion; and, since the Baldwin effect is quite an important issue in neuroevolutionary algorithms, I guess it's better to make it widely available. As usual, comments and criticism are welcome.