Interesting idea, that one.
It reminded me of a diversity-guided evolutionary algorithm (a GA) that I implemented some years ago, I think in 2005. Rasmus Ursem, the author, applied a "similar" concept to the optimization process:
"The DGEA applies diversity-decreasing operators (selection and recombination) as long as the diversity is above a certain threshold dlow. When the diversity drops below dlow the DGEA switches to diversity-increasing operators (mutation) until a diversity of dhigh is reached. Hence, phases with exploration and phases with exploitation will occur (see Fig. 2). Theoretically, the DGEA should be able to escape local optima because the operators will force higher diversity regardless of fitness."
Of course, one *must* use elitism; otherwise the algorithm will never converge to an optimum and will just perform a simple random walk. The DGEA is very CPU intensive and, if not implemented well (mainly the distance-to-average measure), it tends to invoke the evolutionary operators many times without yielding a satisfactory result. At least, that was my experience after implementing it. But I do not know whether I got something wrong, since all I had at hand was the paper and Ursem's email address to ask him questions. It was fun to implement that algorithm! :)
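To make the mode-switching idea concrete, here is a minimal sketch in Python/NumPy of one DGEA-style generation. It is my own reconstruction from the quoted description, not Ursem's code: the operator choices (tournament selection, arithmetic crossover, Gaussian mutation), the `sigma` parameter, and all function names are my assumptions; the diversity measure is the distance-to-average-point one mentioned above, normalised by the diagonal of the search space.

```python
import numpy as np

def diversity(pop, diag_length):
    """Distance-to-average-point measure: mean Euclidean distance from
    each individual to the population centroid, normalised by the
    diagonal of the search space (as described in Ursem's paper)."""
    centroid = pop.mean(axis=0)
    dists = np.linalg.norm(pop - centroid, axis=1)
    return dists.mean() / diag_length

def dgea_step(pop, fitness_fn, mode, d_low, d_high, bounds, sigma=0.05):
    """One generation of the mode-switching loop (minimisation).

    Operator choices here are illustrative assumptions:
    - "exploit": tournament selection + arithmetic crossover
      (diversity-decreasing),
    - "explore": Gaussian mutation only (diversity-increasing),
    with elitism so the best individual always survives.
    """
    lo, hi = bounds
    diag = np.linalg.norm(hi - lo)
    d = diversity(pop, diag)

    # Switch phase when diversity crosses the thresholds.
    if mode == "exploit" and d < d_low:
        mode = "explore"
    elif mode == "explore" and d > d_high:
        mode = "exploit"

    fit = np.array([fitness_fn(x) for x in pop])
    elite = pop[fit.argmin()].copy()  # elitism: keep the current best

    n = len(pop)
    if mode == "exploit":
        # Binary tournament selection, then arithmetic crossover.
        idx = np.random.randint(n, size=(n, 2))
        parents = np.where((fit[idx[:, 0]] < fit[idx[:, 1]])[:, None],
                           pop[idx[:, 0]], pop[idx[:, 1]])
        mates = parents[np.random.permutation(n)]
        w = np.random.rand(n, 1)
        pop = w * parents + (1 - w) * mates
    else:
        # Gaussian mutation, scaled to the range of each variable.
        pop = pop + np.random.normal(0.0, sigma * (hi - lo), pop.shape)
        pop = np.clip(pop, lo, hi)

    pop[0] = elite  # reinsert the elite unchanged
    return pop, mode
```

Looping `dgea_step` for a few hundred generations on, say, the sphere function reproduces the alternating exploitation/exploration phases the quote describes; the distance-to-average computation is where most of the CPU time goes, which matches my memory of the implementation being expensive.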
I found a misspelling in Carlos' paper:
"Besides *de* above-referred techniques[...]."
I think he meant "the", not the Portuguese preposition "de".
Hey, JJ, could you please send me a PDF copy of the following paper?
"Comparing evolutionary hybrid systems for design and optimization of multilayer perceptron structure along training parameters."
I would be very grateful if I could read it! My email is the one in the "Correo-e" text box.
Thank you in advance!