Memetic computation is an extension of evolutionary computation that combines memes and genes to produce better solutions. In this article, we discuss what memes are and how they improve evolutionary algorithms, and then learn about Canonical Memetic Algorithms, the most basic form of memetic computation.
As we saw in the previous article, evolutionary algorithms are faster and more practical than exact optimization algorithms. However, they are still slow by modern standards, especially compared to algorithms such as Artificial Neural Networks that produce solutions much more quickly. Moreover, in certain applications, fitness functions can take a very long time to evaluate, rendering evolutionary algorithms impractical. For example, in engineering design, the fitness function may be a simulation run that takes several hours to produce the final score.
Memetic computation tries to solve these issues by incorporating external knowledge to better guide the process of evolution. But how does knowledge make evolutionary algorithms better? We already know that evolutionary algorithms imitate the natural process of evolution to produce fitter individuals, and that genes play an important part in evolution. However, in nature, genes are not the only factor that determines an individual’s fitness. The knowledge an individual acquires also informs their fitness. For instance, genes could allow an individual to run faster than an average person, but by learning the proper sprinting technique, the individual could become faster still.
Richard Dawkins proposed in his book “The Selfish Gene” that knowledge is made up of units of ideas, which he named ‘memes’. These ideas spread within a culture via imitation. Further, like genes, memes undergo natural selection, so over time the fittest ideas survive while the rest disappear. Accordingly, memes determine an individual’s fitness just like genes do.
Memetic computation draws on this concept by utilizing memes to further enhance evolutionary algorithms. Here, the meme, i.e., the knowledge, may come from a human who is an expert in the domain or from solutions to past problems.
This series of articles provides a generic introduction to the world of memetic computation based on the book “Memetic Computation” by Abhishek Gupta and Yew-Soon Ong. My goal is to provide an elementary intuition about memetic computation, so I have left out its mathematical and theoretical intricacies.
To understand this concept better, I will use the same framework employed by the book. First, we will look at Canonical Memetic Algorithms (CMAs), followed by the idea of data-driven meme selection. Next, we shall explore memetic automatons before discussing the possible future use of Artificial Neural Networks (ANNs) as memes.
Canonical Memetic Algorithms
CMAs are the most basic form of memetic computation. They introduce memes in the form of local search algorithms chosen by an expert. Here, the knowledge comes from the expert, who decides which local search algorithm is most appropriate for a given problem. But what is a local search algorithm?
Evolutionary algorithms perform global searches. That is, they explore the whole search space and gradually narrow in on a solution. In contrast, a local search explores only a small part of the search space. CMAs combine global and local search to produce a better net output. Let’s see how they do it.
As we have already seen, in evolutionary algorithms, we generate a population of individuals and calculate their fitness score using a fitness function. A CMA selects a subset of these individuals and subjects them to local search. This local search tries to improve an individual iteratively until it meets a termination condition. In other words, a local search algorithm improves an individual by searching its neighborhood for a fitter individual.
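One simple local search of this kind is hill climbing: repeatedly sample a neighbor of the individual and keep it if it is fitter. The following is a minimal sketch (the function and parameter names are my own, not from the book):

```python
import random

def local_search(x, fitness, step=0.1, max_iters=50):
    """Hill climbing: sample a random neighbor of the current best
    and keep it whenever it is fitter, for a fixed budget of iterations."""
    best, best_fit = x, fitness(x)
    for _ in range(max_iters):
        neighbor = best + random.uniform(-step, step)
        neighbor_fit = fitness(neighbor)
        if neighbor_fit > best_fit:  # keep the fitter neighbor
            best, best_fit = neighbor, neighbor_fit
    return best, best_fit

# Example: maximize f(x) = -(x - 3)^2, whose optimum is at x = 3
f = lambda x: -(x - 3) ** 2
improved, improved_fit = local_search(0.0, f)
```

Starting from x = 0, the returned individual is never worse than the starting point, and typically much closer to the optimum.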
So, in essence, CMA improves some of the individuals in the population by using a carefully selected local search algorithm. When this happens on top of the improvement made to the population through the process of evolution, studies show that we converge on a better solution faster.
CMAs use two different strategies to apply the improvement local search makes to individuals. The first is Lamarckian evolution, which replaces an individual in the population with its improved version and updates its fitness score. The second is the Baldwin effect, which updates the fitness score without replacing the individual. So, in effect, we end up with the same individual but with an improved fitness score. Here, the fitness score indicates not the actual fitness of the individual but its potential fitness.
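The difference between the two strategies can be made concrete with a small sketch (the Individual class and function names here are illustrative, not from the book):

```python
from dataclasses import dataclass

@dataclass
class Individual:
    genome: float
    fitness: float

def lamarckian_update(ind, improved_genome, improved_fitness):
    # Lamarckian evolution: the learned improvement is written
    # back into the genome, and the fitness is updated
    return Individual(improved_genome, improved_fitness)

def baldwinian_update(ind, improved_genome, improved_fitness):
    # Baldwin effect: the genome stays as it was, but the individual
    # is credited with the potential fitness found by local search
    return Individual(ind.genome, improved_fitness)

ind = Individual(genome=0.0, fitness=-9.0)
lam = lamarckian_update(ind, 1.0, -4.0)  # genome 1.0, fitness -4.0
bal = baldwinian_update(ind, 1.0, -4.0)  # genome 0.0, fitness -4.0
```

Both strategies report the improved fitness to the selection step; only Lamarckian evolution actually changes the genome that will be passed on.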
The following is the pseudocode of a typical CMA.

Initialize the population p_g (g denotes the generation; g = 0 at the beginning)
Repeat
    For each individual i in p_g
        Calculate the fitness f_i
        If i is selected for local search
            i_mod, f_i_mod = local_search(i)
            i, f_i = update(i, f_i, i_mod, f_i_mod)  # Lamarckian evolution / Baldwin effect
    Select parents q from p_g based on f_i
    Perform crossover and mutation on q to produce the offspring population o
    Select who will make it to the next generation p_g+1 from o, q, and p_g
Until the termination condition is met
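The pseudocode above can be turned into a small working program. The sketch below maximizes a one-dimensional function with tournament selection, arithmetic crossover, Gaussian mutation, and a hill-climbing local search as the meme; it uses simple generational replacement rather than the full survivor selection in the pseudocode, and all names and parameter values are my own illustrative choices:

```python
import random

def hill_climb(x, fitness, step, iters):
    # Local search: keep sampling neighbors, retain the fitter ones
    best, best_fit = x, fitness(x)
    for _ in range(iters):
        cand = best + random.uniform(-step, step)
        cand_fit = fitness(cand)
        if cand_fit > best_fit:
            best, best_fit = cand, cand_fit
    return best, best_fit

def cma(fitness, pop_size=20, generations=30, ls_fraction=0.2,
        lamarckian=True, step=0.1, ls_iters=20):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(x) for x in pop]
        # Apply the meme (local search) to a subset of individuals
        n_ls = max(1, int(ls_fraction * pop_size))
        for idx in random.sample(range(pop_size), n_ls):
            x_mod, f_mod = hill_climb(pop[idx], fitness, step, ls_iters)
            if f_mod > fits[idx]:
                fits[idx] = f_mod      # Baldwin effect: credit potential fitness
                if lamarckian:
                    pop[idx] = x_mod   # Lamarckian: write improvement to genome
        # Tournament selection of parents based on (updated) fitness
        parents = [max(random.sample(list(zip(pop, fits)), 2),
                       key=lambda t: t[1])[0] for _ in range(pop_size)]
        # Arithmetic crossover plus Gaussian mutation to make offspring
        offspring = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[(i + 1) % pop_size]
            offspring.append(0.5 * (a + b) + random.gauss(0, 0.2))
            offspring.append(0.5 * (a + b) + random.gauss(0, 0.2))
        pop = offspring[:pop_size]  # generational replacement
    fits = [fitness(x) for x in pop]
    return max(zip(pop, fits), key=lambda t: t[1])

# Example: maximize f(x) = -(x - 3)^2
best_x, best_f = cma(lambda x: -(x - 3) ** 2)
```

With this setup, the population typically converges close to the optimum at x = 3 within a few dozen generations; switching lamarckian to False exercises the Baldwin-effect variant instead.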
Even though CMAs generally provide better outcomes, one needs to be careful when employing them, because certain combinations of local search algorithms and evolutionary strategies may produce a worse outcome than a vanilla evolutionary algorithm. For example, a local search algorithm paired with the roulette-wheel selection strategy may produce a worse result. So, one needs to identify the right combination of local search algorithm and evolutionary strategy.
Even though CMAs improve outcomes, one of their major issues is that they require human intervention: we need an expert to select the local search algorithm. This is an obstacle in the pursuit of Artificial General Intelligence (AGI). In subsequent articles, we shall see how to make memetic computation more autonomous.
Memetic computation expands evolutionary computation by using memes to improve the output. A meme is a unit of an idea that undergoes natural selection, just like a gene. A Canonical Memetic Algorithm uses a local search algorithm, chosen by a human expert, to improve the fitness of a subset of individuals in the population. The knowledge of the expert is the meme here. Since this involves human intervention, the algorithm is not autonomous. The upcoming articles will discuss how we can make memetic computation more autonomous.