Evolution of Biologically Inspired Learning in Artificial Neural Networks

Short Summary

In this work, we investigate direct and indirect encoding approaches to Neuroevolution. In the case of direct encoding, we propose limited evaluation and cooperative co-evolution schemes to help scale evolutionary approaches for optimizing artificial neural networks (ANNs) with large numbers of parameters.
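As a rough illustration of the limited evaluation idea, the sketch below scores each candidate weight vector on a small, freshly sampled subset of the data each generation instead of the full dataset. All names and sizes (POP_SIZE, SUBSET, the toy regression task, truncation selection with Gaussian mutation) are illustrative assumptions, not the exact scheme used in the work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: fit a small linear model by evolution (stand-in for an ANN).
X = rng.normal(size=(1000, 8))          # inputs
y = X @ rng.normal(size=8)              # targets
POP_SIZE, N_PARAMS, SUBSET = 50, 8, 64  # subset size << dataset size

def fitness(weights, idx):
    """Negative MSE on a subset of the data (limited evaluation)."""
    pred = X[idx] @ weights
    return -np.mean((pred - y[idx]) ** 2)

population = rng.normal(size=(POP_SIZE, N_PARAMS))
for gen in range(100):
    # Limited evaluation: each generation scores candidates on a small,
    # freshly sampled batch rather than on all available data.
    idx = rng.choice(len(X), size=SUBSET, replace=False)
    scores = np.array([fitness(w, idx) for w in population])
    # Simple GA step: truncation selection plus Gaussian mutation.
    parents = population[np.argsort(scores)[-POP_SIZE // 2:]]
    children = parents + 0.05 * rng.normal(size=parents.shape)
    population = np.concatenate([parents, children])
```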

 

In the case of indirect Neuroevolution, we propose an evolutionary approach for producing plasticity in ANNs to facilitate learning during the networks' lifetimes. We employ genetic algorithms to evolve discrete learning rules that perform synaptic changes locally, based on the pairwise binary activation states of the neurons. Because the evolved rules are defined over all possible pairwise binary activation states, it is easy to interpret how the synaptic changes are performed. We study plasticity in three kinds of learning processes: where a reinforcement signal is available after every action of the network, where it is received only after a certain period of time, and where no reinforcement signal is available at all. We test the evolved plasticity rules on foraging and maze tasks, and show that they are capable of training networks for these tasks.
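To make the local, discrete nature of such rules concrete, here is a minimal sketch: a lookup table maps each pairwise binary activation state (pre, post) to a synaptic change in {-1, 0, +1}, and each weight is updated only from its own pre/post pair, optionally gated by a reinforcement signal. The specific table entries, function names, and layer sizes below are assumptions for illustration (a Hebbian-like rule), not the evolved rules themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete plasticity rule: (pre, post) binary activation pair -> synaptic change.
# The values here are an example Hebbian-like setting, not the evolved rule.
rule = {
    (0, 0):  0,   # neither neuron active -> no change
    (0, 1): -1,   # only post active      -> depress
    (1, 0): -1,   # only pre active       -> depress
    (1, 1): +1,   # both active           -> potentiate
}

def step(x, W, threshold=0.5):
    """Binary forward pass: post-synaptic activations in {0, 1}."""
    return (W @ x > threshold).astype(int)

def apply_rule(W, pre, post, lr=0.1, reward=1.0):
    """Local update: each weight changes only from its own pre/post pair.
    `reward` gates the update when a reinforcement signal is available."""
    delta = np.array([[rule[(p, q)] for p in pre] for q in post], dtype=float)
    return W + lr * reward * delta

# Tiny usage example with hypothetical sizes: 5 inputs -> 3 outputs.
W = rng.normal(scale=0.1, size=(3, 5))
x = rng.integers(0, 2, size=5)          # binary input pattern
y = step(x, W)
W = apply_rule(W, x, y, reward=+1.0)    # reinforce after an action
```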

Keywords:

Neuroevolution, Evolutionary Computing, Hebbian Learning, Evolution of Learning
