Anil Yaman

Distributed Embodied Evolution over Networks

Updated: Dec 23, 2020

Networks of agents, such as sensor and wireless networks, are used for many tasks in environment monitoring and exploration and in Internet of Things (IoT) applications. In most of these cases, the optimal behaviors of the agents in the network are not known before deployment. In addition, agents may be required to adapt, i.e. change their behavior based on the conditions of their environment.


Offline optimization is usually costly and inefficient, whereas distributed optimization approaches may be more suitable. We therefore propose a distributed embodied evolutionary approach to optimize spatially distributed, locally interacting agents, allowing them to exchange their behavior parameters and learn from each other in order to adapt to a given task within their environment.


As illustrated in the figure below, we assume a collection of distributed agents at fixed locations in a 2-dimensional environment. These agents can communicate locally with their neighbors. Each agent has an optimal behavioral setting that depends on its local environment; since neighboring agents share similar environmental conditions, they can be expected to behave similarly. The quality of each agent's behavior is measured locally, by the agent itself computing its own fitness value.
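To make this setup concrete, here is a minimal sketch in Python of a fixed 2D grid of agents where each agent communicates only with its immediate neighbors. The 4-neighbor (von Neumann) topology and the 28x28 grid size are illustrative assumptions, not necessarily the exact configuration used in the paper.

```python
GRID_H, GRID_W = 28, 28  # grid dimensions (matching the MNIST scenario below)

def neighbors(row, col):
    """Grid coordinates of an agent's local (von Neumann) neighbors."""
    candidates = [(row - 1, col), (row + 1, col),
                  (row, col - 1), (row, col + 1)]
    return [(r, c) for r, c in candidates if 0 <= r < GRID_H and 0 <= c < GRID_W]
```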


We employ an embodied evolution (EE) approach that allows agents to exchange their behaviors with neighboring agents, learn from each other, and modify their behaviors [1]. The behavior of each agent is parameterized and represented as a genotype, and agents exchange their genotypes with their neighbors. Each agent may then employ one of the strategies listed below to modify its behavior:



  • HillClimbing: change its own behavior using a random mutation. In this case, there is no communication with the neighboring agents.

  • CopyBest: copy the behavior of the best agent in the neighborhood.

  • CopyRand: copy the behavior of a randomly selected agent in the neighborhood.

  • XoverBest: exchange (crossover) some components of the genotype with the best agent in the neighborhood.

  • XoverRand: exchange (crossover) some components of the genotype with a randomly selected agent in the neighborhood.


Inspired by biological evolution, the mutation operator is applied in all cases to encourage exploration [2]. For instance, instead of copying a neighbor's genotype exactly, we apply some mutation to increase the chance of finding a better parameter setting. The strategies referred to as "Xover-" perform crossover to exchange some components of the agent's genotype with the genotype of another agent.
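A minimal sketch of these update rules is given below. The Gaussian mutation width, the uniform crossover scheme, and the assumption that higher fitness is better are illustrative choices rather than the exact operators of the paper; in practice, the agent would also evaluate the candidate and keep it only if it improves on the current fitness.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(genotype, sigma=0.1):
    # Gaussian mutation, applied in every strategy to encourage exploration.
    return genotype + rng.normal(0.0, sigma, size=genotype.shape)

def crossover(genotype, partner, p=0.5):
    # Uniform crossover: take each component from the partner with probability p.
    mask = rng.random(genotype.shape) < p
    return np.where(mask, partner, genotype)

def update(genotype, neighbor_genotypes, neighbor_fitness, strategy):
    """One update step for a single agent; higher fitness is assumed better."""
    best = neighbor_genotypes[int(np.argmax(neighbor_fitness))]
    rand = neighbor_genotypes[rng.integers(len(neighbor_genotypes))]
    if strategy == "HillClimbing":
        candidate = genotype            # no communication with neighbors
    elif strategy == "CopyBest":
        candidate = best
    elif strategy == "CopyRand":
        candidate = rand
    elif strategy == "XoverBest":
        candidate = crossover(genotype, best)
    elif strategy == "XoverRand":
        candidate = crossover(genotype, rand)
    else:
        raise ValueError(strategy)
    return mutate(candidate)            # mutation is applied in all cases
```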


Experiment Scenarios


Imitation Problem: in this case, we try to learn to imitate 100 randomly selected images from MNIST [3] using a network of agents. The MNIST dataset consists of 28x28 images of handwritten digits. We assign an agent to each cell (so there are 28x28 agents in total), where each agent controls the illumination of its cell for each image i, for a total of 100 images. Therefore, each agent has to learn 100 parameters to correctly illuminate its cell so that, globally, the pattern of a handwritten digit emerges. Below, we demonstrate the optimization process and the error of the whole network in imitating the 100 images (collective fitness) using the strategies listed above.
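As a rough sketch of how the local and collective fitness might be computed in this scenario (the mean-absolute-error measure is an assumption, and random arrays stand in for the actual MNIST pixel values):

```python
import numpy as np

rng = np.random.default_rng(1)
N_IMAGES = 100

# Desired pixel intensities per agent: in the real experiment these come from
# 100 MNIST images; random values stand in for them here.
targets = rng.random((28, 28, N_IMAGES))

# Each agent's genotype: one illumination level per image, i.e. 100 parameters.
genotypes = rng.random((28, 28, N_IMAGES))

def local_fitness(row, col):
    # Negative mean absolute error between an agent's illumination levels
    # and its target pixel values.
    return -np.mean(np.abs(genotypes[row, col] - targets[row, col]))

# Collective fitness: the error of the whole network over all 100 images.
collective_error = np.mean(np.abs(genotypes - targets))
```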





Illumination Problem: similar to the imitation problem, we demonstrate a hypothetical application scenario in which a network of agents aims to learn to illuminate an environment. In this case, the agents may be required to learn optimal illumination settings that depend on the amount of daylight received at each hour throughout the day. Below, we demonstrate this learning process.
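A single agent's objective in this scenario could look like the following sketch; the constant desired brightness, the sinusoidal daylight profile, and the error measure are all hypothetical stand-ins:

```python
import numpy as np

HOURS = 24
desired_level = 1.0  # target brightness, assumed constant over the day

# Hypothetical daylight received at this agent's location at each hour.
daylight = np.clip(np.sin(np.linspace(0.0, np.pi, HOURS)), 0.0, None)

# Genotype: the agent's artificial illumination setting for each hour.
genotype = np.zeros(HOURS)

def fitness(genotype):
    # How closely combined daylight + artificial light tracks the desired level.
    return -np.mean(np.abs(daylight + genotype - desired_level))
```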





Conclusions


We observed that copying from neighbors (best or random) works better than each agent optimizing its behavior on its own, and that exchanging components of the behavior parameters (through crossover) performs best. Selecting a random neighbor, rather than the best one, for the information exchange yielded better performance. We observed these trends even in cases where the differences between neighboring agents' optimal behaviors are large.


Based on: Yaman, A., & Iacca, G. (2020). Distributed embodied evolution over networks. Applied Soft Computing. https://doi.org/10.1016/j.asoc.2020.106993 (preprint: https://arxiv.org/abs/2003.12848)



References


[1] Bredeche, N., Haasdijk, E., & Prieto, A. (2018). Embodied evolution in collective robotics: A review. Frontiers in Robotics and AI, 5, 12.

[2] Eiben, A. E., & Smith, J. E. (2003). Introduction to Evolutionary Computing. Berlin: Springer.

[3] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
