NEW: EvoGAMES Hackathon.

Detailed Programme

Bio-inspired Algorithms in Games

Games, and especially video games, are now a major part of the software and entertainment industries, and an important medium for cultural expression. They also provide an excellent testbed for, and application of, a wide range of computational intelligence methods, including evolutionary computation, neural networks, fuzzy systems, swarm intelligence, and temporal difference learning. Research in this area has grown rapidly over the last few years.

This event focuses on new computational intelligence or biologically inspired techniques that may be of practical value for improving existing games or creating new ones, as well as on innovative uses of games to improve or test computational intelligence algorithms. We expect the derived methods and theories to be applied to newly created or existing games, preferably video games. Papers referring to recent competitions (e.g. TORCS, Super Mario, Pac-Man, StarCraft) are especially welcome. We invite prospective participants to submit full papers following Springer's LNCS guidelines.

Areas of Interest and Contributions

Topics include but are not limited to:


Accepted papers will appear in the EvoStar proceedings, published in a volume of the Springer Lecture Notes in Computer Science, which will be available at the conference. Authors of accepted papers will be asked to revise their manuscripts on the basis of the reviewers' comments and to submit a camera-ready version. At least one author of each accepted paper must register for the conference, attend it, and present the work.

Submission Details

Submissions must be original and not published elsewhere. Each submission will be peer reviewed by at least three members of the program committee. The reviewing process is double-blind, so please omit information about the authors in the submitted paper.

Submit your manuscript in Springer LNCS format.

Please provide up to five keywords in your abstract.

Page limit: 12 pages. Submit via http://myreview.csregistry.org/evoapps14/.


Submission deadline: 11 November 2013 (extended from 1 November 2013)
Notification: 06 January 2014
Camera ready: 01 February 2014
EvoGAMES: 23-25 April 2014


Further information on the conference and co-located events can be found at http://www.evostar.org

Programme Committee


EvoGAMES Programme

Wed 1120-1300  EvoGAMES 1
Chair: Paolo Burrelli

Multi-Criteria Comparison of Coevolution and Temporal Difference Learning on Othello     Wojciech Jaśkowski, Marcin Szubert, Paweł Liskowski
We compare Temporal Difference Learning (TDL) with Coevolutionary Learning (CEL) on Othello. Apart from three popular single-criteria performance measures, namely (i) generalization performance, or expected utility, (ii) average result against a hand-crafted heuristic, and (iii) result in a head-to-head match, we compare the algorithms using performance profiles. This multi-criteria performance measure characterizes a player's performance in the context of opponents of various strengths. The multi-criteria analysis reveals that, although the generalization performance of players produced by the two algorithms is similar, TDL is much better at playing against strong opponents, while CEL copes better with weak ones. We also find that TDL produces less diverse strategies than CEL. Our results confirm the usefulness of performance profiles as a tool for comparing learning algorithms for games.
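The TD(0) update at the core of Temporal Difference Learning can be sketched in a few lines. This is an illustrative toy with a tabular value function over hashable states; it is not the paper's actual Othello setup, where a function approximator would typically replace the table:

```python
# Illustrative TD(0) update on a tabular value function. `values` maps
# states to estimates; each observed transition nudges V(state) toward
# reward + gamma * V(next_state). State names are hypothetical.
def td0_update(values, state, next_state, reward, alpha=0.1, gamma=1.0):
    v_s = values.get(state, 0.0)
    v_next = values.get(next_state, 0.0)
    values[state] = v_s + alpha * (reward + gamma * v_next - v_s)
    return values[state]
```

Repeated self-play transitions drive the estimates toward the expected game outcome; in a board-game setting the same rule updates the parameters of a learned evaluation function.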

Evolving Evil: Optimizing Flocking Strategies through Genetic Algorithms for the Ghost Team in the Game of Ms. Pac-Man    Federico Liberatore, Antonio Mora, Pedro Castillo, Juan Julián Merelo
Flocking strategies are sets of behavior rules for the interaction of agents that make it possible to devise controllers of reduced complexity that generate emergent behavior. In this paper, we present an application of genetic algorithms and flocking strategies to control the Ghost Team in the game Ms. Pac-Man. In particular, we define flocking strategies for the Ghost Team and optimize them by means of a genetic algorithm, both for robustness with respect to the stochastic elements of the game and for effectiveness against different possible opponents. The performance of the proposed methodology is tested and compared with that of other standard controllers. The results show that flocking strategies are capable of modeling complex behaviors and produce effective and challenging agents.
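A genetic algorithm over flocking-rule weights can be sketched as follows. The three weights (cohesion, separation, alignment), the toy fitness function, and all parameter values are assumptions for illustration; in the paper, fitness would come from simulating games against Ms. Pac-Man agents:

```python
import random

# Hypothetical sketch: each ghost controller is a vector of flocking-rule
# weights (cohesion, separation, alignment). The fitness below is a toy
# stand-in for averaged game simulations against Ms. Pac-Man opponents.
TARGET = [0.8, 0.5, 0.2]  # illustrative "good" weight profile

def fitness(weights):
    # negative squared distance to the toy target (higher is better)
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def evolve(pop_size=20, generations=50, sigma=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = [[min(1.0, max(0.0, w + rng.gauss(0, sigma))) for w in p]
                    for p in parents]            # Gaussian mutation
        pop = parents + children                 # elitist replacement
    return max(pop, key=fitness)
```

Because the parents survive each generation, the best fitness in the population never decreases, which matters when the real fitness signal is noisy game simulation.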

Procedural Content Generation Using Patterns as Objectives     Steve Dahlskog, Julian Togelius
In this paper we present a search-based approach to procedural generation of game levels that represents levels as sequences of micro-patterns and searches for meso-patterns. The micro-patterns are "slices" of original human-designed levels from an existing game, whereas the meso-patterns are abstractions of common design patterns seen in the same levels. This method generates levels that are similar in style to the levels from which the original patterns were extracted, while still allowing considerable variation in the geometry of the generated levels. The evolutionary method for generating the levels was tested extensively to investigate the distribution of micro-patterns used and meso-patterns found.
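The patterns-as-objectives idea can be sketched as search over sequences of micro-pattern IDs, with fitness counting meso-pattern occurrences. The pattern lists and the simple hill-climber below are illustrative assumptions; the paper uses an evolutionary method on slices taken from an actual game's levels:

```python
import random

# Toy sketch: a level is a sequence of micro-pattern IDs (vertical level
# "slices"); fitness counts occurrences of desired meso-patterns, i.e.
# short ID subsequences. Both pattern sets are hypothetical.
MESO_PATTERNS = [(0, 1), (1, 2, 1)]

def fitness(level):
    count = 0
    for pat in MESO_PATTERNS:
        for i in range(len(level) - len(pat) + 1):
            if tuple(level[i:i + len(pat)]) == pat:
                count += 1
    return count

def hill_climb(length=20, n_slices=4, steps=500, seed=1):
    rng = random.Random(seed)
    level = [rng.randrange(n_slices) for _ in range(length)]
    for _ in range(steps):
        cand = list(level)
        cand[rng.randrange(length)] = rng.randrange(n_slices)  # mutate one slice
        if fitness(cand) >= fitness(level):                    # accept no-worse
            level = cand
    return level
```

Because every slice comes from human-designed levels, even a simple search like this preserves local style while the objective pushes the global structure toward the desired meso-patterns.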

Wed 1430-1610  EvoGAMES 2
Chair: Antonio M Mora Garcia

Micro and Macro Lemmings simulations based on ants colonies     Antonio Gonzalez-Pardo, Fernando Palero, David Camacho
Ant Colony Optimization (ACO) has been successfully applied to a wide range of complex, real-world domains. From classical optimization problems to video games, this kind of swarm-based approach has been adapted to search for new meta-heuristic-based solutions. This paper presents a simple ACO algorithm that uses a specifically designed heuristic, called common-sense, applied to the classical video game Lemmings. In this game, a set of lemmings must reach the exit point of each level using a finite set of skills, taking into account the contextual information given by the level. The paper describes both the graph model and the context-based heuristic designed to implement our ACO approach. Two different kinds of simulation are then carried out to analyse the behaviour of the ACO algorithm: a micro simulation, in which each ant models a single lemming, and a macro simulation, in which a swarm of lemmings is represented by a single ant. Using both kinds of simulation, a complete experimental comparison, based on the number and quality of solutions found and on the levels solved, is carried out to study the behaviour of the algorithm under different game configurations.
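A minimal ACO loop of the kind described, with pheromone-biased path construction, evaporation, and deposit, might look as follows. The toy graph, parameter values, and the pheromone-only choice rule (standing in for the paper's common-sense heuristic) are all assumptions:

```python
import random

# Minimal ACO sketch on a toy level graph (node -> list of neighbours).
# Ants build paths biased by pheromone; shorter complete paths deposit
# more pheromone. The paper's common-sense heuristic is replaced here by
# pheromone-only choice for brevity.
def aco_path(graph, start, goal, n_ants=10, n_iters=30, rho=0.5, q=1.0, seed=0):
    rng = random.Random(seed)
    pher = {(u, v): 1.0 for u in graph for v in graph[u]}
    best = None
    for _ in range(n_iters):
        completed = []
        for _ in range(n_ants):
            node, path, visited = start, [start], {start}
            while node != goal:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:          # dead end: abandon this ant
                    path = None
                    break
                weights = [pher[(node, v)] for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path is not None:
                completed.append(path)
                if best is None or len(path) < len(best):
                    best = path
        for edge in pher:                # evaporation
            pher[edge] *= (1 - rho)
        for path in completed:           # deposit, stronger for short paths
            for u, v in zip(path, path[1:]):
                pher[(u, v)] += q / len(path)
    return best
```

In the micro simulation each lemming would run this construction step itself; in the macro simulation a single ant would stand for the whole swarm, sharing one pheromone table.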

Fast Evolutionary Adaptation for Monte Carlo Tree Search     Simon Lucas, Spyridon Samothrakis, Diego Perez
This paper describes a new adaptive Monte Carlo Tree Search (MCTS) algorithm that uses evolution to rapidly optimise its performance. An evolutionary algorithm is used as a source of control parameters to modify the behaviour of each iteration (i.e. each simulation or roll-out) of the MCTS algorithm; in this paper we largely restrict this to modifying the behaviour of the random default policy, though it can also be applied to modify the tree policy. This method of tightly integrating evolution into the MCTS algorithm means that evolutionary adaptation occurs on a much faster time-scale than has previously been achieved, and addresses a particular problem with MCTS which frequently occurs in real-time video and control problems: that uniform random roll-outs may be uninformative. Results are presented on the classic mountain car reinforcement learning benchmark and also on a cut-down version of Space Invaders. The results clearly demonstrate the value of the approach, significantly outperforming "standard" MCTS in each case. Furthermore, the adaptation is almost immediate, with no perceptual delay as the system learns: the agent frequently performs well from its very first game.
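The idea of evolving the roll-out policy can be illustrated with a (1+1)-ES over policy weights. Everything below is a hedged toy: the "reward" is the closed-form expected pay-off of a two-action biased policy, standing in for actual MCTS roll-outs, and all names and parameters are assumptions:

```python
import random

# Illustrative sketch (not the paper's implementation) of adapting MCTS
# roll-out policy parameters with a (1+1)-ES. In this toy, action 1 is
# the better action, and the reward is the expected probability of
# choosing it under weights-proportional action selection.
def rollout_reward(weights):
    return weights[1] / (weights[0] + weights[1])

def adapt_rollout_policy(iterations=200, sigma=0.2, seed=0):
    rng = random.Random(seed)
    weights = [1.0, 1.0]                 # start from a uniform random policy
    best = rollout_reward(weights)
    for _ in range(iterations):
        cand = [max(1e-3, w + rng.gauss(0, sigma)) for w in weights]
        r = rollout_reward(cand)
        if r >= best:                    # keep the candidate if no worse
            weights, best = cand, r
    return weights
```

Because one candidate is evaluated per iteration, this kind of adaptation can run inside the MCTS loop itself, which is what makes the time-scale so much faster than evolving whole controllers between games.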

Automatic Virtual Cinematography: a Dynamic Multi-Objective Optimisation Perspective     Paolo Burelli, Mike Preuss
Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene (e.g. a game replay) into a series of camera configurations. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot for a sample game replay and we analyse the results.
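The multi-objective view can be illustrated with a standard Pareto-dominance filter over candidate camera configurations. The objective tuples below (all minimised) are hypothetical stand-ins for the paper's actual objectives, such as visibility and frame-composition error:

```python
# Toy sketch of the multi-objective formulation: each camera configuration
# is scored on several objectives to be minimised, and the non-dominated
# configurations form the Pareto front. Objective values are illustrative.
def dominates(q, p):
    """q dominates p if it is no worse in every objective and better in one."""
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A single-objective approach would collapse these scores into one weighted sum; keeping the front instead preserves the trade-offs between shots, which is the richer expressiveness the abstract refers to.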