Evostar 2019


The Leading European Event on Bio-Inspired Computation. Leipzig, Germany. 24-26 April 2019.

Call for papers:

EvoMUSART

8th International Conference on Computational Intelligence in Music, Sound, Art and Design.

April 2019, Leipzig, Germany
Part of evo* 2019
evo*: http://www.evostar.org

EvoMUSART has been published in Springer Lecture Notes in Computer Science (LNCS) since 2003

About EvoMUSART

Following the success of previous events and the importance of the field of computational intelligence, specifically evolutionary and biologically inspired (artificial neural network, swarm, alife) music, sound, art and design, evoMUSART has been an evo* conference with independent proceedings since 2012. EvoMUSART 2019 is thus the eighth International Conference on Computational Intelligence in Music, Sound, Art and Design.

The use of Computational Intelligence for the development of artistic systems is a recent, exciting and significant area of research. There is a growing interest in the application of these techniques in fields such as: visual art and music generation, analysis, and interpretation; sound synthesis; architecture; video; poetry; design; and other creative tasks.

The main goal of evoMUSART 2019 is to bring together researchers who are using Computational Intelligence techniques for artistic tasks, providing the opportunity to promote, present and discuss ongoing work in the area.

The event will be held in April, 2019 in Leipzig, Germany, as part of the evo* event.

Topics of interest

Submissions should concern the use of Computational Intelligence techniques (e.g. Evolutionary Computation, Artificial Life, Machine Learning, Swarm Intelligence) in the generation, analysis and interpretation of art, music, design, architecture and other artistic fields. Topics of interest include, but are not limited to:
Generation
  • Systems that create drawings, images, animations, sculptures, poetry, text, designs, webpages, buildings, etc.;
  • Systems that create musical pieces, sounds, instruments, voices, sound effects, sound analysis, etc.;
  • Systems that create artifacts such as game content, architecture or furniture based on aesthetic and functional criteria;
  • Robotic-Based Evolutionary Art and Music;
  • Other related artificial intelligence or generative techniques in the fields of Computer Music, Computer Art, etc.;
Theory
  • Computational Aesthetics, Experimental Aesthetics; Emotional Response, Surprise, Novelty;
  • Representation techniques;
  • Surveys of the current state-of-the-art in the area; identification of weaknesses and strengths; comparative analysis and classification;
  • Validation methodologies;
  • Studies on the applicability of these techniques to related areas;
  • New models designed to promote the creative potential of biologically inspired computation;
Computer-Aided Creativity and Computational Creativity
  • Systems in which computational intelligence is used to promote the creativity of a human user;
  • New ways of integrating the user in the evolutionary cycle;
  • Analysis and evaluation of: the artistic potential of biologically inspired art and music; the artistic processes inherent to these approaches; the resulting artefacts;
  • Collaborative distributed artificial art environments;
Automation
  • Techniques for automatic fitness assignment;
  • Systems in which an analysis or interpretation of the artworks is used in conjunction with computational intelligence techniques to produce novel objects;
  • Systems that resort to computational intelligence approaches to perform the analysis of image, music, sound, sculpture, or some other types of artistic object or resource.

EvoMUSART Index

For the 20th anniversary of the evo* conference, a website was made available with information on all evoMUSART papers since 2003.

The idea is to bring together all the publications in a handy web page that allows visitors to navigate through all papers, best papers, authors, keywords, and years of the conference, while providing quick access to the corresponding Springer pages. Feel free to browse, search and bookmark: http://evomusart-index.dei.uc.pt/.

Publication Details

Submissions will be rigorously reviewed for scientific and artistic merit. Accepted papers will be presented orally or as posters at the event and included in the evo* proceedings, published by Springer in a dedicated volume of the Lecture Notes in Computer Science series. The acceptance rate at evoMUSART 2018 was 39% for papers accepted for oral presentation, and 26% for poster presentation.

Submitters are strongly encouraged to provide in all papers a link for download of media demonstrating their results, whether music, images, video, or other media types. Links should be anonymised for double-blind review, e.g. using a URL shortening service.

There are two types of presentation:
  • Long talk (20 minutes + 5 min questions). Authors can optionally bring a poster to present at the poster session.
  • Short talk (10 minutes, no questions). Authors MUST also bring a poster to present at the poster session.
Authors will be notified in advance of the type of presentation (short/long).

Additional information and submission details

Submit your manuscript, at most 16 A4 pages long, in Springer LNCS format (instructions downloadable from http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0)
Submission link: https://myreview.saclay.inria.fr/evomusart19
Page limit: 16 pages
The reviewing process will be double-blind; please omit information about the authors in the submitted paper.

EvoMUSART Conference chairs

Anikó Ekart
Aston University, UK
a.ekart(at)aston.ac.uk

Antonios Liapis
Institute of Digital Games, University of Malta
antonios.liapis(at)um.edu.mt

Publication Chair:
Luz Castro,
Universidade da Coruña, Spain,
maria.luz.castro(at)udc.gal

Programme Committee

  • Peter Bentley, University College London, UK
  • Tim Blackwell, Goldsmiths College, University of London, UK
  • Andrew Brown, Griffith University, Australia
  • Adrian Carballal, University of A Coruña, Spain
  • Amilcar Cardoso, University of Coimbra, Portugal
  • Peter Cariani, University of Binghamton, USA
  • Vic Ciesielski, RMIT, Australia
  • Kate Compton, University of California Santa Cruz, USA
  • João Correia, University of Coimbra, Portugal
  • Palle Dahlstedt, Göteborg University, Sweden
  • Hans Dehlinger, Independent Artist, Germany
  • Eelco den Heijer, Vrije Universiteit Amsterdam, Netherlands
  • Alan Dorin, Monash University, Australia
  • Arne Eigenfeldt, Simon Fraser University, Canada
  • José Fornari, NICS/Unicamp, Brazil
  • Marcelo Freitas Caetano, IRCAM, France
  • Philip Galanter, Texas A&M College of Architecture, USA
  • Andrew Gildfind, Google Inc., Australia
  • Scot Gresham Lancaster, University of Texas, Dallas, USA
  • Carlos Grilo, Instituto Politécnico de Leiria, Portugal
  • Andrew Horner, University of Science & Technology, Hong Kong
  • Colin Johnson, University of Kent, UK
  • Daniel Jones, Goldsmiths College, University of London, UK
  • Amy K. Hoover, University of Central Florida, USA
  • Maximos Kaliakatsos-Papakostas, Department of Music, Aristotle University of Thessaloniki, Greece
  • Matthew Lewis, Ohio State University, USA
  • Alain Lioret, Paris 8 University, France
  • Louis Philippe Lopes, Institute of Digital Games, University of Malta, Malta
  • Roisin Loughran, University College Dublin, Ireland
  • Penousal Machado, University of Coimbra, Portugal
  • Tiago Martins, University of Coimbra, Portugal
  • Jon McCormack, Monash University, Australia
  • Eduardo Miranda, University of Plymouth, UK
  • Nicolas Monmarché, University of Tours, France
  • Marcos Nadal, Universitat de les Illes Balears, Spain
  • Michael O'Neill, University College Dublin, Ireland
  • Somnuk Phon-Amnuaisuk, Brunei Institute of Technology, Malaysia
  • Douglas Repetto, Columbia University, USA
  • Juan Romero, University of A Coruña, Spain
  • Brian Ross, Brock University, Canada
  • Jonathan E. Rowe, University of Birmingham, UK
  • Antonino Santos, University of A Coruña, Spain
  • Marco Scirea, IT University of Copenhagen, Denmark
  • Benjamin Smith, Indianapolis University, Purdue University, USA
  • Stephen Todd, IBM, UK
  • Paulo Urbano, Universidade de Lisboa, Portugal
  • Anna Ursyn, University of Northern Colorado, USA

Accepted paper abstracts

Long Presentations

Deep Learning Concepts for Evolutionary Art
Fazle Tanjil, Brian J. Ross
A deep convolutional neural network (CNN) trained on millions of images forms a very high-level abstract overview of an image. Our primary goal is to use this high-level content information of a given target image to guide the automatic evolution of images using genetic programming. We investigate the use of a pre-trained deep CNN model as a fitness guide for evolution. Two different approaches are considered. Firstly, we developed a heuristic technique called Mean Minimum Matrix Strategy (MMMS) for determining the most suitable high-level CNN nodes to be used for fitness evaluation. This pre-evolution strategy determines the common high-level CNN nodes that show high activation values for a family of images that share an image feature of interest. Using MMMS, experiments show that GP can evolve procedural texture images that likewise have the same high-level feature. Secondly, we use the highest-level fully connected classifier layers of the deep CNN. Here, the user supplies a high-level classification label such as “peacock” or “banana”, and GP tries to evolve an image that maximizes the classification score for that target label. Experiments evolved images that often achieved high confidence scores for the supplied labels. However, the images themselves usually display some key aspect of the target required for CNN classification, rather than the entire subject matter expected by humans. We conclude that deep learning concepts show much potential as a tool for evolutionary art, and future results will improve as deep CNN models are better understood.
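
The second approach above amounts to evolving an image under a classifier-confidence fitness function. As a rough orientation only (this is not the authors' GP system; `label_confidence` is a hypothetical stand-in for a pre-trained CNN's softmax score), a minimal fitness-guided loop could look like this:

```python
import numpy as np

def label_confidence(image: np.ndarray, label: str) -> float:
    """Hypothetical stand-in for a pre-trained CNN's softmax score for `label`;
    a real system would query e.g. an ImageNet classifier here."""
    # Toy surrogate so the loop has something to optimise: reward mid-grey images.
    return float(1.0 - np.mean(np.abs(image - 0.5)))

def evolve_image(label: str, size=(64, 64, 3), steps=2000, rate=0.05, seed=0):
    rng = np.random.default_rng(seed)
    image = rng.random(size)                                # random initial individual
    best = label_confidence(image, label)
    for _ in range(steps):
        candidate = np.clip(image + rate * rng.normal(size=size), 0.0, 1.0)  # mutate
        score = label_confidence(candidate, label)
        if score > best:                                    # (1+1)-style selection
            image, best = candidate, score
    return image, best

if __name__ == "__main__":
    img, confidence = evolve_image("peacock")
    print(f"final confidence: {confidence:.3f}")
```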

Adversarial evolution and deep learning – How does an artist play with our visual system?
Alan Blair
We create artworks using adversarial coevolution between a genetic program (HERCL) generator and a deep convolutional neural network (LeNet) critic. The resulting artificially intelligent artist, whimsically named Hercule LeNet, aims to produce images of low algorithmic complexity which nevertheless resemble a set of real photographs well enough to fool an adversarially trained deep learning critic modeled on the human visual system. Although it is not exposed to any pre-existing art, or asked to mimic the style of any human artist, nevertheless it discovers for itself many of the stylistic features associated with influential art movements of the 19th and 20th Century. A detailed analysis of its work can help us to better understand the way an artist plays with the human visual system to produce aesthetically appealing images.

Autonomy, Authenticity, Authorship and Intention in computer generated art
Jon McCormack, Toby Gifford, Patrick Hutchings
This paper examines five key questions surrounding computer generated art. Driven by the recent public auction of a work of "AI Art" we selectively summarise many decades of research and commentary around topics of autonomy, authenticity, authorship and intention in computer generated art, and use this research to answer contemporary questions often asked about art made by computers that concern these topics. We additionally reflect on whether current techniques in deep learning and Generative Adversarial Networks significantly change the answers provided by many decades of prior research.

Camera Obscurer: Generative Art for Design Inspiration
Dilpreet Singh, Nina Rajcic, Simon Colton, Jon McCormack
We investigate using generated decorative art as a source of inspiration for design tasks. Using a visual similarity search for image retrieval, the Camera Obscurer app enables rapid searching of tens of thousands of generated abstract images of various types. The seed for a visual similarity search is a given image, and the retrieved generated images share some visual similarity with the seed. Implemented in a hand-held device, the app empowers users to use photos of their surroundings to search through the archive of generated images and other image archives. Being abstract in nature, the retrieved images supplement the seed image rather than replace it, providing different visual stimuli including shapes, colours, textures and juxtapositions, in addition to affording their own interpretations. This approach can therefore be used to provide inspiration for a design task, with the abstract images suggesting new ideas that might give direction to a graphic design project. We describe a crowdsourcing experiment with the app to estimate user confidence in retrieved images, and we describe a pilot study where Camera Obscurer provided inspiration for a design task. These experiments have enabled us to describe future improvements, and to begin to understand sources of visual inspiration for design tasks.
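
Visual similarity search of this kind usually reduces to nearest-neighbour retrieval over pre-computed image feature vectors. The sketch below illustrates that general idea with cosine similarity over random placeholder features; it is an assumption about the technique, not the Camera Obscurer implementation:

```python
import numpy as np

def cosine_similarity(query: np.ndarray, archive: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and each row of `archive`."""
    q = query / np.linalg.norm(query)
    a = archive / np.linalg.norm(archive, axis=1, keepdims=True)
    return a @ q

def retrieve(seed_features: np.ndarray, archive_features: np.ndarray, k: int = 5):
    """Return indices of the k archive images most similar to the seed image."""
    scores = cosine_similarity(seed_features, archive_features)
    return np.argsort(scores)[::-1][:k]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    archive = rng.normal(size=(10_000, 128))   # placeholder feature vectors of generated images
    seed = rng.normal(size=128)                # features of the user's photo
    print(retrieve(seed, archive, k=5))
```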

Swarm-based identification of animation key points from 2D-medialness maps
Prashant Aparajeya, Frederic Fol Leymarie, Mohammad Majid al-Rifaie
In this article we present the use of dispersive flies optimisation (DFO) for swarms of particles active on a medialness map -- a 2D field representation of shape informed by perception studies. Optimising swarm activity permits the efficient identification of shape-based keypoints to automatically annotate movement and is capable of producing meaningful qualitative descriptions for animation applications. When taken together as a set, these keypoints represent the full body pose of a character in each processed frame. In addition, such keypoints can be used to embody the notion of the Line of Action (LoA), a well-known classic technique from the Disney studios used to capture the overall pose of a character to be fleshed out. Keypoints along a medialness ridge are local peaks which are efficiently localised using DFO-driven swarms. DFO is tuned so that it does not need to scan every image pixel and tends to converge at these peaks. A series of experimental trials on different animation characters in movement sequences confirms the promising performance of the optimiser over a simpler, currently-in-use brute-force approach.
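
For readers unfamiliar with DFO, the sketch below shows the usual ingredients of the algorithm (attraction to the best ring-neighbour and to the swarm best, plus a per-dimension random-restart "disturbance") applied to a placeholder 2D map standing in for a medialness map. The update rule and parameters follow the general DFO literature as understood here, not this paper:

```python
import numpy as np

def medialness(p: np.ndarray) -> float:
    """Placeholder 2D map with two peaks, standing in for a real medialness map."""
    peaks = np.array([[20.0, 30.0], [70.0, 60.0]])
    return float(sum(np.exp(-np.sum((p - c) ** 2) / 50.0) for c in peaks))

def dfo(n_flies=30, dims=2, bounds=(0.0, 100.0), iters=200, delta=0.001, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_flies, dims))
    for _ in range(iters):
        fitness = np.array([medialness(p) for p in x])
        s = int(np.argmax(fitness))                               # swarm best
        for i in range(n_flies):
            if i == s:
                continue                                          # keep the best fly (elitism)
            left, right = (i - 1) % n_flies, (i + 1) % n_flies
            n = left if fitness[left] > fitness[right] else right # best ring neighbour
            for d in range(dims):
                if rng.random() < delta:                          # disturbance: random restart
                    x[i, d] = rng.uniform(lo, hi)
                else:
                    x[i, d] = x[n, d] + rng.random() * (x[s, d] - x[i, d])
            x[i] = np.clip(x[i], lo, hi)
    return x[int(np.argmax([medialness(p) for p in x]))]

if __name__ == "__main__":
    print("best fly position:", dfo())
```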

Paintings, Polygons and Plant Propagation
Misha Paauw, Daan van den Berg
It is possible to approximate artistic images from a limited number of stacked semi-transparent colored polygons. To match the target image as closely as possible, the locations of the vertices, the drawing order of the polygons and the RGBA color values must be optimized for the entire set at once. Because of the vast combinatorial space, the relatively simple constraints and the well-defined objective function, these optimization problems appear to be well suited for nature-inspired optimization algorithms. In this pioneering study, we start off with sets of randomized polygons and try to find optimal arrangements for several well-known paintings using three iterative optimization algorithms: stochastic hillclimbing, simulated annealing and the plant propagation algorithm. We discuss the performance of the algorithms, relate the found objective values to the polygonal invariants and supply a challenge to the community.
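
A bare-bones variant of this search, using stochastic hillclimbing only, a synthetic gradient as a stand-in target image, and Pillow for polygon rendering, might look as follows (illustrative assumptions throughout, not the authors' code):

```python
import numpy as np
from PIL import Image, ImageDraw

W, H, N_POLY, N_VERT = 64, 64, 50, 4
rng = np.random.default_rng(0)

def random_polygon():
    verts = rng.integers(0, (W, H), size=(N_VERT, 2))
    colour = rng.integers(0, 256, size=4)                    # RGBA, alpha included
    return verts, colour

def render(polygons):
    """Stack semi-transparent polygons onto a white canvas, in drawing order."""
    canvas = Image.new("RGBA", (W, H), (255, 255, 255, 255))
    for verts, colour in polygons:
        layer = Image.new("RGBA", (W, H), (0, 0, 0, 0))
        ImageDraw.Draw(layer).polygon([(int(x), int(y)) for x, y in verts],
                                      fill=tuple(int(c) for c in colour))
        canvas = Image.alpha_composite(canvas, layer)
    return np.asarray(canvas.convert("RGB"), dtype=float)

def error(polygons, target):
    return float(np.mean((render(polygons) - target) ** 2))

# Placeholder target: a horizontal gradient (a real run would load a painting).
target = np.tile(np.linspace(0, 255, W), (H, 1))[..., None].repeat(3, axis=2)

polygons = [random_polygon() for _ in range(N_POLY)]
best = error(polygons, target)
for step in range(2000):
    i = int(rng.integers(N_POLY))
    saved = polygons[i]
    polygons[i] = random_polygon()                           # mutate one polygon
    e = error(polygons, target)
    if e < best:
        best = e                                             # keep the improvement
    else:
        polygons[i] = saved                                  # revert otherwise
print("final mean squared error:", best)
```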

Evolutionary Games for Audiovisual Works: exploring the Demographic Prisoner's Dilemma
Stefano Kalonaris
This paper presents a minimalist audiovisual display of an evolutionary game known as the Demographic Prisoner's Dilemma, in which cooperation emerges as an evolutionary stable behaviour. Abiding by a dialogical approach foregrounding the dynamical negotiation of the author's aesthetic aspirational levels, the cross-space mapping between the formal model and the audiovisual work is explored, and the system undergoes several variations and modifications. Questions regarding computational measures of beauty are raised and discussed.
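
For context, in the Demographic Prisoner's Dilemma agents with fixed strategies live on a grid, play the one-shot game with neighbours, accumulate wealth, die when wealth drops below a threshold, and reproduce into empty neighbouring cells. The sketch below is a generic, simplified formulation with illustrative payoffs and parameters, not the audiovisual system described in the paper:

```python
import random

SIZE, STEPS = 20, 200
PAYOFF = {("C", "C"): 2, ("C", "D"): -3, ("D", "C"): 4, ("D", "D"): -1}
BIRTH_WEALTH, DEATH_WEALTH, START_WEALTH = 10, 0, 6

random.seed(0)
# Each cell is None (empty) or an agent with a fixed strategy and current wealth.
grid = {(x, y): ({"s": random.choice("CD"), "w": START_WEALTH}
                 if random.random() < 0.5 else None)
        for x in range(SIZE) for y in range(SIZE)}

def neighbours(x, y):
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

for _ in range(STEPS):
    for pos, agent in list(grid.items()):
        if agent is None:
            continue
        cells = neighbours(*pos)
        partners = [p for p in cells if grid[p] is not None]
        if partners:                                         # play one random neighbour
            other = grid[random.choice(partners)]
            agent["w"] += PAYOFF[(agent["s"], other["s"])]
        if agent["w"] <= DEATH_WEALTH:                       # death
            grid[pos] = None
            continue
        empty = [p for p in cells if grid[p] is None]
        if agent["w"] >= BIRTH_WEALTH and empty:             # reproduction; strategy inherited
            grid[random.choice(empty)] = {"s": agent["s"], "w": START_WEALTH}
            agent["w"] -= START_WEALTH

alive = [a for a in grid.values() if a is not None]
print("cooperators:", sum(a["s"] == "C" for a in alive), "of", len(alive))
```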

Emojinating: Evolving Emoji Blends
João M. Cunha, Nuno Lourenço, João Correia, Pedro Martins, Penousal Machado
Graphic designers visually represent concepts in several of their daily tasks, such as in icon design. Computational systems can be of help in such tasks by stimulating creativity. However, current computational approaches to concept visual representation lack effectiveness in promoting the exploration of the space of possible solutions. In this paper, we present an evolutionary approach that combines a standard Evolutionary Algorithm with a method inspired by Estimation of Distribution Algorithms to evolve emoji blends to represent user-introduced concepts. The quality of the developed approach is assessed using two separate user-studies. In comparison to previous approaches, our evolutionary system is able to better explore the search space, obtaining solutions of higher quality in terms of concept representativeness.

Comparing Models for Harmony Prediction in an Interactive Audio Looper
Benedikte Wallace, Charles P. Martin
Musicians often use tools such as loop-pedals and multitrack recorders to assist in improvisation and songwriting, but these tools generally don't proactively contribute aspects of the musical performance. In this work, we introduce an interactive audio looper that predicts a loop's harmony and constructs an accompaniment automatically using concatenative synthesis. The system uses a machine learning (ML) model for harmony prediction, that is, it generates a sequence of chord symbols for a given melody. We analyse the performance of two potential ML models for this task: a hidden Markov model (HMM) and a recurrent neural network (RNN) with bidirectional long short-term memory (BLSTM) cells. Our findings show that the RNN approach provides more accurate predictions and is more robust with respect to changes in the training data. We consider the impact of each model's predictions in live performance and ask: "What is an accurate chord prediction anyway?"
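
For orientation, a melody-to-chords model of the BLSTM kind can be set up in a few lines of Keras. The vocabulary sizes, layer widths, and integer encoding below are illustrative assumptions, not the configuration evaluated in the paper:

```python
import numpy as np
import tensorflow as tf

N_PITCHES, N_CHORDS, SEQ_LEN = 49, 25, 32   # illustrative vocabulary and window sizes

# Toy data: each timestep holds one integer-encoded melody note and one chord label.
rng = np.random.default_rng(0)
melodies = rng.integers(0, N_PITCHES, size=(256, SEQ_LEN))
chords = rng.integers(0, N_CHORDS, size=(256, SEQ_LEN))

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(N_PITCHES, 32),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Dense(N_CHORDS, activation="softmax"),  # one chord symbol per timestep
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(melodies, chords, epochs=2, batch_size=32, verbose=0)

# Predict a chord sequence for one new melody.
predicted = model.predict(melodies[:1]).argmax(axis=-1)
print(predicted)
```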

Stochastic Synthesizer Patch Exploration in Edisyn
Sean Luke
Edisyn is a music synthesizer program (or "patch") editor library which enables musicians to easily edit and manipulate a variety of difficult-to-program synthesizers. Edisyn sports a first-in-class set of tools designed to help explore the parameterized space of synthesizer patches without needing to directly edit the parameters. This paper discusses the most sophisticated of these tools, Edisyn's Hill-Climber and Constrictor methods, which are based on interactive evolutionary computation techniques. The paper discusses the special difficulties encountered in programming synthesizers, the motivation behind these techniques, and their design. It then evaluates them in an experiment with novice synthesizer users, and concludes with additional observations regarding utility and efficacy.

Evolutionary Multi-Objective Training Set Selection of Data Instances and Augmentations for Vocal Detection
Igor Vatolkin, Daniel Stoller
The size of publicly available music data sets has grown significantly in recent years, which allows training better classification models. However, training on large data sets is time-intensive and cumbersome, and some training instances might be unrepresentative and thus hurt classification performance regardless of the used model. On the other hand, it is often beneficial to extend the original training data with augmentations, but only if they are carefully chosen. Therefore, identifying a "smart" selection of training instances should improve performance. In this paper, we introduce a novel, multi-objective framework for training set selection with the target to simultaneously minimise the number of training instances and the classification error. Experimentally, we apply our method to vocal activity detection on a multi-track database extended with various audio augmentations for accompaniment and vocals. Results show that our approach is very effective at reducing classification error on a separate validation set, and that the resulting training set selections either reduce classification error or require only a small fraction of training instances for comparable performance.
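
In outline, multi-objective training-set selection searches over binary masks of training instances and keeps the non-dominated trade-offs between subset size and validation error. The sketch below uses a stub error estimate and plain random-flip mutation; it is not the authors' framework or operators:

```python
import numpy as np

N_INSTANCES, GENERATIONS, POP = 200, 50, 30
rng = np.random.default_rng(0)

def validation_error(mask: np.ndarray) -> float:
    """Stub for 'train a classifier on the selected instances and measure
    its error on a validation set'."""
    if mask.sum() == 0:
        return 1.0
    # Toy surrogate: error shrinks with more instances, plus a little noise.
    return float(1.0 / np.sqrt(mask.sum()) + 0.01 * rng.random())

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

population = [rng.random(N_INSTANCES) < 0.5 for _ in range(POP)]
for _ in range(GENERATIONS):
    children = []
    for mask in population:                              # mutate by flipping a few bits
        child = mask.copy()
        flips = rng.integers(0, N_INSTANCES, size=5)
        child[flips] = ~child[flips]
        children.append(child)
    candidates = population + children
    objectives = [(int(m.sum()), validation_error(m)) for m in candidates]
    # Keep only non-dominated masks (the current Pareto front), capped at POP.
    front = [m for m, o in zip(candidates, objectives)
             if not any(dominates(p, o) for p in objectives if p is not o)]
    population = front[:POP]

for mask in population[:5]:
    print(int(mask.sum()), "instances -> error", round(validation_error(mask), 3))
```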

Short (Poster) presentations

Automatic Jazz Melody Composition through a Learning-based Genetic Algorithm
Yong-Wook Nam, Yong-Hyuk Kim
In this study, we automate the production of good-quality jazz melodies through genetic algorithm and pattern learning by preserving the musically important properties. Unlike previous automatic composition studies that use fixed-length chromosomes to express a bar in a score, we use a variable-length chromosome and geometric crossover to accommodate the variable length. Pattern learning uses the musical instrument digital interface data containing the jazz melody; a user can additionally learn about the melody pattern by scoring the generated melody. The pattern of the music is stored in a chord table that contains the harmonic elements of the melody. In addition, a sequence table preserves the flow and rhythmic elements. In the evaluation function, the two tables are used to calculate the fitness of a given excerpt. We use this estimated fitness and geometric crossover to improve the music until users are satisfied. Through this, we successfully create a jazz melody as per user preference and training data.
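
To illustrate what a variable-length melody chromosome looks like in practice, the sketch below uses a simple cut-and-splice crossover (not the geometric crossover defined in the paper) and a stub chord-tone fitness standing in for the learned chord and sequence tables:

```python
import random
random.seed(0)

# A melody chromosome: a variable-length list of (midi_pitch, duration_in_beats).
def random_melody(max_len=12):
    return [(random.randint(48, 72), random.choice([0.5, 1.0, 2.0]))
            for _ in range(random.randint(4, max_len))]

def cut_and_splice(parent_a, parent_b):
    """Variable-length crossover: children may differ in length from both parents."""
    i = random.randint(1, len(parent_a) - 1)
    j = random.randint(1, len(parent_b) - 1)
    return parent_a[:i] + parent_b[j:], parent_b[:j] + parent_a[i:]

def fitness(melody, chord_tones={0, 4, 7}):
    """Stub fitness: fraction of notes on chord tones of a C major triad.
    A learned chord table and sequence table would replace this in a real system."""
    return sum((pitch % 12) in chord_tones for pitch, _ in melody) / len(melody)

population = [random_melody() for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    child_a, child_b = cut_and_splice(population[0], population[1])
    population[-2:] = [child_a, child_b]                 # replace the two worst
print("best fitness:", round(fitness(max(population, key=fitness)), 2))
```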

EvoChef: Show me What to Cook! Artificial Evolution of Culinary Arts
Hajira Jabeen, Nargis Tahara, Jens Lehmann
Computational Intelligence (CI) has proven its artistry in the creation of music, graphics, and drawings. EvoChef demonstrates the creativity of CI in the artificial evolution of culinary arts. EvoChef takes input from well-rated recipes of different cuisines and evolves new recipes by recombining the instructions, spices, and ingredients. Each recipe is represented as a property graph containing ingredients, their status, spices, and cooking instructions. These recipes are evolved using recombination and mutation operators. Expert opinion (user ratings) has been used as the fitness function for the evolved recipes. It was observed that the overall fitness of the recipes improved with the number of generations and almost all the resulting recipes were found to be conceptually correct. We also conducted a blind comparison of the original recipes with the EvoChef recipes, and EvoChef was rated to be more innovative. To the best of our knowledge, EvoChef is the first semi-automated, open-source, and valid recipe generator that creates easy-to-follow and novel recipes.

Exploring Transfer Functions in Evolved CTRNNs for Music Generation
Steffan Ianigro, Oliver Bown
This paper expands on prior research into the generation of audio through the evolution of Continuous Time Recurrent Neural Networks (CTRNNs). CTRNNs are a type of recurrent neural network that can be used to model dynamical systems and can exhibit many different characteristics that can be used for music creation such as the generation of non-linear audio signals which unfold with a level of generative agency or unpredictability. Furthermore, their compact structure makes them ideal for use as an evolvable genotype for musical search as a finite set of CTRNN parameters can be manipulated to discover a vast audio search space. In prior research, we have successfully evolved CTRNNs to generate timbral and melodic content that can be used for electronic music composition. However, although the initial adopted CTRNN algorithm produced oscillations similar to some conventional synthesis algorithms and timbres reminiscent of acoustic instruments, it was hard to find configurations that produced the timbral and temporal richness we expected. Within this paper, we look into modifying the currently used tanh transfer function by modulating it with a sine function to further enhance the idiosyncratic characteristics of CTRNNs. We explore to what degree they can aid musicians in the search for unique sounds and performative dynamics in which some creative control is given to a CTRNN agent. We aim to measure the difference between the two transfer functions by discovering two populations of CTRNNs using a novelty search evolutionary algorithm, each utilising a different transfer function. The effect that each transfer function has on the respective novelty of each CTRNN population is compared using quantitative analysis as well as through a compositional study.
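
As background, a CTRNN state can be integrated with a simple Euler step, and the transfer function is the natural place to intervene. The sketch below contrasts a plain tanh transfer with a sine-modulated variant; the exact modulation, parameter ranges, and audio mapping are illustrative assumptions rather than the paper's formulation:

```python
import numpy as np

def make_transfer(kind="tanh"):
    if kind == "tanh":
        return np.tanh
    # Illustrative sine-modulated transfer; the paper's exact formulation may differ.
    return lambda x: np.tanh(x) * np.sin(3.0 * x)

def ctrnn_audio(n_neurons=4, seconds=1.0, sample_rate=8000, kind="tanh", seed=0):
    """Euler-integrate tau_i * dy_i/dt = -y_i + sum_j w_ji * f(y_j + theta_j)
    and return one neuron's output as an audio-rate signal in [-1, 1]."""
    rng = np.random.default_rng(seed)
    f = make_transfer(kind)
    w = rng.normal(0.0, 2.0, size=(n_neurons, n_neurons))    # weights (part of the genotype)
    theta = rng.normal(0.0, 1.0, size=n_neurons)              # biases
    tau = rng.uniform(0.01, 0.1, size=n_neurons)               # time constants
    y = rng.normal(0.0, 0.1, size=n_neurons)                   # initial state
    dt, n_samples = 1.0 / sample_rate, int(seconds * sample_rate)
    out = np.empty(n_samples)
    for t in range(n_samples):
        y += dt / tau * (-y + w @ f(y + theta))                # no external input term
        out[t] = f(y[0] + theta[0])                             # neuron 0 as the audio output
    return out

signal_tanh = ctrnn_audio(kind="tanh")
signal_mod = ctrnn_audio(kind="sine")
print(signal_tanh[:5], signal_mod[:5])
```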

Tired of choosing? Just add structure and Virtual Reality
Edward Easton, Ulysses Bernardet, Aniko Ekart
Interactive Evolutionary Computation (IEC) systems often suffer from users only performing a small number of generations, a phenomenon known as user fatigue. This is one of the main hindrances to these systems generating complex and aesthetically pleasing pieces of art. This paper presents two novel approaches to addressing the issue by improving user engagement, firstly through using Virtual Environments and secondly improving the predictability of the generated images using a well-defined structure and giving the user more control. To establish their efficacy, the concepts are applied to a series of prototype systems. Our results show that the approaches are effective to some degree. We propose alterations to further improve their implementation in future systems.

Automatically Generating Engaging Presentation Slide Decks
Thomas Winters, Kory W. Mathewson
Talented public speakers have thousands of hours of practice. One means of improving public speaking skills is practice through improvisation, e.g. presenting an improvised presentation using an unseen slide deck. We present TEDRIC, a novel system capable of generating coherent slide decks based on a single topic suggestion. It combines semantic word webs with text and image data sources to create an engaging slide deck with an overarching theme. We found that audience members perceived the quality of improvised presentations using these generated slide decks to be on par with presentations using human created slide decks for the Improvised TED Talk performance format. TEDRIC is thus a valuable new creative tool for improvisers to perform with, and for anyone looking to improve their presentation skills.

Important dates:

Submission Deadline: 1 November 2018
EXTENDED SUBMISSION DEADLINE: 12 November 2018
Notification: 14 January 2019
Camera-ready: 11 February 2019
Mandatory registration per paper: 28 February 2019
Early registration deadline: 15 March 2019
Late Breaking Abstracts: 10 April 2019
Registration deadline: 17 April 2019
EvoStar dates: 24-26 April 2019
