Evostar 2018


The Leading European Event on Bio-Inspired Computation. Parma, Italy. 4-6 April 2018.

Call for papers:

EvoMUSART

7th International Conference on Computational Intelligence in Music, Sound, Art and Design.

April 2018, Parma, Italy
Part of evo* 2018
evo*: http://www.evostar.org

EvoMUSART has been published in Springer's Lecture Notes in Computer Science (LNCS) series since 2003

News

  • Accepted paper abstracts
  • This year some papers from evoMUSART will be selected for a new Springer book titled “The Handbook of Artificial Intelligence and the Arts”, edited by Juan Romero, Penousal Machado and Gary Greenfield.

About EvoMUSART

Following the success of previous events, and given the importance of computational intelligence, specifically evolutionary and biologically inspired techniques (artificial neural networks, swarm intelligence, artificial life), in music, sound, art and design, evoMUSART has been an evo* conference with independent proceedings since 2012. EvoMUSART 2018 is thus the seventh International Conference on Computational Intelligence in Music, Sound, Art and Design.

The use of Computational Intelligence for the development of artistic systems is a recent, exciting and significant area of research. There is growing interest in the application of these techniques in fields such as visual art and music generation, analysis, and interpretation; sound synthesis; architecture; video; poetry; design; and other creative tasks.

The main goal of evoMUSART 2018 is to bring together researchers who are using Computational Intelligence techniques for artistic tasks, providing the opportunity to promote, present and discuss ongoing work in the area.

The event will be held in April 2018 in Parma, Italy, as part of the evo* event.

Topics of interest

Submissions should concern the use of Computational Intelligence techniques (e.g. Evolutionary Computation, Artificial Life, Machine Learning, Swarm Intelligence) in the generation, analysis and interpretation of art, music, design, architecture and other artistic fields. Topics of interest include, but are not limited to:
Generation
  • Systems that create drawings, images, animations, sculptures, poetry, text, designs, webpages, buildings, etc.;
  • Systems that create musical pieces, sounds, instruments, voices, sound effects, sound analysis, etc.;
  • Systems that create artifacts such as game content, architecture or furniture based on aesthetic and functional criteria;
  • Robotic-Based Evolutionary Art and Music;
  • Other related artificial intelligence or generative techniques in the fields of Computer Music, Computer Art, etc.;
Theory
  • Computational Aesthetics, Experimental Aesthetics; Emotional Response, Surprise, Novelty;
  • Representation techniques;
  • Surveys of the current state-of-the-art in the area; identification of weaknesses and strengths; comparative analysis and classification;
  • Validation methodologies;
  • Studies on the applicability of these techniques to related areas;
  • New models designed to promote the creative potential of biologically inspired computation;
Computer-Aided Creativity and Computational Creativity
  • Systems in which computational intelligence is used to promote the creativity of a human user;
  • New ways of integrating the user in the evolutionary cycle;
  • Analysis and evaluation of: the artistic potential of biologically inspired art and music; the artistic processes inherent to these approaches; the resulting artefacts;
  • Collaborative distributed artificial art environments;
Automation
  • Techniques for automatic fitness assignment;
  • Systems in which an analysis or interpretation of the artworks is used in conjunction with computational intelligence techniques to produce novel objects;
  • Systems that resort to computational intelligence approaches to perform the analysis of images, music, sound, sculpture, or other types of artistic objects or resources.

EvoMUSART Index

For the 20th anniversary of the evo* conference, a website was made available with information on all evoMUSART papers since 2003.

The idea is to bring together all the publications in a handy web page that allows visitors to navigate through all papers, best papers, authors, keywords, and years of the conference, while providing quick access to the corresponding Springer web pages. Feel free to browse, search and bookmark: http://evomusart-index.dei.uc.pt/.

Publication Details

Submissions will be rigorously reviewed for scientific and artistic merit. Accepted papers will be presented orally or as posters at the event and included in the evostar proceedings, published by Springer Verlag in a dedicated volume of the Lecture Notes in Computer Science series. The acceptance rate at evoMUSART 2017 was 41% for papers accepted for oral presentation, and 41% for poster presentation.

Submitters are strongly encouraged to include in their papers a link to downloadable media demonstrating their results, whether music, images, video, or other media types. Links should be anonymised for double-blind review, e.g. using a URL shortening service.

There are two types of presentation:
  • Long talk (20 minutes + 5 min questions). Authors can optionally bring a poster to present at the poster session.
  • Short talk (10 minutes, no questions). Authors MUST also bring a poster to present at the poster session.
Authors will be notified in advance of the type of presentation (short/long).

Additional information and submission details

Submit your manuscript, at most 16 A4 pages long, in Springer LNCS format (instructions downloadable from http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0).
Submission link: https://myreview.saclay.inria.fr/evomusart18
Page limit: 16 pages
The reviewing process will be double-blind; please omit information about the authors in the submitted paper.

EvoMUSART Conference chairs

Juan Romero
University of A Coruña
jj(at)udc.es

Antonios Liapis
Institute of Digital Games, University of Malta
antonios.liapis(at)um.edu.mt

Publication Chair:
Aniko Ekárt
Aston University
a.ekart(at)aston.ac.uk

Programme Committee

  • Mauro Annunziato, ENEA, Italy
  • Dan Ashlock, University of Guelph, Canada
  • Peter Bentley, University College London, UK
  • Eleonora Bilotta, University of Calabria, Italy
  • Tim Blackwell, Goldsmiths College, University of London, UK
  • Adrian Carballal, University of A Coruña, Spain
  • Amilcar Cardoso, University of Coimbra, Portugal
  • Peter Cariani, University of Binghamton, USA
  • Vic Ciesielski, RMIT, Australia
  • John Collomosse, University of Surrey, UK
  • Kate Compton, University of California Santa Cruz, USA
  • João Correia, University of Coimbra, Portugal
  • Pedro Cruz, Northeastern University, USA
  • Palle Dahlstedt, Göteborg University, Sweden
  • Hans Dehlinger, Independent Artist, Germany
  • Alan Dorin, Monash University, Australia
  • Arne Eigenfeldt, Simon Fraser University, Canada
  • José Fornari, NICS/Unicamp, Brazil
  • Marcelo Freitas Caetano, IRCAM, France
  • Philip Galanter, Texas A&M College of Architecture, USA
  • Pablo Gervás, Complutense University of Madrid, Spain
  • Andrew Gildfind, Google, Inc., Australia
  • Gary Greenfield, University of Richmond, USA
  • Scot Gresham-Lancaster, University of Texas at Dallas, USA
  • Carlos Grilo, Instituto Politécnico de Leiria, Portugal
  • Andrew Horner, Hong Kong University of Science & Technology, Hong Kong
  • Takashi Ikegami, The University of Tokyo, Japan
  • Christian Jacob, University of Calgary, Canada
  • Colin Johnson, University of Kent, UK
  • Daniel Jones, Goldsmiths College, University of London, UK
  • Anna Jordanous, University of Kent, UK
  • Amy K. Hoover, University of Central Florida, USA
  • Maximos Kaliakatsos-Papakostas, Department of Music, Aristotle University of Thessaloniki, Greece
  • Matthew Lewis, Ohio State University, USA
  • Alain Lioret, Université Paris 8, France
  • Louis Philippe Lopes, Institute of Digital Games, University of Malta, Malta
  • Roisin Loughran, University College Dublin, Ireland
  • Penousal Machado, University of Coimbra, Portugal
  • Roger Malina, University of Texas at Dallas, USA
  • Bill Manaris, College of Charleston, USA
  • Tiago Martins, University of Coimbra, Portugal
  • Jon McCormack, Monash University, Australia
  • Eduardo Miranda, University of Plymouth, UK
  • Nicolas Monmarché, University of Tours, France
  • Marcos Nadal, University of Vienna, Austria
  • Michael O'Neill, University College Dublin, Ireland
  • Philippe Pasquier, Simon Fraser University, Canada
  • Alejandro Pazos, Universidade da Coruña, Spain
  • Somnuk Phon-Amnuaisuk, Brunei Institute of Technology, Malaysia
  • Brian Ross, Brock University, Canada
  • Jonathan E. Rowe, University of Birmingham, UK
  • Antonino Santos, University of A Coruña, Spain
  • Marco Scirea, IT University of Copenhagen, Denmark
  • Daniel Silva, University of Coimbra, Portugal
  • Benjamin Smith, Indiana University-Purdue University Indianapolis, USA
  • Stephen Todd, IBM, UK
  • Paulo Urbano, Universidade de Lisboa, Portugal
  • Anna Ursyn, University of Northern Colorado, USA
  • Dan Ventura, Brigham Young University, USA
  • Patrick Janssen, National University of Singapore, Singapore

Accepted paper abstracts

Title: Generative Solid Modelling Employing Natural Language Understanding and 3D Data
Authors: Marinos Koutsomichalis, Björn Gambäck
Abstract: The paper describes an experimental system for generating 3D-printable models inspired by arbitrary textual input. Utilizing a transliteration pipeline, the system pivots on Natural Language Understanding technologies and 3D data available via online repositories to result in a bag of retrieved 3D models that are then concatenated in order to produce original designs. Such artefacts celebrate a post-digital kind of objecthood, as they are concretely physical while, at the same time, incorporating the cybernetic encodings of their own making. Twelve individuals were asked to reflect on some of the 3D-printed, physical artefacts. Their responses suggest that the created artefacts succeed in triggering imagination, and in accelerating moods and narratives of various sorts.

Title: Musical Organisms: a generative approach to growing musical scores
Authors: Anna Lindemann, Eric Lindemann
Abstract: In this paper, we describe the creation of Musical Organisms using a novel generative music composition approach modeled on biological evolutionary and developmental (Evo Devo) processes. We are interested in using the Evo Devo processes that generate biological organisms with diverse and interesting structures—structures that exhibit modularity, repetition, and hierarchy—in order to create diverse music compositions that exhibit these same structural properties. The current focus of our work has been on Musical Organism development. Our Musical Organisms are musical scores that develop from a single musical note, just as a biological organism develops from a single cell. We describe the musical genome and the non-linear dynamical processes that drive the development of the Musical Organism from single note to complex musical score. While the evolution of Musical Organisms has not been our central focus, we describe how evolution can act upon genomic variation within populations of Musical Organisms to create new Musical Organism species with diverse and complex structures. And we introduce the concept of genome embedding as a unique method for generating genetic variation in a population, and developing structural modularity in Musical Organisms.

Title: Medical art therapy of the future: building an interactive virtual underwater world in a children's hospital
Authors: Ludivine Lechat, Lieven Menschaert, Tom De Smedt, Lucas Nijs, Monica Dhar, Koen Norga, Jaan Toelen
Abstract: We are developing an interactive virtual underwater world with the aim of reducing stress and boredom in hospitalised children and improving their quality of life, by employing an evidence-based design process and techniques from Artificial Life and Human-Computer Interaction. A 3D motion sensing camera tracks the activity of children in front of a wall projection. As they wave their hands, colorful sea creatures paddle closer to say hi and interact with the children.

Title: Dynamical Music with Musical Boolean Networks
Authors: George Gabriel, Susan Stepney
Abstract: An extended Boolean network model is investigated as a possible medium in which a human composer can write music. A Boolean network is a simple discrete-time dynamical system whose state is characterised by the states of its constituent Boolean-valued vertices. The evolution of the system is predetermined by an initial state and the properties of the activation functions associated with each vertex. By associating musical events with the states of the system, its trajectory from a particular start state can be interpreted as a piece of tonal music. The primary source of interest in composing music using a deterministic dynamical system is the dependence of the musical result on the initial conditions. This paper explores the possibility of producing musically interesting variations on a given melodic phrase by changing the initial conditions from which the generating dynamical system is started.

Title: Construction of a repertoire of analog Form-finding techniques as a basis for computational morphological exploration in design and architecture
Authors: Ever Patiño, Jorge Maya
Abstract: The article describes the process of constructing a repertoire of analog form-finding techniques, which can be used in evolutionary computation to (i) compare the techniques among themselves and select the most suitable for a project, (ii) explore forms or shapes in an analogue and/or manual way, (iii) serve as a basis for the development of algorithms in specialized software, or (iv) understand the physical processes and mathematical procedures of the techniques. To our knowledge no one has built a repertoire of this nature, since the techniques are scattered across sources from diverse disciplines. Methodologically, the construction process was based on a systematic review of the literature, allowing us to identify 33 techniques where the principles of bio-inspiration and self-organization, both characteristic of form-finding strategies, are evident. As a result, we present the repertoire structure, composed of six groups of techniques sharing similar physical processes: inflate, group, de-construct, stress, solidify and fold. Subsequently, the repertoire’s conceptual, mathematical, and graphical analysis categories are presented. Finally, conclusions on potential applications and research trends of the subject are presented.

Title: evoExplore: Multiscale Visualization of Evolutionary Histories in Virtual Reality
Authors: Justin Kelly, Christian Jacob
Abstract: evoExplore is a system built for virtual reality (VR) and designed to assist evolutionary design projects. Built with the Unity 3D game engine and designed with future development and expansion in mind, evoExplore allows the user to review and visualize data collected from evolutionary design experiments. Expanding upon existing work, evoExplore provides the tools needed to breed your own evolving populations of designs, save the results from such evolutionary experiments and then visualize the recorded data as an interactive VR experience. evoExplore allows the user to dynamically explore their own evolutionary experiments, as well as those produced by other users. In this document we describe the features of evoExplore, its use of virtual reality and how it supports future development and expansion.

Title: Adaptive interface for mapping body movements to sounds
Authors: Dimitrije Markovic, Nebojsa Malesevic
Abstract: Contemporary digital musical instruments allow an abundance of means to generate sound. Although superior to traditional instruments in terms of producing a unique audio-visual act, there is still an unmet need for digital instruments that allow performers to generate sounds through movements in an intuitive manner. One of the key factors for an authentic digital music act is a low latency between movements (user commands) and corresponding sounds. Here we present such a low-latency interface that maps the user’s kinematic actions into sound samples. The interface relies on wireless sensor nodes equipped with inertial measurement units and a real-time algorithm dedicated to the early detection and classification of a variety of movements/gestures performed by a user. The core algorithm is based on the approximate inference of a hierarchical generative model with piecewise-linear dynamical components. Importantly, the model’s structure is derived from a set of motion gestures. The performance of the Bayesian algorithm was compared against the k-nearest neighbors (k-NN) algorithm, which, in a pre-testing phase, showed the highest classification accuracy among several existing state-of-the-art algorithms. In almost all of the evaluation metrics the proposed probabilistic algorithm outperformed the k-NN algorithm.

Title: Towards Partially Automatic Search of Edge Bundling Parameters
Authors: Evgheni Polisciuc, Filipe Assunção, Penousal Machado
Abstract: Edge bundling methods are used in flow maps and graphs to reduce the visual clutter generated when representing complex and heterogeneous data. Nowadays, there are many edge bundling algorithms that have been successfully applied to a wide range of problems in graph representation. However, the majority of these methods are still difficult for experts from other areas to use and apply to real-world problems. This is due to the complexity of the algorithms and concepts behind them, as well as a strong dependence on their parametrization. In addition, the majority of edge bundling methods need to be fine-tuned when applied to different datasets. This paper presents a new approach that helps find near-optimal parameters for edge bundling algorithms, regardless of the configuration of the input graph. Our method is based on evolutionary computation, allowing users to find edge bundling solutions for their needs. In order to understand the effectiveness of the evolutionary algorithm in such tasks, we performed experiments with automatic fitness functions, as well as with partially user-guided evolution. We tested our approach on the optimization of the parameters of two different edge bundling algorithms. Results are compared using objective criteria, together with a critical discussion of the obtained graphical solutions.

Title: Non-photorealistic Rendering with Cartesian Genetic Programming using Graphics Processing Units
Authors: Illya Bakurov, Brian Ross
Abstract: A non-photorealistic rendering system implemented with Cartesian genetic programming (CGP) is discussed. The system is based on Baniasadi's NPR system using tree-based GP. The CGP implementation uses a more economical representation of rendering expressions compared to the tree-based system, and borrows Baniasadi's many-objective fitness evaluation scheme, which uses a model of aesthetics, colour testing, and image matching. GPU acceleration of the paint stroke application results in rendering times up to 6 times faster than CPU-based renderings. The convergence dynamics of CGP's mu+lambda evolutionary strategy were more unstable than conventional GP runs with large populations. One possible reason may be the sensitivity of the smaller mu+lambda population to the many-objective ranking scheme, especially when objectives conflict with each other. This instability is arguably an advantage as an exploratory tool, especially when considering the subjectivity inherent in evolutionary art.

Title: Expressive Piano Music Playing Using a Kalman Filter
Authors: Alexandra Bonnici, Maria Mifsud, Kenneth Camilleri
Abstract: In this paper, we present an algorithm that uses the Kalman filter to combine simple phrase structure models with observed differences in pitch within the phrase, refining the phrase model and hence adjusting the loudness and tempo qualities of the melody line. We show how similar adjustments may be made to the accompaniment to introduce expressive attributes to a MIDI file representation of a score. In the paper, we show that subjects had some difficulty in distinguishing between the resulting expressive renderings and human performances of the same score.

Title: Generating drums rhythms through data-driven conceptual blending of features and genetic algorithms
Author: Maximos Kaliakatsos-Papakostas
Abstract: Conceptual blending allows the emergence of new conceptual spaces by blending two input spaces. Using conceptual blending for inventing new concepts has proven a promising technique for computational creativity. Especially in music, recent work has shown that proper representation of the input spaces allows the generation of consistent and sometimes surprising blends. The paper at hand proposes a novel approach to conceptual blending through the combination of higher-level features extracted from data; the field of application is drums rhythms. Through this methodology, the input rhythms are represented by 32 extracted features. After their generic space of similar features is computed, a simple amalgam-based methodology creates a blended set containing, as evenly as possible, the most salient features from each input. This blended set of features acts as the target vector for a Genetic Algorithm that outputs the rhythm that best captures the blended features; this rhythm is called the blended rhythm. The salience of each feature in each rhythm in the database of input rhythms is computed from data and reflects the uniqueness of features. Preliminary results shed some light on how feature blending works for the generation of drums rhythms, and new possible research directions for data-driven feature blending are proposed.

Title: RoboJam: A Musical Mixture Density Network for Collaborative Touchscreen Interaction
Authors: Charles Martin, Jim Torresen
Abstract: RoboJam is a machine-learning system for generating music that assists users of a touchscreen music app by performing responses to their short improvisations. This system uses a recurrent artificial neural network to generate sequences of touchscreen interactions and absolute timings, rather than high-level musical notes. To accomplish this, RoboJam's network uses a mixture density layer to predict appropriate touch interaction locations in space and time. In this paper, we describe the design and implementation of RoboJam's network and how it has been integrated into a touchscreen music app. A preliminary evaluation analyses the system in terms of training, musical generation and user interaction.

Title: Towards a General Framework for Artistic Style Transfer
Authors: Florian Uhde, Sanaz Mostaghim
Abstract: In recent times, artificial intelligence has become more sophisticated when it comes to the creation of fine arts. Especially in the area of painting, artificial methods have reached a new level of maturity in replicating perceptual quality. These systems are able to separate the style and content of given images, enabling them to recombine and mutate these facets to create novel content. This work defines a general framework for conducting artistic style transfer, allowing recombination and structured modification of state-of-the-art algorithms for further investigation and profiling of artistic style transfer.

Title: On Collaborator Selection in Creative Agent Societies: An Evolutionary Art Case Study
Authors: Simo Linkola, Otto Hantula
Abstract: We study how artistically creative agents may learn to select favorable collaboration partners. We consider a society of creative agents with varying skills and aesthetic preferences able to interact with each other by exchanging artifacts or through collaboration. The agents exhibit interaction awareness by modeling their peers and make decisions about collaboration based on the learned peer models. To test the peer models, we devise an experimental collaboration process for evolutionary art, where two agents create an artifact by evolving the same artifact set in turns. In an empirical evaluation, we focus on how effective peer models are in selecting collaboration partners and compare the results to a baseline where agents select collaboration partners randomly. We observe that peer models guide the agents to more beneficial collaborations.

Title: Deep Interactive Evolution
Authors: Philip Bontrager, Wending Lin, Julian Togelius, Sebastian Risi
Abstract: This paper describes an approach that combines generative adversarial networks (GANs) with interactive evolutionary computation (IEC). While GANs can be trained to produce lifelike images, they are normally sampled randomly from the learned distribution, providing limited control over the resulting output. On the other hand, interactive evolution has shown promise in creating various artifacts such as images, music and 3D objects, but traditionally relies on a hand-designed evolvable representation of the target domain. The main insight in this paper is that a GAN trained on a specific target domain can act as a compact and robust genotype-to-phenotype mapping (i.e. most produced phenotypes do resemble valid domain artifacts). Once such a GAN is trained, the latent vector given as input to the GAN's generator network can be put under evolutionary control, allowing controllable and high-quality image generation. In this paper, we demonstrate the advantage of this novel approach through a user study in which participants were able to evolve images that strongly resemble specific target images.

Title: The Light Show: Flashing Fireflies Gathering and Flying over Digital Images
Author: Paulo Urbano
Abstract: Computational Generative Art has been inspired by complex collective tasks performed by social insects such as ants, which are able to coordinate through local interactions and simple stochastic behavior. In this paper we present the Light Show, an application of the mechanism of flash synchronization exhibited by some species of fireflies. The virtual fireflies of the Light Show gather and fly over digital readymades, self-choreographing the rhythm of illumination of their artistic habitats. We present a standard model with design parameters able to control synchronization, and also a variation able to exhibit clusters of synch at different phases that grow, fight, disappear or win, illuminating different parts of a digital image in an animated process.

Title: Visual art inspired by the collective feeding behavior of sand-bubbler crabs
Author: Hendrik Richter
Abstract: Sand-bubblers are crabs of the genera Dotilla and Scopimera which are known to produce remarkable patterns and structures at tropical beaches. From these pattern-making abilities, we may draw inspiration for digital visual art. A simple mathematical model of sand-bubbler patterns is proposed and an algorithm is designed that may create such patterns artificially. In addition, design parameters to modify the patterns are identified and analyzed by computational aesthetic measures. Finally, an extension of the algorithm is discussed that may enable controlling and guiding generative evolution of the art-making process.

Title: Learning as Performance: Autoencoding and Generating Dance Movements in Real Time
Authors: Alexander Berman, Valencia James
Abstract: This paper describes the technology behind a performance where human dancers interact with an “artificial” performer projected on a screen. The system learns movement patterns from the human dancers in real time. It can also generate novel movement sequences that go beyond what it has been taught, thereby serving as a source of inspiration for the human dancers, challenging their habits and normal boundaries and enabling a mutual exchange of movement ideas. It is central to the performance concept that the system's learning process is perceivable for the audience. To this end, an autoencoder neural network is trained in real time with motion data captured live on stage. As training proceeds, a “pose map” emerges that the system explores in a kind of improvisational state. The paper shows how this method is applied in the performance, and shares observations and lessons learned in the process.

Title: Evotype: Towards the Evolution of Type Stencils
Authors: Tiago Martins, João Correia, Ernesto Costa, Penousal Machado
Abstract: Typefaces are an essential resource employed by graphic designers. The growing demand for innovative type design work increases the need for good technological means to assist the designer in the creation of a typeface. We present an evolutionary computation approach for the generation of type stencils to draw coherent glyphs for different characters. The proposed system employs a Genetic Algorithm to evolve populations of type stencils. The evaluation of each candidate stencil uses a hill climbing algorithm to search for the best configurations to draw the target glyphs. We study the interplay between legibility, coherence and expressiveness, and show how our framework can be used in practice.

Title: Co-Evolving Melodies and Harmonization in Evolutionary Music Composition
Authors: Olav Olseng, Björn Gambäck
Abstract: The paper describes a novel multi-objective evolutionary algorithm implementation that generates short musical ideas consisting of a melody and abstract harmonization, developed in tandem. The system is capable of creating these ideas based on provided material or autonomously. Three automated fitness features were adapted to the model to evaluate the generated music during evolution, and a fourth was developed to ensure harmonic progression. Four rhythmical pattern matching features were also developed. Twenty-one pieces of music, produced by the system under various configurations, were evaluated in a user study. The results indicate that the system is capable of composing musical ideas that are subjectively interesting and pleasant, but not consistently so.

Important dates:

Submission Deadline: 1 November 2017
EXTENDED SUBMISSION DEADLINE: 10 November 2017
Notification: 3 January 2018
Camera-ready: 15 January 2018
Mandatory registration per paper: 9 February 2018
Early registration discount: 28 February 2018
Registration deadline: 28 March 2018
EvoStar dates: 4-6 April 2018
