Evostar 2017


The Leading European Event on Bio-Inspired Computation. Amsterdam. 19-21 April 2017.

Call for papers:

EvoMUSART

6th International Conference on Computational Intelligence in Music, Sound, Art and Design.

April 2017, Amsterdam, The Netherlands
Part of evo* 2017
evo*: http://www.evostar.org

News

  • Accepted paper abstracts (see below)
  • New this year: evoMUSART Index website. For the 20th anniversary of evo*, a website is now available with all the information on the evoMUSART papers published since 2003. It brings together all the publications in a handy web page that lets visitors navigate through papers, best papers, authors, keywords and conference years, while providing quick access to the corresponding Springer pages. Feel free to browse, search and bookmark: http://evomusart-index.dei.uc.pt/.

About EvoMUSART

Following the success of previous events, and given the importance of computational intelligence, specifically evolutionary and biologically inspired (artificial neural network, swarm, alife) approaches, to music, sound, art and design, evoMUSART became an evo* conference with independent proceedings in 2012. evoMUSART 2017 is therefore the sixth International Conference on Computational Intelligence in Music, Sound, Art and Design.

The use of Computational Intelligence for the development of artistic systems is a recent, exciting and significant area of research. There is a growing interest in the application of these techniques in fields such as: visual art and music generation, analysis, and interpretation; sound synthesis; architecture; video; poetry; design; and other creative tasks.

The main goal of evoMUSART 2017 is to bring together researchers who are using Computational Intelligence techniques for artistic tasks, providing the opportunity to promote, present and discuss ongoing work in the area.

The event will be held in April 2017 in Amsterdam, The Netherlands, as part of the evo* event.

Topics of interest

Submissions should concern the use of Computational Intelligence techniques (e.g. Evolutionary Computation, Artificial Life, Machine Learning, Swarm Intelligence) in the generation, analysis and interpretation of art, music, design, architecture and other artistic fields. Topics of interest include, but are not limited to:
Generation
  • Systems that create drawings, images, animations, sculptures, poetry, text, designs, webpages, buildings, etc.;
  • Systems that create musical pieces, sounds, instruments, voices, sound effects, sound analysis, etc.;
  • Systems that create artifacts such as game content, architecture or furniture based on aesthetic and functional criteria;
  • Robotic-based evolutionary art and music;
  • Other related artificial intelligence or generative techniques in the fields of Computer Music, Computer Art, etc.;
Theory
  • Computational Aesthetics, Experimental Aesthetics; Emotional Response, Surprise, Novelty;
  • Representation techniques;
  • Surveys of the current state-of-the-art in the area; identification of weaknesses and strengths; comparative analysis and classification;
  • Validation methodologies;
  • Studies on the applicability of these techniques to related areas;
  • New models designed to promote the creative potential of biologically inspired computation;
Computer-Aided Creativity and Computational Creativity
  • Systems in which computational intelligence is used to promote the creativity of a human user;
  • New ways of integrating the user in the evolutionary cycle;
  • Analysis and evaluation of: the artistic potential of biologically inspired art and music; the artistic processes inherent to these approaches; the resulting artefacts;
  • Collaborative distributed artificial art environments;
Automation
  • Techniques for automatic fitness assignment;
  • Systems in which an analysis or interpretation of the artworks is used in conjunction with computational intelligence techniques to produce novel objects;
  • Systems that resort to computational intelligence approaches to perform the analysis of image, music, sound, sculpture, or some other types of artistic object or resource.

EvoMUSART Conference chairs

João Correia
University of Coimbra, Portugal
jncor(at)dei.uc.pt

Vic Ciesielski
RMIT University, Australia
vic.ciesielski(at)rmit.edu.au

EvoMUSART Publication chair

Antonios Liapis
Institute of Digital Games, University of Malta
antonios.liapis(at)um.edu.mt

Programme Committee

  • Dan Ashlock, University of Guelph, Canada
  • Peter Bentley, University College London, UK
  • Daniel Bisig, University of Zurich, Switzerland
  • Tim Blackwell, Goldsmiths College, University of London, UK
  • Andrew Brown, Griffith University, Australia
  • Adrian Carballal, University of A Coruna, Spain
  • Amilcar Cardoso, University of Coimbra, Portugal
  • Peter Cariani, University of Binghamton, USA
  • Vic Ciesielski, RMIT, Australia
  • João Correia, University of Coimbra, Portugal
  • Palle Dahlstedt, Göteborg University, Sweden
  • Hans Dehlinger, Independent Artist, Germany
  • Eelco den Heijer, Vrije Universiteit Amsterdam, Netherlands
  • Alan Dorin, Monash University, Australia
  • Arne Eigenfeldt, Simon Fraser University, Canada
  • José Fornari, NICS/Unicamp, Brazil
  • Marcelo Freitas Caetano, IRCAM, France
  • Philip Galanter, Texas A&M College of Architecture, USA
  • Andrew Gildfind, Google, Inc., Australia
  • Gary Greenfield, University of Richmond, USA
  • Carlos Grilo, Instituto Politécnico de Leiria, Portugal
  • Andrew Horner, University of Science & Technology, Hong Kong
  • Patrick Janssen, National University of Singapore, Singapore
  • Colin Johnson, University of Kent, UK
  • Daniel Jones, Goldsmiths College, University of London, UK
  • Anna Jordanous, University of Kent, UK
  • Amy K. Hoover, University of Central Florida, USA
  • Maximos Kaliakatsos-Papakostas, Department of Music, Aristotle University of Thessaloniki, Greece
  • Matthew Lewis, Ohio State University, USA
  • Yang Li, University of Science and Technology Beijing, China
  • Antonios Liapis, IT University of Copenhagen, Denmark
  • Alain Lioret, Paris 8 University, France
  • Louis Philippe Lopes, Institute of Digital Games, University of Malta, Malta
  • Roisin Loughran, University College Dublin, Ireland
  • Penousal Machado, University of Coimbra, Portugal
  • Tiago Martins, University of Coimbra, Portugal
  • Jon McCormack, Monash University, Australia
  • Eduardo Miranda, University of Plymouth, UK
  • Nicolas Monmarché, University of Tours, France
  • Marcos Nadal, University of Vienna, Austria
  • Gary Nelson, Oberlin College, USA
  • Michael O'Neill, University College Dublin, Ireland
  • Philippe Pasquier, Simon Fraser University, Canada
  • Somnuk Phon-Amnuaisuk, Brunei Institute of Technology, Malaysia
  • Jane Prophet, City University, Hong Kong, China
  • Douglas Repetto, Columbia University, USA
  • Juan Romero, University of A Coruna, Spain
  • Brian Ross, Brock University, Canada
  • Jonathan E. Rowe, University of Birmingham, UK
  • Antonino Santos, University of A Coruna, Spain
  • Marco Scirea, IT University of Copenhagen, Denmark
  • Daniel Silva, University of Coimbra, Portugal
  • Benjamin Smith, Indiana University-Purdue University Indianapolis, USA
  • Gillian Smith, Northeastern University, USA
  • Stephen Todd, IBM, UK
  • Paulo Urbano, Universidade de Lisboa, Portugal
  • Anna Ursyn, University of Northern Colorado, USA
  • Dan Ventura, Brigham Young University, USA

EvoMUSART abstracts

Title: Algorithmic Songwriting with ALYSIA
Authors: Margareta Ackerman and David Loker
Abstract: This paper introduces ALYSIA: Automated LYrical SongwrIting Application. ALYSIA is based on a machine learning model using Random Forests, and we discuss its success at pitch and rhythm prediction. Next, we show how ALYSIA was used to create original pop songs that were subsequently recorded and produced. Finally, we discuss our vision for the future of Automated Songwriting for both co-creative and autonomous systems.
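
As an illustration only (not the authors' ALYSIA implementation), the sketch below shows the general pattern of framing next-note pitch prediction as Random Forest classification; the feature set and toy data are hypothetical.

```python
# Minimal sketch (not ALYSIA itself): next-note pitch prediction framed as
# classification with a Random Forest. The feature layout below (syllable
# stress, previous pitches, beat position) is a hypothetical illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy training data: each row = [stressed syllable?, previous pitch class,
# pitch class two notes back, position in bar]; target = next pitch class.
X = rng.integers(0, 12, size=(500, 4))
y = rng.integers(0, 12, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("predicted next pitch class:", model.predict([[1, 7, 5, 0]])[0])
```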

Title: On Symmetry, Aesthetics and Quantifying Symmetrical Complexity
Authors: Mohammad Majid al-Rifaie, Anna Ursyn, Robert Zimmer, and Mohammad Ali Javaheri Javid
Abstract: The concepts of order and complexity and their quantitative evaluation have been at the core of the computational notion of aesthetics. One of the major challenges is reconciling intuitive human perception of what is aesthetically pleasing with the output of a computational model. Informational theories of aesthetics have taken advantage of entropy in measuring the order and complexity of stimuli in relation to their aesthetic value. However, entropy fails to discriminate structurally different patterns in a 2D plane. In this work, following an overview of symmetry and its significance in the domain of aesthetics, a nature-inspired swarm intelligence technique (Dispersive Flies Optimisation, or DFO) is introduced and then adapted to detect symmetries and quantify symmetrical complexities in images. The 252 images of Jacobsen & Höfel used in this paper were created by researchers in the psychology and visual domains as part of an experimental study on human aesthetic perception. Some of the images are symmetrical and some are asymmetrical, all varying in terms of their aesthetics, which were ranked by humans. The results of the presented nature-inspired algorithm are then compared to what humans in the study aesthetically appreciated and ranked. Whilst the authors believe there is still a long way to go before a computational model of complexity correlates strongly with human appreciation, the results of the comparison are promising.
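
For readers unfamiliar with quantifying symmetry, the following minimal sketch computes a naive left-right mirror-symmetry score for a greyscale image; it is not the DFO-based method of the paper, only a baseline illustration of the idea.

```python
# Naive symmetry measure (illustration only, not the paper's DFO approach):
# score left-right mirror symmetry of a greyscale image as 1 minus the mean
# absolute difference between the image and its horizontal reflection.
import numpy as np

def mirror_symmetry_score(img: np.ndarray) -> float:
    """img: 2D array with values in [0, 1]; returns a score in [0, 1]."""
    reflected = np.fliplr(img)
    return 1.0 - float(np.mean(np.abs(img - reflected)))

# A perfectly mirror-symmetric toy image scores 1.0; random noise scores lower.
symmetric = np.tile([[0.0, 1.0, 1.0, 0.0]], (4, 1))
noise = np.random.default_rng(1).random((4, 4))
print(mirror_symmetry_score(symmetric))  # 1.0
print(mirror_symmetry_score(noise))      # typically well below 1.0
```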

Title: Towards Polyphony Reconstruction Using Multidimensional Multiple Sequence Alignment
Authors: Dimitrios Bountouridis, Frans Wiering, Dan Brown, Remco C. Veltkamp
Abstract: The digitization of printed music scores through the process of optical music recognition is imperfect. In polyphonic scores, with two or more simultaneous voices, errors of duration or position can lead to badly aligned and inharmonious digital transcriptions. We adapt biological sequence analysis tools as a post-processing step to correct the alignment of voices. Our multiple sequence alignment approach works on multiple musical dimensions and we investigate the contribution of each dimension to the correct alignment. Structural information, such as musical phrase boundaries, is of major importance; therefore, we propose the use of the popular bioinformatics aligner Mafft, which can incorporate such information while being robust to temporal noise. Our experiments show that a harmony-aware Mafft outperforms sophisticated, multidimensional alignment approaches and can achieve near-perfect polyphony reconstruction.

Title: Melody Retrieval and Classification Using Biologically-Inspired Techniques
Authors: Dimitrios Bountouridis, Dan Brown, Hendrik Vincent Koops, Frans Wiering, Remco C. Veltkamp
Abstract: Retrieval and classification are at the center of Music Information Retrieval research. Both tasks rely on a method to assess the similarity between two music documents. In the context of symbolically encoded melodies, pairwise alignment via dynamic programming has been the most widely used method. However, this approach fails to scale up well in terms of time complexity and insufficiently models the variance between melodies of the same class. Compact representations and indexing techniques that capture the salient and robust properties of music content are increasingly important. We adapt two existing bioinformatics tools to improve the melody retrieval and classification tasks. On two datasets of folk tunes and cover song melodies, we apply the extremely fast indexing method of the Basic Local Alignment Search Tool (BLAST) and achieve comparable classification performance to exhaustive approaches. We increase retrieval performance and efficiency by using multiple sequence alignment algorithms for locating variation patterns and profile hidden Markov models for incorporating those patterns into a similarity model.
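
As background to the alignment-based similarity the abstract refers to, here is a minimal sketch of pairwise melody alignment by dynamic programming (Needleman-Wunsch); the scoring values and pitch encoding are illustrative choices, not those used by the authors.

```python
# Pairwise melody alignment by dynamic programming (Needleman-Wunsch), the
# baseline approach the paper aims to improve on. Scores are illustrative.
def align(a, b, match=2, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # DP table: best alignment score for prefixes a[:i] and b[:j].
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,  # (mis)match
                              score[i - 1][j] + gap,      # gap in b
                              score[i][j - 1] + gap)      # gap in a
    return score[n][m]

# Two melodies as MIDI pitch sequences; a higher score means more similar.
print(align([60, 62, 64, 65, 67], [60, 62, 64, 67]))
```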

Title: Evolved Aesthetic Analogies to Improve Artistic Experience
Authors: Aidan Breen, Colm O'Riordan, and Jerome Sheahan
Abstract: It has been demonstrated that computational evolution can be utilised in the creation of aesthetic analogies between two artistic domains by the use of mapping expressions. When given an artistic input these mapping expressions can be used to guide the generation of content in a separate domain. For example, a piece of music can be used to create an analogous visual display. In this paper we examine the implementation and performance of such a system. We explore the practical implementation of real-time evaluation of evolved mapping expressions, possible musical input and visual output approaches, and the challenges faced therein. We also present the results of an exploratory study testing the hypothesis that an evolved mapping expression between the measurable attributes of musical and visual harmony will produce an improved aesthetic experience compared to a random mapping expression. Expressions of various fitness values were used and the participants were surveyed on their enjoyment, interest, and fatigue. The results of this study indicate that further work is necessary to produce a strong aesthetic response. Finally, we present possible approaches to improve the performance and artistic merit of the system.

Title: Deep Artificial Composer: A Creative Neural Network Model for Automated Melody Generation
Authors: Florian Colombo, Alexander Seeholzer, and Wulfram Gerstner
Abstract: The inherent complexity and structure on long timescales make the automated composition of music a challenging problem. Here we present the Deep Artificial Composer (DAC), a recurrent neural network model of note transitions for the automated composition of melodies. Our model can be trained to produce melodies with compositional structures extracted from large datasets of diverse styles of music, which we exemplify here on a corpus of Irish folk and Klezmer melodies. We assess the creativity of DAC-generated melodies by a new measure, the novelty of musical sequences, showing that melodies imagined by the DAC are as novel as melodies produced by human composers. We further use the novelty measure to show that the DAC creates melodies musically consistent with either of the musical styles it was trained on. This makes the DAC a promising candidate for the automated composition of convincing musical pieces of any provided style.
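
For illustration of the underlying technique only (not the DAC itself), the sketch below trains a small LSTM to predict the next note token in a sequence; the vocabulary size, layer sizes and toy data are assumptions made for the example.

```python
# Minimal sketch of an LSTM note-transition model (not the DAC): predict the
# next pitch token from the previous ones. Sizes and data are illustrative.
import torch
import torch.nn as nn

VOCAB = 130  # e.g. 128 MIDI pitches plus rest/end tokens (an assumption)

class NoteLSTM(nn.Module):
    def __init__(self, vocab=VOCAB, embed=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.out(h)  # logits for the next token at each time step

model = NoteLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy batch of note sequences; in practice these come from a melody corpus.
seqs = torch.randint(0, VOCAB, (8, 32))
inputs, targets = seqs[:, :-1], seqs[:, 1:]

opt.zero_grad()
logits = model(inputs)
loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
loss.backward()
opt.step()
print("training loss:", loss.item())
```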

Title: A Kind of Bio-inspired Learning of mUsic stylE
Authors: Roberto De Prisco, Delfina Malandrino, Gianluca Zaccagnino, Rocco Zaccagnino and Rosalba Zizza
Abstract: In the field of Computer Music, computational intelligence approaches are very relevant for music information retrieval applications. A challenging task in this area is the automatic recognition of musical styles. The style of a music performer is the result of the combination of several factors such as experience, personality and preferences, especially in music genres where improvisation plays an important role. In this paper we propose a new approach for both the recognition and the automatic composition of music in a specific performer's style. The proposed system exploits: (1) a one-class machine learning classifier to learn a specific music performer's style, (2) a music splicing system to compose melodic lines in the learned style, and (3) an LSTM network to predict patterns coherent with the learned style, used to guide the splicing system during composition. To assess the effectiveness of our system we performed several tests using transcriptions of solos by popular Jazz musicians. Specifically, with regard to the recognition process, tests were performed to analyze the capability of the system to recognize a style. We also show that the performance of our classifier is comparable to that of a traditional two-class SVM, and that it achieves an accuracy of 97%. With regard to the composition process, tests were performed to verify whether the produced melodies captured the most significant musical aspects of the learned style.

Title: Using autonomous agents to improvise music compositions in real-time
Authors: Patrick Hutchings and Jon McCormack
Abstract: This paper outlines an approach to real-time music generation using melody and harmony focused agents in a process inspired by jazz improvisation. A harmony agent employs a Long Short-Term Memory (LSTM) artificial neural network trained on the chord progressions of 2986 jazz 'standard' compositions using a network structure novel to chord sequence analysis. The melody agent uses a rule-based system of manipulating provided, pre-composed melodies to improvise new themes and variations. The agents take turns in leading the direction of the composition based on a rating system that rewards harmonic consistency and melodic flow. In developing the multi-agent system it was found that implementing embedded spaces in the LSTM encoding process resulted in significant improvements to chord sequence learning.

Title: Generating Polyphonic Music Using Tied Parallel Networks
Authors: Daniel D. Johnson
Abstract: We describe a neural network architecture which enables prediction and composition of polyphonic music in a manner that preserves translation invariance of the dataset. Specifically, we demonstrate training a probabilistic model of polyphonic music using a set of parallel, tied-weight recurrent networks, inspired by the structure of convolutional neural networks. This model is designed to be invariant to transpositions, but otherwise is intentionally given minimal information about the musical domain, and tasked with discovering patterns present in the source dataset. We present two versions of the model, denoted TP-LSTM-NADE and BALSTM, and also give methods for training the network and for generating novel music. This approach attains high performance at a musical prediction task and successfully creates note sequences which possess measure-level musical structure.

Title: Mixed-initiative Creative Drawing with webIconoscope
Authors: Antonios Liapis
Abstract: This paper presents the webIconoscope tool for creative drawing, which allows users to draw simple icons composed of basic shapes and colors in order to represent abstract semantic concepts. The goal of this creative exercise is to create icons that are ambiguous enough to confuse other people attempting to guess which concept they represent. webIconoscope is available online and all creations can be browsed, rated and voted on by anyone; this democratizes the creative process and increases the motivation for creating both appealing and ambiguous icons. To complement the creativity of the human users attempting to create novel icons, several computational assistants provide suggestions which alter what the user is currently drawing based on criteria such as typicality and novelty. This paper reports trends in the creations of webIconoscope users, based also on feedback from an online audience.

Title: Clustering Agents for the Evolution of Autonomous Musical Fitness
Authors: Roisin Loughran and Michael O’Neill
Abstract: This paper presents a cyclical system that generates autonomous fitness functions or Agents for evolving short melodies. A grammar is employed to create a corpus of melodies, each of which is composed of a number of segments. A population of Agents are evolved to give numerical judgements on the melodies based on the spacing of these segments. The fitness of an individual Agent is calculated in relation to its clustering of the melodies and how much this clustering correlates with the clustering of the entire Agent population. A preparatory run is used to evolve Agents using 30 melodies of known 'clustering'. The full run uses these Agents as the initial population in evolving a new best Agent on a separate corpus of melodies of random distance measures. This evolved Agent is then used in combination with the original melody grammar to create a new melody which replaces one of those from the initial random corpus. This results in a complex adaptive system creating new melodies without any human input after initialisation. This paper describes the behaviour of each phase in the system and presents a number of melodies created by the system.

Title: EvoFashion: Customising Fashion Through Evolution
Authors: Nuno Lourenco, Filipe Assunção, Catarina Maçãs, and Penousal Machado
Abstract: In today's society, where everyone desires unique and fashionable products, the ability to customise products is almost mandatory in every online store. Although many stores allow users to personalize their products, they do not always do so in the most efficient and user-friendly manner. In order to have products that reflect their design preferences, users have to go through a laborious process of picking the components that they want to customise. In this paper we propose a framework that aims to relieve the design burden on the user by automating the design process through the use of Interactive Evolutionary Computation (IEC). The framework is based on a web interface that facilitates the interaction between the user and the evolutionary process. The user can select between two types of evolution: (i) automatic; and (ii) partially automatic. The results show the ability of the framework to promote evolution towards solutions that reflect the user's aesthetic preferences.

Title: A Swarm Environment for Experimental Performance and Improvisation
Authors: Frank Mauceri and Stephen M. Majercik
Abstract: This paper describes Swarm Performance and Improvisation (Swarm-PI), a real-time computer environment for music improvisation that uses swarm algorithms to control sound synthesis and to mediate interactions with a human performer. Swarm models are artificial, multi-agent systems where the organized movements of large groups are the result of simple, local rules between individuals. Swarms typically exhibit self-organization and emergent behavior. In Swarm-PI, multiple acoustic descriptors from a live audio feed generate parameters for an independent swarm among multiple swarms in the same space, and each swarm is used to synthesize a stream of sound using granular sampling. This environment demonstrates the effectiveness of using swarms to model human interactions typical of group improvisation and to generate organized patterns of synthesized sound.
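
As a purely illustrative sketch of the swarm idea (not Swarm-PI), the code below updates a small swarm with simple local rules and maps one swarm statistic to a hypothetical synthesis parameter; the mapping shown is an assumption for the example.

```python
# Toy swarm update (illustration only): agents move by cohesion toward the
# swarm centre plus random jitter; a summary statistic of the swarm could then
# drive a synthesis parameter (the spread -> grain size mapping is made up).
import numpy as np

rng = np.random.default_rng(0)
pos = rng.random((50, 2))        # 50 agents in a unit square
vel = np.zeros((50, 2))

for step in range(100):
    centre = pos.mean(axis=0)
    vel = 0.9 * vel + 0.05 * (centre - pos) + 0.02 * rng.normal(size=pos.shape)
    pos += vel

spread = float(pos.std())          # swarm dispersion
grain_size_ms = 10 + 200 * spread  # hypothetical granular-sampling parameter
print(f"spread={spread:.3f}, grain size={grain_size_ms:.1f} ms")
```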

Title: Niche Constructing Drawing Robots
Authors: Jon McCormack
Abstract: This paper describes a series of experiments in creating autonomous drawing robots that generate aesthetically interesting and engaging drawings. Based on a previous method for multiple software agents that mimic the biological process of niche construction, the challenge in this project was to re-interpret the implementation of a set of evolving software agents into a physical robotic system. In this new robotic system, individual robots try to reinforce a particular niche defined by the density of the lines drawn underneath them. The paper also outlines the role of environmental interactions in determining the style of drawing produced.

Title: Automated Shape Design by Grammatical Evolution
Authors: Manuel Muehlbauer, Jane Burry, and Andy Song
Abstract: This paper proposes an automated shape generation methodology based on grammatical genetic programming for specific design cases. Two shape generation cases are presented: architectural envelope design and facade design. Through the described experiments, the applicability of this evolutionary method to design applications is showcased. This study shows that automated shape generation by grammatical evolution offers considerable potential for the development of performance-based creative systems.
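
For readers unfamiliar with grammatical evolution, the following minimal sketch shows the genotype-to-phenotype mapping step on a made-up toy shape grammar; it is not the grammar or system used in the paper.

```python
# Grammatical evolution mapping step (illustration only): a genome of integer
# codons selects productions from a toy shape grammar to build an expression.
GRAMMAR = {
    "<shape>": [["box(", "<size>", ")"],
                ["union(", "<shape>", ",", "<shape>", ")"],
                ["twist(", "<shape>", ")"]],
    "<size>": [["1"], ["2"], ["3"]],
}

def decode(genome, symbol="<shape>", index=0, depth=0, max_depth=8):
    """Expand the grammar, choosing productions with successive codons."""
    if symbol not in GRAMMAR:            # terminal symbol: emit as-is
        return symbol, index
    rules = GRAMMAR[symbol]
    # Force the first (non-recursive) rule once the recursion gets too deep.
    choice = 0 if depth >= max_depth else genome[index % len(genome)] % len(rules)
    index += 1
    out = ""
    for part in rules[choice]:
        expanded, index = decode(genome, part, index, depth + 1, max_depth)
        out += expanded
    return out, index

phenotype, _ = decode([7, 2, 5, 11, 3, 8, 1, 4])
print(phenotype)  # prints the decoded shape expression
```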

Title: Evolutionary Image Transition Using Random Walks
Authors: Aneta Neumann, Bradley Alexander, Frank Neumann
Abstract: We present a study demonstrating how random walk algorithms can be used for evolutionary image transition. We design different mutation operators based on uniform and biased random walks and study how their combination with a baseline mutation operator can lead to interesting image transition processes in terms of visual effects and artistic features. Using feature-based analysis we investigate the evolutionary image transition behaviour with respect to different features and evaluate the images constructed during the image transition process.
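
As an illustration of the general idea (not the paper's operators), the sketch below uses a simple unbiased random walk to copy pixels from a target image into a starting image, producing a gradual transition; the image data here is synthetic.

```python
# Random-walk image transition (illustration only): a walker wanders over the
# pixel grid and reveals the target pixel wherever it goes, gradually morphing
# the starting image into the target image.
import numpy as np

rng = np.random.default_rng(42)
size = 64
start = np.zeros((size, size))     # stand-ins for real images
target = rng.random((size, size))

current = start.copy()
x, y = size // 2, size // 2
for _ in range(20000):
    current[x, y] = target[x, y]   # copy the target pixel under the walker
    step = [(-1, 0), (1, 0), (0, -1), (0, 1)][rng.integers(4)]
    x = (x + step[0]) % size       # wrap around the image borders
    y = (y + step[1]) % size

print("fraction transitioned:", float(np.mean(current == target)))
```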

Title: Evaluation Rules for Evolutionary Generation of Drum Patterns in Jazz Solos
Authors: Fabian Ostermann, Igor Vatolkin, Günter Rudolph
Abstract: The learning of improvisation in jazz and other music styles requires years of practice. For music scholars who do not play in a band, technical solutions for the automatic generation of accompaniment on home computers are very helpful. They may support the learning process and significantly improve the experience of playing with other musicians. However, many up-to-date approaches cannot interact with a solo player, generating static or random patterns without a direct musical dialogue between the soloist and the accompanying instruments. In this paper, we present a novel system for the generation of drum patterns based on an evolutionary algorithm. As the main extension to existing solutions, we propose a set of musically meaningful jazz-related rules for the real-time validation and adjustment of generated drum patterns. In the evaluation study, musicians agreed that the system can be successfully used for learning jazz improvisation and that the wide range of parameters helps to adapt the response of the virtual drummer to the needs of individual scholars.

Title: Assessing Augmented Creativity: Putting a Lovelace Machine for Interactive Title Generation through a Human Creativity Test
Authors: Yasser S. Arenas Rebolledo, Peter van der Putten, and Maarten H. Lamers
Abstract: The aim of this study is to find to what extent computers can assist humans in the creative process of writing titles, using psychological tests for creativity that are typically used for humans only. To this end, a computer tool was designed that generates new titles for users, based on knowledge generated from a pre-built corpus. This paper describes both the development of the system and the tests applied to the participants, derived from classical psychological tests for human creativity. A total of 89 participants divided into two groups completed two tasks which consisted of generating titles for paintings. One group was allowed to use a template-based system for generating titles; the other group did not use any tools. The results of the experiments show higher creativity scores for participants augmented by a computational creativity tool.

Title: Play it Again: Evolved Audio Effects and Synthesizer Programming
Authors: Benjamin D. Smith
Abstract: Automatic programming of sound synthesizers and audio devices to match a given, desired sound is examined, and a Genetic Algorithm (GA) that functions independently of specific synthesis techniques is proposed. Most work in this area has focused on one synthesis model or synthesizer, designing the GA and tuning the operator parameters to obtain optimal results. The scope of such inquiries has been limited by available computing power; however, current software (Ableton Live, herein) and commercially available hardware are shown to quickly find accurate solutions, promising a practical application for music creators. Both software synthesizers and audio effects processors are examined, showing a wide range of performance times (from seconds to hours) and solution accuracy, depending on particularities of the target devices. Random oscillators, phase synchronization, and filters over empty frequency ranges are identified as the primary challenges for GA-based optimization.
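
For illustration only (not the Ableton Live setup described in the paper), the sketch below shows a generic genetic algorithm searching the parameters of a toy two-parameter "synth" so that its rendered spectrum matches a target spectrum.

```python
# Generic GA for synthesizer parameter matching (illustration only): the
# two-parameter additive "synth" below is a stand-in for a real device.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048, endpoint=False)

def render(params):
    """Toy synth: frequency and harmonic mix are the two 'device' parameters."""
    freq, mix = params
    signal = np.sin(2 * np.pi * freq * t) + mix * np.sin(2 * np.pi * 2 * freq * t)
    return np.abs(np.fft.rfft(signal))

target = render(np.array([220.0, 0.5]))   # the sound we try to match

def fitness(params):
    return -np.mean((render(params) - target) ** 2)  # higher is better

pop = np.column_stack([rng.uniform(50, 1000, 30), rng.uniform(0, 1, 30)])
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)][-10:]                   # keep the best 10
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, [5.0, 0.05], (20, 2))
    pop = np.vstack([parents, children])                      # elitism + mutation

best = pop[np.argmax([fitness(p) for p in pop])]
print("best parameters found:", best)
```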

Title: Fashion Design Aid System with Application of Interactive Genetic Algorithms
Authors: Nazanin Alsadat Tabatabaei Anaraki
Abstract: These days, consumers can choose from a wide variety of clothes provided in the market; however, some prefer to have their clothes custom-made. Since most of these consumers are not professional designers, they contact a designer to help them with the process. This approach, however, is not efficient in terms of time and cost, and it does not reflect the consumer's personal taste as much as desired. This study proposes a design system using an Interactive Genetic Algorithm (IGA) to overcome these problems. IGA differs from the traditional Genetic Algorithm (GA) by leaving the fitness function to the personal preference of the user. The proposed system uses the user's taste as a fitness value to create a large number of design options, and it is based on an encoding scheme describing a dress either as a whole or as a two-part piece of clothing. The system is implemented in the Rhinoceros 3D software, using Python, which provides good speed and interface options. Assessment experiments with several subjects indicated that the proposed system is effective.

Title: Generalisation Performance of Western Instrument Recognition Models in Polyphonic Mixtures with Ethnic Samples
Authors: Igor Vatolkin
Abstract: Instrument recognition in polyphonic audio recordings is a very complex task. Most research studies until now have focussed on the recognition of Western instruments in Western classical and popular music, but an increasing number of recent works address the classification of ethnic/world recordings. However, such studies are typically restricted to one kind of music and do not measure the "Western bias" effect, i.e., the danger of overfitting towards Western music when the classification models are optimised only for such tracks. In this paper, we analyse the performance of several instrument classification models which are trained and optimised on polyphonic mixtures of Western instruments, but independently validated on mixtures created with randomly added ethnic samples. The conducted experiments include evolutionary multi-objective feature selection from a large set of audio signal descriptors and the estimation of individual feature relevance.

Title: Exploring the Exactitudes Portrait Series with Restricted Boltzmann Machines
Authors: Sam D. Verkoelen, Maarten H. Lamers, Peter van der Putten
Abstract: In this paper we explore the use of deep neural networks to analyze semi-structured series of artworks. We train stacked Restricted Boltzmann Machines on the Exactitudes collection of photo series, and use this to understand the relationship between works and series, uncover underlying features and dimensions, and generate new images. The projection of the series onto the two major decorrelated features (PCA on top of Boltzmann features) results in a visualization that clearly reflects the semi-structured nature of the photo series, although the original features provide better classification results when assigning photographs to series. This work provides a useful case example of understanding the structure uncovered by deep neural networks, as well as a tool to analyze the underlying structure of a collection of visual artworks, as a very first step towards a robot curator.
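
As a minimal, hypothetical sketch of the pipeline idea (stacked RBMs followed by PCA for a 2-D view), the code below uses scikit-learn's BernoulliRBM on toy binary vectors; the layer sizes and data are assumptions, and the actual study works on the Exactitudes photo series.

```python
# Stacked RBMs + PCA (illustration only): learn two layers of hidden features
# from toy binarised "images", then project the top layer to 2-D with PCA.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = (rng.random((200, 64)) > 0.5).astype(float)   # 200 toy 8x8 binary images

rbm1 = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
rbm2 = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)

h1 = rbm1.fit_transform(X)     # first hidden-layer activations
h2 = rbm2.fit_transform(h1)    # second ("stacked") hidden layer

coords = PCA(n_components=2).fit_transform(h2)    # 2-D map of the collection
print(coords[:5])
```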

Title: Evolving Mondrian-Style Artworks
Authors: Miri Weiss Cohen, Leticia Cherchiglia, Rachel Costa
Abstract: This paper describes a Genetic Algorithm (GA) software system for automatically generating Mondrian-style symmetries and abstract artwork. The research examines Mondrian's paintings from 1922 through 1932 and analyses the balances, color symmetries and composition in these paintings. We used a set of eleven criteria to define the automated system, then translated and formalised these criteria into heuristics and measures that can be used in the GA. The software includes a module that provides a range of GA parameter values for interactive selection. Despite a number of limitations, the method yielded high-quality results with colors close to those of Mondrian and rectangles that did not overlap and that fit the canvas.

Title: Predicting Expressive Bow Controls for Violin and Viola
Authors: Lauren Jane Yu and Andrea Pohoreckyj Danyluk
Abstract: Though computational systems can simulate notes on a staff of sheet music, capturing the artistic liberties professional musicians take to communicate their interpretation of those notes is a much more difficult task. In this paper, we demonstrate that machine learning methods can be used to learn models of expressivity, focusing on bow articulation for violin and viola. First we describe a new data set of annotated sheet music with information about specific aspects of bow control. We then present experiments for building and testing predictive models for these bow controls, as well as analysis that includes both general metrics and manual examination.

Publication Details

Submissions will be rigorously reviewed for scientific and artistic merit. Accepted papers will be presented orally or as posters at the event and included in the evostar proceedings, published by Springer Verlag in a dedicated volume of the Lecture Notes in Computer Science series. The acceptance rate at evoMUSART 2016 was 40% for papers accepted for oral presentation, and 24% for poster presentation.

Submitters are strongly encouraged to provide in all papers a link for download of media demonstrating their results, whether music, images, video, or other media types. Links should be anonymised for double-blind review, e.g. using a URL shortening service.

Additional information and submission details

Submit your manuscript, at most 16 A4 pages long, in Springer LNCS format (instructions downloadable from http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0).
Submission link: https://myreview.saclay.inria.fr/evomusart17
Page limit: 16 pages
The reviewing process will be double-blind; please omit information about the authors in the submitted paper.

Important dates:

Submission Deadline: 1 November 2016
EXTENDED DEADLINE: 15 November 2016
(site remains open for final changes until 21 Nov)
Notification: 9 January 2017
Camera-ready: 25 January 2017
Mandatory registration per paper: 1 February 2017
Student bursary deadline: 20 February 2017
Early registration discount: 1 March 2017
Registration deadline: 10 April 2017
EvoStar dates: 19-21 April 2017
