{"id":684,"date":"2025-03-08T13:07:17","date_gmt":"2025-03-08T12:07:17","guid":{"rendered":"https:\/\/www.evostar.org\/2025\/?page_id=684"},"modified":"2025-03-10T21:53:27","modified_gmt":"2025-03-10T20:53:27","slug":"evomusart-accepted-papers","status":"publish","type":"page","link":"https:\/\/www.evostar.org\/2025\/evomusart-accepted-papers\/","title":{"rendered":"EvoMUSART Accepted Papers"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Long talks<\/h2>\n\n\n\n<li>The Importance of Context in Image Generation: A Case Study for Video Game Sprites<br><em>Roberto Gallotta, Antonios Liapis and Georgios N. Yannakakis<\/em><\/li>\n<li>An Ensemble Approach to Music Source Separation: A Comparative Analysis of Conventional and Hierarchical Stem Separation<br><em>Samarth S Rao, Saarth Vardhan, Pavani R Acharya, Oorjitha Ratna Jasthi and Natarajan Subramanyam<\/em><\/li>\n<li>Yin-Yang: Developing Motifs With Long-Term Structure And Controllability<br><em>Keshav Bhandari, Geraint A. Wiggins and Simon Colton<\/em><\/li>\n<li>AI in Music and Healthcare: A Comparative Survey<br><em>Roisin Loughran and Ceara Treacy<\/em><\/li>\n<li>Towards Human-Quality Music Accompaniment using Deep Generative Models and Transformers<br><em>Arash Sadeghi Amjadi, Andrew Vardy and Andrew Staniland<\/em><\/li>\n<li>Future Sight: Fine-tuning Language Models for Dynamic Story Generation<br><em>Brian Zimmerman, Gaurav Sahu and Olga Vechtomova<\/em><\/li>\n<li>Foundations of LLCM: Labelled Lambek Calculus for Music Analysis<br><em>Matteo Bizzarri and Satoshi Tojo<\/em><\/li>\n<li>Combining local search and directed mutation in evolutionary approaches to 4-part harmony<br><em>Elia Pacioni and Francisco Fernandez De Vega<\/em><\/li>\n<li>Search-based Negative Prompt Optimisation for Text-to-Image Generation<br><em>Guillermo Iglesias, Mar Zamorano and Federica Sarro<\/em><\/li>\n<li>Cellular Au-Tonnetz: A Unified Audio-Visual MIDI Generator Using Tonnetz, Cellular Automata, and 
IoT<br><em>Thomas Didiot-Cook<\/em><\/li>\n<li>Exploring the Application of AIGC in Ink-Wash Animation Creation: A Case Study of Dragon Gate<br><em>Youchun Liu, Lei Wang and Danqi Zheng<\/em><\/li>\n<li>Large-image Object Detection for Fine-grained Recognition of Punches Patterns in Medieval Panel Painting<br><em>Josh Bruegger, Diana Catana, Vanja Macovaz, Matias Valdenegro-Toro, Matthia Sabatelli and Marco Zullich<\/em><\/li>\n<li>Balancing Indeterminacy and Structure: Neural Text Generation for Artistic Inspiration<br><em>Olga Vechtomova and Gaurav Sahu<\/em><\/li>\n<li>Exploring Bridges Between Creative Coding and Visual Generative AI<br><em>Jiaqi Wu and Eytan Adar<\/em><\/li>\n<li>Exploiting the Temporal Order of Sound Features for Onset Detection<br><em>Jo\u00e3o Ramos, Rolando Miragaia, Gustavo Reis, Patr\u00edcio Domingues and Carlos Grilo<\/em><\/li>\n<li>Perceptions of AI in Animation Production<br><em>Dalong Hu, Minsoo Choi, Nandhini Giri, Christos Mousas and Nicoletta Adamo-Villani<\/em><\/li>\n\n\n\n<h2 class=\"wp-block-heading\">Short talks<\/h2>\n\n\n\n<li>Evolving the Embedding Space of Diffusion Models in the Field of Visual Arts<br><em>Marcel Salvenmoser and Michael Affenzeller<\/em><\/li>\n<li>All YIN No YANG: Geometric abstraction of oil paintings with trained models, noise and self-reference<br><em>Lu\u00eds Arandas, Iulia Ionescu, Murad Khan, Mick Grierson and Miguel Carvalhais<\/em><\/li>\n<li>Generating Virtual Landscapes and Environmental Narratives with StyleGAN2<br><em>Amalia Foka<\/em><\/li>\n<li>Music Similarity Through Geometric Overlap<br><em>Raymond Conlin and Colm O&#8217;Riordan<\/em><\/li>\n<li>Aesthetic biases and opacity tactics in the training of visual artificial intelligence models<br><em>Bruno Caldas Vianna<\/em><\/li>\n<li>Graph Neural Network vs Feature-based Folk Music Evolution Analysis<br><em>Adam Deedman and Mathieu Barthet<\/em><\/li>\n<li>Short video interestingness: a machine learning approach to 
determine creative cues in audiovisual production<br><em>Claudia Rabaioli, Alessandra Grossi and Francesca Gasparini<\/em><\/li>\n<li>Towards the Automatic Evaluation of Legibility for Graphic Design Posters<br><em>Daniel Lopes, Jo\u00e3o Macedo, Iria Santos, Alvaro Torrente-Pati\u00f1o, Jo\u00e3o Correia and Penousal Machado<\/em><\/li>\n<li>Steering Large Text-to-Image Model for Abstract Art Synthesis: Preference-based Prompt Optimization and Visualization<br><em>Aven-Le Zhou, Wei Wu, Yu-Ao Wang and Kang Zhang<\/em><\/li>\n<li>Exploring Multi-Objective Evolution for Aesthetic &#038; Abstract 3D Art<br><em>Veeramanohar Avudaiappan and Ritwik Murali<\/em><\/li>\n<li>Automated Selection and Ordering of Clip Sequences for Music Videos based on Tonal Tension and Visual Features<br><em>Nicol\u00e1s Rojas-Morales<\/em><\/li>\n<li>EmotioNotes Dataset: Decoding emotions in classical music through Concert Program Notes<br><em>Pratik Khanal and Patrick J. Donnelly<\/em><\/li>\n<li>SyMuRBench: Benchmark for symbolic music representations<br><em>Petr Strepetov and Dmitrii Kovalev<\/em><\/li>\n","protected":false},"excerpt":{"rendered":"<p>Long talks The Importance of Context in Image Generation: A Case Study for Video Game SpritesRoberto Gallotta, Antonios Liapis and Georgios N. 
Yannakakis An Ensemble Approach to Music Source Separation: A Comparative Analysis of Conventional and Hierarchical Stem SeparationSamarth S Rao, Saarth Vardhan, Pavani R Acharya, Oorjitha Ratna Jasthi and Natarajan Subramanyam Yin-Yang: Developing Motifs [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-684","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/www.evostar.org\/2025\/wp-json\/wp\/v2\/pages\/684","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.evostar.org\/2025\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.evostar.org\/2025\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.evostar.org\/2025\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.evostar.org\/2025\/wp-json\/wp\/v2\/comments?post=684"}],"version-history":[{"count":4,"href":"https:\/\/www.evostar.org\/2025\/wp-json\/wp\/v2\/pages\/684\/revisions"}],"predecessor-version":[{"id":716,"href":"https:\/\/www.evostar.org\/2025\/wp-json\/wp\/v2\/pages\/684\/revisions\/716"}],"wp:attachment":[{"href":"https:\/\/www.evostar.org\/2025\/wp-json\/wp\/v2\/media?parent=684"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}