Think Tanks 2022

IPEM UGent's Think Tanks are dedicated to critically exploring themes from its research. External experts are invited to take part.

Overview of IPEM's Think Tanks 2022

Investigating the role of ancillary gestures in saxophone performance through multiple persons’ perspectives | Nádia Moura

Friday September 2nd at 13.30, online & De Muide, De Krook, IPEM UGent

Ancillary gestures play a vital role in music performance, serving purposes from the facilitative to the communicative. In my ongoing research, I am studying gestures in saxophone performance with the aim of understanding how they can be consciously used as a performance enhancement strategy by performers, teachers, and students. Based on the perception-action coupling described in theories of embodied music cognition (Leman, 2016), a study was designed around multiple persons’ perspectives: the subjective views of performers (first person) through interviews and of the audience (second person) through surveys, and objective measurements of audio and motion (third person). In this Think Tank, I will present the saxophone gesture vocabulary resulting from the observational analysis of 100 recordings of professional players, and I look forward to hearing suggestions about the development of a quantitative movement study. Secondly, I will present an ongoing experiment aimed at evaluating the impact of gestural profiles on observers’ perception of two contrasting musical pieces.

Nádia Moura is currently a PhD candidate (FCT fellowship, Portuguese Foundation for Science and Technology) at the School of Arts and the Research Centre for Science and Technology of the Arts, Universidade Católica Portuguesa. She holds an M.Ed. in Music Teaching (2019) from the same institution and a B.Mus. in Saxophone Performance (2017) from the University of Aveiro, Portugal. She has lectured in saxophone and group music classes at music schools and conservatories, and frequently performs in wind orchestras, big bands and chamber music ensembles. Her ongoing research focuses on the analysis of expressiveness and communication through body language in saxophone performance using multimodal datasets.

Hyperinstruments and algorithmic composition in real-time | Dr Nicola Baroni (Conservatorio di Milano)

Thursday May 19th at 13.30, online & De Muide, IPEM UGent

Since the last years of the 20th century, hyperinstruments have been developed as real-time technologies that enable traditionally trained musicians to collaborate interactively with electroacoustic systems responding to their musical gestures. Hyperinstruments are acoustic instruments augmented by sensing technologies (motion capture and input microphones) which track, on stage, the sound-producing movements and the musical sounds of the performer. The digitally collected gestural data can then be re-mapped in order to modulate, as live variables, some of the operations embedded inside the algorithmic electroacoustic processes. This generative approach allows the electroacoustic structures to evolve as a consequence of a live performance (Rowe, 2004).
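The re-mapping step described above, taking a raw gestural data stream and turning it into a live control variable of a generative process, can be illustrated with a minimal sketch. This is a generic illustration, not the system discussed in the talk; the sensor range and the target parameter are hypothetical examples:

```python
def map_gesture(value, in_range, out_range):
    """Linearly re-map a raw sensor reading to a synthesis parameter,
    clamping out-of-range readings to the valid interval."""
    lo, hi = in_range
    out_lo, out_hi = out_range
    t = min(max((value - lo) / (hi - lo), 0.0), 1.0)  # normalize and clamp to [0, 1]
    return out_lo + t * (out_hi - out_lo)

# Example: a hypothetical bow-acceleration reading (raw range 0-512)
# modulating the grain density of an algorithmic granular process (1-50 grains/s).
density = map_gesture(256, (0, 512), (1.0, 50.0))  # mid-range gesture -> 25.5 grains/s
```

In a real hyperinstrument, such a mapping would run continuously on the incoming sensor stream, and the choice of mapping curve (linear, exponential, gated) is itself a compositional decision.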

Measuring and Modeling Expression in Music Performance | Dr Erica Bisesi (Montreal University)

Thursday May 19th at 13.30, online & De Muide, IPEM UGent

In this seminar, I will present a novel approach to the analysis of musical structure, performance and gesture, based on the concept of accent. Accents are local musical events that attract the attention of the listener; they can be either immanent (evident from the score) or performed (added by the performer) (Parncutt, 2003). Immanent accents can involve, among other features, temporal grouping (phrasing), meter, melody, and harmony; performed accents can involve changes in timing, dynamics, articulation, and timbre. A computational model of immanent accent salience in tonal music, which automatically predicts the positions and saliences of metrical, melodic and harmonic accents, was recently presented (Bisesi, Friberg and Parncutt, 2019a). That model was then extended to interpret local variations of tempo in terms of performed accents in seven harpsichord performances, and to classify them according to different interpretation strategies (Caron, Bisesi and Traube, 2019b). The same method can be applied to relate immanent accents to other performance features, such as dynamics, articulation and timbral descriptors (e.g., roughness, harmonicity and brightness) (Bisesi and Heroux, in progress). In an ongoing project, we plan to relate profiles of expressive features to gesture. By means of motion-tracking techniques, we will extract basic parameters describing the movement of human limbs and muscular tensions, and map them to performers’ expressive intentions and significant score events (Baroni, Caruso and Bisesi, in progress). Our collaboration aims to further expand knowledge on the theory of expressive performance by incorporating the promising perspectives of embodied cognition (Varela, Thompson and Rosch, 1991; Leman, 2007), also taking advantage of the combination of quantitative and qualitative methods broadly adopted in artistic research.

Suskewiet Visions | Filip Van Dingenen

Friday April 22nd at 13.30, online & De Muide, IPEM UGent

Artist Filip Van Dingenen was selected for a residency in the context of STARTS - Repairing The Present, organized by GLUON and the cleantech hub Snowball in Harelbeke, together with the intermunicipal organisation Leiedal. With Suskewiet Visions, Filip Van Dingenen aims to create awareness of the act of listening to and learning from birds, shifting our dominant human voice into a resonance for the potential co-creation of a future landscape.

Mixed reality, presence, music concerts, human-music interaction | Stéphanie Wilain

Friday April 8th at 13.30, online & De Muide, IPEM UGent

Extended reality technologies (XR, encompassing virtual reality, augmented reality, and mixed reality) offer methodological tools for merging different disciplines, including architecture (Whyte, 2003), acoustics (Johansson, 2019), entertainment and education (Vostinar, Horvathova, Mitter, & Bako, 2021), psychology (Blascovich et al., 2002), neuroscience (Parsons, Gaggioli, & Riva, 2017), and healthcare (Riva, Wiederhold, & Mantovani, 2019). MusiXR-Taiko is the first study in the context of MusiXR, a project that aims to improve group musical experiences in XR by tuning audio realism and visual simulations together for an optimal subjective feeling of presence. Presence is rooted in the study of the behavioural, cognitive, and emotional responses of users (musicians and audience) engaged in such experiences. Currently, empirical methods for assessing the subjective user experience in musical VR environments in terms of ‘presence’ (Slater, 2018) can be improved. The MusiXR-Taiko study aims to test a first version of the protocol that will be used throughout the MusiXR project to measure and model the sense of presence in its multidimensionality. Specifically, this study consists of exploring the emergence and fabric of the sense of presence by having an audience experience a Japanese drumming performance (‘Taiko’) in three different contexts of reality: physical reality (live performance), projected reality (screen projection of a pre-recorded performance), and mixed reality (holographic scene of a pre-recorded 3D performance). Data collection and analysis involve quantitative measurement, analysis and modelling of body movement and physiological data, as well as qualitative assessments. Presence is expected to be greater in the mixed-reality condition than in the screen-projection condition.

Rhythmic sound does not facilitate visual attention task performance compared to non-rhythmic sound | Jorg De Winne

Friday March 17th at 13.30, online & De Muide, IPEM UGent

In a century where humans and machines - powered by artificial intelligence or not - increasingly work together, it is of interest to understand human processing of multi-sensory stimuli in relation to attention and working memory. My current work explores whether and when supporting visual information with rhythmic auditory stimuli can optimize multi-sensory information processing. For this purpose, a novel working memory paradigm was developed in which participants are presented with a series of five target digits randomly interleaved with five distractor digits. Their goal is to remember the target digits and recall them orally. Depending on the condition, support is provided by audio and/or rhythm. It was expected that the sound would lead to better performance and that this benefit would differ between rhythmic and non-rhythmic sound. Last but not least, some variability across participants was expected. The effect of auditory support could be confirmed, but no difference was observed between rhythmic and non-rhythmic sounds. Overall performance was indeed affected by individual differences, such as visual dominance or perceived task difficulty. Surprisingly, musical education did not significantly affect performance and even tended towards a negative effect. As a follow-up, I am working on the analysis of the recorded EEG data to further understand the underlying processes of attention.
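The trial structure described above (five targets randomly interleaved with five distractors) can be sketched as follows. This is a hypothetical illustration of the design, not the actual paradigm code; the digit pool and shuffling policy are assumptions:

```python
import random

def make_trial(rng, n_targets=5, n_distractors=5):
    """Generate one trial: distinct digits, targets randomly interleaved
    with distractors. Returns the presented stream and the digits to recall."""
    digits = rng.sample(range(10), n_targets + n_distractors)  # distinct digits
    targets, distractors = digits[:n_targets], digits[n_targets:]
    stream = [(d, True) for d in targets] + [(d, False) for d in distractors]
    rng.shuffle(stream)  # random interleaving of targets and distractors
    return stream, targets

stream, targets = make_trial(random.Random(1))
# 'stream' is the 10-digit presentation order; participants must later
# recall only the 5 digits flagged as targets.
```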

Jorg De Winne is a PhD student at the WAVES research group, Department of Information Technology (INTEC), and at IPEM, the Institute for Psychoacoustics and Electronic Music, Department of Art, Music and Theatre Sciences. He holds a Master of Science in Electrical Engineering, main subject Communication and Information Technology. His current FERARI project (Feedback system for a more Engaging, Rewarding and Activating Rhythmic Interaction) aims to prove that an automatic neurofeedback system can be used to make the interaction of humans with a digital environment more engaging, rewarding and activating. The project is part of the larger WithMe project, which extends the FERARI project by checking whether an AI system could benefit from direct access to biomonitoring of the person it is communicating with. Both projects require an interdisciplinary approach spanning machine learning, interaction, signal processing, rhythm and music. Other interests include soundscape design, VR, auralization and mental restoration.

A selection of spatial composition research at ASIL | Bavo Van Kerrebroeck, Celien Hermans, Ella Jacobs, Lennert Carmen, Charo Calvo, Pieter-Jan Maes

Friday March 11th at 13.30, online & De Muide, IPEM UGent

Since the advent of electronic and electro-acoustic music, audio playback modes have evolved from mono and stereo to multichannel and 3D sound. These advances led to the development of compositional and performance practices in which the “spatiality of sound” functions as a primary musical parameter, alongside pitch, tone duration, timbre, and dynamics. Sound spatialization offers rich additional means for the expression of emotions, creative narratives, and imaginative thoughts and feelings in music. A core question pertains to the arrangement and control of sound in 3D space. In line with embodied music cognition theory, we attribute a key role to bodily gestures in musical expression and sense-making. This presentation will cover several projects investigating how to complement the predominantly technological focus of gestural interfaces for sound spatialization with (1) artistic knowledge of music composition, and (2) scientific knowledge from the domain of embodied music cognition and interaction. The aim is to fully exploit the artistic and creative potential of rendering gestural, imaginative expressions and repertoires into corresponding sound trajectories in 3D space. In addition, a core goal of the project is to explore the complementarity of VR visual displays in the arrangement and experience of gesture-based 3D sound. We will end the presentation with a demo of a spatial composition created at ASIL and presented at ICMC 2021, as well as an opportunity to try a prototype of an immersive gestural interface for spatial sound composition.
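One elementary building block behind rendering a gesture into a sound trajectory is amplitude panning with constant perceived power. The sketch below is a textbook stereo example, not a description of the ASIL system (which works in 3D); it shows how a gestural position, such as a hand's x-coordinate, becomes a pair of channel gains:

```python
import math

def equal_power_pan(azimuth):
    """Equal-power stereo gains for an azimuth in [-1, 1] (full left to full right)."""
    theta = (azimuth + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

# A gesture trajectory sampled over time becomes a gain trajectory:
# left^2 + right^2 == 1 at every point, so total power stays constant
# while the virtual source moves across the stage.
trajectory = [equal_power_pan(x / 10.0) for x in range(-10, 11)]
```

Multichannel and 3D systems generalize this idea (e.g. vector-base amplitude panning or Ambisonics), but the principle of mapping a continuous gestural coordinate to constant-power gains carries over.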

The IPEM Research Team

The Digital Score: Investigating Technological Transformation of the Music Score (DigiScore) | Craig Vear

Friday March 3rd at 13.30, online & De Muide, IPEM UGent

In this talk I will introduce and discuss my ERC Consolidator funded project The Digital Score: Investigating Technological Transformation of the Music Score (DigiScore). The opening section will position the digital score within a broader understanding of the function and purpose of all music scores: that of a communications interface between musicians. After defining what marks a digital score as a different proposition, I will outline the research aims and objectives of the DigiScore project. I will position this research among the core principles of flow, phenomenology, embodiment, and media affect, and outline its focus as seeking meaning-making from inside the creative acts of digital score musicking (Small 1989). The final section will outline some of the current new insights, and present key questions that I wish to discuss with those present in the room, as a way of allowing their voices into the development of this research and expanding the community of researchers in this area.

Prof. Craig Vear is Research Professor at De Montfort University, where he is a director of the Creative AI and Robotics Lab in the Institute of Creative Technologies. His research is naturally hybrid, drawing together the fields of music, digital performance, creative technologies, artificial intelligence, creativity, gaming, mixed reality and robotics. He has been engaged in practice-based research with emerging technologies for nearly three decades, and was editor of The Routledge International Handbook of Practice-Based Research, published in 2022. His recent monograph The Digital Score: creativity, musicianship and innovation was published by Routledge in 2019, and he is Series Editor of Springer’s Cultural Computing Series. In 2021 he was awarded a €2 million ERC Consolidator Grant to continue to develop his Digital Score research.

Dwelling Xenakis. An augmented reality project on Evryali for piano solo | Pavlos Antoniadis

Friday January 11th at 13.30, online & De Muide, IPEM UGent

This talk will present an ongoing augmented reality project based on Iannis Xenakis’ work Evryali for piano solo. Drawing on an interdisciplinary framework, we transform the performance space into a hybrid environment consisting of physical, symbolic and virtual elements. The synergy of these elements reveals to the audience affordances of meaning and action that normally remain exclusive to the performer. Importantly, the audience is invited to ‘dwell’ in this environment, that is, to experience the affordances in a fully embodied and multimodal way, often including real-time interaction. This hybrid environment is inextricably linked to the visualization and communication of the performer’s learning process, defined here as embodied navigation of notational affordances.

Pavlos Antoniadis (PhD in musicology, University of Strasbourg-IRCAM; MA in piano performance, University of California, San Diego; MA in musicology, University of Athens) is a Greek pianist, musicologist and creative technologist living between Berlin, Strasbourg and Athens. Next to an ongoing international career as a pianist for contemporary and experimental music, he has been equally engaged in performance science research, focusing on embodied music cognition, technology-enhanced musicianship and the biopolitics of human and machine learning. He currently holds a Humboldt Stiftung Fellowship at the Berlin Institute of Technology (TU - Audiokommunikation) and has an ongoing long-term collaboration with the Interaction-Son-Musique-Mouvement team at IRCAM (Paris), as well as with the EUR-ArTeC, Université Paris 8.

The Ear as an Eye | Francesca Ajossa

Friday January 11th at 13.30, online & De Muide, IPEM UGent

Nowadays music is just a few mouse clicks away and comes to us in many different forms; yet, no matter the circumstances, it is primarily linked to the auditory stimuli it consists of. Numerous studies show, however, that visual information also plays a fundamental role in the way an audience experiences a musical performance. In contrast to traditional organ performances, where the visual element is almost absent because of the hidden position of the console, the aim of this project is to use the great communicative power of vision to enhance the expressiveness of the performance and possibly overcome the limitations pointed out in the literature, through the idea of “music-projected moving bodies”: a danced choreography added to the performer-audience line of communication. The main artistic tool here is the scientific knowledge of embodied music cognition, in which the body is seen as a fundamental mediator between music and mind. A study of musicians’ movements whilst playing is conducted to create the movement vocabulary, to be used in a choreography whose structure is based on applying the Laban Technique to the musical score of O. Messiaen’s cycle “Les Corps Glorieux”, as a way to translate music into movement. Through a series of iterative cycles in which dialogue with dancers and audience, together with expert feedback, serves as the main selection criterion, a final cross-modal performance is created.

Francesca Ajossa (1999) started her musical training at a very young age in the class of prof. Angelo Castaldo at the “Palestrina” Conservatory in Cagliari, where she obtained her Bachelor's diploma in 2018. Her activity as a concert organist has taken her to various festivals in Europe and Hong Kong, and she has won several prizes. She recorded a CD dedicated to 19th-century organ music in Sardinia (Tactus, 2017), and in 2019 her recording of organ works by Italian female composers was published by Stradivarius. In 2020, Francesca graduated cum laude from the Master of Music at Codarts Rotterdam, where she studied with prof. Ben van Oosten, and one year later she also concluded her music psychology studies at the University of York. She is currently a PhD candidate at KU Leuven (Belgium) and the main organist of the church in ‘t Woudt (Netherlands).

Practicing Odin Teatret's Archive: training transmission, interaction and creativity | Adriana La Selva and Ioulia Marouda

Friday January 1st at 13.30, online, IPEM UGent

The progress made in the fields of technology, information theory, computational modeling, and immersive multisensory displays puts the notion of the body as archive in a new perspective, especially as far as theatre technique is concerned. In line with these recent developments, this research investigates what it means to develop, practice and perform an archive. We propose to create a sustainable model for the development, transmission and distribution of virtually archived theatre acting techniques, in which the user becomes interactively and creatively engaged in the production of knowledge about theatre training practices. By elaborating on hybrids of physical and virtual spaces, we allow a user to drift towards a new awareness of embodied knowledge transmission, production and distribution, in which the freedom of a theatre laboratory provides the space for an interactive and creative encounter with codified artistic techniques and practices through virtual reality immersion. In this sense, the archive becomes a dramaturgical tool for the actor, dancer and performer, an ‘architecture of access’ for finding one's way, through lived experience, through the great amount of data available nowadays. By treating the archive and embodied heritage as an interactive tool whose main focus is an interdisciplinary functionality of one's experience, this research gives voice to its potential for fostering innovative expressive communication.

In this Think Tank, we will share the progress of our collaboration so far, the MoCap developments with the practitioners involved in the project, and our first translations of their practices to the virtual archive. We would also like to share some concerns regarding these translations and think with you about possible solutions. This senior research project is funded by FWO and coordinated by S:PAM (Studies in Performance and Media, Ghent University) and IPEM (Institute for Psychoacoustics and Electronic Music), in collaboration with Utrecht University (NL), Manchester Metropolitan University (UK), Aalborg University (DK) and Odin Teatret (DK).

Adriana La Selva is a theatre-maker, performer, networker and researcher. Her research investigates what it means to practice an archive, addressing the transmission of embodied practices through virtual media and dramaturgical approaches to archival practices. In 2009, she completed a practice-based Master's degree in Contemporary Arts at the University of Lancaster, UK, on Deleuze and Guattari's notion of becoming in relation to physical theatre. Since 2015 she has been a member of the international theatre group The Bridge of Winds, led by Odin Teatret actress Iben Nagel Rasmussen. Together with Marije Nie, Adriana co-founded Cross Pollination, an international network of performers and researchers that focuses on the dialogue in between practices, new ways of knowledge building, and understanding collaboration. She created theater m u s t, a company based in Antwerp focused on theatre for young audiences, and is the artistic director of comm'on vzw, an organization based in Ghent focusing on artistic exchange and applied theatre.

Ioulia Marouda is a multidisciplinary designer whose work spans interactive art and scenography. Her research interests include the transmission of embodied knowledge through immersive and interactive technologies, as well as the translation of physiological data as a way to explore the possibilities of the virtual body and uncover qualities otherwise invisible. Since her diploma studies in Architecture at the National Technical University of Athens, she has had a deep interest in the way digital technologies affect our perception of space. This led her to further study at the Interactive Architecture Lab at UCL. She has worked with design studios and in theatre in Germany and the UK, designing physical and digital temporary spaces. Ultimately, through working across mediums she aims to express the story of today.

Remark by Luc Nijs during the Think Tank: “Intentionality still takes place in a Euclidean space that prevents it from understanding itself, and must be surpassed towards another, ‘topological’ space [...]” (Deleuze, G., Foucault, Les Éditions de Minuit, Paris, 2004 [1986], pp. 117-118).