Audiovisual Performance "Sensitive Aggregation"
2024. ComfyUI, NimAI, Midjourney, DALL-E, HeyGen, Max/MSP, TouchDesigner, Adobe Photoshop, Adobe Premiere Pro, Cubase.

An experimental situation in which human emotions are subjected to algorithmic interpretation in real time. The performance is structured as a public act of observation at the meeting point of two incommensurable orders: live musical expression and its computational processing.

A musician sequentially embodies eleven emotional states according to Izard's classification, performing musical fragments in a live setting. In parallel, virtual agents interpret each state, unfolding an illustrative video sequence on a large screen. The interpretations are grounded in the prototypical scenarios developed at the Swiss Center for Affective Sciences in Geneva: emotion here has already been formalised as a reproducible pattern, extracted from lived experience and translated into a classifiable signal.

The two processes unfold simultaneously yet belong to incommensurable registers. The reduction of emotion to a classifiable signal becomes observable: each response of the system retains the form of an affective statement while systematically losing what makes it an experience. The system produces interpretation without the participation of experience.

The performance exposes a structural asymmetry that the discourse of affective computing tends not to articulate: between a signal and its affective content, there is no computable transition. Emotion arises at the intersection of bodily experience, cultural codes, and situational context, in precisely that zone which remains impenetrable to computational procedure. The question that lingers after the performance is this: what exactly are we prepared to recognise as emotion when it is a system doing the recognising?

The performance was first presented at the New Stage of the Alexandrinsky Theatre, Saint Petersburg, Russia, in November 2024.

Special thanks to the Swiss Center for Affective Sciences and the Department of Psychology, University of Geneva, Switzerland, for the prototypical scenarios of the GEneva Multimodal Emotion Portrayals (GEMEP);

the interdisciplinary SMART Lab research team at Toronto Metropolitan University, Canada, for The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS);

the research group of Houwei Cao and Ragini Verma, Department of Radiology, University of Pennsylvania; David G. Cooper, Department of Mathematics and Computer Science, Ursinus College; Michael K. Keutmann, Department of Psychology, University of Illinois at Chicago; Ruben C. Gur, Neuropsychiatry Section, Department of Psychiatry, University of Pennsylvania, and the Philadelphia Veterans Administration Medical Center; and Ani Nenkova, Department of Computer and Information Science, University of Pennsylvania;

whose materials are published in the public domain.

Author: Lucien
Sound: Franko, Lucien

Curator: Christina Ots
Studio Head: Helena Nikonole