Media Installation "sensitive_aggregation"
2024. ComfyUI, NimAI, Midjourney, DALL-E, HeyGen, Max/MSP, TouchDesigner, Adobe Photoshop, Adobe Premiere Pro, Cubase.
The work translates an investigation of affective computing into a sustained operational environment. Six screens establish a space in which each screen corresponds to one of the six basic emotions in Ekman's classification: joy, sadness, fear, surprise, anger, and disgust. Original characters voice a textual narrative built from GEMEP prototypical scenarios, accompanied by audiovisual structures aggregated from the RAVDESS and CREMA-D databases: coherent yet reduced representations of emotion.
The viewer moves freely within the environment, choosing which screen to observe. Six pairs of headphones allow immersion in the sound layer of each channel: a musical composition and a narrative unfolding around a specific emotional state. The content plays in a continuous loop regardless of the viewer's presence: the environment exists as a self-sufficient process that requires no external participation.
The installation renders the mechanism of reduction itself observable: the computational model constructs a coherent picture precisely where affective experience remains fundamentally heterogeneous, situational, and bodily conditioned. The gap between the complexity of lived experience and its algorithmic processing is not an event here, but a permanently operating condition of the environment.
From this gap, the ethical dimension of affective computing becomes visible. Emotion recognition technologies operate with markers that vary across cultures, bodies, and situations. The capacity to aggregate and classify emotional data opens a space for control and manipulation: a territory on which computational precision and ethical permissibility cease to coincide.
The project was first presented at the exhibition "The Realism of the Invisible" at the Air Gallery, Art & Science Centre, ITMO University, Saint Petersburg, Russia, in July 2024.
Special thanks to the Swiss Center for Affective Science and the University of Geneva Department of Psychology, Switzerland, for the prototypical scenarios of The GEneva Multimodal Emotion Portrayals (GEMEP);
the interdisciplinary research team SMART Lab at Toronto Metropolitan University, Canada, for The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS);
and the CREMA-D research group: Houwei Cao and Ragini Verma, Department of Radiology, University of Pennsylvania; David G. Cooper, Department of Math and Computer Science, Ursinus College; Michael K. Keutmann, Department of Psychology, University of Illinois at Chicago; Ruben C. Gur, Neuropsychiatry Section, Department of Psychiatry, University of Pennsylvania, and Philadelphia Veterans Administration Medical Center; and Ani Nenkova, Department of Computer and Information Science, University of Pennsylvania;
whose materials are published in the public domain.
Author: Lucien
Sound: Franko, Lucien
Curator: Christina Ots
Studio Head: Helena Nikonole