#affective_operations
#perceptual_thresholds
//Google Colab, HeyGen, ElevenLabs, Adobe Photoshop, Adobe Premiere Pro, Cubase.

Lanas
Audiovisual Performance


An audiovisual performance in which a neural network model is trained to work with the linguistic material of emotional states. The foundation is a dataset of over a thousand words, collected
by hand from Russian-language synonym and association dictionaries and mapped onto eleven basic affective categories. Trained on this data, the model generates new lexical units that
are syntactically similar to the originals yet stripped of semantic content.
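The generation step described above can be sketched in miniature. The project's actual architecture is not specified, so this is only a hypothetical illustration using a character-level Markov model: trained on a handful of stand-in emotion words, it emits pseudo-words whose letter sequences look plausible while naming nothing.

```python
import random

# Toy stand-in for the project's dataset of Russian emotion-word
# synonyms (hypothetical sample, not the actual dataset).
words = ["радость", "восторг", "ликование", "веселье", "счастье"]

def build_model(words, order=2):
    """Collect character transitions of the given order from the word list."""
    model = {}
    for w in words:
        padded = "^" * order + w + "$"   # ^ marks start, $ marks end
        for i in range(len(padded) - order):
            key = padded[i:i + order]
            model.setdefault(key, []).append(padded[i + order])
    return model

def generate(model, order=2, max_len=12, rng=random):
    """Emit a pseudo-word: statistically plausible letters, no meaning."""
    out = []
    key = "^" * order
    while len(out) < max_len:
        nxt = rng.choice(model[key])
        if nxt == "$":                   # end-of-word marker reached
            break
        out.append(nxt)
        key = key[1:] + nxt              # slide the context window
    return "".join(out)

model = build_model(words)
print(generate(model))  # e.g. a word-shaped string with no referent
```

A larger dataset and a neural model (as the work uses) would smooth the output further, but the principle is the same: the syntax of a word is reproducible from statistics alone.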

The performance unfolds as a public act of observation. A character appears on screen:
he speaks, his speech is formed, his presence is legible, and it is precisely this legible presence that exposes the absence of what usually stands behind a word. Speech synthesis
and animation produce a subject of utterance behind whom there is no experience. A musician accompanies this process through live improvisation, building a parallel line in real time: a human articulation of the same affective states with which the system operates. The two lines coexist without entering into dialogue: one produces form without content, the other carries content without any guarantee of form.

The viewer finds themselves between two orders: a computational operation indifferent
to meaning, and a bodily act in which meaning is inseparable from experience. In this way,
the limit of affective computing becomes apparent: the structure of emotion is reproducible,
but the experience that fills it does not belong to the domain of syntax.

The performance was first presented at the Dom 77 creative cluster as part of the All-Russian Night of Museums initiative, Samara, Russia, in May 2023.

Author: Lucien
Sound: Franko, Lucien

Curator: Laura Sinanyan