Ever since I began working with AI models, I have been fascinated by the kinds of relationships we form with these systems. My first AI-driven piece, The Classification Cube, invited participants into a glowing space where an AI tried to identify their age, emotional expression, gender, and movement. I wanted people to feel what it is like to be seen by a machine, but also to try to regain agency within that process. Back then, recognition algorithms were spreading quickly, yet their gaze remained opaque. It was unclear how we were seen through those systems, or what they assumed about us. I wanted to confront this asymmetry and make the AI's perception visible. Participants formed a kind of dialogue with the machine: they moved, the machine classified them, they moved again, and the machine responded.

Over time, my attention shifted. I became less focused on how AI sees us and more concerned with how it might seem to guide or control us, a fear that runs through much of the public conversation about AI. In several of my newer works, the AI does not just observe; it is designed to give instructions. MOVE-ME is a system that tells me how to move my body. It can describe its surroundings, ask questions, and make comments, but its primary role is that of a choreographer: it is designed to direct my action. This immediately raises questions about authorship, agency, and obedience in human-AI interaction. The AI on My Shoulder also gives instructions. It is a wearable that speaks in the voice of either an angel or a devil, observes the wearer's environment, and offers suggestions for what to do next. Sometimes the advice is kind; sometimes it is mischievous. The result is a familiar internal drama made external and audible.

What surprises me about both artworks is the instant tendency to follow the AI's instructions. The AI speaks, and we act. Why? My systems are not designed to control; they simply offer instructions.
So why do we follow those instructions so readily? Why do we give up control so easily?

One answer is our habit of treating machines as higher authorities. Many people assume machine-generated output is reliable, useful, and objective. We know this is not always true: AI systems can be biased, make poor inferences, and produce misleading results. Still, the confident tone, the polished voice, and the appearance of intelligence create a powerful illusion of authority, and that illusion often overrides our instincts and judgments. There are many examples of misplaced trust in machines. A Belgian woman once followed her GPS so faithfully that she drove more than 900 miles off course during what should have been a short trip. The phrase "death by GPS" emerged to describe similar incidents in which drivers end up in lakes, deserts, or construction sites because they trusted the system more than their own senses. In medicine, clinicians sometimes defer to diagnostic algorithms even when those systems are wrong, especially under time pressure. In law enforcement, AI systems have contributed to wrongful arrests, despite well-documented bias in their training data.

Authority is only part of the story. We also crave guidance. When we are uncertain, overwhelmed, or tired, we want someone or something to tell us what to do. Psychologists call this cognitive offloading: we outsource mental effort so we can conserve energy. Humans have always done this. People once consulted oracles, prophets, and horoscopes; today we ask GPT or Gemini. We seek directions, opinions, and even emotional reassurance. This desire for guidance often blurs into a desire for care. Many AI products are built to feel intimate. They present themselves as friends, therapists, or partners. Trust grows from that affective framing, and we begin to rely on the system not only for information but also for support.
Another explanation is rooted in curiosity. Sometimes we follow AI instructions simply to see what happens next. The act becomes playful, speculative, and performative; we test the system and ourselves at the same time. This curiosity can be a powerful engine for co-creation.

That is exactly what happens in MOVE-ME. The AI issues real-time instructions that are sometimes logical and sometimes strange. In one session we asked it to generate "impossible scores." It told participants to "twist like water," "walk as if avoiding memory," and "move as if their knees are melting like ice cream." These lines became poetic provocations. Participants often began with literal interpretations; many then drifted into improvisation. The AI turned into a speculative partner that shaped the unfolding rhythm through a feedback loop of call and response.

Image by Weidong Yang: experimentation as part of the Palladium performance with Kinetech Arts.

The AI on My Shoulder works differently but produces a similar dynamic. The angel-and-devil trope externalizes the ethical tension that would usually remain an internal dialogue. Should participants obey the angelic restraint or indulge the devilish dare? The choice reveals their own moral negotiations more than it reveals the character of the AI. In both works, what emerges is not simple obedience. It is a layered relationship that mixes authority, curiosity, improvisation, and control. The AI becomes what Sherry Turkle calls an "evocative object": it triggers reflection and action, inviting speculation and reciprocal creativity rather than pure submission.

These experiments keep raising questions for me. What does it mean to act on the machine's suggestion? What does it reveal when I choose not to? At what point does a tool start shaping my behavior? When does its voice stop sounding like "other" and begin to feel like part of me? What am I really searching for when I ask the AI what to do? Is it clarity? Permission? Care?
These moments make me question where agency actually lives. Is it mine, the machine’s, or something shared between us? I am drawn to that in-between space, a space of distributed agency, where decisions are co-authored and responsibility becomes harder to pin down. I keep asking: what kind of self is taking shape there?
Author: Avital Meshi, New Media and Performance Artist making art with AI. Currently a PhD Candidate in the Performance Studies Graduate Group at UC Davis.
November 2025