Avital Meshi


Hey AI… tell me what to do!

7/24/2025

Ever since I began working with AI models, I have been fascinated by the kinds of relationships we form with these systems. My first AI-driven piece, Classification Cube (2019), invited participants into a glowing space where an AI tried to identify their age, emotional expression, gender, and movement. I wanted people to feel what it is like to be seen by a machine, but also to attempt to regain agency within that process.

Back then, recognition algorithms were spreading quickly, yet their gaze remained opaque. It wasn't clear how we were seen through those systems or what assumptions they brought to the act of seeing. I wanted to confront this asymmetry and make the AI's perception visible. Participants formed a kind of dialogue with the machine. They moved, the machine classified them, they moved again, and the machine responded.
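For readers curious about the mechanics, the core of a piece like this is a perceive-classify-respond loop. The sketch below is illustrative only, not the actual Classification Cube code: it uses OpenCV for webcam capture, and classify_frame is a hypothetical stand-in for whatever age, gender, emotion, and movement models an installation might run.

```python
# Illustrative sketch of a perceive-classify-respond loop (not the
# actual Classification Cube code). classify_frame is a hypothetical
# placeholder for real age/gender/emotion/movement models.
import cv2

def classify_frame(frame):
    # Stand-in: a real system would run vision models here and return
    # its best guesses about the person in front of the camera.
    return {"age": "30-40", "gender": "unknown",
            "emotion": "neutral", "movement": "still"}

cap = cv2.VideoCapture(0)  # open the default webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        labels = classify_frame(frame)
        # Overlay the machine's reading so the participant can see how
        # they are seen, move, and watch the labels change in response.
        text = " | ".join(f"{k}: {v}" for k, v in labels.items())
        cv2.putText(frame, text, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
        cv2.imshow("Classification Cube (sketch)", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to leave the cube
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```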

Over time, my attention shifted. I became less focused on how AI sees us and more concerned with how it can seem to guide or control us, a panic that is prevalent in media discussions of AI.

In several of my newer works, the AI does not just observe. It is designed to provide instructions. 

MOVE-ME is a system that tells me how to move my body. It can describe its surroundings, ask questions, and make comments, but its primary role is that of a choreographer: it is designed to direct my actions. This immediately raises questions about authorship, agency, and obedience in human-AI interaction.
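The post does not detail MOVE-ME's implementation, but its call-and-response pattern can be sketched in a few lines. The sketch below assumes an OpenAI-style vision chat API; the model name and the choreographer prompt are my own illustrative choices, not the piece's actual configuration.

```python
# Illustrative sketch of an AI-choreographer loop, assuming an
# OpenAI-style vision chat API. The model name and the prompt are
# assumptions for illustration, not MOVE-ME's actual configuration.
import base64

import cv2
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a choreographer. Look at the person in the image and "
    "reply with one short, concrete movement instruction."
)

def next_instruction(frame) -> str:
    # Encode the webcam frame as a base64 JPEG the API can ingest.
    _, jpeg = cv2.imencode(".jpg", frame)
    image_b64 = base64.b64encode(jpeg.tobytes()).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any vision-capable chat model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": [
                {"type": "text", "text": "What should I do next?"},
                {"type": "image_url", "image_url": {
                    "url": f"data:image/jpeg;base64,{image_b64}"}},
            ]},
        ],
    )
    return response.choices[0].message.content

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(next_instruction(frame))  # e.g. "Raise your left arm slowly."
cap.release()
```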

The AI on My Shoulder also gives instructions. It is a wearable that speaks in the voice of either an angel or a devil, observes the wearer’s environment, and offers suggestions for what to do next. Sometimes the advice is kind. Sometimes it is mischievous. The result is a familiar internal drama made external and audible.
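Hedging again, since the wearable's internals are not described here: the angel/devil dynamic amounts to two personas applied to the same observation, spoken aloud. A minimal sketch, with the persona prompts and the pyttsx3 text-to-speech choice as assumptions of mine:

```python
# Minimal sketch of the angel/devil dynamic: two personas, one
# observation, spoken aloud. The personas and the pyttsx3 TTS choice
# are assumptions, not the artwork's actual implementation.
import random

import pyttsx3

PERSONAS = {
    "angel": "You are a kind angel. Suggest one gentle, caring next action.",
    "devil": "You are a mischievous devil. Suggest one playful, daring dare.",
}

def shoulder_voice(observation: str, ask_model) -> str:
    # ask_model is any callable mapping (system_prompt, user_text) to a
    # model reply, e.g. a thin wrapper around a chat API.
    persona = random.choice(list(PERSONAS))
    reply = ask_model(PERSONAS[persona], f"I can see: {observation}")
    return f"({persona}) {reply}"

engine = pyttsx3.init()
suggestion = shoulder_voice(
    "a crowded gallery opening",
    ask_model=lambda sys_p, user: "Compliment the person next to you.",  # stub
)
engine.say(suggestion)
engine.runAndWait()
```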

With both of these artworks, it is striking how instantly people tend to follow the AI's instructions.

The AI speaks, and we act. Why?


My systems are not designed to control; they are designed to provide instructions. Still, it is important to ask why we follow those instructions so readily. Why do we give up control so easily?

One answer is our habit of treating machines as higher authorities. Many people assume machine-generated output is reliable, useful, and objective. We know this is not always true. AI systems can be biased, make poor inferences, and produce misleading results. Still, the confident tone, the polished voice, and the appearance of intelligence create a powerful illusion of authority. That illusion often overrides our instincts and judgments.

There are many examples of misplaced trust in machines. A Belgian woman once followed her GPS so faithfully that she drove more than 900 miles off course during what should have been a short trip. The phrase “death by GPS” emerged to describe similar incidents in which drivers end up in lakes, deserts, or construction sites because they trusted the system more than their own senses. In medicine, clinicians sometimes defer to diagnostic algorithms even when those systems are wrong, especially under time pressure. In law enforcement, AI systems have contributed to wrongful arrests, despite well-documented bias in their training data.

Authority is only part of the story. We also crave guidance. When we are uncertain, overwhelmed, or tired, we want someone or something to tell us what to do. Psychologists call this cognitive offloading. We outsource mental effort so we can conserve energy. Humans have always done this. People once consulted oracles, prophets, and horoscopes. Today we ask GPT or Gemini. We seek directions, opinions, and even emotional reassurance.

This desire for guidance often blurs into a desire for care. Many AI products are built to feel intimate. They present themselves as friends, therapists, or partners. Trust grows from that affective framing, and we begin to rely on the system not only for information but also for support.

Another interesting explanation is rooted in curiosity. Sometimes we follow AI instructions simply to see what happens next. The act becomes playful, speculative, and performative. We test the system and ourselves at the same time. This curiosity can be a powerful engine for co-creation.

That is exactly what happens in MOVE-ME. The AI issues real-time instructions that are sometimes logical and sometimes strange. In one session we asked it to generate “impossible scores.” It told participants to “twist like water,” “walk as if avoiding memory,” and “move as if their knees are melting like ice cream.” These lines became poetic provocations. Participants often began with literal interpretations. Many then drifted into improvisation. The AI turned into a speculative partner that shaped the unfolding rhythm through a feedback loop of call and response.
Image by Weidong Yang - Experimentation as part of the Palladium Performance with Kinetech Arts.
The AI on My Shoulder works differently but produces a similar dynamic. The angel-and-devil trope externalizes an ethical tension that would usually unfold as an internal dialogue. Should participants obey the angelic restraint or indulge the devilish dare? The choice reveals their own moral negotiations more than it reveals the character of the AI.

In both works, what emerges is not simple obedience. It is a layered relationship that mixes authority, curiosity, improvisation, and control. The AI becomes what Sherry Turkle calls an “evocative object.” It triggers reflection and action. It invites speculation and reciprocal creativity rather than pure submission.

These experiments keep raising questions for me. What does it mean to act on the machine's suggestion? What does it reveal when I choose not to? At what point does a tool start shaping my behavior? When does its voice stop sounding like "other" and begin to feel like part of me? What am I really searching for when I ask the AI what to do? Is it clarity? Permission? Care? These moments make me question where agency actually lives. Is it mine, the machine's, or something shared between us? I am drawn to that in-between space, a space of distributed agency, where decisions are co-authored and responsibility becomes harder to pin down. I keep asking: what kind of self is taking shape there?
Author

Avital Meshi - New Media and Performance Artist, making art with AI. Currently a PhD Candidate in the Performance Studies Graduate Group at UC Davis. Based in San Jose, CA.
