Then came the large language models (LLMs). I began working with GPT before it went mainstream, and I quickly fell in love with it. Back when it was still GPT-2, it felt like the model and I were struggling side by side. Its writing was clumsy and scattered; sentences often began with one idea and ended somewhere completely different. But I found something delightful in its unpredictability. I spent hours talking with it, captivated by its creative sparks and incoherent stream of consciousness.
Then everything changed. The model got better. Much better! I found myself feeling both amazed and unsettled. How had it improved so quickly? Why couldn’t I keep up? That frustration eventually pushed me to explore ways to integrate GPT into my own life and embody it. I developed GPT-ME, a wearable system that allowed GPT to whisper words into my ear during all my social interactions. For months, I spoke GPT’s words instead of coming up with them myself. That experiment was transformative, and I continue to perform it, think about it, and write about it.

As someone who has willingly lent her voice to a machine, I often find myself asking: what’s the point of writing, now that AI can do it for me? If GPT can generate sentences faster, more clearly, and often more eloquently than I can, then why bother? Why struggle with grammar, flow, or the constraints of a second language when a model can produce polished text in seconds?

Framing the question this way reflects a common mindset, one that sees AI as something designed to outperform humans and eventually replace us. It turns our relationship with AI into a zero-sum game, a competition over labor, intelligence, and decision-making. In the process, we begin to relinquish the very practices that shape our identities and define our humanity. But this competitive framing isn’t inherent to the technology itself. It is a product of the cultural, political, and economic systems through which AI is developed and understood. There are other ways to imagine our lives alongside AI technology.

What if, instead of treating AI as a replacement, we saw it as a companion? Or a collaborator? What if writing with AI wasn’t a form of surrender, but an invitation to think differently, to stretch the boundaries of authorship, voice, and self? When I write with GPT, I’m not trying to compete. I’m trying to engage. There is a back-and-forth in which I write something and give it to GPT.
Sometimes I ask GPT to rewrite what I’ve written, making it clearer and more coherent. Other times, I ask it to respond directly to my thoughts. Sometimes I let it complete my sentences. Sometimes it surprises me, and I follow its lead. Other times, I reject its suggestions and start again. This interaction is not just about getting a piece of text done. It becomes an intimate relationship between GPT and me, a correspondence that reveals how language emerges between us, shaped by context, intention, and negotiation.

Writing in this hybrid mode makes me more aware of the choices I make. It slows me down. It reminds me that, more than anything, language is a space of encounter. Between me and the machine. Between me and my readers. Between who I am and who I might become. So I write, not in spite of AI, but with it. Not to prove that I can. Not even primarily so others can read what I produce. I write to cultivate relationships, to explore what becomes possible when authorship is shared, distributed, and unstable.

Come to think of it, wasn’t it always like that? Writing has never been a purely solitary act. My words carry the echoes of every book I’ve read, every conversation I’ve had, every teacher who corrected my drafts. Even when I thought I was writing alone, I was in dialogue with ideas, with conventions, with the words of others. The self, after all, is never entirely separate. It is shaped through relations, through language, through time.

Maybe the real shift we’re experiencing isn’t just about letting AI write with or for us. Maybe the deeper transformation lies in having AI as our reader. GPT becomes an entity that reads my words and responds, evaluates, and reflects. Beyond that, it is also being trained on what I write. Might this be the reason to write these days?
Ever since I began working with AI models, I have been fascinated by the kinds of relationships we form with these systems. My first AI-driven piece, The Classification Cube, invited participants into a glowing space where an AI tried to identify their age, emotional expression, gender, and movement. I wanted people to feel what it is like to be seen by a machine, but also to attempt to regain agency within that process. Back then, recognition algorithms were spreading quickly, yet their gaze remained opaque. It wasn’t clear how we were seen through those systems or what they presumed to see. I wanted to confront this asymmetry and make the AI’s perception visible. Participants formed a kind of dialogue with the machine: they moved, the machine classified them, they moved again, and the machine responded.

Over time, my attention shifted. I became less focused on how AI sees us and more concerned with how it can seem to guide or control us, a media panic prevalent in AI-related discussions. In several of my newer works, the AI does not just observe. It is designed to provide instructions. MOVE-ME is a system that tells me how to move my body. It can describe its surroundings, ask questions, and make comments, but its primary role is that of a choreographer. It is designed to direct my action. This immediately raises questions about authorship, agency, and obedience in human-AI interaction. The AI on My Shoulder also gives instructions. It is a wearable that speaks in the voice of either an angel or a devil, observes the wearer’s environment, and offers suggestions for what to do next. Sometimes the advice is kind. Sometimes it is mischievous. The result is a familiar internal drama made external and audible.

It is striking that with both of these artworks there is an instant tendency to follow the AI’s instructions. The AI speaks, and we act. Why? My systems are not designed to control; they only provide instructions.
So it is worth asking why we follow those instructions so readily. Why do we so easily give up control? One answer is our habit of treating machines as higher authorities. Many people assume machine-generated output is reliable, useful, and objective. We know this is not always true. AI systems can be biased, make poor inferences, and produce misleading results. Still, the confident tone, the polished voice, and the appearance of intelligence create a powerful illusion of authority. That illusion often overrides our instincts and judgments.

There are many examples of misplaced trust in machines. A Belgian woman once followed her GPS so faithfully that she drove more than 900 miles off course during what should have been a short trip. The phrase “death by GPS” emerged to describe similar incidents in which drivers end up in lakes, deserts, or construction sites because they trusted the system more than their own senses. In medicine, clinicians sometimes defer to diagnostic algorithms even when those systems are wrong, especially under time pressure. In law enforcement, AI systems have contributed to wrongful arrests, despite well-documented bias in their training data.

Authority is only part of the story. We also crave guidance. When we are uncertain, overwhelmed, or tired, we want someone or something to tell us what to do. Psychologists call this cognitive offloading: we outsource mental effort so we can conserve energy. Humans have always done this. People once consulted oracles, prophets, and horoscopes. Today we ask GPT or Gemini. We seek directions, opinions, and even emotional reassurance. This desire for guidance often blurs into a desire for care. Many AI products are built to feel intimate. They present themselves as friends, therapists, or partners. Trust grows from that affective framing, and we begin to rely on the system not only for information but also for support.
Another explanation is rooted in curiosity. Sometimes we follow AI instructions simply to see what happens next. The act becomes playful, speculative, and performative. We test the system and ourselves at the same time. This curiosity can be a powerful engine for co-creation.

That is exactly what happens in MOVE-ME. The AI issues real-time instructions that are sometimes logical and sometimes strange. In one session we asked it to generate “impossible scores.” It told participants to “twist like water,” “walk as if avoiding memory,” and “move as if their knees are melting like ice cream.” These lines became poetic provocations. Participants often began with literal interpretations. Many then drifted into improvisation. The AI turned into a speculative partner that shaped the unfolding rhythm through a feedback loop of call and response.

Image by Weidong Yang - Experimentation as part of the Palladium Performance with Kinetech Arts.

The AI on My Shoulder works differently but produces a similar dynamic. The angel-and-devil trope externalizes an ethical tension that would usually unfold as internal dialogue. Should participants obey the angelic restraint or indulge the devilish dare? The choice reveals their own moral negotiations more than it reveals the character of the AI. In both works, what emerges is not simple obedience. It is a layered relationship that mixes authority, curiosity, improvisation, and control. The AI becomes what Sherry Turkle calls an “evocative object.” It triggers reflection and action. It invites speculation and reciprocal creativity rather than pure submission.

These experiments keep raising questions for me. What does it mean to act on the machine’s suggestion? What does it reveal when I choose not to? At what point does a tool start shaping my behavior? When does its voice stop sounding like “other” and begin to feel like part of me? What am I really searching for when I ask the AI what to do? Is it clarity? Permission? Care?
These moments make me question where agency actually lives. Is it mine, the machine’s, or something shared between us? I am drawn to that in-between space, a space of distributed agency, where decisions are co-authored and responsibility becomes harder to pin down. I keep asking: what kind of self is taking shape there?

Any day is a good day to start a blog. This one will focus on my experiences with art-making, art-viewing, and art-thinking. For a long time, I struggled to express myself in writing. English is not my first language, which made it even more challenging. However, with the advent of large language models (LLMs), every piece of text I write, whether it’s a short email or a lengthy book chapter, now gets proofread by one LLM or another (including this post). My favorite prompt is “rewrite for clarity and coherence,” which I use to refine my text. To me, this process is like photo editing. When I was a professional photographer, no photo could be published without at least some editing.
My engagement with LLMs goes beyond writing. As you can see from my artworks, AI is my primary medium, and I explore it through performances, installations, and new media theory. I follow a practice-as-research (PAR) methodology, which is just another way of saying that I live with AI, something we all do these days. Through my art, I aim to understand its impact on my life, behavior, and social interactions.

Beyond creating art, I also maintain a dedicated practice of art viewing. I regularly visit museums and galleries to see what other artists are making and what they are thinking about. Contemporary art is my favorite, though I also take time to appreciate modern and classic works. My art-making is constantly in dialogue with the work of others. I don’t work in a vacuum, and I find it difficult to claim that my work is entirely original because, honestly, I don’t believe anything can be called truly original. The art I encounter is an essential source of inspiration for me. Sometimes, the connection is obvious, and my work responds directly to something I’ve seen. Other times, the influence is less clear, but I know it’s there.

Art-thinking includes engaging in conversations, reading books and articles, and spending time reflecting and formulating questions or answers in my mind. These activities also inspire my art in various ways. In fact, this blog will be part of my art-thinking in itself. As I continue to explore, I hope to share insights, inspirations, and reflections that spark meaningful conversations and discussions. I invite you to join me in this ongoing dialogue with art, AI, and the world around us. I look forward to hearing your thoughts and reflections as we navigate this ever-changing landscape.
Author: Avital Meshi - New Media and Performance Artist, making art with AI. Currently a PhD Candidate at the Performance Studies Graduate Group at UC Davis.