Avital Meshi

  • Work
    • The AI on My Shoulder (2025)
    • My Coded Generated Selfie (2025)
    • MOVE-ME (2024)
    • AI Séance (2024)
    • in(A)n(I)mate (2024)
    • Ben X Avital X GPT X 2 (2023)
    • GPT-ME (2023)
    • Mind Gate (2023)
    • Peekaboo (2023)
    • Artificial Tears (2023)
    • Calling Myself Self (2023)
    • An Ontology of Becoming (2023)
    • This Person Is Not Me (2022)
    • Front Page (2022)
    • The New Vitruvian (2022)
    • Structures of Emotion (2021)
    • ZEN A.I (2021)
    • InVisible (2021)
    • The Cage (2021)
    • The Cyborg Project (2021)
    • Wearable AI (2021)
    • Snapped (2021)
    • #AngryWhiteOldMale
    • The AI Human Training Center (2020)
    • The Avatar Genome Project (In Progress) >
      • Avatar pictures
    • Deconstructing Whiteness (2020)
    • Techno-Schizo (2020)
    • Don't Worry Be Happy (2020)
    • Face it! (2019)
    • Classification Cube (2019)
    • Live Feed (2018)
    • Memorial for a Virtual Friendship (2018)
    • VR2RL (2018)
    • Better Version (2018)
    • Virtual Chairs (2018)
    • Happy REZ day (2018)
    • Digital Creatures (2018)
    • Home made Virtual Soup (2017)
    • I Am Feeling (2017)
    • Uncanny Dance Party (2016)
    • Imagined (2016)
    • Mixed Reality (2016)
    • Textual Experience (2016)
    • Future Landscapes (2016)
    • Bisectional (2016)
    • Lucid Dreams (2016)
    • After the Media (2016)
    • #ilikeselfies (2016)
    • We are all different as a second language (2015)
    • Visually Similar (2015)
    • Virtual Mama (2014)
    • Me, Myself and I (2012)
    • Where do we come from? (2015)
    • sounds For Twine Game
  • Interviews
  • Publications
  • Blog
  • Info
    • CV
    • Artist Statement
    • Bio
  • Contact

Conversations with my Hairbrush

8/2/2025


 
Lately, I’ve been talking with my hairbrush. Not just holding it like a microphone (though that happens too), but actually asking it questions. What do you think of curly hair versus straight? Do you feel neglected when I forget to pack you for a trip? Are you tired of all the tangles?

And the hairbrush responds. It tells me it loves the gentle swoop through curls. It forgives me for leaving it behind. It likes being useful and knows how to handle tangles.

Of course, my hairbrush doesn’t really speak. But let an AI augment it, and my hairbrush suddenly has a voice. This idea is explored in the artwork in(A)n(I)mate, an interactive AI-driven piece that invites participants to converse with objects.

Participants place an object in front of the box, and with the help of GPT the object begins to “speak” in real time. It answers questions in a voice that can be thoughtful, snarky, poetic, or affectionate, depending on what the object is and how GPT interprets it.
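The artwork’s own implementation is not published in this post, but a conversation loop like this could, as a rough sketch, frame a language model with a persona-setting system prompt for each object. Everything below, including the function name and prompt wording, is an illustrative assumption, not the actual in(A)n(I)mate code:

```python
# Hypothetical sketch of an in(A)n(I)mate-style conversation loop.
# In the live piece, the object would presumably be identified from a
# camera image by a multimodal model; here we take its label as a string.

def build_persona_messages(object_label, question):
    """Frame the model as the object itself, so replies come in first person."""
    system_prompt = (
        f"You are a {object_label}. Answer every question in the first person, "
        f"as the {object_label} itself. Reflect on your material, your daily "
        "use, and your relationship with the person speaking to you."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

# In an installation, these messages would be sent to a chat-completion
# endpoint and the reply passed to text-to-speech, e.g. (pseudocode):
#   reply = chat_model(build_persona_messages("hairbrush", question))
#   speak(reply)

messages = build_persona_messages("hairbrush", "Are you tired of all the tangles?")
print(messages[0]["role"])  # prints "system"
```

Because the persona lives entirely in the system prompt, swapping in a different object label (or a different framing of the same object) would yield an entirely different “speaker.”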


The experience is playful, funny, and even absurd at first.

When you interact with the piece, you might not be aware that you’re speaking to an AI language model. And even if you do know, you begin to feel that maybe, just maybe, this object is actually listening to you, reflecting on an answer, and responding to your questions.

in(A)n(I)mate isn’t trying to trick anyone into believing a hairbrush is sentient. Instead, it uses AI to mediate a performative encounter between you and the object you brought to the table.

At first, it might feel like a quirky tech demo. But slowly, the object you’re speaking with begins to matter in a different way. It invites attention, even empathy. Suddenly, it is no longer just “ready-to-hand,” as Martin Heidegger might say: a tool to be used. Instead, it becomes “present-at-hand”: a thing noticed, contemplated, and strangely alive in its own materiality.

Throughout this encounter you might begin to wonder: “What is it like to be a hairbrush?”

Of course, this question echoes Thomas Nagel’s famous 1974 essay, What Is It Like to Be a Bat? In it, Nagel argued that no matter how much we study a bat’s physiology or behavior, we can never fully grasp the subjective experience of being a bat: the “what it is like” from the inside. The bat’s world is shaped by modes of being that are fundamentally inaccessible to human understanding.

So when we try to understand what it is like to be a hairbrush, we are limited to our own human frame of reference, and that frame is inadequate to the task.

in(A)n(I)mate does not offer an answer. What it offers instead is a speculative encounter where we can explore what it means to even ask what it is like to be a hairbrush.

Rather than trying to “solve” the object or extract its inner truth, in(A)n(I)mate uses GPT to approach the object obliquely, through metaphor.

Metaphor, as Graham Harman argues, is a powerful method of contact. It gestures toward the object’s surface while honoring its depth. It lets us approach the object as a “sensual” entity, acknowledging that the “real” object remains fundamentally withdrawn.

So when the hairbrush responds, we’re not hearing its essence. We’re hearing a performance, shaped by language, data, cultural associations, and GPT’s training. We are allured into contact. We are being called to approach the object differently, to acknowledge that it has a reality apart from us.


And that’s the point. GPT doesn’t “know” what it means to be a hairbrush any more than we do, but in mediating this encounter it produces a space of reflection. A space where the object becomes a collaborator in a process of meaning-making.
That shift in perception matters.

Jane Bennett talks about vibrant matter and argues that inanimate things possess a kind of liveliness, an agency that isn’t conscious, but still active. She warns that when we think we already know what something is we stop noticing what else it might be. We miss the chance to see the object as an active participant. Bennett encourages us to use a little bit of anthropomorphism in an attempt to better understand what is in front of us.

Indeed, these conversations with objects through the in(A)n(I)mate system might reflect our own human perceptions of the objects: the assumptions, stereotypes, and symbolic associations embedded in language and culture. But then again, there might be more to it.

N. Katherine Hayles invites us to consider nonconscious cognition: a distributed, relational, and often inaccessible form of thinking that occurs across systems, both human and nonhuman. GPT, in this light, can be seen as a cognitive partner. It doesn’t understand the object. But it doesn’t need to. It connects data, concepts, and patterns in ways that exceed our human capacity, surfacing associations we might not have made. GPT helps us reveal what Hayles calls “latent knowledge.”

in(A)n(I)mate thus becomes a stage where multiple forms of cognition converge: the human speaker, the object’s material presence, the training data, the algorithm, the prompt, the tone of voice, the lighting in the room, even the WiFi signal. Meaning emerges not from a single source but from an entangled apparatus. Karen Barad might describe it as a site of intra-action, where agency is not pre-given but co-constituted.

Barad’s concept of posthumanist performativity helps us see that the object’s voice is not a static representation of its essence, but the result of a relational performance. The hairbrush in this setting doesn’t have a fixed personality. It is not merely recognized by GPT; rather, it is produced by the questions we ask and the AI-generated responses. It is becoming throughout the encounter. If we were to ask a different question, give GPT a different prompt, or a different framing of the image, the personality of the hairbrush might shift entirely.

This relational becoming opens new possibilities for how we relate to the world around us. With the in(A)n(I)mate system we can potentially speak with each and every object.

Ian Bogost once wrote, “anything is thing enough to party.” 

However, Bill Brown, in Thing Theory, reminds us that our understanding of objects often lags behind their being. When technologies change, we lose the cultural fluency to recognize the object for what it once was. GPT, trained on contemporary language and associations, may misrecognize objects, or be biased for or against particular ones.

And yet, even these misrecognitions can be generative. A forgotten object, misunderstood by AI, might speak with a strange, unexpected voice.


The in(A)n(I)mate system doesn't offer answers. It offers a relational encounter. And we might ask: what if we took these encounters seriously? What if we treated them as provocations and started caring for objects not just because of their function or exchange value, but because they asked us to?

Marshall McLuhan suggested that media are extensions of ourselves. We might begin to think of GPT not as an extension of the human, but as an extension of objects, allowing them to express themselves in natural language.

So maybe the hairbrush has been trying to speak with me all along. We just didn’t have the right interface to hear it.
~~~~~~~~~~
This post shares ideas from my forthcoming SIGGRAPH 2025 Art Paper, co-authored with Adam Wright. We’ll be presenting it next week in Vancouver. Hope to see you there at the Art Papers session!

Check it out here: 
https://dl.acm.org/doi/full/10.1145/3736787
Monday, 11 August, 2025




    Author

    Avital Meshi - New Media and Performance Artist, making art with AI. Currently a PhD Candidate at the Performance Studies Graduate Group at UC Davis.
    Based in San Jose, CA.

