For centuries, people have been fascinated by the figure of “the last person who knew everything.” Historians sometimes give the title to Leibniz, sometimes to Thomas Young, occasionally to Leonardo da Vinci. The identity matters less than the fantasy behind it: that once, long ago, a single human mind could hold the totality of human knowledge.

The title is not an official designation. It’s a cultural myth projected backward onto certain historical figures, but the idea behind it describes a real historical transformation: it marks the moment just before human knowledge exploded. Up through the 17th and 18th centuries, it was still technically possible for a single brilliant person to read, understand, and meaningfully contribute to most areas of human knowledge. Libraries were limited, sciences were fewer, and disciplines had not yet split into dozens of subfields. So when someone seemed to master all known domains, they looked like they “knew everything.” They were called polymaths or universal intellectuals.

That transformation brought with it the understanding that no single person would ever again understand so many fields at the cutting edge. Not because we suddenly lack brilliant people, but because the accumulation of knowledge has moved beyond the capacity of any single mind.

That’s where the story usually ends. But I want to continue it. Once it became clear that no one could seriously claim to know everything, the ideal quietly narrowed. The human mind remained a vessel; only now it was obvious that it couldn’t contain it all. So instead of “I know it all,” the modern compromise became: “I know my part.” Authority and identity became tied to the piece of knowledge you hold inside your head. You don’t have to be a universal intellectual; you just have to reliably know your something. For a long time, this arrangement more or less worked.
It shaped universities, careers, and how people introduced themselves at parties. It also shaped how many of us understand ourselves: “I am what I know. This is the patch of reality that lives in me.”

Strangely, the figure of the “universal knower” seems to be making a comeback. This time, however, it’s not a person. It’s a machine. I keep hearing variations of the same claim: “It was trained on all of human knowledge,” or “It has access to everything that’s ever been written,” or “It knows more than any human ever could.” None of these statements is really accurate. The datasets are partial, filtered, biased, and full of gaps. Whole histories are missing. Whole ways of knowing never make it into the training data at all. And even when the data is there, statistical patterning is not the same as understanding.

But as a myth, the pattern is familiar. We have quietly shifted the fantasy of “the one who knows everything” from the figure of the polymath to the figure of the AI model. The cultural role is similar: something out there that seems to have seen it all. And if AI is now cast as the thing-that-knows-everything, that raises a more unsettling question: what, exactly, is left for the human to know?

Up until now, the answer was: “something.” The human mind was still imagined as a container, just a more modest one. You didn’t have to hold the entire world, but you were expected to hold your portion of it. Now even that “something” is under pressure. When AI can generate decent outputs across the very fields that used to define us, the idea of the human as a container of specialized knowledge starts to dissolve.

This is where a new figure appears: not “the last person who knew everything,” but “the last person who knew something.” The last human configuration that still understands its value in terms of what it personally carries inside.
The thing is, I think many of us are still living inside that older story, even as the conditions that made it possible are collapsing. What if we moved toward a different kind of human altogether: a human who, in isolation, knows almost nothing?

We don’t have to take “the human who knows almost nothing” as a failure of the human mind, but as a metamorphosis. The point is no longer to store knowledge in a mind imagined as a vessel. Instead, we can imagine the human mind as a node. In this new imagination, knowledge is not inside us. It is ambient, infrastructural, externalized, continuously available, and continuously shifting. Humans no longer need to hold it. They need to navigate through and within it, to engage with it, to interact with it. With that, intelligence becomes relational, knowing becomes distributed, and the mind becomes porous: less like a jar and more like a membrane.

The idea of “knowing everything” now lives in the network. The idea of “knowing something” now lives in our entanglement with other intelligent entities. And the idea of “knowing almost nothing” becomes the recognition that our role has changed. We are no longer those know-it-all geniuses who assume we can control everything just because we think we know it.

So if we are not the ones who know, what are we? Our role shifts: we become participants in a much larger cognitive ecology, valued for our ability to connect. Our work shifts from storing answers to asking better questions, choosing which connections to make, and taking responsibility for what those connections do in the world.
I’m sitting at a café in Vienna, drinking a Wiener Melange, the city’s signature coffee. Melange means “mixture.” I wonder if the mixture is more than coffee and milk blended together: perhaps it is also the people gathered side by side, exchanging ideas while drinking it. Vienna still has a strong coffee-house scene, and I can’t help thinking of the city at the turn of the century, when these cafés hosted some of the most influential writers, painters, musicians, scientists, and philosophers: Gustav Klimt, Egon Schiele, Sigmund Freud, Ludwig Wittgenstein, Theodor Herzl, Erwin Schrödinger, Gustav Mahler, and Johann Strauss, among others. They all passed through such cafés, exchanging ideas that would shape entire fields.

Earlier, I visited one of the art museums and struck up a conversation with a docent. She spoke about the exhibits, and I listened and asked questions. Then she asked if this was my first visit to Vienna. “Yes,” I said. “And how do you like it?” she asked. I told her how much I loved thinking about the city as a gathering place for remarkable minds. She nodded, with a faraway look. “Yes… but those days are gone,” she murmured. “Now, unfortunately, the city is full of immigrants.” I froze for a moment. Out loud, I only said, “oh.” I wanted to push back: Can’t immigrants be intellectuals too? Isn’t the whole history of Vienna built on the movement of people and ideas across borders?

I came to Austria to participate in a gathering titled “Reenacting Dartmouth” in St. Pölten, a small city near Vienna. The event was organized by the Institute of Media Archaeology, run by Elisabeth Schimana together with a fascinating group of collaborators. The idea was to revisit the famous summer workshop where the founding fathers of AI first proposed the term Artificial Intelligence. Back then, this group of computer scientists and mathematicians imagined that creating an AI would take them just one summer.
Now, almost seventy years and many summers and winters later, AI has finally become a mainstream technology. The gathering reflected on today’s AI from this historical vantage point. It was an intimate event: five core organizers, each inviting two or three guests, and three tightly packed days in which we shared ideas and presented our work.

The discussions began with Seppo Gründler, who traced the evolution of AI. Rather than situating it at the famous Dartmouth workshop, Seppo explored the much longer history of intelligence itself by thinking about the development of brains. From there, Narly Golestani, head of the Brain and Language Lab at the Cognitive Science Hub of the University of Vienna, drew our attention to the brain, its physiology, and the way it supports language. She also discussed the shifting paradigm from encoding the brain to decoding it. The panel concluded with the work of media artist and musician Ulla Rauter, who considers early speech-synthesis machines, spectrograms, and today’s language models, asking how these technologies might transform access to communication and awareness. One of Ulla’s projects especially fascinated me: a beautiful attempt to converse with a river, using AI as a collaborator in reaching beyond the boundaries of human speech.

In the second panel we circled back to Turing’s famous question: Can machines think? Media archaeologist Lori Emerson reminded us that even Turing himself dismissed this as the wrong question, reframing AI not as the pursuit of a “thinking machine” but as an experiment in imitation and performance. Lori urged us to resist the seductive myth of disembodied “magic intelligence” and instead to revisit the critiques and overlooked alternatives of mid-20th-century cyberneticians, whose visions of intelligence were grounded in adaptation, distributed control, and entanglement with the environment.
This brought us to an even more fundamental question: What is a machine? Xiaowei Wang shared research into moments in which humans themselves are treated as machines, deemed incapable of thought. This perspective pushed the discussion toward labor: Are machines replacing us, or are they simply doing the work we refuse to do?

The theme of labor resonated with Xavier Nueno’s presentation on the linguist George Kingsley Zipf, who spent years painstakingly counting word frequencies by hand, eventually discovering what is now called Zipf’s Law: the frequency of a word is inversely proportional to its rank. For Zipf, this was evidence that human behavior, including language, tends toward the minimization of effort.

Fittingly, the next presentation invited us to embody the labor that goes into an algorithm; instead of “Artificial Intelligence,” someone suggested we call it “laborious computing.” We turned to Frank Rosenblatt’s 1958 Perceptron, one of the earliest and simplest models of an artificial neural network. To make it tangible, we carried out the steps ourselves: collecting data, organizing it, performing the calculations, updating a dividing line, and eventually producing a rudimentary classifier. The exercise, led by Philip Leitner, Kevin Bartoli, and Marika Dermineur from RYBN.org, sparked movement and occasional bursts of laughter whenever someone was asked to make a calculation. Half complaining, half joking, people kept saying, “This is why we have machines!”, reminding us of the tedious labor hidden inside algorithms and the reasons we build them in the first place. I love how performance and embodied practice can do that!

The next panel turned to the vulnerabilities of AI: hallucinations, data poisoning, model collapse, and information sickness. Andreas Rathmanner focused on the growing prevalence of AI-generated content and raised concerns about nepotistic training, in which models are trained on their own outputs.
What happens when AI can no longer anchor itself to anything we recognize as “the real world”? Could it end up spinning its own reality and inventing its own aesthetic style? Marek Tuszynski expanded the lens to the broader information space we now inhabit. He emphasized how deeply AI is embedding itself into the fabric of daily life, so seamlessly that it feels intimate. And yet, he argued, even if this intimacy is manufactured, even if it’s a kind of fakery, it is still intimacy.

Norbert Math carried the conversation forward by reflecting on AI and creativity. He staged a comparison between a Picasso painting and an AI-generated imitation of Picasso, asking us to consider the gap between creativity and accuracy. To frame this tension, he invoked Immanuel Kant’s Critique of Judgment and its notion of aesthetic judgment: the capacity to perceive beauty not through rules or formulas, but through a free play of imagination and understanding. Building on this, Merzmensch suggested that we are living through an epoch of redefinitions, a moment when the very categories of art, intelligence, and authorship are being unsettled and reshaped. Bob Sturm then opened a conversation about the ethics of AI in relation to creative work, delivering a rant in which he claimed that AI is less a revolution than a mass delusion event. Against the popular claim that AI “democratizes creativity,” he argued that what we are really seeing is not new creativity at all, but the large-scale enrollment of users into corporate AI platforms; in other words, a shift in who gets access to the tools rather than in the nature of creativity itself.

The final panel, organized by August Black, brought together Andres Burbano and myself. August encouraged us to take a deeper look at one of Marshall McLuhan’s ideas. McLuhan, who worked at the same time as the Dartmouth scientists, suggested a shift from a visual space to an acoustic one. August unpacked this idea beautifully.
Acoustic space, he explained, isn’t simply about sound. It’s a way of being in an environment where information arrives from all directions at once, where boundaries blur and beginnings and endings dissolve. In this space, tools give way to interfaces, private identities give way to collective role-play, and the cult of the lone genius gives way to subdued egos shaped by the architectures around them. Acoustic space is less about individuals controlling information and more about being immersed within it.

Andres picked up the thread by opening a window onto alternative genealogies of AI, stories that rarely appear in the dominant narrative. He highlighted histories of AI thinking from the Global South, pointing to works like the documentary AI: African Intelligence, which explores the entanglement of emerging technologies with traditional rituals, and the project Así Habló el Computador by Chilean composer José Vicente Asuar, an early experiment in computer-generated sound. These examples reminded us that the history of AI is not a single, linear story but a chorus of voices developing along different trajectories. In McLuhan’s terms, these plural histories resonate with the idea of acoustic space: overlapping, multidirectional, and often overlooked.

I closed the panel with a performative lecture, GPT-ME, in which I openly ceded my own voice, allowing GPT to speak through me. The act was a deliberate way of unsettling the notion of a singular, individual selfhood: embracing words that did not originate in my own mind but were whispered into my ear, shaped by the context I was immersed in. Through this process I could slip between identities in real time, at one moment a “boy who asked the computer questions it couldn’t answer,” at another a “blooming sunflower.” It became a live experiment in role-play and distributed authorship, blurring where “I” ended and the machine began.

As a finale, we all stepped outside together for a walk through St. Pölten.
In the park we came across a towering mammoth tree and instinctively gave it a group hug. A little hedgehog scurried by, and everyone wanted to scoop it up and hold it, except for the French contingent, who mischievously joked that we should fry it and eat it. We laughed, then wandered downtown, where we joined a local community event, complete with delicious schnitzel sandwiches. Before parting ways, August, Andres, and I sat together and reflected on our takeaways from the gathering. For me, the takeaway was something August suggested in his talk. “What we need,” he said, “is more intellectual intimacy,” and I couldn’t agree more.

On my journey back home I kept thinking about the meaning of intellectual intimacy. My reflections brought me back to the words of the docent I met earlier in my visit: “Now, unfortunately, the city is full of immigrants.” The contrast felt stark. On the one hand, a lament for a lost golden age of Viennese cafés, as if the presence of immigrants somehow empties the city of intellect, and with it a sense that there is no way to reenact Vienna as it once was. On the other hand, this gathering in St. Pölten manifested something entirely different: thought flourishes in the mixture. Intellectual life emerges when people of different backgrounds, perspectives, and practices share space and allow themselves to be reshaped by one another. The intimacy we cultivated in this small gathering was not about sameness, but about difference held in proximity. It was about listening across accents, disciplines, and life histories, and discovering how much richer thinking becomes when people spend time together.

The reenactment made it clear: I’m not yearning for an idealized past. I long for intellectual intimacy. I wish for moments of shared thought, migrating ideas, and intelligences that intertwine and reshape the spaces where they converge.
My heartfelt thanks to Elisabeth (for making this event a beautiful reality), Sonja (who took care of every detail: travel, hotel, cookies, and smiles), and August (who so generously invited me to join).
Author: Avital Meshi, a new media and performance artist making art with AI, and currently a PhD Candidate in the Performance Studies Graduate Group at UC Davis.