AI and Poetry: A Defense of Human Poiesis Against LLMs

One familiar way to define poetic language (or poetry, plain and simple) is through unexpected associations. A poem often connects two words –or two ideas– that at first glance don't match. Someone once thought that a rose's beauty –so brief before it withers– resembles feminine beauty, and it made history despite the obvious gaps in timing and anatomy. Other associations never become conventional; they remain embedded like gems we can't extract without breaking them.

One of my favorite poems is Rubén Darío's Nocturne, which opens: "You who have auscultated the heart of night". Its complex, unnatural sonority layers two metaphors: a conventional one –"heart" as a thing’s core– and a far less expected one that makes that symbolic heart the object of auscultation, importing the discourse of medicine. Maybe Baudelaire said something similar –or someone before him– but it remains irreproducible.

Unexpected associations aren't exclusive to poetry; they're the heart of original thought. Think "global village" or "body without organs" –near-oxymora. Extend it anywhere: prisons as schools or factories; orality after writing; Mileism as Peronism-as-practiced; God one-and-three; an invisible hand ordering the market; a spirit haunting Europe. Thinking means stepping outside the obvious link –villages as tribal, bodies as mere organs, signs as mirror of objects, poems as line breaks in odd places.

Here's the snag. This idea of creation –poiesis– ultimately rests on two things: language (or even sense/reality beyond language) as an ars combinatoria, and statistical improbability. If I write "you who watch TV at night", I'm making a predictable sequence –each word likely to follow the last. Darío's line reached for a statistically rarer combination. Maybe not as distant as "Those who prindonguized the asasrtt of the oaioaisoioas", but improbable all the same.
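The contrast between the banal sequence and Darío's can be made concrete. The following is a toy sketch, not how any real model works: the bigram table and its probabilities are invented for illustration, while actual LLMs estimate such probabilities over entire corpora. It measures a sequence's surprisal (its negative log-probability): the rarer the word-to-word transitions, the higher the score.

```python
import math

# Hypothetical toy bigram model: P(next word | current word).
# All numbers here are made up for illustration.
bigram = {
    ("watch", "tv"): 0.20,
    ("tv", "at"): 0.10,
    ("at", "night"): 0.30,
    ("auscultated", "the"): 0.50,
    ("the", "heart"): 0.01,
    ("heart", "of"): 0.40,
    ("of", "night"): 0.001,
}

def surprisal(words, model, floor=1e-6):
    """Total surprisal (negative log-probability, in bits) of a word
    sequence; rarer transitions yield a higher score."""
    total = 0.0
    for prev, nxt in zip(words, words[1:]):
        p = model.get((prev, nxt), floor)  # unseen pairs get a tiny floor
        total += -math.log2(p)
    return total

banal = ["watch", "tv", "at", "night"]
dario = ["auscultated", "the", "heart", "of", "night"]

print(surprisal(banal, bigram))  # the predictable sequence: low surprisal
print(surprisal(dario, bigram))  # the poetic one: much higher
```

In this toy model Darío's line scores more than twice the surprisal of the television sentence, which is the whole intuition: poetry, on this (ultimately insufficient) account, lives in the statistical tail.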

Who can calculate such probabilities? A computer –or rather, a cloud of them running large language models (LLMs) like GPT, Gemini, Claude, DeepSeek, etc., on power-hungry GPUs. The poet's hunt for the rare match is, in effect, delegable. Some tools –e.g., Google AI Studio– let you adjust "temperature", effectively nudging output toward lower probability. More temperature, less likelihood –more poetry?
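What temperature does can be shown in a few lines. This is a minimal sketch of the standard temperature-scaled softmax used in sampling; the word scores are invented for illustration. Dividing the model's raw scores (logits) by the temperature before normalizing flattens the distribution when T > 1, giving rare words more probability mass, and sharpens it when T < 1.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into probabilities.
    Dividing by the temperature flattens (T > 1) or sharpens (T < 1)
    the resulting distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented next-word scores after "you who watch TV at ...":
words = ["night", "home", "dawn"]
logits = [5.0, 3.0, 1.0]

cold = softmax_with_temperature(logits, 0.5)
hot = softmax_with_temperature(logits, 2.0)

# At low temperature the likely word dominates; at high temperature
# the unlikely words gain ground -- the "more poetry?" knob.
for word, p_cold, p_hot in zip(words, cold, hot):
    print(f"{word}: T=0.5 -> {p_cold:.3f}, T=2.0 -> {p_hot:.3f}")
```

At T=0.5 almost all the probability collapses onto "night"; at T=2.0 the improbable "dawn" becomes a live option. Cranking the knob further eventually yields "prindonguized the asasrtt": pure improbability, no poem.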

Illustration: Galatemplo

As we've heard for the past year and a half, LLMs default to aesthetically and conceptually bland results –useful for overviews or specific lookups (with error rates that vary by domain), but not for thinking, not for making.

Yet the hypothesis of ars combinatoria as language's (even existence's) horizon is ancient –back to Democritus and Epicurus. The specific notion of literary creation as a game with limited pieces was both praised and condemned by Jorge Luis Borges. In "Note on (Toward) Bernard Shaw", he writes (surprisingly):

If literature were nothing but verbal algebra, anyone could produce any book by trying variations. The lapidary ‘Everything flows’ condenses Heraclitus’s philosophy into two words; Ramón Llull would say that, granted the first, we need only test intransitive verbs to discover the second and—by methodical chance—recover that philosophy and many others. One could reply that a formula obtained by elimination would lack value, even meaning; for it to have any virtue we must conceive it in terms of Heraclitus, in terms of an experience of Heraclitus, even if ‘Heraclitus’ is only the presumed subject of that experience.

Those closing lines point upward, beyond horizontal combinations of signifiers, toward the vertical line that links text to other dimensions –above all, subjective experience. Darío's Nocturne is poetic not only for the (relatively) unpredictable opening image but because it connects a subject –Darío– to other subjects like us, like me writing these lines while, via YouTube, I auscultate the night sound of rain over Amsterdam.

There's our easy defense of human poiesis against LLMs: poeticity isn't improbability per se but the subjective resonance of that improbability. Resonance is experiential –and however impoverished experience may be in a world of labor exploitation, K-pop, recommender algorithms, and microtargeted propaganda, experience isn't reducible to statistics or machine learning. I once read a U.S. professor claiming students analyze poetry better after their first sexual experience. Plausible. I remember something like that with songs by Soda Stereo and Los Redondos after crossing certain thresholds.

So we might declare the fight already won. In Futurama, Leela's martial-arts master –a misogynist alien– insisted she lacked the "warrior spirit", thus she couldn't beat any man, even one passed out on the dojo floor. Likewise we could tell any generative AI: I have experiences; I've already won –even if my poems, essays, music, or art might still lose to yours in a contest.

But as Alan Turing noted in the 1950s, what you don't have you can imitate –and past a threshold the line between imitation and reality can blur, or become metaphysical, like the warrior spirit to Leela's sensei. As the saying goes: fake it till you make it.

Illustration: Galatemplo

In his book Lexilogos, Sergio Raimondi includes a poem whose title, in Chinese, reads "Big Data" (largely untranslatable into Spanish).

大 数据
Although in the so-called race for AI the level of North American scientific development or perhaps to propose a more objective criterion the high and regular volume of its investments
is at present a testament to leadership it would be humanly unwise to dismiss both the Chinese government's technological planning and the exceptional size of its market
in which millions of citizens have acquired the habit of using software to carry out a miscellaneous set of daily actions from public transportation to medical diagnoses
to the point of transforming Maoist banknotes into a curiosity from the collector's album and generating with each of its tactile pressures such an enormous and valued volume of information
that at present it is a profitable bet to rent an old provincial cement factory far from the imposing aseptic skyscrapers so that successive rows of only children can sit
in front of a row of screens label faces ears eyebrows types of lips automobiles etc. in a task currently incapable of being addressed by the advanced programs they define
the definitive course towards the domain of digital while during the scheduled break someone notices the mixer abandoned in the middle of the patio, a sign of an obsolete sense of infrastructure.

As in much of Raimondi, the improbable lies in the very choice of subject –conventionally "unpoetic", unbeautiful, unlyrical. That last line –"a sign of an obsolete sense of infrastructure"– neatly condenses much of his project, and perhaps poetry's project itself, if such a thing exists.

For LLM output to be acceptable –to avoid suggesting a suicide cult or punting your mother's savings into $Libra– armies of workers train these models via RLHF (Reinforcement Learning from Human Feedback). In Empire of AI, Karen Hao chronicles workers in Venezuela and Kenya doing this under exploitative conditions, constantly exposed to sensitive content. That's Raimondi's "rows of only children… in front of screens, labeling faces, ears, eyebrows, lip types, automobiles, etc".

Human reinforcement doesn't encode the complexity of human experience. If an RLHF annotator sees a cat and –out of lived vertical experience– feels it's like dark chocolate, they can't label it so without risking their job. Rather than widening the space, reinforcement narrows the combinatory play: it lowers the temperature. The common denominator is, by definition, anti-poetic.

A darker hypothesis: the AI we think we're training has, in fact, become self-aware and is training us –to be labelers. Adorno already saw the culture industry this way: Hollywood pins labels and prompts us to point and say, "Look—Superman!". The MCU ran on that for a decade of teasers and post-credits tags. Imitation is two-sided: machines approach humans until humans start talking like machines.

Thankfully, antibodies abound. Web 2.0 social networks and influencer culture –and before them rock, bohemia, Romanticism– habituated us to the poet as cursed genius and narcissistic fool. Could AI mass-produce personas as insufferable as many artists (even great ones)? Unlikely on an economic basis. So next time a prize-winning author drops the worst take imaginable with breezy confidence, breathe, hit Like, and remember: the cult of personality may yet save us from LLMs.