How do chatbots dream of electric Greek heroes?

Professor Joel Christensen asked ChatGPT to write a sea shanty about Odysseus' travails and concluded that we have nothing to fear. Yet.


If you care about literature, art, or education at any level and have spent time on social media over the past year, you know that Artificial Intelligence (AI) is the next great threat to human labor and artistic production. Freely available programs create visual art from user-supplied themes and styles, and applications like ChatGPT will generate anything from a job cover letter to poetic forms like limericks and haiku. The revelation that learning programs can generate ‘unique’ works on spec almost instantly has triggered panic about plagiarism, fears of AI savants stealing the jobs of human creatives, and existential concern about the future of art as we know it.

As someone who has spent over two decades thinking about ancient Greek oral-formulaic poetry and our misconceptions about where ideas and art forms come from, I can confidently say I am not worried at all. Yet. And even if things change and the AI revolution to come improves these tools exponentially, I don’t think we should lament. Instead, as others have suggested, this gives us a moment to think about what is truly different about being human and what AI-generated art can tell us about ourselves.

ChatGPT and the Mechanical Muse

The reason I am not worried right now is that ChatGPT is just not that good. Sure, it produces mediocre five-paragraph essays and some passable professional writing, but it has significant limitations. I have spent some time testing out ChatGPT’s interface, asking it to do things like “sing the rage of Achilles in epic verse,” “tell me the tale of Odysseus’ homecoming in the style of a sea shanty,” or “compose the story of Oedipus in a Shakespearean sonnet.” I won’t lie: this has been hilarious. But the poetry is, to put it bluntly, bad. Now, this on its own is not peculiar to AIs: plenty of humans write bad poetry, and I imagine, given the right instructions, ChatGPT could write songs as anodyne and senseless as some of the committee-written lyrics authored by ‘hitmakers’ like the disgraced Dr. Luke.

I have spent some time with my colleagues in computer science discussing how ChatGPT works and why they think we should be worried. Part of their warning comes from a belief that Moore’s law (Gordon Moore’s 1965 observation that the number of transistors on a chip, and with it computing power, doubles roughly every two years) will apply to AI applications as well.
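For a sense of what that worry implies if the doubling actually transferred to AI (a big if, and purely illustrative arithmetic on my part, not anything my colleagues calculated):

```python
# If capability doubled every two years, the multiplier after
# n years would be 2 ** (n / 2). Illustrative arithmetic only.
for years in (2, 10, 20):
    print(f"{years} years -> {2 ** (years / 2):g}x")
# 2 years -> 2x, 10 years -> 32x, 20 years -> 1024x
```

On those assumptions, a tool that is merely amusing today would be a thousand times more capable in twenty years.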

When I asked ChatGPT how it works, I received the following answer: “ChatGPT is based on the transformer architecture and is trained on a massive amount of text data using unsupervised learning. The model generates a response to a given prompt by predicting the next token in the sequence, one token at a time. The model uses the attention mechanism to weigh the importance of different parts of the input when making predictions. The final generated text is post-processed to ensure coherence, fluency, and consistency.” Indeed, two features of how these applications function will inevitably improve over time: the processing power, accuracy, and speed of the AI driver, and the size and adaptability of the corpora of language and texts that driver uses.
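To make “predicting the next token” concrete, here is a deliberately toy sketch in Python. It is not how ChatGPT works internally (a real model is a neural network with attention over billions of parameters); this version just counts which word follows which in a three-line corpus I invented for illustration:

```python
from collections import Counter, defaultdict

# A tiny stand-in "corpus" (invented; real models train on billions
# of tokens, not three sentences).
corpus = (
    "sing the rage of achilles . "
    "sing the homecoming of odysseus . "
    "sing the fate of oedipus ."
).split()

# Count which token follows which: a bigram frequency table.
bigrams = defaultdict(Counter)
for current_token, next_token in zip(corpus, corpus[1:]):
    bigrams[current_token][next_token] += 1

def generate(prompt_token, length=8):
    """Emit text one token at a time, always choosing the most
    frequently observed continuation."""
    tokens = [prompt_token]
    for _ in range(length):
        options = bigrams.get(tokens[-1])
        if not options:
            break  # no continuation ever observed
        tokens.append(options.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("sing"))  # e.g. "sing the rage of achilles . ..."
```

The shape of the loop is the point: predict a token, append it, predict again. Scale the frequency table up to a neural network and the corpus up to much of the internet, and you have the family of models to which ChatGPT belongs.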

The intelligence portion is rooted in Natural Language Processing (see Large Language Models), which also provides the architecture for familiar programs including text-to-speech applications, automated phone operators, and Google Translate. The bot’s own explanation of where its words come from (“a massive amount of text data”) and what it does with that data (makes predictions) strikes me as just similar enough to what happens when we speak and write to suggest a rather simplistic sameness between the two. When we articulate an idea, we draw on a broad experience of languages, conversations, and cultural inputs and deliver utterances that, in many cases, are in fact predictable and iterable. Different people say similar or the same things in similar contexts.

But the same words don’t always generate the same meaning. Context and audience matter. The ‘poems’ ChatGPT delivers tend to be superficial. In its versions of the stories of Achilles and Odysseus, for example, the basic details were there (Achilles is angry; Patroklos is dead; Odysseus blinded Polyphemos and returned home to Penelope), but there’s no sense of complexity or depth. Sure, the Iliad is driven by the rage of Achilles, but his story embraces so much more. In its versions of Odysseus, ChatGPT focuses on the legendary events of Odysseus’ own story in books 9-12, such as the cyclops and Circe. These events are as capably echoed in “Home Sweet Homer,” a 1987 episode of the animated show DuckTales.

ChatGPT is not reading Homer. Instead, it is “reading” a corpus that refers to Homer and predicting as significantly “Homeric” the details that achieve the most “hits.” Its products are about statistics, not quality or truth.
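That “hits” intuition can also be made concrete with another toy sketch (the snippets and counts here are invented): tally which words co-occur most often with “Odysseus” across a pile of texts about him, and the cyclops wins the vote, whatever its actual weight in the epic:

```python
from collections import Counter

# Invented snippets standing in for a web-scale corpus that
# refers to Homer rather than being Homer.
snippets = [
    "odysseus blinded the cyclops polyphemus",
    "odysseus escaped the cyclops cave",
    "circe turned the crew of odysseus into pigs",
    "odysseus resisted the song of the sirens",
    "penelope wove and unwove her shroud",
]

stopwords = {"the", "of", "into", "and", "her"}

# Tally every substantive word that co-occurs with "odysseus".
hits = Counter()
for snippet in snippets:
    words = snippet.split()
    if "odysseus" in words:
        hits.update(w for w in words
                    if w != "odysseus" and w not in stopwords)

print(hits.most_common(3))
# e.g. [('cyclops', 2), ('blinded', 1), ('polyphemus', 1)]
```

Frequency decides everything here; whether the cyclops episode is the most meaningful part of the Odyssey never enters into it.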

Poetry by Numbers, and Other Homeric Slanders

One of the reasons I am interested in how ChatGPT and similar programs work is to explore the limits of the analogies Natural Language Processing provides for the way language and narratives function in our fleshy-born apparatus. Homeric poetry is a good place to start, thanks to all the research that has been done on the so-called Homeric problem (really, multiple problems: how and when were the epics written down? Were they by the same person? Is their ‘genius’ due to a lengthy oral tradition? etc.).

Milman Parry and Albert Lord cleared new ground on these questions last century through their careful study of Homeric language and their fieldwork with South Slavic epic. They demonstrated that Homeric language is formulaic and offered the model of composition-in-performance to help us understand how complex and unique works could evolve in performance contexts, thanks to a traditional architecture of language, expandable regular motifs, and compositional themes that create more sustained structures.

Early critics of what some call oral-formulaic theory complained that this method left no room for creativity, that poets and performers would be locked into certain expressions. In later years, critics have derided oral-formulaic theory as “poetry by committee” or “poetry by numbers.”

I believe that these criticisms are mistaken for three fundamental reasons. First, insofar as we all work a little bit like ChatGPT, most languages are formulaic at some level. Second, all creative authors are in some way restricted by the grammar and lexica of their art form: exceptional works of art push the boundaries to the point of breaking. They combine, recombine, and redefine to make language new.

And my third quibble with these criticisms is that they leave unquestioned a fundamental feature of all meaning making. Words on a page are nothing without someone to read them and create their meaning. The meaning is not intrinsic to the words themselves and the reader/viewer creates a meaning based on their own experience. When we read a ChatGPT composition, it has been created by human beings to predict human-like articulations, but we create the meaning through our own interpretation. Our embodied experience provides nuance to what we do with texts and what they do to us in turn.

Where do ideas come from? Plato’s Magnet

While there are certainly many objections to my stance on where meaning comes from, the most common, and perhaps the most cutting, is about quality. No one I have shared the ChatGPT compositions with needs to be told that they are bad. We recognize it because of a shared aesthetic system that is far more difficult to codify than how to write a letter of resignation. When will AI learn to do this?

One of my favorite metaphors for poetic creation comes from Plato’s dialogue the Ion. There, Socrates debates a performer of Homeric poetry (Ion, the rhapsode) who claims to have special knowledge about the poet’s work. Socrates compares this supposed knowledge to the force exerted by a magnet on connected metal rings. Just as the magnet confers its power through each ring, Socrates suggests, so too does the Muse transmit the power of inspiration to a poet, then a performer, and then their audience. By this argument, the emotional and intellectual responses to a poem are in part a feature of the original inspiration.

There’s something useful in this metaphor for the ChatGPT conundrum. Plato uses the analogy to challenge the performer’s special authority to speak about a work of art. He attributes inspiration instead to a divine force. This magnetic power relies in part on the language everyone knows, the traditions of songs and stories that make each new performance legible, and the shared aesthetic frameworks that support the song’s form. But the machine-searchable frames that make language legible are not the same as those that trigger associative meaning.

Red-figured kylix depicting the deeds of the hero Theseus, made in Athens; it is doubtful that ChatGPT would be able to write a credible ode to them. Dated 5th century BC. Photo: AAP/Universal Images Group

How far do these categories take us in understanding creative works? There are many later inheritors of Homeric style. Parodies like the Homeric Battle of Frogs and Mice and later revivalists like Quintus of Smyrna use Homeric language and structures. But no human mind I know of, trained on thousands of lines of Homeric epic, can fail to sense essential differences in the way these poems sound and feel. The difference lies partly in the lower frequency of repetition, partly in the sense of play and allusion in the later authors, and partly in the greater ambiguity of early epic verse. The comparison itself serves to show how Homeric poetry is different.

Will an AI be able to generate these things? Perhaps, but only once the applications are trained to recognize these phenomena. And if an AI can pass a Homeric Turing test, what will it have accomplished other than data-driven imitation?

Art and Humanity’s Children

At the beginning of the reboot of Battlestar Galactica (2003), the humanoid Cylons attack the inhabited human worlds and announce that “humanity’s children” have come home. This plot is a classic of the science fiction genre (the Golem theme), based on the fear that we will create something that will seek to destroy us.

Part of our anxiety about AI art is rooted in a fear of replacement. While some of this fear resides in capitalistic concerns about robots stealing human jobs, I think even more arises from existential fears that AI will eventually replace us. Instead of fearing this, we should be celebrating a possible future with less mechanistic labor. The promise of industrialization and automation was to free humans from mundane labor and give us the time to think about why our lives matter, what we can do with our limited time on earth, and what it means to be human. It is not too late to reject the commodification of every waking breath, to spend our time asking these questions of one another (and, maybe, of chatbots too).

Perhaps some of the ineffable difference between ‘good’ art and bad comes from the mystery of human consciousness. If we see the importance of art as coming from our engagement with it and the production of meaning by audiences, I think much of the panic is needless.

Human consciousness, as described by philosophers like Daniel C. Dennett, is not a program inserted into hardware; it is a product of the engagement between our senses, our memories, and the world over time. Each response to each work of art is different and contributes to collective evaluations that over time function to exert a force as seemingly invisible and magical as Plato’s magnet. If we think poetry can be mechanical, it is because we are focusing too much on its production, engaging in “supply-side” poetics, not thinking about its reception: how each engagement creates something new.

If an AI were to generate ‘original’, sui generis poetry, it would have to be defined by the experiences and reflections of that artificial intelligence itself. We might not even recognize it as art: there is a dangerous solipsism in asserting that an application that imitates humans, even very well, thereby replaces them.

If AI becomes aware and creates something of its own, its meaning will be something unique to that AI and its audiences. Perhaps humans will have their own interpretations; but that will be something else altogether, a hybrid, cyborg art.

To shift to management-speak, the AI revolution presents not so much a crisis as an opportunity. What do we make of the uncanny valley between a human haiku and a robot one? What does creative art do in the world? I asked ChatGPT the meaning of life and it responded: “The meaning of life is a philosophical question that has been debated throughout human history, with different cultures and individuals having their own answers.

There is no single, universally accepted answer to this question, as it is subjective and dependent on one’s beliefs, experiences, and values. Some people may find meaning in religion, spirituality, relationships, personal accomplishments, or simply experiencing the world around them. Others may believe that life has no inherent meaning, and that it is up to each individual to create their own purpose.”

This is a throat-clearing summary, an answer whose very limitations help us come up with a better question to ask. ChatGPT and its ilk help us understand what we do that can never be replaced, and clarify what we as humans actually are: beings who create tools that change the world as we know it, leaving us with questions about who we are now.

 

Joel Christensen is Professor and Senior Associate Dean for Faculty Affairs at Brandeis University. He has published extensively; his works include Homer’s Thebes (2019) and A Commentary on the Homeric Battle of Frogs and Mice (2018). In 2020, he published The Many-Minded Man: The Odyssey, Psychology, and the Therapy of Epic with Cornell University Press.