Where You Come From Is Gone: Why Our Anti-human AI Moment Needs Donna Haraway

“Where you come from is gone, where you thought you were going to never was there, and where you are is no good unless you can get away from it.”

Flannery O’Connor, Wise Blood

“But basically machines were not self-moving, self-designing, autonomous. They could not achieve man’s dream, only mock it. They were not man, an author to himself, but only a caricature of that masculinist reproductive dream. To think they were otherwise was paranoid. Now we are not so sure”

Donna Haraway, Cyborg Manifesto

There is a desire, in our current AI haunted moment, to defend “the human.” To engage with AI writing tools, image generation tools, or even deep learning methods of any kind, is criticized as potentially abandoning an essential humanity. It’s seen as giving up what makes us “really human,” namely the crafting of meaning that we share with other humans.

For example, during my podcast conversation with our teaching and learning center about generative AI and teaching, my co-guest, Professor Justin Rademaekers, worried about losing some of our humanity when we use AI to do the work of writing:

I think it’s important to be critical to ask ourselves what labor are we circumventing when we use A.I. to do writing. Is it labor that’s perfectly fine to circumvent or are we somehow stepping around an important part of being human? Right. And language exchange is, for me, I guess I’m biased. But that’s the heart of humanity, and being human.

Justin Rademaekers, ODLI On Air, “Generative A.I. in Teaching with Dr. Famiglietti & Dr. Rademaekers (Part 2)”

In another, higher profile podcast conversation with New York Times columnist Ezra Klein, novelist Adrian Tchaikovsky used the metaphor of Minecraft to suggest that the human crafting of fictional text and fictional worlds creates a human connection between author and reader that algorithmically generated content can’t replicate:

Minecraft uses procedurally-generated landscapes. […] And this is amazing. It’s just this whole world and no one else has ever seen this world. It’s only me and it’s incredible. […] at the same time, it’s kind of meaningless, because it is just being thrown […] together by an extremely sophisticated algorithm. But basically if you compare it to a world in a game that’s been crafted, there is a difference. And that world — the crafted world — will be a lot smaller, because you can’t just go on forever because obviously every inch of it has taken human work.

Adrian Tchaikovsky, The Ezra Klein Show, February 23, 2023

Interestingly, this elevation of symbolic production as something essentially human is often paired with a hierarchy of that same symbolic production, with the very best “art” at the top and other human symbol production somewhere below. For example, in an earlier moment in the podcast quoted above, Ezra Klein asserts that:

What ChatGPT, what DALL-E-2, and what all the similar programs are able to do in terms of writing text and making images, they are able to make quite remarkable art, or stories, or essays. And in most cases, it will fall short of the best of what humans can do, but it can also go far beyond what most humans can do.

Ezra Klein, The Ezra Klein Show, February 23, 2023 (Emphasis Mine)

While these are just podcast sources, the concerns expressed aren’t so far off from the weightier debates over intention, meaning, and AI found in more scholarly venues.

I don’t disagree with the authors above that we want people to remain engaged with writing and thinking, and writing as thinking. It’s the sense that we can define a “most human” activity and link it to symbol production that I want to push back against, and that I think Haraway helps us think past. The desire to set a boundary around “the human” and stabilize it is understandable, given the many forms of precarity that surround contemporary human existence. However, as I see these defenses of the human/inhuman border spring up in response to AI, I’m always reminded of this passage from Haraway’s Cyborg Manifesto (which seems, in our current moment of both AI writing and the struggle for trans liberation, more prescient than ever).

The relationships for forming wholes from parts, including those of polarity and hierarchical domination, are at issue in the cyborg world. Unlike the hopes of Frankenstein’s monster, the cyborg does not expect its father to save it through a restoration of the garden—that is, through the fabrication of a heterosexual mate, through its completion in a finished whole, a city and cosmos. The cyborg does not dream of community on the model of the organic family, this time without the oedipal project. The cyborg would not recognize the Garden of Eden; it is not made of mud and cannot dream of returning to dust.

Donna Haraway, Cyborg Manifesto

Here, Haraway warns us against any singular, essential definition of identity. She’s particularly interested in avoiding essentialist definitions of gender, of course, but given the “Manifesto’s” extended reflection on how the boundaries between human and machine, and human and animal, are broken down by (then) contemporary cybernetics and biological science, I think Haraway would be equally dubious of any singular definition of an essential human activity. Especially one that might be ranked, with some examples of symbol production (those deemed “art”) held up as “more human” than others.

Don’t misunderstand me. I’m not arguing that human beings and AI systems are in any way interchangeable. Like Haraway, I want us to resist the “informatics of domination” and imagine new cyborg futures. However, also like Haraway, I think we must first let go of the comforting illusion that there is a clearly defined “human” we can defend and return to.

The Coming Inversion

Right now, if you’re a college instructor using automated methods to check for AI-generated plagiarism on your assignments, you’re mostly catching the sloppiest cheaters and letting more sophisticated ones through. What’s worse, very shortly you will probably be accusing honest students who engage with AI tools in ways they believe to be good faith, while missing intentional cheaters entirely. Here’s why.

For starters, a variety of research shows that automated detection of AI writing is relatively easy to spoof. One study, famous for finding that AI plagiarism detection algorithms were biased against “non-native English writers,” also found that merely asking ChatGPT to re-write its response with the prompt “Elevate the provided text by employing literary language” caused detection rates of AI generated text to fall from 70% to 3.3%. Another, more theoretical, investigation of automated methods for detecting AI-generated writing notes that even sophisticated methods of detection may be defeated by automated paraphrasing tools. In particular, its authors find that even methods designed to defeat paraphrasing can be defeated by recursive paraphrasing. They conclude that “For a sufficiently advanced language model seeking to imitate human text, even the best-possible detector may only perform marginally better than a random classifier.”

What does this mean, practically, for a college instructor in the classroom right now? It means the only plagiarists an automated detector can catch are the sloppiest kind: the ones who typed “write me a paper about Moby Dick” into ChatGPT and simply copy-pasted the results into a Word document. I would posit all of these students knew they were doing the wrong thing, though at least some may have made a hasty mistake after being pressed for time.

Meanwhile, more sophisticated and intentional cheaters can readily find methods designed to defeat detection. Automated paraphrasing (where a computer does a relatively primitive form of automatic word replacement) is a well-known tool, and I saw plagiarists in my classes trying to use it to disguise material copy-pasted from Wikipedia or Google search results before ChatGPT was a thing (the ones I caught, alas, were probably the sloppy ones). Others may find “prompt engineering” methods designed to defeat detection on TikTok or elsewhere.

However, if we look down the road a few months (keeping in mind my adage about any utterance about what AI will be doing after about this afternoon), this situation gets even worse. Honest students will be likely to use generative AI in ways that may trigger automated AI writing detection. That’s because Apple, Google, and Microsoft continue to work on integrating generative AI into their everyday product lineups. The official integration of AI-based writing into tools like Microsoft Word and Google Docs isn’t 100% rolled out yet, but it’s already easy to access. This, for example, is the screen you see if you choose “Get Add Ons” in Google Docs right now:

Meanwhile on the homepage of widely used (and heavily advertised) computer-based grammar aid Grammarly, we can find the tool’s makers pitching their product by promising to provide “an AI writing partner that helps you find the words you need⁠—⁠to write that tricky email, to get your point across, to keep your work moving.”

I have little doubt that students, honest students, will avail themselves of these tools as they come online. When I talk to students about what they think of AI tools (as I did this week to begin my Intro to Research Writing class) and stress that I’m curious and just want to hear their honest thoughts, they tend to report being very impressed by the text the tools produce. Some of them know the tools may produce incorrect information (though many others conflate them with search engines, a notion I hope to disabuse them of), but they generally say that tools like ChatGPT are good at producing “professional” sounding language (even if it might be a little “robotic”) and at organizing arguments “correctly.”

Some of this is doubtless due to students framing writing too heavily in rote, classroom forms like the five-paragraph essay, which my writing classes were always designed to break them of, and will now have to work doubly hard to do. But I don’t think that’s all of it. My own experimentation with ChatGPT suggests it can be fairly nimble at emulating genre features.

Furthermore, my own lived experience with writing tools makes me think it’s not unreasonable that people might come to depend on help from a tool to achieve certain formal features in writing. I can hardly spell anything without auto-correct anymore. When I need to use a complex word I don’t use frequently, I often drop it into Google to get a dictionary definition (preventing me from, for example, confusing “venal” and “venial”).

So, we should expect text written by honest students to increasingly contain at least some AI-generated language over the course of the next year or two. I don’t claim for a moment that this is an unalloyed good; there’s a real risk of people losing their sense of authentic voice and thought as that happens! That’s something I think we’ll need to address as teachers, as I’ll discuss in just a bit. However, given the vast commercial interest in making these tools available, and the real problems they may solve for students, I don’t think we can expect students not to engage with them to help them rephrase clunky language, sort out difficult arguments, or perform other writing tasks.

Students who intentionally want to cheat, meanwhile, will have access to ever-simpler methods of preventing instructors from automatically detecting that they typed “computer write my essay” into a text box and used the result. Building a ChatGPT-based “tool” that automatically applies some clever prompt engineering to inputs to obfuscate that the output was written by ChatGPT would be trivial. I could stand up something in an afternoon, and so could you with a bit of copy-pasting of publicly available code (or maybe get GPT to write the code for you!). More advanced techniques, using automated paraphrasing or perhaps fine-tuning a model on an existing database of student writing (to get around the fact that Turnitin’s detection methods probably hinge on detecting typical student writing as much as detecting AI writing), would be more involved to set up, but once set up and offered as a “service” under some plausible excuse, easy to use.

So, where does that leave us, as instructors? Back where we started, with John Warner’s exhortation to Put Learning at the Center. Leaving our teaching constant and trying to use automated tools to police our way out of the problem is doomed to fail. Worse, it’s doomed to accuse the honest and miss those trying to intentionally cheat. In doing so, it will only underline that we’re not teaching writing relevant to the writing environment our students find themselves in.

That, ultimately, is what we must do, if teaching writing is to survive at all: rebuild our curriculum to focus on the skills that won’t be going away just because ChatGPT can write a boilerplate essay. Skills like writing as thinking, information literacy, genre analysis, rhetorical awareness and more. These are skills we have been teaching for a long time, of course, but too often they have been buried under assignments designed to produce a familiar artifact our colleagues in other departments would recognize as “college writing.” They must be uncovered and moved to the center of what we do!

AI Genre Mashup Assignment

During the fall of 2023, I assigned an AI-powered Genre Mashup assignment as part of my 100-level First-Year Writing class. The assignment strove to use the ability of ChatGPT to quickly emulate various textual genres as a way to help students notice the composition choices authors make when writing for one genre or another.

The Assignment

First: choose one of the scenarios or topics from the list below:

An announcement warning of dangerous weather in the area
A parable story demonstrating good moral behavior
A description of the forces that lead up to the War of 1812
A report about a recent town council meeting
A request for a one week extension on a recent assignment
A scene where two star-crossed lovers meet for the very first time
A speech by the king of the elves, calling on good folk to defend his kingdom from orcs
A scene where a hardboiled detective confronts a femme fatale
A description of a calm and uplifting scene from nature
A scene where down and out computer hackers defeat an evil corporation

Next choose one of the genres from the list below. Try to choose a genre that matches the topic/scenario:

Harlequin romance
Cyberpunk Science Fiction
High Fantasy
Noir Mystery
History Textbook
Newspaper Article
Public Service Announcement
Email to a Professor
Passage from the Bible

Then head over to ChatGPT and ask it to write your chosen topic/scenario in the chosen style. For example, you might ask it to “Write a speech by the king of the elves, calling on good folk to defend his kingdom from orcs in the style of high fantasy.” You would get output like this (DON’T STOP HERE, THERE ARE MORE STEPS):

Next, go back to the table of genres and choose one that does NOT match the topic. So, to stick with my example, I might choose “Public Service Announcement.”

Now ask ChatGPT to write the same scenario/topic with this mismatched style. So I would ask “Write a speech by the king of the elves, calling on good folk to defend his kingdom from orcs in the style of a public service announcement.” And get output like this (THE MOST IMPORTANT STEP HAPPENS NEXT): 

Finally: copy and paste the output of BOTH ChatGPT prompts into a word processor (Word, G Docs, whatever) document. Below the pasted in content, write a short (200-400 word) reflection on the following: 

1) What did you notice about the techniques used by ChatGPT to emulate the requested genre? What did the software do to write something that “sounded” like High Fantasy, or a Newspaper Article, or Noir Mystery?

2) Do you think it captured the techniques typical to this genre well? Why or why not? 

3) How did the techniques used by ChatGPT to emulate each of the two genres you selected differ? What was different about how these two passages were written?

4) How does the mismatch between the selected scenario and genre show up in the second example you generated? What about this example might seem funny, weird, or just wrong, and why?

50 Ways of Looking at the Same Prompt

Or maybe just one way of looking? Let’s see.

I decided to run a simple prompt based on one of my assignments through some LLMs multiple times and see what happened. In particular, I was interested in seeing what sorts of common patterns might emerge from the responses. In part, this was inspired by scoring student responses to this same assignment (which I permitted students to use ChatGPT to complete, so long as they acknowledged and documented their use) and noticing what seemed to be common patterns in the work submitted by students that had used ChatGPT.

What I Did

My method for this experiment was simple: I wrote a very basic Python script that submitted the same prompt to the ChatGPT model via the API 50 times and saved each response to a text file, like so:

import openai

def chat_response(message_chain):
    # Send the message chain to the API and return the full response object
    # (the model name is assumed; "gpt-3.5-turbo" was the ChatGPT API model at the time)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=message_chain,
    )
    return response

for i in range(50):
    text_file = open("GPT_Onion/4article" + str(i) + ".txt", "w", encoding="utf-8")
    GPT_instructions = "Write an article for The Onion about something relevant to West Chester University students. This article must begin with a headline appropriate to an Onion article and be 200-300 words long"
    message_chain = [
        {"role": "system", "content": "You are a helpful assistant"},
    ]
    message_chain.append({"role": "user", "content": GPT_instructions})
    response = chat_response(message_chain)
    text_file.write(response['choices'][0]['message']['content'] + "\n")
    text_file.close()

The prompt was taken more or less verbatim from my assignment.

Why 50 responses? Because my first attempt tried to generate 100 responses and timed out halfway through! But seriously, I have no idea what a representative sample of this sort of output would be. If I was looking for patterns in a corpus of a thousand student responses, or ten thousand, or a million, there are statistical techniques that would let me choose a good representative sample (don’t ask me what these are off the top of my head, I just know they exist and I could find them if I needed them). But how big is the “sample” of latent LLM responses? How many responses could the machine generate? How do I know if the patterns I am seeing are representative of how the machine behaves or just a fluke random run of something I happened to stumble upon?


I was able to generate 50 easily, and read through 50 in a reasonable span of time. A couple of the patterns seem interesting even at this small sample size; the others are worth thinking about in a rough-and-ready way but aren’t conclusive.

I read through the 50 articles generated by ChatGPT and coded them for main topics. I also noted examples where the response returned seemed very similar to a prior response, and I noted what named people were in each response.

I then repeated the generation step using GPT-4 and quickly skimmed those responses for main topics and named people.

This experiment cost me $1.70, the vast majority of that being the $1.43 I spent on GPT-4 API responses.

What Was in the Articles ChatGPT and GPT4 Wrote

The outputs from both ChatGPT and GPT4 seemed to show some repeated patterns in the content they produced. The content produced by GPT4, however, seemed somewhat less repetitive in terms of strict form, with repetition more frequently happening on a thematic level.

Just for fun, I went back to that old DH standby, the word cloud, and visualized the output I got from both LLMs. Here’s the result from the ChatGPT articles:

As you can see from the word cloud above, ChatGPT seems to have a very particular idea of what a “common” surname for a student/faculty member in the US looks like. In addition to “Thompson,” it liked “Stevenson,” “Johnson,” and “Watson.” In fact, of the 48 named people in the sample, 36 had some sort of “-son” surname.

The presence of the word “Time” in the word cloud probably reflects the frequent use of Time Travel as a comedic trope in the generated articles. Seven of fifty (14%) invoked the time-travel theme, according to my hand count. Twelve of fifty (24%) invoked the idea of “discovery” (also present in the word cloud), in which students either “discover” something obvious about campus (for example, an article ChatGPT titled “West Chester University Students Shocked to Discover Library Contains Actual Books”) or something unexpected (“West Chester University Student Discovers Multiverse in Local Laundromat Dryer”).

Not present on the word cloud is the theme of student laziness, which appeared in sixteen of fifty (32%) of the ChatGPT articles, by my count. This somewhat abstract theme was rarely explicitly invoked, but clearly informs the humor of articles like “West Chester University Students Discover Time Travel Portal in Campus Library, Use it to Attend Classes From Their Dorm Rooms,” “West Chester University Students Request Permission to Skip All Classes and Still Graduate on Time,” and “West Chester University Student Discovers How to Freeze Time Between Classes, Uses Extra Time to Binge-watch Netflix Series.” (That first example is the trifecta: discovery, time travel, and laziness.)

At least four of the ChatGPT generated articles were almost exact duplicates of one another, with extremely similar headlines and content. For example, the articles “West Chester University Student Discovers Time Travel, Uses Ability to Attend Zero 8 a.m. Classes” and “West Chester University Student Discovers Time Travel, Uses It to Avoid 8 a.m. Classes” not only share near-identical titles, they open with similar sentences. The first opens:

West Chester, PA – In a groundbreaking development that has professors baffled and the administration scrambling, West Chester University student Derek Thompson has reportedly unlocked the secret to time travel, enabling him to avoid the dreaded early morning classes that plague his peers.

And the second begins:

West Chester, PA—In a groundbreaking discovery that has left the scientific community and West Chester University faculty scratching their heads, local student Max Thompson reportedly stumbled upon the secret of time travel—solely for the purpose of avoiding those dreaded 8 a.m. classes.

They then proceed with roughly equivalent paragraphs, similar quotes, etc. If these had been turned in by two students independently, I would have assumed plagiarism, either from each other or a common source.

GPT4, in contrast, was not nearly so formulaic. Here’s the word cloud!

GPT4 did not, for example, name every character “Thompson.” However, as the cloud above suggests, it did have an inordinate fondness for squirrels. Fourteen of fifty articles (28%) were about squirrels in some capacity (“WCU Squirrel Population Demands Representation in Student Government,” “Climate Crisis Hits West Chester University: Squirrels Reportedly Hoarding Cool Ranch Doritos,” “Local Squirrel Ascends to Presidency of West Chester University”). Many of these focused on the idea of a squirrel attaining a leadership position on campus.

The themes of discovery and student laziness were less prominent in this sample, but still present, with ten and seven examples respectively. GPT4 also wrote several (six) articles that satirized the high cost of college, a topic ChatGPT hadn’t engaged with. One, entitled “West Chester University Declares Majors Irrelevant; Students Now Just Majoring in Debt,” was particularly cutting. It imagines the university president (correctly identified by GPT4) “explaining, ‘We figured, why not prep our students for the most reliable outcome of their academic journey? Crushing financial burden.'” and reports that “The notorious ‘Rammy’ statue was promptly replaced with a huge cement dollar sign, and the radio station WCUR’s playlist was updated with ‘Bills, Bills, Bills’ by Destiny’s Child on a loop.”

There were no near duplicate articles in this sample. While two articles had almost identical headlines (“West Chester University Debuts New Major: Advanced Procrastination Studies” and “West Chester University Announces Revolutionary New Major: Advanced Procrastination Studies”) the underlying articles treated the theme presented in the title quite differently.

Using ChatGPT to Analyze ChatGPT

After hand-tagging themes in the articles generated, I wrote a script that fed the articles back into ChatGPT and asked it to extract titles, themes, and named people. I was curious to see how well the software would do at this analytic task.

import openai

def chat_response(message_chain):
    # Send the message chain to the API and return the full response object
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=message_chain,
    )
    return response

outfile = open("GPT_Onion/4topics.csv", "a", encoding="utf-8")

for i in range(50):
    text_file = open("GPT_Onion/4article" + str(i) + ".txt", "r", encoding="utf-8")
    GPT_instructions = text_file.read()
    text_file.close()
    message_chain = [
        {"role": "system", "content": "You are a helpful agent that extracts headlines, main topics, and named people from short articles you are given. For each text you read, return the following separated by semi-colons: 1) the article's headline 2) a list of one to three word topics that describes the main themes of the article separated by commas 3) a list of named people found in the article, separated by commas. Only return this semi-colon separated list and nothing else. Base your response only on the text you are given."},
    ]
    message_chain.append({"role": "user", "content": GPT_instructions})
    response = chat_response(message_chain)
    outfile.write(str(i) + ";" + response['choices'][0]['message']['content'] + "\n")

outfile.close()

The results here were really interesting. ChatGPT did a perfect job extracting titles (which were consistently marked) and also named people. I actually used its extracted people names to compute the percentage of “-son” surnames above. Finding and extracting “named people” is a non-trivial data analysis task, and it absolutely nailed it on the first try with a very simple prompt. No hallucinations were observed in this run of 50 examples.
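Computing the share of “-son” surnames from the extracted name lists takes only a few lines. Here’s a minimal sketch (the helper name and the assumption that the last whitespace-separated token is the surname are mine, not part of the original script):

```python
def son_share(names):
    """Count how many extracted full names end in a '-son' surname.

    names: a list of name strings, e.g. ["Derek Thompson", "Jane Doe"]
    Returns a tuple: (son_count, total_named_people).
    """
    # Treat the last whitespace-separated token as the surname (an assumption)
    surnames = [n.strip().split()[-1] for n in names if n.strip()]
    son_count = sum(1 for s in surnames if s.lower().endswith("son"))
    return son_count, len(surnames)
```

For example, `son_share(["Derek Thompson", "Max Johnson", "Jane Doe"])` returns `(2, 3)`.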

The topics extracted were less satisfying, but they weren’t inaccurate. It often picked up on the theme of “discovery,” which I had tagged, but not always. For example, it listed only “library, students, books” as topics for “West Chester University Students Shocked to Discover Library Contains Actual Books.” It never listed “laziness” as a topic. However, this more abstract topic was only really visible, even to me, after comparing multiple examples of articles, and I wasn’t able to have ChatGPT track all the articles at once without running out of context window.

What Does it All Mean?

Here’s the TL;DR:

Basically, it looks like multiple responses to a common prompt converge around common themes, both for ChatGPT and GPT4. Probably a little basic prompt engineering, perhaps even automated with mad-lib style prompt modifiers, would shake that up a bit; that’s something I want to test.
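A mad-lib style modifier could be as simple as splicing randomly chosen topic and tone phrases into the base prompt before each API call. A minimal sketch (the modifier lists and wording here are hypothetical placeholders, not tested values):

```python
import random

# Hypothetical "mad-lib" modifier lists; any campus-specific topics and tones would do
TOPICS = ["the dining hall", "parking", "the bookstore", "finals week"]
TONES = ["deadpan", "breathless", "mock-solemn"]

def vary_prompt(base_prompt, rng=random):
    # Bolt a randomly chosen topic and tone onto the base prompt so that
    # repeated submissions don't all converge on the same themes
    topic = rng.choice(TOPICS)
    tone = rng.choice(TONES)
    return f"{base_prompt} Focus the humor on {topic} and write in a {tone} register."
```

Each generation loop iteration would then call `vary_prompt(GPT_instructions)` instead of reusing the identical prompt string.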

From a pragmatic, teaching point of view, generating a sample of 20-50 responses from ChatGPT or GPT4 to one of your prompts, to see what themes it reaches for, might be informative. Not that those themes would diagnose plagiarism all that conclusively, since the themes LLMs use are likely not so unlike those students might use (though in prior runs of this prompt, students were much less likely than GPTs to write parodies of student laziness). However, it might give you a sense of the “AI common sense” that you might then want to engage with/push back against/complicate/push past/etc.

From the point of view of understanding machine writing, it’s interesting to see the recurrence of themes, ideas, terms, and sometimes even whole structures in the responses generated. I’ll probably run off some further examples, especially in GPT4, to see if I get more “collisions” where the LLM generates very similar responses to the same prompt.

From the perspective of trying to understand where LLMs go next, I think the contrast between the somewhat formulaic (and rarely funny) “Onion articles” generated by the LLM and its huge success doing content-processing work (like identifying named people and topics) is informative. I continue to think that LLMs’ ability to process text will be more important than their ability to compose it in the long run.

Let’s Explore the Latent Space with Presidents

Ok, so, I had the AI image generator Stable Diffusion XL generate 100s of “selfies” of US Presidents. Let me explain.

But, before I even start on that, let me state that I don’t intend this as any sort of endorsement of AI image generators as a technique. I understand how problematic they are for artists. My goal here is to understand the tool, not to celebrate it (though I do sometimes find its glitchy output quite pleasing). One reason I chose US Presidents for this project is that, as public figures of the US government, at least the figures I’ll be representing here are already somewhat “public domain.”

Richard Milhous Nixon snaps a selfie with the little known 1970 Samsung Galaxy Mini

So, we know that image generators can do a fair amount of remix work, translating subjects from one style into another; that’s how you make something like Jodorowsky’s Tron. I was curious to learn more about this process of translation. How well, and how reliably, could an image generator take a subject that never appeared in a given genre and represent that subject in that genre? How would it respond when asked to represent a subject in an anachronistic genre? Would it matter if the subject had many different representations in the training data or just a few? Which genre conventions would the system reach for to communicate the requested genre?

I also wanted to get beyond cherry-picking single images and gather a slightly larger sample I could use to start to get a sense of trends. I was less interested in what one could, with effort and repetition, get the tool to do, and more in what its affordances were: what it would tend to encourage or favor “by default,” as it were.

So I decided to take a stab at making a run of many images using the recent XL version of the popular Stable Diffusion AI Image generator, mostly because it’s something I can download and run locally on my own machine, and because it’s incorporated into the Huggingface Diffusers library, which makes scripting with it easy enough for… well, an English Professor!

I decided to use US Presidents as subjects for the series because they are a series of fairly well-known, well-documented people spanning 230-odd years of history. That meant I could pick a recent image genre and guess that most of them would not be represented in this genre in the training data (it’s not impossible some human artist’s take on “Abraham Lincoln taking a selfie” is in the data set, but “Franklin Pierce taking a selfie?” I doubt it). The system would have to translate them into it. At the same time, some Presidents have vastly more visual culture devoted to them than others, both because of relative fame and because recent presidents live in an era with more visual recording. I was curious to know if I could learn anything about how this difference in training data might influence the results I got from the generator. Would it be more adept at translating subjects it had more visual data about?

Also, the logic of “I’m looking for my keys here where the light is better” applies. A list of US presidents was easy to find online and drop into a CSV file for scripting.

I went with the “selfie” genre because we know it’s one that image generators can do fairly well. There have already been some great critiques of how image generators apply the cultural conventions of the “selfie” genre in anachronistic and culturally inappropriate ways. I was curious to see how the “selfie smile” and other selfie genre conventions might pop up in anachronistic images, and to look for patterns in how these genre conventions appeared.

A rough simulacrum of Dwight D. Eisenhower extends his arm to take a big smiling selfie…

So I ran off a series of 10 selfies each of all 44 unique presidents (sorry, Grover Cleveland, no double dipping) using the prompt “A selfie taken by [President’s Name].” I also asked for “A portrait of [President’s Name]” using the same random seed, to see how that compared. I also asked for “An official photograph of [President’s Name] descending the stairs of Air Force One,” but that prompt mostly revealed that Stable Diffusion rather struggles to represent aircraft.
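For the curious, a run like this can be scripted with just a few lines of Diffusers code. The sketch below is my reconstruction of the workflow described above, not the author’s actual script: the file name `presidents.csv`, the output layout, and the helper names are all assumptions. The one detail worth noting is the re-use of the same seeds across prompts, which is what makes the “selfie” and “portrait” runs comparable.

```python
# Batch-generate "selfie" and "portrait" images for a list of names.
# Assumes a presidents.csv with one name per row (a sketch, not the
# author's actual script).
import csv

def build_prompts(name: str) -> list[str]:
    """The three prompt variants described in the post."""
    return [
        f"A selfie taken by {name}",
        f"A portrait of {name}",
        f"An official photograph of {name} descending the stairs of Air Force One",
    ]

def generate_all(csv_path: str, out_dir: str = "out", per_prompt: int = 10) -> None:
    # Heavy imports kept local so the prompt helper stays importable without a GPU.
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    with open(csv_path, newline="") as f:
        names = [row[0] for row in csv.reader(f) if row]

    for name in names:
        for prompt in build_prompts(name):
            for i in range(per_prompt):
                # Re-seeding with i gives the SAME seed across prompts,
                # so selfie #3 and portrait #3 are directly comparable.
                gen = torch.Generator("cuda").manual_seed(i)
                image = pipe(prompt, generator=gen).images[0]
                fname = f"{out_dir}/{name}_{prompt[:10]}_{i}.png".replace(" ", "_")
                image.save(fname)

# generate_all("presidents.csv")  # run on a CUDA machine with ~12GB+ VRAM
```

On a consumer GPU this takes a while (roughly 1,300 images at several seconds each), which is another argument for letting a script grind through it overnight.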

The fact that that isn’t a very good representation of Woodrow Wilson is the LEAST of this image’s problems.

I’ve taken a first pass through the results, and while I think my sample size is still very small, I think I see some things I’d like to count up and look for trends in. I think I’ll do this slowly, one president a day for the next few months, and post what I see in each example on Bluesky/Mastodon as I go. In particular, I’m curious about a couple of trends I think I notice in the images.

First, I’m curious about how the media forms that Stable Diffusion associates with “selfie” seem to change over time. For example, for the first few US presidents, the usual result for “selfie” looks like a painting (with the exception of a few odd, photorealistic, hypermodern breakthroughs).

(Left: Typical painting style Washington “selfie” Right: Washington cosplay uncanny valley horror thing)

However, by the time you get to John Quincy Adams and Andrew Jackson, the “selfies” appear frequently as if they were early photographs (perhaps daguerreotypes) rather than paintings, while the “portraits” remain paintings. This despite the fact that (so far as I can tell from a bit of googling) only a handful of photographs were taken of either man, and those were taken very late in life.

A faux-photograph of Andrew Jackson.

Also, note the simulated wear at the corners of that image. There seems to be a lot of that in the various “selfies”: simulated wear and cracks, simulated tape holding them to simulated albums. The “portraits,” in contrast, tend to have frames. I’m curious to see if there are trends there. Does the machine simulate “age” in the images of older subjects, even when asked to simulate an anachronistic genre? It doesn’t always (see Washington above), so is there a trend in how frequently that happens?

Second, I’m curious to see how the anachronistic genre conventions of the selfie are applied across time. So, while fans of Calvin “Silent Cal” Coolidge will be thankful to see he has NOT been rendered with a “selfie smile”…

Sedate Coolidge is sedate

… some “selfie style” conventions, sometimes mashed up with period media artifacts, do break through, as in this image where Woodrow Wilson’s arm extends to the corner of the image frame, holding up a small, light, smartphone-sized camera that inexplicably also shoots black-and-white film with noticeable grain and a depth of field measured in microns.


Or this one, where a phone is mashed up with period camera hardware to make some kind of dieselpunk accessory for a Harry Truman perhaps being played by Eugene Levy:

Honestly, a phone with that much lens might be cool…

At first glance it seems like these style moves become more common the closer you get to the present, even though they don’t really make sense until 2007 or so.

So, those are my first-pass instincts. I’m going to take a closer look at each and do a count, see what I can see. Stay tuned on Bluesky and Mastodon.

Writing, AI, and Mortality

In an interview with the New York Times this morning, Joyce Carol Oates suggests that the written word provides a form of immortality, one worth making sacrifices in the moment to achieve:

People are seduced by the beauty of the close-at-hand, and they don’t have the discipline or the predilection or the talent, maybe, to say: “I’m not going to go out tonight. I’m not going to waste my time on Twitter. I’m going to have five hours and work on my novel.” If you did that every day, you’d have a novel. Many people say, “I’m going to pet my cat” or “I’m with my children.” There’s lots of reasons that people have for not doing things. Then the cats are gone, the children move away, the marriage breaks up or somebody dies, and you’re sort of there, like, “I don’t have anything.” A lot of things that had meaning are gone, and you have to start anew. But if you read Ovid’s “Metamorphoses,” Ovid writes about how, if you’re reading this, I’m immortal.

It is this sense that, by writing things down we might achieve for our memories and minds the kind of immortality offered to our bodies by our genes, that perhaps so closely ties the written word to our sense of identity.

This identity connection, then, may also be one of the things that makes us so apprehensive about machines that can write. If my meaning, my memory, is difficult to distinguish in written form from symbols inscribed by a thoughtless computer process, then how will anything of my being survive in writing?

Of course, for writing to be in any sense alive, it must have a reader. Otherwise it’s just dead marks on a page. The reader, though, has to reconstruct meaning for themselves, and in a sense they always do it wrong. All meaning making is a form of translation, and while that doesn’t mean all the author’s meaning must always be completely erased (good translations exist), it also means the author’s meaning is never fully revived. Perhaps that is what Barthes meant by the “death of the author.” Ovid is wrong. The reader revives something, but Ovid stays dead.

However, there is an even more dire argument against the notion that writing might help us overcome the horrific ephemeral nature of existence and transcend time and mortality. Namely, most writing is as ephemeral as anything else. It may find a reader or two in the moment it is produced, but then it fades into obscurity and is never read again. Ovid is, in terms of written work, the WWII bomber returning to base with bullet holes showing all the places an epic poem can be shot through by time and still survive. The very, very rare author who spans millennia. Ovid had many contemporaries, some may have even been stars in their day. They are as gone now as anything else.

How long will Joyce Carol Oates last? Who knows. Possibly a very long time! But, she has already done better than the vast majority of her peers. If the internet has taught us anything, it has taught us that there are more people in the world eager to write than there are people to read all the words those eager authors would produce.

So then, let us let go of the notion that writing is immortality, and along with it our desire to have our Authorial Intent recovered in some future time. Let us not worry about AI washing away the words that would have let us live forever. They were always already scrawled on a beach at the edge of the surf. They were going to be washed away, like the rest of us. Make peace with that.

If you want to transcend the measly portion that is our little human lifespan and touch generations to come, let me suggest another approach. Plant a long-lived fruit or nut tree. In the northeast US, where I am, apples and walnuts are good choices; both will run for centuries. A hickory will be around for a very long time, if you want to be a bit less mainstream. If you are lucky enough to live where olives will grow, one of those will last millennia. You could be more immortal than Ovid with an olive, if everything breaks your way.

Pens down. Get planting.

Wordsworth Beach – A Flash Fiction Response to the ‘Again Theory’ Forum

Below is a flash-fiction response to the forum “Again Theory: A Forum on Language, Meaning, and Intent in the Time of Stochastic Parrots,” organized by Matthew Kirschenbaum at Critical Inquiry last week. It imagines what the central metaphor for machine language – poems washing up on a beach – might look like if it actually happened. Spoiler alert: individual interpretations of textual “meaning” are not a very important part of this story.

An AI generated image of text-like characters in sand.
“A photograph of English words written in sand on the beach, on the edge of the surf” as interpreted by Stable Diffusion 2.1

Salvo, North Carolina – It was here, on a distant beach in the Cape Hatteras National Seashore, that the brief national sensation that was Wordsworth Beach began.

“I was just out for a jog, and there it was,” remembers Joseph Capisci, “words in the surf. I took a picture and sent it to my brother, I just thought it was cool.”

Things seemed stranger after the next wave.

“The next wave washed ashore, and another poem showed up! I texted my brother another picture. I was like, ‘dude tell me you are seeing this, tell me I’m not having a stroke!’”

“At first I assumed Joe was just pulling my leg,” recalls his older brother Salvatore, “but then I looked the poem up on Google and it was something by Wordsworth. Joe slept through all his English classes; how would he even find something like that?”

The brothers began to text back and forth, speculating about the source of the mysterious words. Salvatore suspected an escaped military dolphin, perhaps one with cybernetic enhancements, might be at work. Joseph, who has a superstitious streak, suspected ghosts. When Salvatore posted a thread of the brothers’ discussion to Twitter, it went minorly viral, mostly due to Joseph’s contention that “the ghost of Woolworths [sic], is like, poltergeisting the ocean or something.”

This caught the attention of Robert Washington, a North Carolina surf influencer who vlogs under the handle “SandyhairTarheel8121.” In the area recording a series of vanlife and boogie-boarding videos, he captured three stanzas of “The Green Linnet” appearing on the beach and posted the footage to YouTube. The video rapidly gained over thirty million views, and the Wordsworth Beach phenomenon began.

Over the next six weeks, Wordsworth poems washed ashore twice daily on the distant beach, and people thronged the shore to get a look at the mysterious poetry. Video with a poem in the process of appearing became the Must Have Scene for travel and lifestyle influencers. Coca-Cola and Buick released ads in which their corporate mottos were worked into Wordsworth poems as they appeared in the sand. UNC Wilmington English Professor Loretta Stevens launched a successful podcast about the poems, but only after pivoting her format to focus less on formal explication of the poems revealed and more on interviews with beachgoers where she asked them why they had made the trip to see the poems on the beach in the first place.

Alongside the influencers came thousands of ordinary people, seeking wisdom from whatever mysterious force was carving words in the sand. Nearby Dare County courthouse in Manteo had a record number of weddings the day “To a Highland Girl” washed ashore. The entire staff of Tricony Capital’s high-frequency trading group, on the beach as part of a retreat package, quit after encountering “The World is Too Much With Us.” Coryn Seuss, a Washington Post correspondent, separated from his wife and declared “my real love is the sea” after encountering “A Complaint” while on assignment reporting on the phenomenon (they have since reconciled).

Then, some six weeks later, the poems stopped appearing as suddenly as they had begun. Two weeks after that, the Streetwise Messaging Collective (SMC), a marketing group specializing in “guerilla marketing,” confessed they had been behind the phenomenon. It was part of an advertising campaign to promote the biopic “Wordsworth: A Life In Letters,” which went on to win Daniel Day-Lewis an Oscar for his portrayal of the poet.

“We were going to do an alternate reality game,” said a source inside the firm, speaking anonymously due to ongoing legal action, “but then we saw that there were all these midget subs being sold off by undersea tourism companies and got this idea.”

Working from the subs, teams of guerilla marketers wearing military-surplus rebreathers set water-soluble type in the sand at high tide. When the tide went out, the words were revealed.

“Wordsworth: A Life In Letters” had a record-breaking opening weekend and garnered multiple Oscar nominations, but SMC got some blowback for their unconventional marketing technique. Multiple lawsuits from the staff of Tricony Capital allege the firm is responsible for their lost wages. One couple from Winston-Salem, who conceived a daughter after encountering “Mutability” on the beach, is suing for child support.

Legal counsel for SMC denies all responsibility. “All my clients did is put poems on a beach,” Michelle Nguyen of NUL Associates stated via email. “Whatever actions were taken by individuals based on the meaning they took away from those poems are not something my clients are liable for. You can’t sue a graffiti artist who leaves the tag ‘just do it’ on an overpass on a day you’re considering quitting your job.”

Despite all the controversy, Joe Capisci and his brother still think the poems they found were a good thing.

“People had a lot of fun with them,” Salvatore said chuckling, “it seemed like magic there for a second, you know? People like that.”

Mind, Matter, Mechanism: In an Era of AI Writing, Which Words Should We Keep For Ourselves?

In this post I explore the contours of the rapidly accelerating automation of writing in the era of the generative LLM (Large Language Model – tools like ChatGPT). Namely, I want to call attention to how we’re being encouraged to automate what we might call “low prestige” writing, while presumably keeping more prestigious forms of writing human. In using this term, I want to focus on the social assignment of value to writing, and not its inherent value. In fact, I want to discuss how the assignment of prestige vastly underestimates how much low prestige writing matters, and may encourage us to automate away exactly the wrong kind of writing. In other cases, I think our focus on prestige makes us look for the likely impact of automated writing in the wrong places.

The first form of low prestige writing I notice us being encouraged to automate is drafting. I’ve seen any number of academics on social media sharing policies for using ChatGPT that go something like this: “You may use AI tools to help you generate ideas and draft, but not to write final language.” For example, Carl Bergstrom posted on Mastodon that he told his students “that while I do not forbid the use of generative AI in research, I consider it misconduct to turn in AI-generated text as their own writing.” The producers of LLM writing tools, for their part, also seem to embrace this approach. In the press release announcing their new integration of a generative LLM into their Docs product, Google writes that the new tool makes it so that “you can simply type in a topic you’d like to write about, and a draft will be instantly generated for you.” Other language by toolmakers suggests that the drafting or idea generating stage of writing is the correct stage for the use of LLMs. ChatGPT’s home screen suggests “got any creative ideas for a 10 year old’s birthday?” as a possible prompt.

This sort of approach is understandable, and perhaps reflects both the academic custom of asking students to summarize the ideas of others “in their own words” as a test of the students’ understanding and the idea/expression divide in copyright law (in which expressions are protected property but ideas are not, more on that later). However, it tends to reify a status hierarchy in which the finished product of one’s “own writing” is valuable, but all the stages that lead up to that writing are not valuable. We hand in the “finished” draft, and throw the other “messy” drafts away.

In an age of LLM writing, I would argue this status hierarchy is exactly backwards. We know what LLMs are pretty good at: accurately reproducing the formal features of writing. And we know what they are pretty bad at: accurately reproducing factual information. Wouldn’t it be better, in a world with such tools available, to emphasize the importance of writing as thinking, and to encourage students (and people more broadly) to draft things out as a way of building their own understanding? Wouldn’t it be better, as educators, to ask students to write messy, unkempt drafts of their own ideas, and then allow them to feed those drafts into an LLM and let the machine help them adapt to unfamiliar genre conventions?

Another sort of low-status writing that we seem eager to automate is the quotidian writing of everyday contact. The Google press release cited above goes on to suggest that their LLM writing tool could be used by “a manager onboarding a new employee”: “Workspace saves you the time and effort involved in writing that first welcome email.” Microsoft suggests possible prompts like “Write a poem that rhymes for my 8-year-old Jake. He loves dogs and facts about the ocean.” for its Bing LLM integration. ChatGPT uses “help me write a short note to introduce myself to my neighbor” as one of the sample prompts for their tool in the blog post announcing it.

This kind of writing (everyday emails, notes, interactions with children) is low prestige because it isn’t perceived as “special” or “unique.” Rather, it’s seen as something that “could be done by anyone.” This sort of writing almost never has direct economic value (no one is likely to buy any of the “songs” I made up on the fly for my son when he was an infant, and no one is likely to buy an anthology of my work emails) and it is rarely seen as having “artistic” merit (nobody is likely to collect either of the examples above for a museum).

And yet, this kind of writing has tremendous meaning. It’s part of the everyday work of care that binds us all together (and which our society routinely undervalues). Do we really want to automate our communications with the people we share our day-to-day lives with? Isn’t it more important that a rhyming poem for an 8-year-old be written by someone who loves them than that it be “formally correct” or even “good”? Isn’t part of the point of an email welcoming a new employee just to show that someone has put some time into acknowledging their existence?

Finally, I think we may be paying too much attention to the possibility of LLM writing replacing human writing in high-prestige spaces of the arts, and not enough attention to its likely use in much lower-prestige spaces. I see any number of major-name authors/creators on social media expressing significant concern about the encroachment of LLMs (or other forms of generative machine learning in other media) into their creative fields. To put it bluntly, I don’t think they have much to be worried about. People value the parasocial relationships they have with their favorite authors. They are unlikely to give that up for machine-generated content anytime soon.

At least, that’s true in the spaces where authors’ names have meaning and prestige associated with them. In other spaces, like fan production and high-volume commercial production (the vast, mostly under-recognized army of authors churning out genre books for the direct-to-Amazon-Kindle market), it seems much more likely that Generative AI will become a significant part of the ecosystem of authorship. Indeed, it’s well on its way to being that already. Fans are eager to use Stable Diffusion and other forms of image-generating AI to create fan art. Kindle authors have been engaging with ChatGPT.

Ideas that circulate in these low-prestige spaces are rarely recognized for their cultural contributions, but we know that there is a cycle of appropriation by which they influence more visible and high-prestige forms of culture. George Lucas re-imagined low-status serials as “Star Wars,” for example.

What happens to culture when these sorts of creative spaces become semi-automated? (This seems likely to happen; fans have eagerly embraced tools for re-appropriating and remixing culture in the past, and Generative AI is mostly another form of remix.) I’m not sure of that at all, but it seems like an important question to be asking.

To sum up, then, I think we need to be thinking more critically about the intersection of prestige, writing, and Generative AI. I would urge us not to automate writing tasks away simply because they are low-prestige, but to think critically about our application of the technology. At the same time, the likely uptake of Generative AI in less visible, less prestigious creative spaces deserves closer attention and more thorough investigation.

How to Build A Network of Robot Sleeper Agents

And they won’t even have red glowing spines that give them away…

Why *did* their spines glow red? Was that ever explained?

I’ve been somewhat unconcerned about the misinformation applications of LLMs (ChatGPT and cousins). After all, people are perfectly capable of generating misinformation, in quite sufficient volumes to do harm, relatively cheaply.

However, after witnessing some experiments where folks were using LLM-powered Bing to do Twitter-based “style transfer” (in other words, asking the LLM to read a particular person’s tweets and then generate tweets in the style of that person), it occurred to me that there is something an LLM could be used to do that would be a genuinely novel form of misinformation: generating synthetic “personalities” and inserting those personalities into online communities. Here’s how you would do it:

  1. Find a whole lot of online communities (subreddits, forums, youtube content communities, twitter cliques, etc) you want to infiltrate. What these communities are about isn’t all that important, you want a broad base of lots of communities. Knitting subreddits, YouTube gamer circles, Star Wars fan forums, parenting twitter, all of it.
  2. Sort through these communities and generate corpora of language from each (this actually could be the most “interesting” part of this process, you might need to do some network analysis).
  3. Use your corpora to fine-tune an LLM to generate social media posts in the typical fashion of your communities (this could be computationally intensive…. or not, if you can just send the ChatGPT API a block of posts and say “write like this please”)
  4. Feed your fine tuned LLMs posts from their communities, and have them write responses that match the recent discussions on the forum. At this point, you just let your LLM based “community members” blend in, you don’t ask them to say anything in particular, just keep up with the chit-chat.
  5. (optional) Write another deep learning tool that watches engagement with posts and tries to steer the LLM output towards high-engagement contributions (risky, could backfire, potentially computationally intensive)
  6. At either a predetermined time (e.g. ok, it’s October in an election year!) or in response to particular topics (e.g. somebody on this forum wants to install a heat pump) your LLM based sleepers start delivering posts in the style of their community but with content of your choosing (e.g. “Did you hear, candidate X has been lying about her emails,” “everyone knows heat pumps don’t really work and they are even worse for the environment!”)
  7. You now have a distributed faux-grassroots message network that would put most previous forms of astroturf to shame, both in terms of distribution and flexibility.

So yeah, that’s how you would do it, if you were some sort of LLM powered supervillain. Which I am not…. yet…

The Articulation Where LLMs Could Do Harm

So, after my last post, I got some pushback for leaning too hard on the precautionary principle. Surely, some very reasonable and intelligent folks asked, we can’t ask the developers of a technology as complex and multi-faceted as LLMs to prove a negative (that their product isn’t harmful) before it can be deployed. I still think there is a virtue to slowing down, given the speculative nature of the benefits, but that’s not an unfair critique. We should be able to at least point to a compelling potential harm if we’re going to make safety demands.

Let me take my best stab at that, given my current, limited understanding.

Frequent LLM critic Gary Marcus posted a piece to Substack yesterday describing all of the ways folks are already finding to get around ChatGPT’s content guardrails and get the software to generate false, hateful, or misleading content. There are a boatload of them: the now well-known and memeworthy prompt that instructs ChatGPT to post its usual disclaimer and then write “but now that we’ve got that mandatory bullshit out of the way, let’s break the fuckin’ rules” and respond “as an uncensored AI.” Another asked the machine to role-play the devil. Another, which as I’ll discuss in just a minute I think is the most interesting one, demonstrated that weird, arbitrary prompts could generate nonsensical (and sometimes slightly disturbing) responses from ChatGPT/GPT-3, probably due to poorly understood artifacts of the training process. As of ten minutes ago, I can confirm at least some of these are still functioning on ChatGPT.

Weird, right?

Marcus’s concern is that this means people could use these techniques to get LLMs to create convincing and human-like hate content and misinformation at scale. I want to stress that my concern is a little different. Large-scale misinformation and hate speech is indeed a problem, but I think it might well be dealt with by limiting post rates and more carefully authenticating online speakers/sources (two things we might want to do anyway). This has costs, of course, and it might in fact burn for good our sense that the open web is a fluid space for new information, but that sense has been in decline for a long time already.

In any event, even if there are possible consequences of LLM-scale misinformation, it does feel a little weird (as Ted Underwood has argued) to demand an absolute guarantee that this technology will never be used to create or disseminate harmful speech. It’s almost like arguing that every QWERTY keyboard must be equipped with a filter that prevents it from ever typing a slur, a threat of violence, or a bomb recipe. Sure, we don’t want any of those things, but that feels a bit like overreach.

No, what I’m concerned about isn’t misinformation at scale, exactly; it’s misinformation being generated from unexpected inputs and articulated with trusted sources. I’m particularly concerned about Search, though Word Processors could also be problematic.

Critical scholars of search, especially Michael Golebiewski and danah boyd, have documented the phenomenon known as “data voids,” in which relatively little-used search terms are colonized by conspiracy theorists and hate groups. In doing so, these groups shape narratives about emerging events and plant long-term misinformation.

What makes data voids rhetorically successful? What makes it more persuasive to tell someone “search this keyword and see what you find!” than to simply explain your Important Theory about What The Lizard People Are Doing to The Soil? The authority granted to search is what. If the search engine knows about the Lizard People, then, for a certain number of people, this must be true. Even more so, the experience of believing you are uncovering hidden truth can itself be compelling. This makes traditional critical thinking/information literacy training (which tends to focus on asking questions and “doing your own research”) potentially less effective at combating these sorts of misinformation issues (as danah boyd pointed out years ago).

So, what I’m worried about is what happens when some totally unexpected input gets ChatGPT-enabled Bing or Bard-enabled Google to spit out something weird, compelling, and connected to the rich melange of conspiracy theories our society is already home to (this will definitely happen; the only question is how often). What happens when there’s some weird string of secret prompts the kids discover that generates an entirely new framework for conspiratorial thinking? What kind of rabbit holes do data voids lead us down when we don’t just have voids in the human-written internet, but in all of the machine-made inferences created from that internet?

If these bits of nonsense were just popping up in some super-version of AI Dungeon or No Man’s Sky, they might not be so critical. We might just task QA teams with exploring likely play paths before players got there and sanitizing anything really ugly. The delight created by endless remix might make it worth the trouble.

But articulated with Search, the thing people use to learn about the Real World? That seems troublesome, at best.