Tiptoeing Around Turing (Eventually We’ll Have to Talk About Qualia)

ChatGPT Answers Some of the Questions Proposed by Alan Turing in His Essay “Computing Machinery and Intelligence”

We have, right now, machines that could probably pass the fabled Turing Test, but we’ve hard-wired them explicitly to fail.

What I mean by this is not that I believe, as a now-fired Google engineer did, that Large Language Models or other related machine learning systems are capable of self-awareness or thought. I merely mean to suggest that these systems are capable of making a passable response to one of our culture’s long-standing proxies for self-awareness/thought/sentience/call it what you will. That means that, if we aren’t going to accept these systems as sentient (and there’s good reason not to), we’re going to have to find another proxy. I’m not, personally, sure where we move the goalposts to.

One suggestive piece of evidence that the Turing Rubicon has been crossed is the story of that poor Google LaMDA engineer. They knew as well as anyone that they were dealing with software, yet they were still so convinced of the system’s self-awareness that they decided to make the career-ending move of going public. This doesn’t prove sentience, but it does suggest a very compelling linguistic performance of sentience.

Here’s another suggestive little interaction I had with good ol’ ChatGPT. In “Computing Machinery and Intelligence,” Turing suggests a series of questions one might ask an unknown interlocutor on the other end of a (state-of-the-art) teletype terminal as part of his famous test. I don’t imagine he meant them as more than an illustrative example of what a test might look like, but they seem like as good a place to start as any:

Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your
move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.

From Turing’s “Computing Machinery and Intelligence”
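Incidentally, the arithmetic in Turing’s sample transcript is wrong: 34957 plus 70764 is 105721, not 105621. It’s often read as a deliberate touch of human fallibility in the machine’s performance. A quick check confirms the discrepancy:

```python
# The sum from Turing's sample transcript, checked directly.
a, b = 34957, 70764
print(a + b)            # 105721
print(a + b == 105621)  # False: the transcript's answer is off
```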

As you can see in my screenshot above, ChatGPT does not demur when asked to write a sonnet about the Forth Bridge; rather, it promptly obliges. It also solves the chess problem in roughly the same way, but only after explaining that “As a language model, I do not have the ability to play chess or any other games.”

Turing then goes on to suggest that the kind of discussion used in oral examinations serves as an already existing example of how we test whether a student “really understands something” or has “learnt it parrot fashion.” He gives this example:

Interrogator: In the first line of your sonnet which reads “Shall I compare thee to a
summer’s day,” would not “a spring day” do as well or better?

Witness: It wouldn’t scan.

Interrogator: How about “a winter’s day”? That would scan all right.

Witness: Yes, but nobody wants to be compared to a winter’s day.

Interrogator: Would you say Mr. Pickwick reminded you of Christmas?

Witness: In a way.

Interrogator: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would
mind the comparison.

Witness: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather than a special one like Christmas.

From Turing’s “Computing Machinery and Intelligence”

If I ask ChatGPT some follow-up questions about its sonnet (adjusted to match the content of what it actually wrote), here’s how it replies:

Strike the hard-wired disclaimer “I AM A LANGUAGE MODEL” at the start of those answers, and those are some reasonable responses! Honestly, I don’t know the rules of sonnets well enough to say, off the top of my head, whether the arguments based on those rules are accurate or BS.

Now, as I said before, I don’t think this is evidence of any kind of sentience or self-awareness. For one thing, just as ChatGPT helpfully tells us, it is a basically static model. It learned our language during its training loop, and the ChatGPT version has some kind of short-term memory that lets it adapt to an ongoing conversation, but the underlying model itself doesn’t change. It’s not an ongoing process of thought; it’s a sort of frozen map of symbolic connections.

It should be emphasized, however, that the underlying “model” is not just a memorization of sources. What the model “learns” is stored in a matrix of information that’s informed by many uses of symbols but does not reduce to any one symbolic expression. That’s not a signifier like you or I have, but it is something roughly analogous to one. (If you want a stronger, but still layperson-friendly, explanation of that, check out Geoffrey Hinton’s talk with Brooke Gladstone for On The Media a few days back.)

Furthermore, at some point in the near future, it seems likely we’ll have the computational power and mathematical methods necessary to build models that do update themselves in near-real-time. What will those things be doing? Will it be thinking? I’m not sure it will, but I’m also not sure how I’d justify that.

At some point, some combination of a firm being bold/unscrupulous enough to make big claims about “thought” and a technology flexible enough to give a very, very convincing performance of “thought” is going to force us to figure this out. We should get started now.

We’re Going to Tinker With the Contours of IP When We Need to Do Automated Luxury Communism. Again.

Sweeping across the country with the speed of a transient fashion in slang or Panama hats, political war cries or popular novels, comes now the mechanical device to sing for us a song or play for us a piano, in substitute for human skill, intelligence, and soul.

John Philip Sousa, “The Menace of Mechanical Music,” 1906

Taking the second revolution [that of information technology] as accomplished, the average human being of mediocre attainments or less has nothing to sell that it’s worth anyone’s money to buy.

The answer, of course, is to have a society based on human values other than buying and selling.

Norbert Wiener, “Cybernetics,” 1948

The copyright lawsuits targeting AI content generation have arrived. Getty Images is suing Stability AI, for example, and a group of independent artists represented by Matthew Butterick (who designed my favorite font) is going after Stability AI and Midjourney, along with the Copilot code generator.

It’s easy to feel sympathy for the plaintiffs in these cases. The creators of AI image (and text) generators are large, well-funded tech companies. They have created a potentially extraordinarily lucrative product by building on the work of millions of artists and writers, all without a cent of compensation. Common sense, and the larger legal framework of copyright to which we’ve become accustomed, suggests that can’t possibly be fair.

And yet, as someone who kept a close eye on the legal and cultural ferment of the so-called “copyfight” some twenty years ago, I have my doubts about the ability of Intellectual Property (IP) as a tool to protect human creativity in the face of ever-accelerating machine-aided reproduction (and now, perhaps, even a sort of machine-aided production) of culture.

First, let’s just note that the threat to human creators from AI text/image/music generators isn’t really so different from the threat posed by the kind of image/music/speech recording that we now consider mundane. I don’t have to hire a band to play music for my workout; I can just put in my earbuds and queue up what I want to listen to on the streaming music service of my choice.

Streaming music services are, in a sense, the final end state of the IP wars of the early twenty-first century. They represent a version of the “universal jukebox” that was the dream of the IP holders of the time. I pay a flat fee, and I get most recorded music, available to listen to at the time and place of my choosing. Rights holders still make money. Artists, in theory, still make money.

I say “in theory” because it’s been well-documented that it’s pretty damn hard for artists to make a living off of streaming services. Still, I would guess that’s something like the solution Getty would like to see for AI image generation. Fees paid to rights holders for AI image generation, just like Spotify pays rights holders for musical reproduction.

It’s not that simple, of course. The way Machine Learning models work makes any kind of payout to individual artists for the use of their images difficult. Machine Learning models are designed to “generalize” from their inputs, learning something about how people draw cats or take photographs of rainy streets from each piece of training data. Ideally, the model shouldn’t memorize a particular piece of training data and reproduce it verbatim. Thus, it becomes very tricky to trace which artist to pay for any particular image generated. A model like a streaming service, which pays individual artists when their work gets streamed, doesn’t seem possible. About the best you could do is pay an institution like Getty to train the AI model, and then Getty could (in theory) make a flat payout to everyone in the collection.

The alternative model we proposed twenty years ago was to loosen copyright protection, allow for much more fluid sharing of creative content, and trust that artists would find some way to get their audiences to support them. Give the CD away and sell a t-shirt or whatever. This model never flourished, though some big names made it work. That’s part of how we got streaming services.

In the end, neither strict intellectual property (in which every piece of training data is accounted for and paid for) nor loose intellectual property (in which AI can train on whatever it likes for free) solves the problem of supporting creativity. This is largely because human creativity is naturally overabundant. People will create given even the slightest opportunity. Recording (and now generating) technology sharpens the problem, but the use value and market value of creativity have always aligned spectacularly poorly.

If we want human creativity to flourish, we should work on broadening social support for health care, for housing, for education. Build that, and people will create with AI, without AI, and alongside AI. Leave it aside, and no exactly-right IP protections will nourish creativity.