Please Stop Moving Fast and Breaking Things: AI Edition

*[Screenshot caption: "Not a bad answer there, ChatGPT"]*

So it looks like Microsoft is already rolling out ChatGPT-based writing tools in Word, and the Bing integration has a waitlist you can join. Both will likely be in full public release within months. Google’s Bard is probably not far behind. ChatGPT’s pay version is now available, at only $20 a month (it initially advertised at $45).

The machine writing revolution is happening very, very fast.

It recalls the infamous Facebook internal slogan “move fast and break things.” Social media certainly deployed very, very fast. We still don’t really understand everything it did, and we still don’t have any sort of Public that can give the format meaningful oversight.

This is a symptom of Siva Vaidhyanathan’s notion of “public failure” (an idea that should have gotten more attention than it did, IMHO), but this is all happening too fast to even go into that right now. It’s a dismal diagnosis, though: without trusted, shared, public institutions (which we really don’t have right now), it’s hard to see how we even develop a framework for what we want to happen with something like social media or LLMs, much less deploy a regulatory framework that would steer toward those wants.

In the meantime, what I want to know is: what’s the rush? It’s not clear to me that some of the failure modes of AI writing that folks are worried about are really all they’re cracked up to be. Yes, LLMs could produce misinformation (dramatic music) at scale, but then, maybe we just need to rate limit things a bit more and confirm authorship a tad. Then again, maybe everyone in the world consulting an AI oracle for information, one that’s known to give bad health advice, is not, like, ideal.
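For what “rate limit things a bit more” could even mean in practice, here’s a very loose sketch of a token-bucket limiter, a standard throttling technique (to be clear: this is my illustration, not anything Microsoft or Google actually deploys, and the class name and parameters are invented for the example).

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills `rate` tokens per second,
    allows bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens added back per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, False if throttled."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A hypothetical per-account limit: bursts of 5, then 1 generation/second.
bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow() for _ in range(10)]
```

Run back-to-back, the first five calls pass and the rest are throttled until tokens refill; the point is just that “at scale” is a dial someone chose to turn up, not a law of nature.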

Honestly, though, I’m not sure we understand enough about what happens when we encourage everyone who uses Microsoft Word or Google Search (i.e. everyone in the United States, most of Europe, and a large percentage of people everywhere else) to outsource a big chunk of writing and thinking to an LLM to even predict how it might go wrong yet. I’m sure that, in the end, this is the sort of change that’s probably not good, probably not bad, and definitely not neutral.

Given that, I return to the question: what’s the rush? What harm could there be in slowing this down for a bit? What will be lost if we don’t roll out AI writing to everyone in the first quarter of 2023? Oh, there are costs to Microsoft’s and Google’s stock prices, perhaps… but who cares?

But that’s exactly what’s in the driver’s seat now. As I quipped on Mastodon “What we’re seeing now in the LLM space is wartime tech adoption. ‘The other side has it! Who cares what the long term implications are, just get it to the front!’ Thing is, it’s a war between Microsoft and Google, mostly over market share and stock price. Whoever wins, we don’t share in the spoils, and we definitely will have to clean up the mess.”

Some have called this an “iPhone moment,” and I think that’s exactly right, in the sense that the iPhone made a giant pile of money for Apple, had exactly zero social benefit (as measured by, say, productivity or similar metrics), and participated in a series of decidedly not-neutral techno-social-media upheavals we still don’t understand.

Why not try to understand first, this time, at least a little? What harm could it do? What grand social problem will go unsolved without LLM writing to solve it? What social benefits will we deny people if LLMs are delayed in their mainstream adoption for a bit? Shouldn’t there be at least some affirmative duty to make that case before we push this out to most of humanity like a software patch?
