Thoughts on generative AI

I'm going to argue in this piece that using AI models to generate text meant for humans is good professionally, but not creatively.

The most extreme interpretation of my opinion is that a liberal reliance on AI-generated text is bad for people at work and play.

I'm going to talk mostly about generating prose which is longer than a couple of sentences. I'm not really going to talk about AI models that generate software or code, images, movies, etc.

Lastly, I talk mostly about knowledge workers in the tech industry.

It's just autocomplete

People have dismissively called text generative AI "autocomplete". There's actually some truth to that, but it feels a little like calling both a skateboard and a Land Rover "four-wheeled vehicles". Technically true, sure.

When an AI model generates prose it is:

using its predictive powers to decide what the next word should be. When generating longer pieces of text, it predicts the next word in the context of all the words it has written so far; this function increases the coherence and continuity of its writing. (source)
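
To make that concrete, here's a minimal sketch of that loop in Python. The `model` object and its `most_likely_next` method are hypothetical stand-ins, and real models sample from a probability distribution over sub-word tokens rather than always picking the single most likely word - but the loop has the same shape.

```python
# A minimal sketch of autoregressive text generation, assuming a
# hypothetical `model` object. Real models sample from a probability
# distribution over sub-word tokens rather than whole words.

def generate(model, prompt_tokens, max_new_tokens=100):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # The model conditions on everything written so far...
        next_token = model.most_likely_next(tokens)
        # ...which is what buys the coherence and continuity
        # described in the quote above.
        tokens.append(next_token)
    return tokens
```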

I have heard a number of people praise text generative AI models, like ChatGPT or Claude, because the models allow them to produce more written content, more quickly, in their professional lives.

I've heard people talk about how embedding generative AI into more aspects of professional life (like giving/receiving feedback, developing product requirements, giving status updates on work in progress) will be fantastic.

My response ranges from very positive ("good, humans shouldn't spend their time doing that") to very negative ("you are describing a dystopia"). And I wanted to dig more into that.

The good: communicating across domains and levels of expertise

Look, not all writing has to be creative.

Some kinds of writing are better when they're predictable.

For example, I use ChatGPT to write emails to customer support teams, for hotel and restaurant reservations, and for the hundred other 1-5 message interactions that accumulate over daily life.

If I am doing a transaction with a company, but talking with a (very busy) person who merely represents that company - having boring, concise text with all the necessary information is great for everyone.

I have a set pattern: greeting, problem, context, desired resolution, farewell.

For example: I had to book a hotel for immediately after my wedding. We really wanted a room with specific features and dimensions that only ~50% of the rooms at the hotel offered. So I said to ChatGPT something like:

I am booking a hotel for my honeymoon at Hotel X for N nights between $START_DATE and $END_DATE, and we want a room which has both $FEATURE_ONE and $FEATURE_TWO.  Their website says some (but presumably not all) rooms have both of these features.  Please write a polite but informative email to their booking team to enquire about the availability of these rooms on these dates, mention it's our honeymoon, and politely request a possible discount 

And you know what, reader? We got that hotel room, at a great price, with the room we wanted.


Anyway, back to building software.

Having a predictable format at the technical/non-technical boundary is just as useful as being able to talk to a hotel.

Non-technical folk need to know the status and progress of technical work, and what it means for "what's getting done, and when?". This is a reasonable expectation, and I have seen incomplete or inadequate answers cause frustration or, worse, blank-filling ("even if there are problems, it sounds like the initial deadline is going to be met").

Technical folk, meanwhile, need to communicate the various burps and gurgles of their current work to non-technical folk. There are endless reasons why work takes unexpectedly longer than planned, and communicating, or deciding to omit, each one in an appropriate and accurate way is hard work.

In situations where you don't have to persuade the reader that your claim is both true and justified, where you are simply broadcasting without expecting a response, boring text is great.

Use generative AI to:

  • take a list of bullet points
  • tell it about your audience (non-technical managers)
  • tell it about your desired format (300-500 words, no introduction or preamble, no more than three sentences per paragraph) - something like the sketch below
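
Here's a hypothetical sketch of that recipe as a reusable prompt template. The function and its wording are my own invention; pipe the result into whichever model or chat interface you use.

```python
# A hypothetical prompt template for the bullet-points-to-update
# recipe above. The function name and wording are illustrative only.

def status_update_prompt(bullets: list[str]) -> str:
    points = "\n".join(f"- {b}" for b in bullets)
    return (
        "Turn these bullet points into a written status update.\n"
        "Audience: non-technical managers.\n"
        "Format: 300-500 words, no introduction or preamble, "
        "no more than three sentences per paragraph.\n\n"
        f"Bullet points:\n{points}"
    )

# Example usage with made-up project notes:
print(status_update_prompt([
    "migration to the new billing service is roughly 80% done",
    "blocked on access to the staging database",
    "revised estimate: end of next sprint",
]))
```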

Copy-paste your text into Notion, give it a proof-read, send a link to your team, go about your day.

Hope that no one asks AI to summarise the summary into a list of bullet points.

I think AI can help us give people what they actually want and justifiably need, with less human effort and attention.

The machine of efficiency grinds on, and we're all happy.

A Magic: The Gathering cautionary tale

I know I said I was mostly talking about text, but I want to talk about art for a second.

In early 2024, there was outrage at Wizards of the Coast, the company that owns Magic: The Gathering, a very popular trading-card game.

The company released promotional art for special releases of some cards in an upcoming set. The marketing art used to promote these cards (not the art on the cards themselves) was made with generative AI.

The impact of this situation was probably made greater by Wizards i) having a no-AI-art policy for their MTG cards, and ii) initially doubling down on the claim that this was human-created art.

I think the impact, and backlash, was so great because people were primed with strong pro-human-made-art sentiment. People got especially angry because they were lied to about the use of generative AI.

One of the strongest anti-AI points I hear bandied around is "I don't want content that wasn't written by a human". This is a viewpoint shared by Nerd Culture and Art Culture.

By having humans at the centre of the doing, the end-result is better. Or alternatively, when we take humans out of the doing, we are depriving humans of the chance to experience, and be paid for, their skilled creative endeavours.

The bad

Pragmatically speaking, people do not care about every bit of text. A robot wrote or translated the instruction document for my boiler? Cool. A customer-service agent used AI and/or text snippets to send me a detailed piece of text with less effort and time? Sounds great, cool.

In the above examples, you can swap out the person on either side of the transaction and the end result is basically the same. This works great for transactional, probably professional, situations.

When you have a captive, generous, non-interrogative reader, who wants to accept what you're saying - this text is great.

But really, how often can we assume that the thing we're writing or saying will generate no follow-up actions, no questions, no knock-on effects?

If that is the case, how different is a world where the text was read from a world where the text was never written in the first place?

Was the thing worth writing?

No, I'm serious. I think this is my barometer for when generative AI is cool: is it worthwhile to spend the time to have the text?

If you're doing a thing only to have it done, for the end-result alone, you're doing a performative thing. Cal Newport might call this Pseudo Productivity:

The use of visible activity as the primary means of approximating actual productive effort (source)

Ideally we only do things that matter, and we engage with everything we do. I think that's a big ask of everyone, all of the time.

But I think the sheer ease of using generative AI to write text makes it much easier to treat everything as performative, or transactional, or un-interrogated.

Communicating effectively is hard work. I'm trying to do it now, and I think I'm doing a bad job.

Presenting truthful information concisely is a skill. See also: PowerPoints and data tables or charts.

It is hard work.

The only real way to improve this skill is to practise it.

Practise taking a thing you want to say and making it legible to someone else.

Practise not over-simplifying.

These things simply don't matter when the stakes are low, when the performance of doing is more important than the quality of the result, or when the text is a one-and-done interaction.

But for the important things - long-lived projects, or projects with a lot of different parties involved - I think the benefits of generative AI (easy-to-generate, very predictable text) are answers to unimportant questions.

The important questions are: do you understand what I just said? What does this mean for you or the people you represent?

But then what?

Let's imagine a world where all text is written by (and presumably for) AI models.

We will just give ourselves more work to do. We'll think we're clever having a machine write our 500-word updates in 10% of the time it used to take. But we'll fill that 90% up immediately. See also: kitchen appliances in the 1950s.

If you've invented a generative AI model that can say exactly what you want, exactly how you'd say it - and can instil that exact understanding in your reader - then you have invented magic.

You have made a world-ending technology or a psychic.

That thing would have to be trained on a corpus of text which does precisely that. And that corpus does not exist.

Until then, I am going to think: how worthwhile and important is it that my humanness is spent making this thing understood?
