Why I don't use AI for communication

The motivation behind a personal policy I've adopted.

I have been excited by how AI has helped me since adopting GitHub Copilot back in 2023. Since then, it and other generative AI tools have become an indispensable part of my work.

But the more time I spend with these tools, the more uncomfortable I feel about certain classes of use case, to the point that over the last few months I’ve adopted a fairly hardline policy about how I use AI for personal communication.

What counts?

I will not use large language models to write or review:

  • Informal messages addressed to a person or group: Slack messages, WhatsApp messages, DMs, emails, blog posts.
  • High-impact or sensitive formal content: job applications, performance reviews, formal emails.

Why?

It’s not my bottleneck

In my day job, I look to AI for ways to optimise parts of my work that are sometimes the bottleneck, including some types of coding, debugging, log trawling, and testing. But I have never felt so snowed under by the amount of documentation I need to write, or the number of DMs I need to reply to, that I want to reach for an automation to help.

This would not necessarily be the case for other disciplines - say, a GP or an estate agent or a product manager - or even for another software engineer in a different working environment. As such, I will never judge someone who uses generative AI in their communication in ways that I’ve decided not to, if their unique circumstances mean they are seeing benefits that I don’t.

For me, though, I just don’t see a big personal upside from adopting AI for my written communication.

It’s too risky

Large language models work best when they stay close to their training data. As such, they have a subtle but definite tendency to pull uniqueness or novelty out of writing: either the resulting message gradually becomes meaninglessly generic, or it loses its nuance and becomes progressively more extreme.

In businesses with significant IP or highly specialised technology, or with difficult and specific political circumstances, the communication taking place will be semantically “farther” from anything in a large language model’s training data, and the cost of a miscommunication can be very high. In that context, using any potentially distorting filter to mediate communication creates an enormous risk, one that cannot possibly be balanced by the payoff of a slight reduction in admin.

This effect is particularly bad when teams rely too heavily on AI summaries. For example: new ideas from an exciting R&D meeting are so bungled by the AI that they end up shelved despite their potential, or a tactful and considered pushback is compressed to the point of sounding genuinely rude and sets off a big internal dispute.[1]

I want complete control over how I move through the space of communication. I don’t want my thoughts subtly pulled sideways, especially by something that is supposed to help, and especially by something whose behaviour we don’t fully understand, because it is inherently probabilistic and because most LLMs’ training data is closed-source. It’s hard enough working with my own biases!

It feels disrespectful

Most of us have an intuitive feel for AI-generated text. Sometimes there’s a smoking gun (bullet points, too many emoji, “It’s not just X–it’s Y”), but sometimes it’s just a feeling that we can’t put into words. Usually we can’t know for sure if someone used a GPT until we ask them.

It can feel accusatory to ask. The writer might feel guilty that they used it, even if they had no reason to. They might have a different educational or linguistic background and worry that they won’t be taken seriously unless they use it (in which case it’s the environment’s fault, not the writer’s!). Or they might not have used a GPT at all, and then you’ve accused their own natural tone of sounding like a robot!

This is a whole area of etiquette that we haven’t really worked out yet, and the resulting confusion and suspicion erodes trust.

I also feel that if I outsource a personal message to a bot, I am implicitly saying that I didn’t value that person enough to take the time to write to them myself. It’s like bringing a shop-bought cake instead of baking one: in the right circumstances this is totally socially acceptable, but you need to read the room. I haven’t yet found a good way to read the room for chatbot comms.

It misses the point

I often walk when I could drive or take a bus.

If my primary goal is to be as efficient as possible with my time, this behaviour makes no sense; I should save the time by driving, then use that time for other things. But I also value the exercise and the fresh air and the sights, the ancient oak tree on the corner, the positive impact on my local environment, the chance of running into someone I know, the warmth of the sun or the freshness of the wind. So I walk.

Generative AI offers linguistic shortcuts like the ones we are already familiar with in the physical world. And, as in the physical world, it would be a mistake to assume that the final artifact (the words) is the only result of the process of writing something.

I prefer writing to be manual because I value the struggle of wrangling my chaotic, unstructured thoughts into a coherent argument. Putting my thoughts down in a way that really captures them is sometimes hellish (this blog post took me months) but once I’ve gone through the purgatory, my thoughts in that area become more structured, and that structure stays useful for much longer than whatever I actually wrote.

For me, using any software to automate this part of communication is like driving a route I’d normally walk. It misses the point; it abstracts away something I really value.[2]

Will I ever change?

Probably. Models might lose their regression-to-the-mean effect, the etiquette of bot use might become more standardised, and I might find better ways to structure my thoughts than writing.

Or, if communication through AI-mediated longhand becomes the new standard, it might become my bottleneck and I might be forced to use it to keep up.

This is not a hard-and-fast forever rule. I’ll keep reviewing it from time to time to see if it still makes sense. Maybe it will date very quickly, like everything else in this unbelievably fast-moving situation.


  1. This is not so much AI’s fault as an overblown idea of what is physically possible in language. Just because AI is a computer, and computers do some things better than humans, it does not follow that AI can accurately reduce a 5,000-word document to 500 words and preserve all its nuance. (The obvious joke about English essays is left as an exercise for the reader.)

  2. There is a counter-argument here. If my walk were down a motorway in a thunderstorm, or through a notorious back alley at 2am, I’d probably elect to drive. Where there are equivalents to this in the world of communication, it might indeed make sense to use AI. Though, as I explained above, I don’t really see them in my line of work.