There are two reasons people want to humanize AI writing. The boring reason is to slip past an AI detector. The interesting reason is that AI writing, even when it's technically correct, often reads slightly off. Bland. Over-symmetrical. Like a brochure for a city that doesn't quite exist.

Both problems have the same root cause and the same fix. So this guide is going to treat them as one problem.

First, why AI writing reads the way it does

Language models are trained to predict the next token given everything that came before, and decoding favors the high-probability choices. Over billions of tokens, that math averages out into a specific aesthetic. Balanced. Symmetric. Tidy. Slightly elevated diction. Lots of hedging. Lots of parallel structure.

That's also, not coincidentally, exactly how a person writes when they're trying to sound smart and aren't sure of themselves. Which is why so much AI text reads like an undergrad term paper with very good vocabulary. The model is, in a sense, anxious. It wants to be correct, and correctness in writing means being neat. So everything gets neat.

Real human writing is much messier. People drop words. They repeat themselves. They start a sentence one way and finish it another. They put the verb in the wrong place because it sounded right. They use specific weird words instead of safe general ones. Their paragraphs are different lengths for no reason. None of this is a bug. It's how voice works.

The patterns to break

Here are the patterns we look for first when we're editing AI text. These are the ones that move the needle the most.

1. The "it's not just X, it's Y" construction

This is the single most overused move in AI writing. Some variant of "this isn't just about A, it's about B." The model loves it because it sounds insightful without committing to anything specific. Once you start noticing it, you'll see it in every other paragraph.

Cut it. Pick one of the two halves and say that. If A is the real point, say A. If B is, say B. The hedge is what's making the writing feel hollow.

2. Parallel triplets

"Faster, cheaper, better." "Discover, explore, transform." "Designed, built, and tested." The triplet rhythm is everywhere in AI text, partly because it's a real rhetorical pattern in good English, and partly because the model uses it as a default whenever it wants to fill space.

Humans use triplets too, but rarely. Maybe one every couple of hundred words. If you're seeing them in every paragraph, that's the model. Break them into pairs or singles. Or rewrite the triplet as a short sentence with one specific image.

3. The em dash everywhere

Em dashes are not inherently AI-coded. Plenty of human writers use them. The difference is volume. A human writer who likes em dashes might use one or two per essay. AI text uses three per paragraph.

We wrote a whole post on this called Em Dashes, "Delve", and the Other Dead Giveaways of AI Writing. The short version: replace most of them with commas, periods, or just delete the clause they're separating.
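If you want a quick way to spot the problem before you start replacing anything, a few lines of Python can count em dashes per paragraph. This is a rough sketch, not a product: the threshold of two per paragraph is an assumption based on the volume argument above, not a hard rule.

```python
def flag_em_dashes(text, threshold=2):
    """Return (paragraph_index, count) for paragraphs with too many em dashes.

    Paragraphs are assumed to be separated by blank lines. The threshold
    is a guess; tune it to your own tolerance.
    """
    flagged = []
    for i, para in enumerate(text.split("\n\n")):
        count = para.count("\u2014")  # the em dash character
        if count >= threshold:
            flagged.append((i, count))
    return flagged
```

Run it on a draft and rewrite the flagged paragraphs by hand; the script only tells you where to look.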

4. The diction lift

Models reach for slightly elevated synonyms by default. "Utilize" instead of "use." "Leverage" instead of "use." "Navigate" instead of "go through." "Delve" instead of "look at." "Tapestry" for anything that has more than one part.

Run a find-and-replace pass for the worst offenders. Pick the simpler word almost every time. The exception is when the specific word adds something the simple word can't, which is rarer than you'd think.

5. The endless qualifier

"This may not work in every case." "It's important to note that." "While this approach has many benefits, it's not without its challenges." All of these are the model covering itself. Strip them out. Real writers commit. They say the thing.

If you're worried about being wrong, say "I might be wrong" in plain English once, at the start. Then say the thing.

6. The bullet listing reflex

Models love to convert prose into bullet lists. It's not always wrong, but it's usually a tell. If you have a passage that could be either prose or bullets, prose almost always reads more human. Bullets work in a knowledge-base article. They make a personal essay feel like a deck.

Convert at least half your bullet lists to prose. Make the connections explicit instead of letting whitespace do the work.

7. Symmetric paragraphs

If every paragraph in your draft is roughly four to six sentences, that's the model's default rhythm. Real writing breathes differently. A one-sentence paragraph in the right place changes the pace of the whole piece.

Like this.

Then a long one that wanders a little. Then a normal one. Vary it. The variance is what makes it feel written rather than generated.
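You can check your draft's rhythm mechanically. The sketch below counts sentences per paragraph with a naive split on sentence-ending punctuation, which is an approximation (it will miscount abbreviations like "e.g."), but it's enough to reveal the four-to-six pattern.

```python
import re

def paragraph_shapes(text):
    """Return a list of sentence counts, one per paragraph.

    Paragraphs are assumed to be separated by blank lines. The sentence
    split is deliberately naive: it breaks on ., !, or ? followed by
    whitespace or end of text.
    """
    shapes = []
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        sentences = [s for s in re.split(r"[.!?](?:\s+|$)", para) if s.strip()]
        shapes.append(len(sentences))
    return shapes
```

If the output is a flat run of 4s, 5s, and 6s, that's the default rhythm this section describes. You want the numbers to jump around.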

What to actually do, in order

Here's the editing pass we run. It takes about ten minutes for a thousand-word draft.

Step one. Read it out loud.

This catches more AI tells than any tool. If a sentence feels like a pamphlet, it's the model. Mark the sentences that feel canned. Don't fix anything yet.

Step two. Cut the qualifiers and the "not X, but Y" lines.

Go fast. Delete or rewrite. The piece is going to get shorter and that's good.

Step three. Replace lifted diction.

"Utilize" to "use." "Leverage" to "use." "Delve into" to "look at." "Journey" to "process" or just delete. Do a real find-and-replace if it helps.

Step four. Break the rhythm.

Find two adjacent paragraphs that are roughly the same shape. Break one of them. Combine two short sentences into one long one. Cut a long sentence into three short ones. Add a fragment.

Step five. Inject one specific detail per section.

AI writing is abstract. Humans use specifics. A number. A name. A weird example from your own life. One specific image per section pulls the whole thing toward feeling real.

Step six. Read it out loud again.

If it still feels off, look for the patterns above one more time. Usually there's a stubborn paragraph that the model wrote and your edits didn't quite reach. Rewrite that whole paragraph from scratch.

What about humanizer tools?

There are a lot of so-called AI humanizers on the market. Most of them work by paraphrasing the text with another language model. Some of them are useful. Many of them swap out one set of AI tells for a different set.

The honest assessment is that no tool will fully replace the read-aloud editing pass. What a good tool can do is the first 70 percent of the work. It removes the obvious lifted diction, breaks the rhythm a little, and gets you to a draft that's worth editing rather than a draft that's beyond saving.

That's what we built Cloak to do. It scores your text on a 0 to 100 scale for how AI-flavored it reads, and then it rewrites with natural cadence. It's the tool we use ourselves on first drafts, and it's calibrated against the actual patterns above, not against a generic "make it sound human" prompt.

But even with Cloak, the rule still holds. If you care about the writing, do the read-aloud pass. The tool gets you 70 percent. You get the last 30. That last 30 is the part readers actually feel.

One more thing

Don't try to write badly to sound human. This is a trap people fall into. They add typos, they break grammar, they put random ellipses everywhere. It doesn't fool readers and it doesn't fool detectors. We wrote about why in Five ChatGPT Detection Myths That Refuse To Die.

Humans don't write badly. They write specifically. They write with a voice. They commit to a position. They use real examples from their actual life.

That's the thing the model can't fake well, and it's the thing that makes writing feel human even when you're working from an AI draft.