
AI writing tics impair engagement: A study

The web has strong opinions about what “AI-authored” content looks like, and even stronger ones about what’s wrong with it. Scroll through any content marketer’s LinkedIn feed and you’ll find confident claims that em dashes and other AI “tells” are a sure sign of bad, automated writing.

The problem with these arguments is that they tend to confuse taste with performance. Something as subjective as “bad writing” will never be judged consistently. If the goal of content marketing is to communicate clearly and compete in search, the practical question is: which LLM writing patterns actually turn readers off?

To find out, we analyzed a large dataset of content marketing pages to see which AI writing “tics” appear most often, which ones correlate with readers leaving, and which ones we might be calling out for no reason.

How we built our ‘AI tics’ corpus

At this point, you’ve probably seen them all, too:

  • “In today’s fast-paced digital environment…”
  • “It is important to note that…”
  • “Not only… but also” (again, and again…)
  • “In conclusion” (even when there is no conclusion)

The second you notice them, it’s hard not to see them everywhere an LLM helped produce the copy. Many readers report that they hate these LLM patterns. But how do they really affect user engagement?

To find out, we compiled a list of the most common AI tics that we and others keep seeing. The list includes:

  • “Not only… but also” constructions: “X not only does Y, but also Z.”
  • Sentences beginning with “then,” “this,” or “that”: “Then you should…” “This shows…” “This means…”
  • Intro filler: “In this article,” “We will explore,” and “Let’s take a look.”
  • “Conclusion” openers: “In conclusion,” “In the end,” or some AI equivalent of clearing your throat.
  • Em dashes: the most infamous punctuation mark in today’s content marketing.
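A tic counter like the one described above can be sketched with a few regular expressions. This is a minimal, illustrative version: the pattern list, pattern names, and the per-1,000-words normalization shown here are assumptions, not the study’s actual implementation.

```python
import re

# Illustrative regex patterns for a few common AI "tics".
# These rules are simplified stand-ins, not the study's exact definitions.
TIC_PATTERNS = {
    "not_only_but_also": re.compile(r"\bnot only\b.{0,80}?\bbut also\b", re.I | re.S),
    "this_that_opener": re.compile(r"(?:^|[.!?]\s+)(?:This|That|Then)\b"),
    "intro_filler": re.compile(r"\bIn this article\b|\bLet's take a look\b", re.I),
    "conclusion_opener": re.compile(r"\bIn conclusion\b", re.I),
    "em_dash": re.compile(r"\u2014"),  # the em dash character itself
}

def tics_per_1000_words(text: str) -> dict:
    """Count each tic pattern, normalized per 1,000 words of text."""
    words = len(text.split())
    if words == 0:
        return {name: 0.0 for name in TIC_PATTERNS}
    return {
        name: len(pattern.findall(text)) * 1000 / words
        for name, pattern in TIC_PATTERNS.items()
    }
```

Normalizing per 1,000 words matters because raw counts reward short pages; the sections below explain why even that wasn’t enough.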

From there, we created a dataset of:

  • 10 domains of varying size and monthly traffic, across a wide range of industries including technology, ecommerce, health, education, and more.
  • 1,000+ URLs for content marketing, built with a mix of workflows including posts that were either fully human-written, co-written by humans and AI, or completely AI-generated.

We then normalized our dataset by:

  • Leveling short posts and cornerstone content by capping every piece of writing at its first 1,000 words. Since long articles naturally contain more of, well, everything, a 3,000-word guide can look “tickier” than a 600-word post simply because it has more sentences.
  • Excluding any page under 500 words. Very short pages don’t give stylistic patterns enough room to emerge, and their engagement metrics are likely driven by search intent rather than writing style alone.
  • Prioritizing engagement rate as the key performance metric. Engagement rate best captures the reader’s first real decision: “Do I stay, or do I go?” GA4 counts a session as engaged once it lasts 10 seconds or longer. While 10 seconds may sound too short to judge whether a post is AI-written, it’s long enough for a user to glance at the introduction, notice awkward or repetitive writing patterns, and scan the headings to decide whether the content feels worth continuing.
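The filtering and capping steps above can be sketched in a few lines. The 1,000-word cap and 500-word floor come from the description; the `url`/`text` schema is illustrative, not the study’s actual data format.

```python
def normalize_corpus(pages: list[dict]) -> list[dict]:
    """Apply the stated filters: drop pages under 500 words,
    then cap every remaining page at its first 1,000 words.
    The dict keys ('url', 'text') are an assumed, illustrative schema."""
    normalized = []
    for page in pages:
        words = page["text"].split()
        if len(words) < 500:
            continue  # too short for stylistic patterns to emerge
        normalized.append({**page, "text": " ".join(words[:1000])})
    return normalized
```

Capping before counting keeps a 3,000-word guide from looking “tickier” than a 600-word post purely by volume.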

Dig deeper: A smarter way to deal with AI content


Why tracking total AI tics wasn’t enough

Our first instinct was to measure the total number of AI tics per 1,000 words and compare page performance.

At first glance, this seemed like a neat way to separate human writing from AI-influenced writing. But the picture was soon complicated by one tic in particular, the em dash, which dominated the dataset and heavily skewed the totals.


The issue pointed to a bigger problem: AI tics are messy by definition. AI models are trained on human writing. So if certain patterns appear often, that doesn’t make them uniquely “AI.” It may simply mean they are common in English prose.

For comparison, we ran the same tic counter on two known controls. The first was a novel I published in 2021 (which I can confirm was written without ChatGPT, Grammarly, or other AI-assisted tools). It scored an impressive 6.9 tics per 1,000 words.

Next, we ran “Hamlet,” which scored an even higher ≈11.4 tics per 1,000 words. Shakespeare, it turns out, is more “AI-coded” than the average AI-generated blog post.

In both cases, the score was almost entirely driven by the em dash, which appears in abundance in the prose of many human writers as well as in AI-produced copy.

With this in mind, we analyzed each “tell” one by one, still normalizing per 1,000 words. The story became much clearer, and much more useful for writers trying to decide what they should actually avoid.
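Analyzing each tell one by one amounts to correlating a per-tic rate with engagement rate across pages. Here is a minimal sketch, assuming a plain Pearson coefficient (the study does not say which correlation measure it used) and an illustrative page schema:

```python
from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def tic_correlations(pages: list[dict], tic_names: list[str]) -> dict:
    """Correlate each tic's per-1,000-word rate with engagement rate.
    Expects each page dict to carry one rate per tic plus an
    'engagement_rate' key (an assumed schema, not the study's)."""
    engagement = [p["engagement_rate"] for p in pages]
    return {
        name: pearson([p[name] for p in pages], engagement)
        for name in tic_names
    }
```

A strongly negative value for a tic (like the ≈ -0.118 reported below for “Conclusion” headings) means pages using it more tend to see lower engagement rates.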

Dig deeper: How to make your AI-generated content sound human



Which AI tics impact performance

Not all posts are the same, and many different factors contribute to the success or failure of any content marketing page. Perhaps that’s why our data showed that most AI “tells” did not correlate strongly with performance either way.

Anything within a ±0.1 correlation is too weak to read much into. A few tics, however, stood out with a greater impact than the others.


‘Not only… but also’ constructions are likely to drive readers away

Sentences built around “not only…” or “not only… but also” stood out with a stronger-than-average negative correlation with engagement rate. While these constructions can add emphasis when used occasionally, the data shows that heavy use is associated with higher bounce rates.

AI-assisted writers and editors should be careful, as many of the AI-generated posts we reviewed leaned hard on these structures. In one instance, we found a single blog post that used “not only… but also” 12 different times.

‘Conclusion’ headings are the clearest red flag

The strongest negative correlation in the entire dataset belonged to headings beginning with “Conclusion,” usually the heading of the paragraph right before the call to action. A very clear AI-style red flag: posts with headings starting with “Conclusion” showed a notable negative correlation (≈ -0.118) with engagement rate.

Since this tic usually comes at the end of a post, it may be that readers quickly skim to these sections before bouncing, or that posts with these formulaic closing headings simply tend to be lower quality on average.

Em dashes don’t hurt engagement

Em dashes were the most common stylistic tic in the dataset. They also produced one of the most surprising results: a small positive correlation with engagement rate.

Despite widespread online chatter treating em dashes as an “AI artifact,” this data suggests they don’t hurt performance, and may even be associated with better engagement. (As someone who really likes em dashes, I found this deeply reassuring.)

A plausible explanation is that writers who use em dashes tend to write descriptive, nuanced sentences rather than short, flat declarations. Those kinds of sentences often appear in longer, more thoughtful content that rewards engaged reading.

That said, this does not mean that em dashes cause engagement. Too much of a good thing is still too much of a good thing. But it challenges the notion that em dashes are the bugbear content marketers make them out to be.

Dig deeper: An AI-assisted content process that goes beyond human-only copy

3 practical takeaways for content teams

Here’s what content marketers can do today.

1. Don’t overcorrect for “AI style”

Google does not hand out SEO penalties simply for “AI style.” Many of the phrases we examined showed no relationship with engagement at all.

Don’t rewrite content just because someone declared a phrase “AI writing.” Write for the reader’s clarity and convenience above all else.

2. Be careful how you wrap up

Clear conclusions are not bad, but familiar, formulaic closing patterns are likely to drive readers away.

Consider weaving your takeaways into the analysis itself, varying your closing headings, or adding new value at the end, instead of defaulting to the obvious “Conclusion” structure.

3. Use punctuation that makes sense

If your style calls for em dashes, use them. In this dataset, they were actually associated with better reader engagement.

Don’t miss the forest for the fake plastic trees

AI is likely here to stay in content workflows. But the problems with “bad” AI writing aren’t really about grammar and punctuation. While we all have our own ideas about style, we should be wary of treating hot takes about style as rules.

Write text that matters. Put readers first. And don’t panic every time someone on Twitter or LinkedIn decides that “X phrase = AI.”

Contributing writers are invited to create content for Search Engine Land and are selected for their expertise and contribution to the search community. Our contributors work under the oversight of the editorial staff, and contributions are checked for quality and relevance to our readers. Search Engine Land is owned by Semrush. The contributor was not asked to make any statement, direct or implied, about Semrush. The opinions they express are their own.
