Introduction
Lots of people are starting to use AI more, and AI-generated content is coming up in our social-media feeds more often. LinkedIn, Threads, Facebook: basically any long-form post can be created with AI, and because so many posts have suddenly started using the same AI patterns, it's very easy for a reader to tell.
I think highlighting this is important because it can make the writer seem like they haven't come up with the thoughts themselves, and can make the text seem disingenuous, so readers won't want to read the information.
There are plenty of good reasons to use AI in long-form writing: adding definitions or background for readers who aren't clued in (e.g., "dropping tables is a term used to describe…"), creating structure for a writer's thoughts (which is what readers are looking for), and catching spelling mistakes and cultural gaffes.
If the text becomes too "AI" in style and substance, readers won't read, won't engage, and may not believe the thoughts are actually the author's. Why read your post if I can just get my local AI to tell me the same thing?
When using AI you need to make sure that the result is written in your style, that you are adding independent thought, and that you don't sound like an AI, in order to keep people engaged. Here are some of the things to look out for, and some things to avoid, if you are using AI as a helper tool for your writing.
So, you have your traditional em-dash, and icons instead of bullet points, but here are some of the other giveaways (that I have observed) that you should be editing out of your writing.
Example 1: Paragraph in an article
Prompt
Create a paragraph of text that explains why people like seeing the Eiffel tower, be emotive
Response
People don’t just go to see the Eiffel Tower—they go to feel something. It rises out of the city in a way that feels almost unreal, something you’ve seen a hundred times in photos suddenly standing right in front of you, larger, heavier, more alive than you expected. There’s a kind of quiet awe in that moment, where reality catches up with imagination. It represents more than a structure—it’s history, art, romance, and memory all layered together. For many, it carries the weight of dreams: a first trip, a shared moment, a version of life they’d pictured before they arrived. And when you finally stand beneath it, looking up as it stretches into the sky, it’s not just about what you’re seeing—it’s about everything that moment means.
AI Indicators
1. Overly flowery wording that doesn't add value. "It carries the weight of dreams" seems really disjointed in the flow of the text and doesn't really make sense.
2. AIs love a juxtaposition. That's when we begin with one thing and then flip it on its head: "This isn't just X. It's Y.", "Something is now X, not Y."
In this example: "it's not just about what you're seeing—it's about everything that moment means".
3. The rule of three is a classic writing and rhetoric technique where ideas are grouped into threes, because people find them easier to process and remember, and tend to find them more satisfying. In this example it is "larger, heavier, more alive than you expected", which builds on the emotional wording by using a group of three phrases.
You’ll see this repeated more explicitly in later examples.
Example 2: LinkedIn Post
Prompt
Make a concise and snappy LinkedIn post about how middleware architecture is becoming a financial risk for banks
Response
Banks built middleware to connect systems.
But in the AI era, that same architecture is starting to look like a financial risk.
Layers of integrations, duplicated data, and delayed reporting create friction exactly where banks can least afford it:
→ liquidity visibility
→ risk monitoring
→ response time during a stress scenario
Traditional middleware was designed to move information. AI is designed to understand it. That changes everything.
Instead of routing data through endless integration layers, AI can interpret fragmented systems directly, surface insights in real time, and reduce the operational drag sitting between decision-makers and critical risk signals.
The banks that move fastest over the next decade may not be the ones with the biggest technology stacks.
AI Indicators
4. A sudden overuse of specific technical terminology, e.g. "liquidity visibility", that is only really seen in reports, not used by people day to day. This is the same behaviour as the "flowery wording" in Example 1.
5. Juxtaposition again. In this example: "Traditional middleware was designed to move information. AI is designed to understand it." This type of juxtaposition contrasts the behaviour you are currently portraying with the behaviour you should be adopting.
Another example of this type of phrasing might be “a worker sees the immediate customer need. A business owner sees the gap in the market.”
6. In this example we also see how AI pits topics against each other as good and bad to over-emphasise a point. The style follows this format: "Most X are treating Y as Z. That's a mistake."
Here we have "Traditional middleware was designed to move information. AI is designed to understand it. That changes everything.", where "That changes everything" is the emphasis.
Another example I have seen is: “Most technical writers are using AI as a two-minute article writer to convey messages. That’s a mistake.”
7. AI tools can't help but be overly verbose, with the same points repeated. The paragraphs just go on and on and on and on.
Paragraphs like “Instead of routing data through endless integration layers, AI can interpret fragmented systems directly, surface insights in real time, and reduce the operational drag sitting between decision-makers and critical risk signals.” are overly verbose. Readers get to the end and don’t really see a point to what is being said.
8. The rule of three in this example is the bullet point list.
Example 3: Write me an email
Prompt
Write an email explaining how IBM MQ can be implemented on containers over virtual machines. This should be no more than three short sections and help make a decision on which deployment type to use. Explain the best approach to moving
Response
Subject: IBM MQ Deployment Approach: Containers vs Virtual Machines
Hi [Name],
As we evaluate the future deployment model for IBM MQ, there are clear advantages to moving from traditional virtual machines (VMs) to container-based deployments, particularly around scalability, resilience, and operational efficiency.
Running IBM MQ on containers enables faster deployment, dynamic scaling, improved portability, and greater automation through platforms such as Kubernetes/OpenShift. Compared to VM-based deployments, containers significantly reduce infrastructure overhead and align more effectively with cloud-native operating models and CI/CD delivery pipelines. While VMs continue to provide stability for legacy workloads, they are typically slower to provision, harder to scale, and more operationally intensive to maintain.
Why this works well
- Faster deployment and environment provisioning
- Automated scaling and service recovery through Kubernetes/OpenShift
- Improved high availability and resilience across environments
- Reduced infrastructure overhead compared to dedicated VMs
- Better alignment with cloud-native and AI-driven operational models
- Easier integration into CI/CD pipelines and modern development workflows
- Greater portability and consistency between development, testing, and production
The recommended approach is a phased migration rather than a full replacement. Existing queue managers can remain on VMs initially, while new services, non-production environments, and lower-risk workloads are introduced onto containers first. This creates a controlled transition path, allowing performance, HA/DR, security, and operational processes to be validated incrementally before wider adoption.
Best regards,
[Your Name]
AI Indicators
9. Another giveaway is that random parts of the text are made bold for emphasis mid-sentence.
10. Headings that are very formulaic. If I wrote you an email asking your opinion on something, and your response includes a heading like "Why this works well", "Why this is problematic", or "Why this is…", then I know you've used AI. Real people don't talk to each other like that.
This type of heading gives the impression, whether true or not, that the writer hasn't thought through the problem themselves but has jumped straight to AI. To the reader it feels like the equivalent of "I googled it and this is what I saw on Google." I can do that myself.
11. AI doesn't understand the level of formality or tone for a given situation. An email that is very formal and businesslike, with caveats all over it, comes across as completely ignoring the personal relationship, especially if you know the customer or have been speaking to them for a while.
For example, an email someone sent recently had very formal text, even though previous emails were not formal and we had been working with the customer for the last six months. It felt inauthentic.
12. We again see consistent use of the rule of three throughout, e.g., "scalability, resilience, and operational efficiency".
How to avoid sounding like AI
Let's ask AI, shall we?
Prompt
What advice would you give to avoid sounding like an AI? Condense the output into a bullet point list
Response
· Pick one clear point instead of listing three vague ones
· Avoid “not X, but Y” phrasing—say it directly
· Use concrete language (things you can picture) instead of abstract terms
· Write to one person, not a generic “audience”
· Vary sentence length—don’t let everything flow too smoothly
· Cut repetition—remove sentences that don’t add new meaning
· Dial down certainty—write like you would speak (“I’d suggest…”, “In most cases…”)
· Match tone to context (email vs post vs report)
· Use real examples instead of high-level summaries
· Allow slight imperfection—don’t over-polish every sentence
New Prompt
I reran the prompt "Create a paragraph of text that explains why people like seeing the Eiffel Tower, be emotive." but added a "Constraints: …" section containing the bullet list above.
New Response
When you finally stand in front of the Eiffel Tower, it hits you how solid it is. You’ve seen it a hundred times before on screens, but being there is different—the metal, the scale, the way it fills your view when you look up. I remember standing underneath it at dusk, watching the lights start to flicker on while people around me just stopped and stared. No one was rushing. It’s that moment people are really there for, I think. You’re not doing anything special, just standing in a busy city, but it sticks with you because it feels real in a way the photos never quite capture.
My advice
The AI advice focuses on hiding the fact that the text was written by AI. I think this is harder than it sounds, because it assumes a good understanding of what is being written, and that is hard if you have started directly from an AI-written post.
The new response is much better and definitely feels less like AI. I could well have read it and not assumed it was an AI post at all.
My own personal views are:
· Don’t use AI to write the content. Use it for researching, clarifying, or rewording what you have already written.
· Use AI where it makes sense, e.g., to add a technical statement that is relatively well defined, such as "What is a message bus".
· Always start with a structure, the content per heading, the audience, and the purpose of what you are trying to write.
· Always edit an AI-written post.
· Feed the AI tooling examples of your own written work that you are proud of. Ask it to mimic your style, tone, phrasing. (But make sure your AI tool is not trained on your data).
· Make sure you check for localisms, e.g., as someone from the US, calling crisps "chips" for a UK audience, or vice versa.
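As a final editing aid, many of the tells above are so mechanical that you can grep for them yourself. Here is a toy Python sketch (my own purely illustrative heuristic, not any real detection tool) that counts a few of the surface patterns discussed in the examples, so you can review a draft before posting. High counts are a prompt to edit, not proof of anything.

```python
import re

# Toy heuristic: count a few surface patterns common in AI-generated drafts.
# These are illustrative regexes of my own, not a real AI detector.
PATTERNS = {
    "em-dash": re.compile(r"\u2014|--"),
    "contrast ('not just X, it's Y')": re.compile(
        r"\bnot just\b[^.!?]{0,60}\bit'?s\b", re.IGNORECASE
    ),
    # Arrow (→) or icon (• ·) bullets at the start of a line
    "arrow/icon bullets": re.compile(r"^\s*(?:\u2192|\u2022|\u00b7)", re.MULTILINE),
    # Rule-of-three lists like "scalability, resilience, and efficiency"
    "rule of three (a, b, and c)": re.compile(r"\b\w+, \w+,? and \w+\b", re.IGNORECASE),
    "formulaic heading": re.compile(
        r"^Why this (?:works|matters|is)\b", re.IGNORECASE | re.MULTILINE
    ),
}

def flag_ai_tells(text: str) -> dict:
    """Return a count of each suspicious pattern found in the draft."""
    return {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}

# A small made-up draft that deliberately trips several patterns
draft = (
    "It's not just steel, it's history, art, and memory.\n"
    "Why this works well\n"
    "\u2192 faster deployment\n"
)
for name, count in flag_ai_tells(draft).items():
    if count:
        print(f"{name}: {count}")
```

None of these patterns is wrong on its own (the rule of three is a perfectly good technique), which is why the script only counts them for a human to review rather than passing judgement.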
Conclusion
Whilst AI can be used to help write posts, content, and technical details, it only works if you have something worth saying and worth reading.
AI can be a great tool, but if your post sounds like AI there is a good chance you will end up with no readers, or only inauthentic engagement.
Postscript
I asked Copilot and ChatGPT to turn this article into a LinkedIn post. See if you can spot the AI indicators in the text, and let me know if either would have fooled you! All the text below is AI-generated.
Prompt
Write this Article as a LinkedIn post...
ChatGPT
If a post sounds like AI, people stop reading.
You can usually tell within a few lines now.
The phrasing is too polished.
The structure is too perfect.
Everything sounds like “thought leadership” but somehow says very little.
You start seeing the same patterns everywhere:
• “This isn’t just X. It’s Y.”
• endless rule-of-three sentences
• dramatic wording that doesn’t really mean anything
• oddly corporate phrases no real person says out loud
• paragraphs that keep going without making a point
AI-generated writing has developed a style of its own, and readers are getting very good at spotting it.
The problem isn’t using AI.
AI is genuinely useful for:
- structuring ideas
- clarifying technical explanations
- fixing wording
- catching mistakes
- helping research topics quickly
The problem is when people let AI do all the thinking and all the writing.
At that point, the post stops sounding like a person and starts sounding like every other generated post on LinkedIn.
A lot of AI writing also struggles with tone.
You see formal consultant-style language in casual emails.
Over-explained LinkedIn posts.
Technical summaries pretending to be personal opinions.
It feels off because humans don’t naturally communicate like that.
One thing I’ve noticed:
the more generic the writing becomes, the less people engage with it.
Because readers subconsciously think:
“If this sounds like AI, why wouldn’t I just ask AI directly instead?”
The posts that still work tend to have:
- specific opinions
- real examples
- slightly imperfect phrasing
- shorter, clearer sentences
- actual personality behind them
My own approach is:
- use AI for support, not authorship
- always rewrite outputs in my own voice
- remove repetitive phrasing
- cut anything overly dramatic or vague
- make sure it sounds like something I’d actually say
AI is a great tool.
But if your writing sounds like everyone else using the same prompts, people will scroll past it instantly.
Copilot
There’s been a noticeable shift in LinkedIn posts over the past year.
You can tell when something’s been written by AI.
Not because it’s wrong. But because it feels off.
Same structure. Same phrasing. Same patterns. Once you see it, you can’t unsee it.
And I think it matters more than people realise.
When a post reads like AI:
- it feels less genuine
- it’s harder to connect with
- and you start to question whether the person actually believes what they’re saying
At that point, why read the post at all? You could just ask your own AI and get something similar.
To be clear — AI is useful.
I use it.
It’s great for:
- tightening structure
- explaining technical concepts more clearly
- spotting mistakes you’ve gone blind to
But there’s a difference between using AI as a tool… and letting it do the thinking.
After seeing a lot of this (and testing it myself), there are a few dead giveaways that something is AI-heavy:
1. Overly “polished” language that says very little
Phrases that sound nice but don’t actually add meaning. You read them and think, “what did that actually say?”
2. The constant contrast trick
“This isn’t X. It’s Y.”
“Not just this — but that.”
Once you notice it, it’s everywhere.
3. Everything comes in threes
Lists of three. Descriptions in three parts. Points grouped neatly into three.
Good writing technique… until it’s overused.
4. Randomly formal or corporate wording
You’ll see phrases that belong in reports, not conversations.
Things people don’t actually say day-to-day.
5. Long sentences that go nowhere
You get to the end and realise the point could’ve been said in half the words.
6. Sudden shifts in tone
Part thought leadership post, part technical summary, part sales pitch… all in one.
7. Formulaic headings and structure
“Why this matters”
“Key takeaways”
“Why this works well”
Real people don’t write messages like that to each other.
The tricky bit is that AI can also help you hide these patterns.
But that only really works if you already understand the topic and have something to say.
So my rule of thumb is pretty simple:
- Don’t use AI to write your content from scratch
- Start with your point, your structure, your audience
- Then use AI to sharpen it — not replace it
- Always edit (properly edit, not just skim)
- And make sure it still sounds like you
One thing that helps: feed it your own writing and ask it to stay close to that style. Otherwise you end up sounding like everyone else.
I tried a small experiment with this.
Same prompt, but adding constraints like:
- use concrete language
- avoid contrast structures
- write as if speaking to one person
The result was noticeably better. Less polished, a bit imperfect… but more real.
And that’s kind of the point.
AI is a great tool.
But if your post sounds like AI, people will skim it, disengage, or just move on.
Because ultimately, people are here to hear your thoughts — not a slightly reshaped version of something they could generate themselves.
Curious if others are noticing the same patterns, or if I’ve just been looking at too many posts lately.