Monday, 11 May 2026

The Subtle Signs Your Content Feels AI-Generated

Introduction

with thanks to Peter Reeves and Margarita Lozian for their reviews

Lots of people are starting to use AI more, and AI-generated content is coming up in our social media feeds more often. LinkedIn, Threads, Facebook - basically any long-form post can be created with AI, and because so many posts have suddenly started using the same AI patterns, it’s very easy for a reader to tell.

 

I think highlighting this is important because it can make the writer seem like they haven’t come up with the thoughts themselves, it can make the text seem disingenuous, and as a result readers won’t want to read the information.

 

There are plenty of good reasons to use AI in long-form writing: adding definitions or background for readers who aren’t clued in (e.g., “dropping tables is a term used to describe…”), creating structure for a writer’s thoughts (which is what readers are looking for), and catching spelling mistakes and cultural gaffes.

 

If the text becomes too much “AI” in style and substance, readers won’t read, won’t engage and may not think the thoughts are actually the author’s. Why read your post if I can just get my local AI to tell me the same thing?

 

When using AI, you need to make sure the text is written in your style, that you are adding independent thought, and that you don’t sound like an AI, in order to keep people engaged. Here are some of the things to look out for, and some things to avoid, if you are using AI as a helper tool for your writing.

 

So, you have your traditional em-dash and icons instead of bullet points, but here are some of the other giveaways (that I have observed) that you should be editing out of your writing.

 

Example 1 – Paragraph in an article

Prompt

Create a paragraph of text that explains why people like seeing the Eiffel tower, be emotive

 

Response

People don’t just go to see the Eiffel Tower—they go to feel something. It rises out of the city in a way that feels almost unreal, something you’ve seen a hundred times in photos suddenly standing right in front of you, larger, heavier, more alive than you expected. There’s a kind of quiet awe in that moment, where reality catches up with imagination. It represents more than a structure—it’s history, art, romance, and memory all layered together. For many, it carries the weight of dreams: a first trip, a shared moment, a version of life they’d pictured before they arrived. And when you finally stand beneath it, looking up as it stretches into the sky, it’s not just about what you’re seeing—it’s about everything that moment means.

AI Indicators

1.     Overly flowery wording that doesn’t add value: “it carries the weight of dreams” seems really disjointed in the flow of the text and doesn’t really make sense.

 

2.     AIs love a juxtaposition. That’s when we begin with one thing and then flip it on its head. “This isn’t just X. It’s Y.”, “Something is now X, not Y.”


“it’s not just about what you’re seeing—it’s about everything that moment means”

 

3.     The rule of three is a classic writing and rhetoric technique where ideas are grouped into threes, because people find them easier to process and remember, and tend to find them more satisfying. In this example it’s “larger, heavier, more alive than you expected”, which builds on the emotional wording by using a group of three words or phrases.

 

You’ll see this repeated more explicitly in later examples.

Example 2: LinkedIn Post

Prompt

Make a concise and snappy LinkedIn post about how middleware architecture is becoming a financial risk for banks

 

Response

Banks built middleware to connect systems.

 

But in the AI era, that same architecture is starting to look like a financial risk.

 

Layers of integrations, duplicated data, and delayed reporting create friction exactly where banks can least afford it:

→ liquidity visibility

→ risk monitoring

→ response time during a stress scenario

 

Traditional middleware was designed to move information. AI is designed to understand it. That changes everything.

 

Instead of routing data through endless integration layers, AI can interpret fragmented systems directly, surface insights in real time, and reduce the operational drag sitting between decision-makers and critical risk signals.

 

The banks that move fastest over the next decade may not be the ones with the biggest technology stacks.

 

AI Indicators

4.     A sudden overuse of specific technical terminology (“liquidity visibility”) that is only really seen in reports, not used by people day to day! This is the same behaviour as the “flowery wording” in Example 1.

 

5.     Juxtaposition. In this example: “Traditional middleware was designed to move information. AI is designed to understand it”. This type of juxtaposition attempts to contrast the behaviour you currently have with the behaviour you should have.

 

Another example of this type of phrasing might be “a worker sees the immediate customer need. A business owner sees the gap in the market.”

 

6.     In this example, we see how AI pits topics against each other as good and bad to over-emphasise a point. This style follows the format: “Most X are treating Y as Z. That’s a mistake.”

 

In this example we have “Traditional middleware was designed to move information. AI is designed to understand it. That changes everything.”, where “That changes everything” provides the emphasis.

 

Another example I have seen is: “Most technical writers are using AI as a two-minute article writer to convey messages. That’s a mistake.”

 

7.     AI tools can’t help but be overly verbose with the same points repeated. The paragraphs just go on and on and on and on.

 

Paragraphs like “Instead of routing data through endless integration layers, AI can interpret fragmented systems directly, surface insights in real time, and reduce the operational drag sitting between decision-makers and critical risk signals.” are overly verbose. Readers get to the end and don’t really see a point to what is being said.

 

8.     The rule of three in this example is the bullet point list.

Example 3: Write me an email

Prompt

Write an email explaining how IBM MQ can be implemented on containers over virtual machines. This should be no more than three short sections and help make a decision on which deployment type to use. Explain the best approach to moving

 

Response

Subject: IBM MQ Deployment Approach: Containers vs Virtual Machines

Hi [Name],

As we evaluate the future deployment model for IBM MQ, there are clear advantages to moving from traditional virtual machines (VMs) to container-based deployments, particularly around scalability, resilience, and operational efficiency.

Running IBM MQ on containers enables faster deployment, dynamic scaling, improved portability, and greater automation through platforms such as Kubernetes/OpenShift. Compared to VM-based deployments, containers significantly reduce infrastructure overhead and align more effectively with cloud-native operating models and CI/CD delivery pipelines. While VMs continue to provide stability for legacy workloads, they are typically slower to provision, harder to scale, and more operationally intensive to maintain.

Why this works well

  • Faster deployment and environment provisioning
  • Automated scaling and service recovery through Kubernetes/OpenShift
  • Improved high availability and resilience across environments
  • Reduced infrastructure overhead compared to dedicated VMs
  • Better alignment with cloud-native and AI-driven operational models
  • Easier integration into CI/CD pipelines and modern development workflows
  • Greater portability and consistency between development, testing, and production

The recommended approach is a phased migration rather than a full replacement. Existing queue managers can remain on VMs initially, while new services, non-production environments, and lower-risk workloads are introduced onto containers first. This creates a controlled transition path, allowing performance, HA/DR, security, and operational processes to be validated incrementally before wider adoption.

Best regards,
[Your Name]

AI Indicators

9.     Another giveaway is that random parts of the text are made bold for emphasis mid-sentence.

 

10.  Headings that are very formulaic. If I wrote you an email asking your opinion on something, and your response includes a heading like “Why this works well”, “Why this is problematic”, or “Why this is…”, then I know you’ve used AI. Real people don’t talk to each other like that.

 

This type of heading gives the impression, whether true or not, that the writer hasn’t thought through the problem themselves but has jumped straight to AI. It feels to the reader like the equivalent of “I googled it and this is what I saw on Google.” I can do that myself.

 

11.  AI doesn’t understand the level of formality or tone for a given situation. An email that is very formal and businesslike, with caveats all over, comes across as completely missing the personal relationship, especially if you know the customer or have been speaking to them for a while.

For example, an email someone sent recently was very formal, even though previous emails were not and we’d been working with the customer for the last six months. It felt inauthentic.

 

12.  We again see consistent use of the rule of three throughout e.g., “scalability, resilience, and operational efficiency”

How to avoid sounding like AI

Let’s ask AI, shall we…

 

Prompt

What advice would you give to avoid sounding like an AI? Condense the output into a bullet point list

 

Response

·  Pick one clear point instead of listing three vague ones 

·  Avoid “not X, but Y” phrasing—say it directly 

·  Use concrete language (things you can picture) instead of abstract terms 

·  Write to one person, not a generic “audience” 

·  Vary sentence length—don’t let everything flow too smoothly 

·  Cut repetition—remove sentences that don’t add new meaning 

·  Dial down certainty—write like you would speak (“I’d suggest…”, “In most cases…”) 

·  Match tone to context (email vs post vs report) 

·  Use real examples instead of high-level summaries 

·  Allow slight imperfection—don’t over-polish every sentence

 

New Prompt

I reran the prompt “Create a paragraph of text that explains why people like seeing the Eiffel tower, be emotive.” but added a “Constraints: …” section containing the bullet list above.

 

New Response

When you finally stand in front of the Eiffel Tower, it hits you how solid it is. You’ve seen it a hundred times before on screens, but being there is different—the metal, the scale, the way it fills your view when you look up. I remember standing underneath it at dusk, watching the lights start to flicker on while people around me just stopped and stared. No one was rushing. It’s that moment people are really there for, I think. You’re not doing anything special, just standing in a busy city, but it sticks with you because it feels real in a way the photos never quite capture.

 

My advice

The AI advice focuses on hiding the fact that the text was written by AI. I think this is harder than it sounds, because it assumes a good understanding of what is being written, and that is difficult if you have started directly from an AI-written post.

 

The new response is much better and definitely feels less like AI. I might well have read it and not assumed it was an AI post at all.

 

My own personal views are:

·      Don’t use AI to write the content. Use it for researching, clarifying, or rewording what you have already written.

·      Use AI where it makes sense, e.g., to add a technical statement that is relatively well defined, such as “What is a message bus?”

·      Always start with a structure, the content per heading, the audience, and the purpose of what you are trying to write.

·      Always edit an AI-written post.

·      Feed the AI tooling examples of your own written work that you are proud of. Ask it to mimic your style, tone and phrasing. (But make sure your AI tool is not trained on your data.)

·      Make sure you check for localisms, e.g., as someone from the US, calling crisps “chips” for a UK audience, or vice versa.

Conclusion

Whilst AI can be used to help write posts, content and technical details, it only works if you have something worth saying and worth reading.

 

AI can be a great tool, but if your post sounds like AI there is a good chance you won’t have any readers, or will only get inauthentic engagement.

 

 

Postscript

I asked Copilot and ChatGPT to turn this article into a LinkedIn post. See if you can spot the AI indicators in the text, and let me know whether either would have fooled you! All the text below is AI-generated.

Prompt

Write this Article as a LinkedIn post...

 

ChatGPT

If a post sounds like AI, people stop reading.

You can usually tell within a few lines now.

The phrasing is too polished.
The structure is too perfect.
Everything sounds like “thought leadership” but somehow says very little.

You start seeing the same patterns everywhere:

• “This isn’t just X. It’s Y.”
• endless rule-of-three sentences
• dramatic wording that doesn’t really mean anything
• oddly corporate phrases no real person says out loud
• paragraphs that keep going without making a point

AI-generated writing has developed a style of its own, and readers are getting very good at spotting it.

The problem isn’t using AI.

AI is genuinely useful for:

  • structuring ideas
  • clarifying technical explanations
  • fixing wording
  • catching mistakes
  • helping research topics quickly

The problem is when people let AI do all the thinking and all the writing.

At that point, the post stops sounding like a person and starts sounding like every other generated post on LinkedIn.

A lot of AI writing also struggles with tone.

You see formal consultant-style language in casual emails.
Over-explained LinkedIn posts.
Technical summaries pretending to be personal opinions.

It feels off because humans don’t naturally communicate like that.

One thing I’ve noticed:
the more generic the writing becomes, the less people engage with it.

Because readers subconsciously think:
“If this sounds like AI, why wouldn’t I just ask AI directly instead?”

The posts that still work tend to have:

  • specific opinions
  • real examples
  • slightly imperfect phrasing
  • shorter, clearer sentences
  • actual personality behind them

My own approach is:

  • use AI for support, not authorship
  • always rewrite outputs in my own voice
  • remove repetitive phrasing
  • cut anything overly dramatic or vague
  • make sure it sounds like something I’d actually say

AI is a great tool.

But if your writing sounds like everyone else using the same prompts, people will scroll past it instantly.

Copilot

There’s been a noticeable shift in LinkedIn posts over the past year.

You can tell when something’s been written by AI.

Not because it’s wrong. But because it feels off.

Same structure. Same phrasing. Same patterns. Once you see it, you can’t unsee it.

And I think it matters more than people realise.

When a post reads like AI:

  • it feels less genuine
  • it’s harder to connect with
  • and you start to question whether the person actually believes what they’re saying

At that point, why read the post at all? You could just ask your own AI and get something similar.


To be clear — AI is useful.

I use it.

It’s great for:

  • tightening structure
  • explaining technical concepts more clearly
  • spotting mistakes you’ve gone blind to

But there’s a difference between using AI as a tool… and letting it do the thinking.


After seeing a lot of this (and testing it myself), there are a few dead giveaways that something is AI-heavy:

1. Overly “polished” language that says very little
Phrases that sound nice but don’t actually add meaning. You read them and think, “what did that actually say?”

2. The constant contrast trick
“This isn’t X. It’s Y.”
“Not just this — but that.”
Once you notice it, it’s everywhere.

3. Everything comes in threes
Lists of three. Descriptions in three parts. Points grouped neatly into three.
Good writing technique… until it’s overused.

4. Randomly formal or corporate wording
You’ll see phrases that belong in reports, not conversations.
Things people don’t actually say day-to-day.

5. Long sentences that go nowhere
You get to the end and realise the point could’ve been said in half the words.

6. Sudden shifts in tone
Part thought leadership post, part technical summary, part sales pitch… all in one.

7. Formulaic headings and structure
“Why this matters”
“Key takeaways”
“Why this works well”
Real people don’t write messages like that to each other.


The tricky bit is that AI can also help you hide these patterns.

But that only really works if you already understand the topic and have something to say.

So my rule of thumb is pretty simple:

  • Don’t use AI to write your content from scratch
  • Start with your point, your structure, your audience
  • Then use AI to sharpen it — not replace it
  • Always edit (properly edit, not just skim)
  • And make sure it still sounds like you

One thing that helps: feed it your own writing and ask it to stay close to that style. Otherwise you end up sounding like everyone else.


I tried a small experiment with this.

Same prompt, but adding constraints like:

  • use concrete language
  • avoid contrast structures
  • write as if speaking to one person

The result was noticeably better. Less polished, a bit imperfect… but more real.

And that’s kind of the point.


AI is a great tool.

But if your post sounds like AI, people will skim it, disengage, or just move on.

Because ultimately, people are here to hear your thoughts — not a slightly reshaped version of something they could generate themselves.


Curious if others are noticing the same patterns, or if I’ve just been looking at too many posts lately.

Friday, 17 April 2026

Building MeetMiddle: How I Created a Journey Planning Tool in a Couple of Hours with IBM Bob

Introduction 

In this article I’ll talk about how I built MeetMiddle with IBM Bob (an AI coding assistant). I’ll cover how it turned out, what worked, what didn’t, and my thoughts on why AI acceleration is valuable for personal productivity tools.

 

Each year my family do long journeys up and down the country, and with young kids you can’t drive for hours and hours. Finding somewhere to have a rest and stretch your legs is useful. I can do this today, but I have to search on Google Maps, find the midpoint, and then look for cafes, activities, etc.

 

So, I decided to use IBM Bob to help build a tool! Some colleagues and I had previously written up the idea together but had never implemented it. We wanted to know the best midpoint for multiple friends to meet.

 

With the help of IBM Bob, I built this in just a couple of hours, leaving it to get on with its changes and occasionally jumping back in to see how it was doing, provide some more prompts and test the output.

 

I think this is a great demonstration that AI tooling can help people build the tools they need to speed up the monotonous tasks of life and work.

Setting Expectations

I deliberately did not boil the ocean when creating my prompts and instructions. I was investigating effort vs speed - how quickly could I get something working with minimal input? I let Bob choose the technologies and components and waited to see how it did and what it did.

The Foundation

 

Luckily, we had already designed the application, the method and the benefits, and that gave IBM Bob its specification. So now you can see that I have click-baited you with the 'couple of hours' headline, because a lot of the real effort went into the idea and the method write-up that Bob could follow!

 

My first prompt to Bob:

```

Come up with a solution design for this idea.

Design should be one document

Second document should be a plan of what you would like to do to deliver it

```

The solution design was a 625-line Markdown file, which was very comprehensive. I'll be honest, I didn't read it all, but it looked and sounded right. It had the four architecture layers: presentation, application, integration, and data.

 

Yeah, I know this is a little dangerous - letting the AI off the leash without reading everything it produces. But I was doing this as a fun test to see how far I could get with minimal oversight. In a production environment, you'd absolutely want to review everything carefully. This was more about exploring the "effort vs speed" trade-off for personal tools, which can be less robust and still deliver a cost benefit if it's only me using them.

 

Bob then overlaid data models and their mappings. It also went a little crazy, producing sophisticated algorithms for multi-stage filtering, scoring, etc. That's my bad for not being clearer in my prompt, so I wasted some tokens on that.

 

The second document was a 1000-line to-do list: a detailed set of tasks, with estimated completion dates, covering everything I would need to do to deliver the solution. Again, useful, but at the moment I just wanted something really basic for some holiday planning I'm doing!

 

I wasn't planning to spend 9 months or £800K on this! But having that comprehensive plan meant I could cherry-pick the MVP features and know exactly what to build first.

Building the Backend

 

Alright, so we had the blueprint. Time to start building!

 

Bob set up a project using a Python framework called FastAPI, with folders for models, API endpoints, services, etc. It used SQLAlchemy models for users, events and participants! It also wrote a bunch of security models.

 

I didn't use most of these though, because Bob was still working from the main solution design it had built. I should have told it to strip things back, but I didn't want to mess up the context.

 

Bob told me I needed to fetch a bunch of API keys from places like Google Maps and Google Places. Not sure why... ahh, OK, it's because that's the example we wrote in the idea we gave it! So it created a few mocks instead, which is nice.

 

Also, the Google Maps APIs are expensive and I didn't want to pay for them for a test app, so my next prompt asked whether we could build the backend from scratch.

 

My prompt:

```

Can we do the backend without the google API? could we build from scratch?

```

 

Bob created a service with over 100 UK postcode areas built in. Each postcode area (like M1 for Manchester, SW1A for Westminster) has its real geographic coordinates hardcoded.

 

When you enter a postcode like "M1 2SA", the service:

- Extracts the area code (M1)

- Looks up the base coordinates
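As a minimal sketch of that lookup flow (the table contents, regex and function name here are my own illustration, not Bob's actual generated code):

```python
import re

# Illustrative subset of the hardcoded table; the real service
# Bob generated contained 100+ UK postcode areas.
AREA_COORDS = {
    "M1": (53.4779, -2.2340),    # Manchester city centre
    "SW1A": (51.5010, -0.1416),  # Westminster
    "NE1": (54.9738, -1.6132),   # Newcastle city centre
}

def lookup_postcode(postcode: str):
    """Extract the outward area code from a full UK postcode and
    return its base coordinates, or None if the area is unknown."""
    match = re.match(r"([A-Z]{1,2}\d{1,2}[A-Z]?)", postcode.upper().strip())
    if not match:
        return None
    return AREA_COORDS.get(match.group(1))

print(lookup_postcode("M1 2SA"))  # → (53.4779, -2.234)
```

The upside of this approach is zero external dependencies; the downside, as I found out later, is that every postcode in an area collapses to the same point.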

 

It did the same with restaurants, cafes, bars, hotels and petrol stations. But it completely made the data up, which makes sense if it's starting from scratch rather than fetching data from a service.

 

It used some clever formulas to calculate routes and travel times, e.g., the Haversine formula. I'm not sure what that does, but I've added it to my research list. It also calculated an optimal meeting point between the two locations, but it was as the crow flies, not via roads.
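For anyone else with it on their research list: the Haversine formula gives the great-circle ("as the crow flies") distance between two points on a sphere, which is exactly why the midpoint ignored roads. A minimal Python version (the example coordinates are my own, roughly Newcastle to Derby):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km ≈ Earth's mean radius

# Roughly Newcastle (NE1) to Derby (DE1), straight line
print(round(haversine_km(54.9738, -1.6132, 52.9225, -1.4746)))
```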

 

Then Bob told me it had written a bunch of tests that passed! Nothing like marking your own homework. There was also no UI, boo!



The UI

My prompt:

```

lets run the demo in the UI

```

Bob created a single-page application with HTML, CSS, and JavaScript. It had the app title "MeetMiddle" and used something called Leaflet.js as the mapping library, with a start point (green A), an end point (red B), a midpoint (blue M), a blue line for the route, and interactive pop-ups if you click a place it has found.

 

As I mentioned earlier, I let Bob choose these technologies. I don't have experience with FastAPI or Leaflet.js, but I do know JavaScript, so I might do a follow-up article reviewing how easy it is for me to understand what is happening and how good the code itself is.

 

Users - me - enter a start postcode like NE1 4ST, an end postcode like DE1 2PY, the type of place they are looking for (e.g., "Cafe" or "Petrol station") and the search radius, then select Go. The map shows the route with markers and places to go.


The Quest for Real Data

 

The UI is good, and so is the map for my two endpoints. But when I tried some new postcodes, the markers were off in the sea! What's happening here?

 

Ahh, I'm using mocked and fixed data so it only works with specific postcodes.

 

My prompt:

```

Can we replace the estimated and generated data by doing free internet lookups?

```

 

Bob had a look and found OpenStreetMap, the Royal Mail Postcode Address Finder and Ordnance Survey Open Data, which has 2.5 million free-to-use postcodes. It also found the Open Source Routing Machine (OSRM), which calculates driving, walking and cycling routes with travel times and directions, with no API key required.

 

So the map and route are now sorted! And if I type different postcodes in, it actually picks them up. The only downside is the limit of 1 transaction per second on the free tier, so this isn't production-ready.
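If I did want to stay politely inside that free tier, a tiny client-side throttle would do it. This is my own sketch of the idea, not something Bob generated:

```python
import time

class Throttle:
    """Block so that calls are spaced at least `interval` seconds
    apart, e.g. interval=1.0 for a 1-request-per-second tier."""
    def __init__(self, interval: float):
        self.interval = interval
        self._last = 0.0

    def wait(self):
        # Sleep for whatever remains of the interval since last call
        delay = self._last + self.interval - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        self._last = time.monotonic()

routing_throttle = Throttle(1.0)
# Call routing_throttle.wait() before each routing request.
```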

 

I also have real places like Costa, McDonald's, pubs, Esso stations and whatnot.

 

Bob created two new JavaScript services to handle the real data.

 

I then asked Bob to add a 7-day cache for postcodes, places and routes. It also removed all the mock data and straight-line routes, and added some error handling.
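The caching idea is simple enough to sketch. Bob's actual implementation lives in the JavaScript services, but the concept translates to a few lines of Python (class and names are my illustration):

```python
import time

class TTLCache:
    """Tiny time-based cache: entries expire after `ttl` seconds,
    e.g. 7 * 24 * 3600 for a 7-day cache."""
    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # expired: drop and report a miss
            return None
        return value

postcodes = TTLCache(ttl=7 * 24 * 3600)  # 7 days
postcodes.set("NE1 4ST", (54.9738, -1.6132))
```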

 

Now when I run NE1 4ST to DE1 2PY, I get the real locations on the map and real places within the radius distance from the midpoint.

Results


So that's what I can get with Bob in a couple of hours with very little input from me. I think I used around 4 general prompts and 3 "this didn't work" style prompts, and I'm quite happy with the results. I can now use this when planning holidays to make sure we're not driving for too long at a time.

 

I would also say that my prompts could have been much better, and there was a LOT of extra code that I had to ask Bob to clean up. The documentation was overly verbose and the plan was way too detailed. But all in all, not bad for two hours' work!

 

I can hear some naysayers at the back saying that you can already do this with Google Maps, and I agree. But I do think this project showcases how much an individual with limited technical knowledge can achieve in a short space of time to become more productive.

 

And this is a trend I'm seeing more of: managers creating customer dashboards, sellers creating sales plans and learning plans, techies applying their knowledge to completely new coding styles. This localised value-add means we don't always need production-ready services, as little apps like these can be shared between one or two people.

 

This is Unix-philosophy stuff: reduce human time by writing programs. I'd compare working with Bob to writing little helper scripts, many of which never see the light of day. The personal tools I can now build with Bob for solving specific problems wouldn't have been made before, because I wouldn't have had the time to turn my ideas into real, tangible things.

 

I’m excited to see how individuals use AI to build tooling that improves productivity, and the incremental benefits this will have on organisations.