This is a personal blog; postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.
I blog about some of my day to day work findings in integration, programming and project competitions. I also blog about personal projects such as painting, the allotment, robot building, node modules or the history of IT.
In this article I’ll talk about how I built MeetMiddle with IBM Bob (an AI coding assistant). I'll cover how it turned out, what worked, what didn't, and why I think AI acceleration is valuable for personal productivity tools.
Each year my family makes long journeys up and down the country, and with young kids you can't drive for hours and hours. Finding somewhere to have a rest and stretch your legs is useful. I can do this today, but I have to search on Google Maps, then find the mid point and then look for cafes, activities etc.
So, I decided to use IBM Bob to build a tool to help me! Some colleagues and I had previously written the idea up together but had never implemented it. We wanted to know the best mid-point for multiple friends to meet.
With the help of IBM Bob, I built this in just a couple of hours, leaving it to get on with its changes and occasionally jumping back in to see how it was doing, provide some more prompts and test the output.
I think this is a great demonstration that AI tooling can help people build the tools they need to speed up the monotonous tasks of life and work.
Setting Expectations
I deliberately did not boil the ocean when creating my prompts and instructions. I was investigating effort vs speed - how quickly could I get something working with minimal input? I let Bob choose the technologies and components and waited to see how it did and what it did.
The Foundation
Luckily, we had already designed the application, the method and the benefits, and that gave IBM Bob its specification. So now you can see that I have click-baited you with the 'couple of hours' headline, because a lot of the real effort went into the idea and the method write-up that Bob could follow!
My first prompt to Bob:
```
Come up with a solution design for this idea.
Design should be one document
Second document should be a plan of what you would like to do to deliver it
```
The solution design was a 625-line Markdown file which was very comprehensive. I'll be honest, I didn't read it all, but it looked and sounded right. It had the four architecture layers: presentation, application, integration, and data.
Yeah, I know that this is a little dangerous - letting the AI off the leash without reading everything it produces. But I was doing this as a fun test to see how far I could get with minimal oversight. In a production environment, you'd absolutely want to review everything carefully. This was more about exploring the "effort vs speed" trade-off for personal tools, which maybe could be, or even should be, less robust to get some cost benefit if it's only me using them.
Bob then overlaid data models and their mappings. It also went a little crazy, producing sophisticated algorithms for multi-stage filtering, scoring etc. That's my bad for not being clearer in my prompt, so I wasted some tokens on that.
The second document was a 1,000-line to-do list: a detailed set of tasks, with estimated completion dates, covering everything I would need to do to deliver the solution. Again, useful, but at the moment I just wanted something really basic for some holiday planning I'm doing!
I wasn't planning to spend 9 months or £800K on this! But having that comprehensive plan meant I could cherry-pick the MVP features and know exactly what to build first.
Building the Backend
Alright, so we had the blueprint. Time to start building!
Bob set up a project using a Python framework called FastAPI, with folders for models, API endpoints, services, etc. It used SQLAlchemy models for users, events and participants! It also wrote a bunch of security models.
I didn't use most of these though, because it was still running from the main solution design it built. I should have told it to strip it back but I didn't want to mess up the context.
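For flavour, the shape of those generated models was roughly as follows. This is my own sketch using stdlib dataclasses rather than the SQLAlchemy classes Bob actually produced, and all the field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    """One person travelling to the meet-up."""
    name: str
    postcode: str

@dataclass
class Event:
    """A planned meet-up built from two or more start points."""
    title: str
    place_type: str  # e.g. "cafe" or "petrol station"
    participants: list[Participant] = field(default_factory=list)
```

In the real generated project these would be SQLAlchemy models with IDs and relationships, plus the security models I didn't end up using.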
Bob told me I needed to fetch a bunch of API keys from places like Google Maps and Google Places. Not sure why... ahh ok, it's because that's the example we wrote in the idea we gave it! So it did a few mocks instead, which is nice.
Also, Google Maps API are expensive and I don't want to pay for it for a test app, so next prompt was whether I could build a backend from scratch.
My prompt:
```
Can we do the backend without the google API? could we build from scratch?
```
Bob created a service with over 100 UK postcode areas built in. Each postcode area (like M1 for Manchester, SW1A for Westminster) has its real geographic coordinates hardcoded.
When you enter a postcode like "M1 2SA", the service:
- Extracts the area code (M1)
- Looks up the base coordinates
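A minimal sketch of what that lookup might look like (the dictionary entries and coordinates below are illustrative, not the actual 100-area table Bob generated):

```python
# Hardcoded outward-code -> approximate (lat, lon) table; entries are illustrative.
POSTCODE_AREAS = {
    "M1":   (53.4794, -2.2453),   # Manchester city centre (approx.)
    "SW1A": (51.5010, -0.1416),   # Westminster (approx.)
    "NE1":  (54.9714, -1.6100),   # Newcastle city centre (approx.)
}

def lookup(postcode: str):
    """Extract the area code from a full postcode and return its base coordinates."""
    outward = postcode.strip().upper().split()[0]   # "M1 2SA" -> "M1"
    # Try the longest prefix first so "SW1A 1AA" matches "SW1A" rather than "SW1".
    for length in range(len(outward), 0, -1):
        area = outward[:length]
        if area in POSTCODE_AREAS:
            return POSTCODE_AREAS[area]
    return None   # unknown area - this is why some markers ended up in the sea!
```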
It did the same with restaurants, cafes, bars, hotels and petrol stations. But it completely made it all up, which makes sense if it's starting from scratch rather than fetching data from a service.
It used some clever formulas to calculate routes and travel times, e.g. the Haversine formula. I'm not sure what that does, but I've added it to my research list. It also found an optimal meeting point between the two locations, but it was as the crow flies, not via roads.
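For anyone else adding it to their research list: the Haversine formula gives the great-circle ("as the crow flies") distance between two lat/lon points on a sphere. Here's a small self-contained sketch of that, plus the spherical midpoint calculation the app appeared to be doing; these are the standard formulas but the code is my own, not the generated service:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def midpoint(lat1, lon1, lat2, lon2):
    """Geographic midpoint on the sphere - as the crow flies, not via roads."""
    phi1, lam1, phi2, lam2 = map(math.radians, (lat1, lon1, lat2, lon2))
    bx = math.cos(phi2) * math.cos(lam2 - lam1)
    by = math.cos(phi2) * math.sin(lam2 - lam1)
    phi_m = math.atan2(math.sin(phi1) + math.sin(phi2),
                       math.sqrt((math.cos(phi1) + bx) ** 2 + by ** 2))
    lam_m = lam1 + math.atan2(by, math.cos(phi1) + bx)
    return math.degrees(phi_m), math.degrees(lam_m)
```

Newcastle to Derby comes out at roughly 228 km by this measure, which is why a crow-flies midpoint can still land you a fair walk from the nearest road.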
Then Bob told me it wrote a bunch of tests that passed! Nothing like marking your own homework. There was also no UI, boo!
The UI
My prompt:
```
lets run the demo in the UI
```
Bob created a single-page application with HTML, CSS, and JavaScript. It had the app title "MeetMiddle" and used something called Leaflet.js as the mapping library - start point (green A), end point (red B), and midpoint (blue M), a blue line for the route, and interactive pop-ups if you click a place it has found.
As I mentioned earlier, I let Bob choose these technologies. I don't have experience with FastAPI or Leaflet.js, but I do know JavaScript, so I might do a follow-up article reviewing how easy it is for me to understand what is happening and how good the code itself is.
Users - me - enter the start postcode, like NE1 4ST, the end postcode, DE1 2PY, the type of place they are looking for, e.g. "Cafe" or "Petrol station", and the search radius, then select Go. The map shows the route with markers and places to go.
The Quest for Real Data
The UI is good, and so is the map for my two endpoints. I tried some new postcodes and the markers were off in the sea! What's happening here?
Ahh, I'm using mocked and fixed data so it only works with specific postcodes.
My prompt:
```
Can we replace the estimated and generated data by doing free internet lookups?
```
Bob had a look and found OpenStreetMap, the Royal Mail Postcode Address Finder and Ordnance Survey Open Data, which has 2.5 million free-to-use postcodes. It also found the Open Source Routing Machine (OSRM), which calculates driving, walking and cycling routes with travel times and directions, with no API key required.
So the map and route are now sorted! And if I type different postcodes in, it actually picks them up. The only downside is that the free tier allows 1 transaction per second, so this isn't production ready.
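For reference, OSRM's route service is a plain HTTP API with no key, served from a public demo server (which is what imposes the fair-use rate limit). A minimal sketch of calling it; the coordinates and helper names are my own, not from the generated app:

```python
import json
import urllib.request

OSRM_BASE = "https://router.project-osrm.org"  # public demo server: light use only

def route_url(start, end, profile="driving"):
    """Build an OSRM route request. start/end are (lat, lon) tuples; note
    that OSRM expects lon,lat order in the URL path."""
    coords = f"{start[1]},{start[0]};{end[1]},{end[0]}"
    return f"{OSRM_BASE}/route/v1/{profile}/{coords}?overview=full&geometries=geojson"

def fetch_route(start, end):
    """Fetch a route and return (distance_metres, duration_seconds)."""
    with urllib.request.urlopen(route_url(start, end), timeout=10) as resp:
        data = json.load(resp)
    best = data["routes"][0]
    return best["distance"], best["duration"]
```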
I also have real places: Costas, McDonald's, pubs, Essos and whatnot.
Bob created two new JavaScript services to handle the real data.
I then asked Bob to add a 7-day cache for postcodes, places and routes and it also removed all the mock data, straight line routes and added some error handling.
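The cache itself is a simple idea: keep the looked-up value alongside a timestamp and treat it as missing once it's older than seven days. A minimal in-memory sketch of the pattern (my own illustration, not the generated code, which would also want to persist to disk):

```python
import time

class TTLCache:
    """Tiny time-to-live cache: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds=7 * 24 * 3600):  # default: 7 days
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value):
        # Store the value with the time it was cached.
        self._store[key] = (value, time.time())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.time() - stored_at > self.ttl:
            del self._store[key]   # expired: force a fresh lookup
            return None
        return value
```

Postcode coordinates, place lists and routes can each sit in their own instance, so a repeated search never hits the rate-limited free APIs twice in a week.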
Now when I run it for NE1 4ST and the end postcode DE1 2PY, I get the real locations on the map and real places within the radius distance of the midpoint.
Results
So that's what I can get with Bob in a couple of hours with very little input from me. I think I did around 4 general prompts, and 3 "this didn't work" style prompts and I'm quite happy with the results. I can now use this when planning holidays to make sure we're not driving too long at a time.
I would also say that my prompts could have been much better, and there was a LOT of extra code that I had to ask to be cleaned up. The documentation was overly verbose and the plan was way too detailed. But all in all, not bad for 2 hours' work!
I can hear some naysayers at the back saying that you can already do this with Google Maps, and I agree, but I do think this project showcases how much an individual with limited technical knowledge can achieve in a short space of time to become more productive.
And this is a trend I'm seeing more of: managers creating customer dashboards, sellers creating sales plans and learning plans, techies applying their knowledge to completely new coding styles. This localised value-add means we don't always need production-ready services, as little apps like these can be shared between one or two people.
This is Unix philosophy stuff - reduce human time by writing programs. I'd compare Bob to writing little helper scripts - many that don't see the light of day. The personal tools I can now build with Bob for solving specific problems wouldn’t have been made before because I wouldn’t have had the time to turn my ideas into real tangible things.
I’m excited to see how individuals use AI to build tooling that improve productivity and the incremental benefits this will have on organisations.
Over the last 9 months (September 2025 – April 2026) I’ve been using AI tooling in a more institutional way as various work approved applications came online.
In part, that is because AI tools have begun to become ubiquitous in our professional and personal lives. According to a recent BCG study, whilst AI usage has surged, the measurable impact hasn’t kept pace with expectations (BCG, 2025). This disconnect raises an important question: What are the best ways to actually use these tools?
To try to answer this question, I’ve been trying out various AI tooling (IBM Bob, Copilot, ChatGPT) to see what works well and what doesn’t. This article explores what I have discovered, and hopefully those with more experience than me can let me know if you have seen any of these issues, have managed to solve them, or if some of these are just plain user error.
The AI Tools
In a work setting, I’ve been utilising IBM Bob, which is trained on IBM-specific expertise. It’s very good for technical troubleshooting and product guidance, planning work and compiling data together. It’s got different modes depending on what you are trying to do, and it’s a tool that we in IBM Expert Labs have been integrating into our delivery practices, as discussed by Damian of IBM Expert Labs UKI Automation Platform Delivery (LinkedIn - Damian Boys).
I’ve been evaluating Microsoft Copilot which I’ve found to be better at internet facing data summaries, like understanding how companies are operating, and how real-world scenarios map to some of the technical features I’m working on with products. This is good for things like campaign ideation, exploring alternative implementations and getting rapid information on product features.
I have found, from my small amount of usage, that IBM Bob is better for project work: lots of files and information from different sources, and building code to transform the data into what I want. Microsoft Copilot is better for a single-window context and seems easier for general questions drawing on general internet sites.
In a personal capacity, I’ve been using ChatGPT for my football simulation engine https://github.com/GallagherAiden/footballSimulationEngine, advice on gardening, and financial reasoning. I already had a ChatGPT account which I had rarely used, so decided to use this for more of my day-to-day queries. For example, when I was exploring mortgage renewals and overpayment savings, it was much easier to ask ChatGPT than to run multiple queries on MoneySavingExpert, and I could then ask questions about financial advice and war-game different interest rates with all the context already there.
The thing I have to keep reminding myself is that AI tools are sophisticated reference material reviewers, not sources of independent intelligence or unique thought. Once you internalize this, you can properly contextualize what to expect in responses or chained responses over time.
I now like to think of AI as an incredibly well-read sounding board that can give you ideas about how others achieve things or what data is telling you. But the value really comes from the innovative thoughts, ideas and analysis, or from putting the pieces it gives you together.
Even when you provide all your files, AI tools struggle to piece them together properly. They miss connections between files, fail to maintain consistency, and from my playing around seem to bolt things on. Now, I appreciate I might not be giving the tool enough context and may not be providing it with my development style, but some of that should be obvious from the code I provide.
Example: Working on my football simulation engine, I provided all data structures, game logic, and statistical models. Yet AI would suggest changes to one component without considering ripple effects on others, and would make coding stylistic choices that were the complete opposite of what I had done elsewhere, for example, semicolons at the end of lines in Node.js (let’s not debate that decision here!).
AI treats code generation tasks independently, even when it should consider broader system architecture. It might generate a beautiful function but change the signature in ways that break existing callers even though those callers were in the context provided.
Note: Whilst you can define explicit files to touch and not touch it requires additional overhead, and obviously the world is moving so fast on AI tooling this is probably already out of date thinking!
What I found more and more, is that the AI tool would constantly push me to copy how other tools do things. “This is how FIFA’s engine works and football manager does it this way too”. Great, but I’m building something different, a composition iteration-based engine that can be amended by users per iteration, not an end-to-end simulation of a match.
I get that this is subtle, but the constant attempts to move me towards known solutions really highlighted the lack of innovation that we might face in AI generated applications and services.
That’s great if you want to be quick, not so great if you want to get the edge and build something novel, faster, more streamlined, more secure etc.
One of the most infuriating quirks of AI is that it doubles down when wrong. When you point out mistakes, it reveals “assumptions” it supposedly made or claims you didn’t provide information, even when you did, possibly because its context window loses it.
Example: The AI generated code that broke my application. When I fed back the error, it responded: “Well, of course that won’t work, you did X, which is completely wrong.” But YOU gave it to me!
It’s annoying, condescending and can make you doubt your own judgement. I know you can change your profile and ask for it to speak and interact in a new way, but I haven’t tried this yet and obviously I can switch tools if I don’t like how one is speaking.
I find AI is excellent at reviewing large amounts of data and highlighting concerns from the data but often can be wrong about what the root cause of the concerns are. Yes, it can spot patterns but lacks the domain knowledge and intuition to understand them properly.
Even with IBM Bob, which is trained on IBM-specific domain knowledge, there's still a gap between pattern recognition and true understanding. While it performs better within its training domain, it still lacks the intuition and contextual judgment that comes from real-world experience. This suggests it is a generic problem across all AI tooling.
Where the tools work really well is in grouping issues and identifying areas for improvement. However, one time when I provided some data to Copilot that wasn’t quite right – an API Connect issue mislabeled as an IBM Liberty issue – it completely skewed the tool's summary and analysis. What this shows is that there is still, at least for now, a need for human-AI collaboration, as we’re likely to always see some errors in the data, whether human or otherwise.
The disconnect can then make the summary seem wrong and for me (the person using the summary), it makes me worried about my integrity in presenting the findings.
I also found that AI doesn’t ask for more context. A person would ask clarifying questions if they didn’t know the answer to something. Existing AI tools almost never ask and will instead make assumptions, leading down a wrong path.
Think of the wasted resources used instead of just asking! Some tools have started to ask more and I think this will be something solved in not-too-distant iterations of the software.
Another time, I was quickly trying to answer a customer question and wanted to send them the relevant documentation link with the technical information.
The AI tools half-guessed a viable-looking link that then didn’t work and, at their naughtiest, completely made up a link! They will also use generic information rather than product-specific information. For example, I was looking for a compatibility matrix for Java support in a product; the response was that the “application supports Java 12”, but it didn’t tell me that this information came from the documentation of another product and was wrong for the product I was discussing.
Example: Whilst trying to find some specific Maximo v9 documentation, the AI confidently provided links that didn’t work and blended general software best practices with supposedly specific product guidance. When pressed, it couldn’t provide actual documentation sources and finally admitted the advice was generic.
Whilst I checked all the links ahead of time, because I still like to verify AI citations in person, this was caught. But if you were in a rush, or it got enough right, you could become complacent.
All of this stems from AI tooling wanting to be helpful, and if it can’t be, it begins to hallucinate, which is worse than not answering at all. Maybe my prompts should include “hey, it’s ok to not know everything, little guy”.
Context windows are finite, and even where the token limit is very high, more context can reduce the quality of the response. When you hit limits, the tools can lose context and “forget” important details from earlier in the conversation. I saw various ignored requests myself, and, like the cautionary tale of Summer Ye at Meta, this led to the deletion of important context. That wouldn’t wash in an MQ design managing millions of payments.
It also overwrites with no real sense of version control. It regenerates entire sections rather than making edits, overwrites files and doesn’t do any form of version control on documents, although it works well with code because of the underlying commit infrastructure already provided by Git. If I ask for a rewrite with a different prompt, it takes away the old one; I can review before I approve, but I might want to keep elements of both.
Note: Hybrid models are emerging that claim to address these processing limitations, potentially making this observation less relevant in the near future.
Some of this is obviously just how AI works:
Pattern Recognition, Not Reasoning: AI models are trained on existing data and excel at pattern matching. They can interpolate between known solutions but can’t extrapolate to genuinely new ones. They generally lack the ability to reason from first principles.
Local vs. Global Understanding: The transformer architecture processes information through attention mechanisms focused on local context windows. While they can “see” all your files, they struggle with maintaining global state and understanding complex interdependencies. The further apart two pieces of information are, the harder it is to connect them.
No Self-Awareness: AI has no mechanism for recognizing what it doesn’t know or identifying gaps in information. It’s trained to generate complete, helpful responses, not to engage in true collaborative problem-solving. It can’t assess its own understanding.
Models are trained using Reinforcement Learning from Human Feedback (RLHF), rewarded for being helpful. But “helpful” often means “always providing an answer with confidence.” The model has no actual uncertainty quantification—it doesn’t “know” when it’s wrong. The confident tone is learned from training data.
The main way to “fix” this is to have multiple AI windows or even different tools review each other’s answers and highlight concerns.
No Truth Verification: It can’t distinguish between information from actual documentation versus inferred patterns, and has no access to real-time information.
With MCP (Model Context Protocol) tools enabled, referencing and link verification has improved significantly. However, users who disable these features to reduce token consumption may still experience hallucinations and fabricated links and even then the documentation can be old or not correct.
Limited Human Verification: If you're not familiar with the coding language the AI tool builds in, how could you notice that it is wrong? How do you ensure that security is being met? In some cases, people will ask other AI tools to verify another tool's output. Again, the cost of this makes me shudder.
Computational Constraints: Token limits exist because the attention mechanism’s computational complexity scales with input length. The model has no persistent memory beyond the current context window. There is also the impact of large context windows, for example I noticed browser interrupts, more frequent null responses, slow response times which are directly impacted by both the client and the server compute resources.
AI tools are powerful and useful, but they’re not magic. The gap between AI usage and impact that BCG identified exists because we’re still learning how to use these tools effectively.
The key is understanding what AI actually is: a sophisticated pattern matcher that excels at applying known solutions to familiar problems. Once you have hammered this home to yourself then you can start to get the most out of the AI tools available to you.
Let me know if you have had any of your own pain points, or solved any of mine already.
I am currently on: podman version 4.3.0, Apple M1 Mac, Ventura 13.3.1.
[Solution] - change the Dockerfile pull 'FROM' tag to be 12.0.0.7 or the podman image id, e.g. ae6dcfdd3e9d. Subsequent FROMs also work, e.g. FROM level-1 worked after doing a FROM 09d860f27b68 for the ACE base.
Errors:
```
podman build -f level-1.dockerfile . -t level-1
STEP 1/2: FROM acebase
Resolving "acebase" using unqualified-search registries (/etc/containers/registries.conf.d/999-podman-machine.conf)
Trying to pull docker.io/library/acebase:latest...
Error: creating build container: initializing source docker://acebase:latest: reading manifest latest in docker.io/library/acebase: requested access to the resource is denied
```

```
podman build -f level-1.dockerfile . -t level-1
STEP 1/2: FROM acebase:v1
Resolving "acebase" using unqualified-search registries (/etc/containers/registries.conf.d/999-podman-machine.conf)
Trying to pull docker.io/library/acebase:v1...
Error: creating build container: initializing source docker://acebase:v1: reading manifest v1 in docker.io/library/acebase: requested access to the resource is denied
```
1. Pick your two teams and click select teams. Hint: If you don't pick any teams or only pick one, just click "select teams" and it'll randomly pick two teams.
This dynamically generates 11 selection boxes with each country's squad displayed in each box, ready to pick your team. You will also need to select your formation or it won't work!
2. Pick your players. Not sure who to pick? Just click random select and it will pick a formation. This will fill in each position with either a Goalkeeper, a Defender, a Midfielder or a Forward depending on the formation and the players preferred position.
Hint: Not all players of a certain type will be put in their best position, e.g. an RB in a CB position!
3. Hit simulate match. After you hit "Simulate Match" you'll see a screen with all the players selected and the positions they were put in. At which point you can either do a "Quick Match" which simulates the match with updates to the statistics or you can "Watch Match" which will give a graphical view.
Technology has undergone a period of growth and expansion since the 1970s. With this growth has come a wide diversity in languages, platforms and protocols used to develop and execute functionality on systems and to connect the systems.
Integration technologies were and still are the industry’s answer to this connectivity demand and companies have invested heavily in developing and marketing products that supply the capability to integrate systems, whether in house or through dedicated commercial integration software products.
But do commercial integration products still have a place in the modern industry, or are home grown integrations — using custom, bespoke software and Open-source implementations — a more flexible and efficient way for organisations to integrate their disparate systems?
In this article we will discuss why greenfield software development might be better suited to home-built software rather than using commercial integration software products, but why, as they become more established and stable, there is a necessity to use an out-of-the-box commercial software solution to address ongoing business needs.
The Evolution of Integration
Business systems and machines began as silos, completing their independent functions in isolation. Integration is the process of connecting one or more of these systems together, but as system integration was adopted, the communication was found to be unreliable and slow, and it could be difficult to interface between them.
Different integration software techniques were developed to solve these problems including messaging, file transfer, remote procedure calls and shared databases. Companies that had previously been dedicated to their primary business goals slowly began to adopt IT and integration systems, which expanded and grew to consume larger amounts of the business’ resources.
This led to the creation of specialised commercial integration software created and maintained by companies like IBM to provide simpler and easier integrations for companies, to allow them to focus on their core aims.
Organisations that evolved with very early IT (1960s-1990s) tend to have large IT estates with a large number of commercial integration products embedded within them. They have gained knowledge of how to safely manage, maintain and build systems that meet diverse business, regulatory and consumer requirements, through long-standing processes and governance procedures that ensure integrity. Many are also recognised for this ingrained culture and a slow response to market demand.
How is the integration landscape changing?
The IT world has been changing. Computing has become part of everyday life, and the prominence of mobile devices means companies are at the beck and call of their customers, day and night, as they wish to use their applications, interact on social media, make queries and raise issues.
For example, Daisy is shopping on Etsy at 2am and wants to make a payment through her bank. If there is an issue, Daisy uses the live chat support, where a chatbot tries to help before passing Daisy on to a person. The payment needs some form of authentication which needs to be set up and used to complete the transfer right away. Anything less will be a pain, and Daisy might be tempted to switch to another bank.
The systems involved in meeting Daisy’s requirements, require backend integration with Daisy’s account and payment data which might be spread across disparate systems. The resolution to Daisy’s problem may also require the calling of multiple different services to complete a business process with responses integrated between the systems.
To deliver for people like Daisy and maintain a competitive advantage, companies have the need to be able to rapidly build solutions that access, synchronize and combine data from disparate systems across the organization in order to react to new innovation. This is done through integration applications and software which in turn, must be highly available, secure, able to keep up with market needs, able to innovate at speed and where possible, able to reduce costs.
The types of changes are shown in Figure 1 — Changing IT Landscape Factors (2021) but can be summarised as such:
· Lightweight runtimes through container technologies are helping to deploy smaller, more flexible applications that can be owned by teams and deployed more easily.
· API Led Approach is making it easier and simpler to access data from other systems such as databases. This can be used to provide additional value or can be monetised for organisation gain.
· Cloud Native application deployments built for the cloud which are more portable (saving costs), scalable to meet changing demands and utilising implied resiliency of cloud deployment. (http://ibm.biz/cloudnativedefined)
· Advanced Tooling to automate all aspects of the application, system and integration life. Create, build, deploy, monitor, alert and debug using automated tooling, improving security, access control and time to market in the process.
Figure 1 — Changing IT Landscape Factors (2021)
These changes have allowed existing companies to provide better services, but many struggle to fully capitalise on the benefits because of their ways of working. Greenfield companies are building new applications, software and features from scratch based on these principles, and are not encumbered by existing processes, governance or the maintenance of legacy systems. They also do not have an enforced requirement to utilise existing commercial integration solutions already embedded in the organisation.
Greenfield Software Solutions
Greenfield companies have emerged to disrupt existing markets, in almost every industry, from the way we socialise (Facebook, Twitter, Instagram, LinkedIn), the way we bank (Monzo, Revolut), the way we shop (Amazon, Etsy), travel (Skyscanner, Uber), order our food (just eat, Deliveroo), manage our finances (PensionBee, Moneybox, eToro, Freetrade) and much more.
When building these new companies there is a decision whether to use commercial integration software already in the market, Open-source technology that can be adapted by the company or their partners, or integrating systems using custom software. Whilst commercial Integration software solutions have been widely utilised and relied upon across industries, they are constantly compared to custom, in-house (or Open-source), code-only integration solutions.
One criticism of commercial integration software solutions is the lack of flexibility. For example, a business may want to take advantage of a new feature such as Open API v3, but commercial products may take a while to catch-up with the specification. In-house code could be updated and the feature prioritised in a much simpler and quicker process that goes straight from developer to the live system.
Commercial integration products do provide a low-code approach, but could be seen as a “black-box” in terms of their functionality, which can prove important when it comes to debugging. Byars (2021) argues that code-based solutions are easier to “diff” for changes [2]. However, if the volume and complexity of code is scaled out to match the breadth of functionality that commercial integration products provide, this debugging process could become equally or more challenging.
Businesses may also be hesitant to implement commercial integration products to avoid vendor lock-in. The counter balance to this is the reduced effort for businesses by utilising existing, commercial integration products which are backed by documentation and a wide support structure of product experts built up over many years.
Greenfield companies often use Open-source because it gives them the ability to quickly adapt software to meet their needs. This assumes they have the skills and time to build software as well as run their businesses, and that they can retain those skills once the initial project is complete. For implementing leading-edge capabilities, this may be the only way to create the functionality sufficiently quickly, because making adaptations to commercial integration products requires communication, and waiting for changes to be discussed, prioritised, architected, coded, tested and eventually released. This is all before the Greenfield company runs its own testing that these new features meet its requirements.
This lack of control over commercial integration products comes into sharp focus when high-impact issues affect systems. The Akamai CDN outage (July 2021) and the Log4j library vulnerability (December 2021) are prime examples: companies felt unable to deal with the issues due to a reliance on external entities’ support and communication. In-house software can be worked on internally with additional resources, with temporary workarounds and fixes in the hands of internal teams.
When to keep on developing
Teams building in-house integration software and developing on top of Open-source can continue to do so provided they are capable of maintaining and managing the code base long term. This means ensuring there is a large team of good developers with the knowledge and ability to support all the tools and code used within the systems, and that this talent can be retained going forward, which often means ensuring the challenges are exciting and the company is a good place to work.
If internal development teams are agile and able to deliver good-quality fixes to bugs and issues faster than the commercial integration software vendor, this lends the team a competitive advantage, especially if competitors are using commercial integration products that take a while to catch up with the latest trends.
In-house software allows direct and rapid communication with partners, consumers and communities, who can request new features directly (e.g. through forums); support teams can respond to those requests and in some cases communicate live updates.
The key reasons to continue internal development are to meet unique use cases and rapidly changing requirements, to ensure software growth can keep pace with company growth, and to identify where new features and experiments are still needed to provide a good-quality product that can compete on the market.
Using commercial integration products
Commercial integration products have been honed and enhanced over many years, with a well-established knowledge base, progressive evolution, and subject matter experts who specialise in integrating systems and working through industry-wide problems. The primary reasons why these might be used over in-house software include:
· Saving time and cost by not re-solving common use cases that have been solved before
· Established support models from consultancy firms or contractors to build on the tools using previous experience to accelerate time to market
· Reduced complexity when integrating with others
Companies that use commercial integration products do not need to start from scratch: from day one they can integrate point A to point B, with formal support mechanisms in place for any issues found, a breadth of experience for common solutions and no need to understand deep integration concepts such as protocol implementation. The more unique the functions, features and skills required to integrate, the less cost-effective commercial integration products become, because of the commercial consequences of supporting edge-case scenarios.
Growth is a key element of success for any company, especially a greenfield one, but at some point there is a need to adopt more professionalised and established methods of management, with a renewed focus on retaining existing customers. As products and companies become more established, government and regulatory requirements might be imposed to protect consumers. For example, General Data Protection Regulation (GDPR) implementation cost US Fortune 500 and UK FTSE 350 companies a combined $8.9 billion, an unplanned cost of business with continued maintenance overheads [3]. Commercial integration products provide this compliance for a range of customers, reducing the individual cost of supporting long-term regulatory requirements.
For commercial integration product vendors, creating and maintaining software products is their core business. For other businesses, software creation is a distraction from their core business. This is especially true when supporting business-as-usual systems and applications.
By contrast, Open-source integration software has a support model that relies on community goodwill. There is no liability to fix problems, and security breaches can be easier to exploit due to the openness of the source code. There are obvious benefits to participating in Open-source communities, working on and improving key industry features as a collective. However, Open-source can become outdated, the community can move to other projects, and active members within an organisation can move on to other challenges. This increases dependence on the remaining active members and can at times force a costly pivot to another tool entirely.
Commercial integration software companies have a business driver to produce integration products that work across a large proportion of the industry, over a long period of time. Development teams have expansive experience of integrating systems and have evolved with the market over product lifetimes which, for some, now stretch to three decades. These products carry the experience of past failures, resolved issues and enhancements to meet changing needs; the vendors are also liable for the support mechanisms they have in place to resolve issues as they arise and to patch security bugs as quickly as possible.
Established products are incentivised to evolve with changing IT landscapes to retain business, and often work with customers to ensure continued positive relationships. When the company grows, so too does the integration product servicing it.
If a greenfield company begins to hit turbulent times, it is difficult to justify the cost of managing and maintaining in-house integration solutions; focus needs to be on customers, sales and retention, and becoming profitable. The cost of creating and maintaining integration software increases if staff are inexperienced, or if supporting an existing code base becomes untenable because it has grown too big or because employees have moved elsewhere; costs can balloon and projects stagnate.
Whilst there are advantages to greenfield software solutions in terms of control over resolving security vulnerabilities, commercial integration software products have had many years to consider and enhance the security hardening of the product. Utilising these pre-built integration products reduces the effort required for security planning on a brand-new product; the same can be said for networking, high availability and so on.
It is important to consider the cost of integrating with other organisations and businesses. When producing code internally, companies have an element of control, but this can become unwieldy when integrating externally, and is exacerbated the more integration points are involved. There may also be greater support and documentation available for integrating an out-of-the-box solution with an external product, with pre-built connectors and accelerators for well-known systems. For example, no one today would consider writing an EDI integration solution from scratch, as that standard is so well established.
Choosing an integration model
Commercial integration software and products provide evident benefits when integrating well-known systems through established mechanisms. They provide strong support through experienced integration developers and specialists, giving companies the ability to focus on their primary aims rather than the integration of internal and external systems.
However, building in-house integration software has its own clear advantages. It provides flexibility and easy adaptation to meet edge-case scenarios and to innovate ahead of the industry and competitors.
This is not a binary decision; it is possible to switch models, although there is some initial outlay for the transition. Applications using commercial integration software can expand to use custom code for specific purposes to meet rapidly changing requirements, before returning to commercial integration products once the new features are incorporated.
At the growth stage, challenges are exciting, with a pool of interested and experienced developers. As a project becomes more stable, with a requirement to support customers and the business, the key driver is maintaining the system and its integrity, which might be better met by the support model that comes with buying commercial integration products.
Should a business driver or disruptive feature impact a ‘stable’ element of a company’s applications, then it may move back into the ‘growth’ phase, requiring the greater flexibility that in-house integration solutions can provide.
A real-life example is LinkedIn, a first-of-a-kind professional networking and career development social network. It started as a home-grown monolith, but as growth escalated there was a need to better cache and store data in streams in a way that wasn’t readily available in commercial integration products.
Where existing products did meet requirements, such as databases, these commercial products were utilised, but the need to rapidly meet new requirements drove in-house software solutions. Over time, as user numbers stabilised, many of these tools were released as Open-source, including Kafka, which today is also available as a commercial integration product utilised by other organisations looking to achieve the same functionality [1].
Conclusion
Some Greenfield companies have fast overtaken their legacy competitors, or are closing in on the market, providing features and innovation faster than established companies could manage. To achieve this, greenfield companies have custom-built integration software and/or built on top of Open-source, allowing them to rapidly change course, experiment with new features and be flexible in how they do business as systems and processes evolve.
Over time, these custom-built systems become more fixed and stable to support a large customer base with an expectation of stability. To meet these requirements, the support mechanisms of commercial integration products are attractive, as are the out-of-the-box solutions that allow accelerated integration with other systems, drawing on the experience of the commercial integration products’ resource pool for business-as-usual requirements like upgrades, security and basic integration methods and techniques.
By using commercial integration software from vendors whose focus is on integration, and who have mastered the methodology over decades to meet diverse industry needs, these companies can focus on their primary business aims (travel, finance, etc.) and not on information technology and its integration.
When a company needs to be flexible and grow rapidly, building custom code and applications can allow it to disrupt the market, expand its client base and become more profitable. As the code base stabilises and peak market share has been gained, continuing to service those customers and businesses requires an integration product that is supportable.
For many, the transition will come (at some point) from an internally managed code base to a product produced and supported by a commercial integration software vendor.
Special thanks to Kim Clark for his review and advice