CMOtech Asia - Technology news for CMOs & marketing decision-makers
Adrian Randall
Thu, 9th Apr 2026

The AI conversation in most boardrooms and project briefs right now starts and ends with chatbots. A business identifies a problem, someone proposes an AI solution, and within a few meetings the discussion has narrowed to a bot that sits on the website or answers staff questions through a familiar interface. It is a comfortable destination. It is also, in most cases, the wrong one.

That comfort is not accidental. Chatbots are easy to sell and easy to buy. The people delivering them understand how they work, the risks are manageable, and the concept is simple enough for any stakeholder to grasp. On the client side, the idea of a bot that speaks with company knowledge and answers questions feels like a real step forward. The technology is legible, which makes it feel safe.

The cost of that comfort is significant, and it is not always obvious until the project is delivered.

The Problem Chatbots Do Not Solve

A chatbot that helps an employee gather information for a monthly report does not change the way that report gets done. The employee still logs into multiple systems, still tracks down figures from HR, finance, marketing, and operations, still formats and assembles it all. The chatbot might answer a question along the way. It does not do the work.

This is the operational efficiency gap. Narrow AI tools fix specific, minor friction points while leaving the underlying operational workflow intact. The fundamental problem is untouched.

Internal use cases accelerate this problem. For public-facing applications where the goal is education or assisted decision-making, a well-built chatbot can be a reasonable tool. For internal operations, the bar is different. Staff are not choosing between your chatbot and a competitor's; they are choosing between your chatbot and just going directly to ChatGPT or Claude. The behaviour is already there, and a siloed internal bot rarely beats it.

What the Question Should Actually Be

When a client comes to us asking for a chatbot, the first move is to ask where it fits in the bigger picture. The answer to that question is almost always the real brief.

Take the monthly report example. Steve comes in each morning, pulls data from the HR platform, the ERP, the finance system, some marketing metrics, and an operations dashboard. He formats it, checks the numbers, writes the summary, and sends it. 

The chatbot version of this is Steve asking questions and getting faster answers during that process. The actual solution is connecting to each of those platforms directly, aggregating the data, analysing it as a whole, and generating the output Steve needs anyway. Steve does not want a chatbot. Steve wants not to spend half his day doing that report.
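As a sketch of that aggregation-first framing, the assembly work can live in code rather than in Steve's morning. Everything below is hypothetical and for illustration only: the fetch functions stand in for calls to each platform's API, and the report fields are invented.

```python
from dataclasses import dataclass

# Hypothetical fetchers; in practice each would call a platform's API
# (HR system, ERP, finance system, marketing analytics).
def fetch_hr_headcount() -> int: return 148
def fetch_erp_orders() -> int: return 2210
def fetch_finance_revenue() -> float: return 1_250_000.0
def fetch_marketing_leads() -> int: return 430

@dataclass
class MonthlyReport:
    headcount: int
    orders: int
    revenue: float
    leads: int

    def summary(self) -> str:
        return (f"Headcount: {self.headcount}; Orders: {self.orders}; "
                f"Revenue: ${self.revenue:,.0f}; Leads: {self.leads}")

def build_report() -> MonthlyReport:
    # The system, not the employee, pulls from every source and
    # assembles the output in one pass.
    return MonthlyReport(
        headcount=fetch_hr_headcount(),
        orders=fetch_erp_orders(),
        revenue=fetch_finance_revenue(),
        leads=fetch_marketing_leads(),
    )

print(build_report().summary())
```

The point of the sketch is the shape, not the detail: once each source is reachable programmatically, the human step shifts from assembly to review.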

The difference between those two framings is the difference between a chatbot and a systems intelligence approach.

Systems Intelligence and the Middleware Layer

Systems intelligence operates at the level of the organisation rather than the task. An agent platform with visibility across departments and data sources can understand operational status at any time, surface insights across disconnected systems, and generate outputs without human assembly. 

A trucking company can ask how many trucks are on the road, which ones are running late, and where they are, drawing from multiple systems simultaneously rather than switching between them manually.

The mechanism that makes this work is agentic AI middleware. It is a layer that sits adjacent to existing systems, connects to them via APIs or direct data collection, stores and processes that information securely, and drives outputs through a purpose-built interface. It does not replace the tools already in place. It is the plumbing that makes them work together.
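A minimal sketch of that adjacency pattern, assuming a uniform connector interface over each existing system (the class names and returned fields here are invented for illustration):

```python
from abc import ABC, abstractmethod

class SystemConnector(ABC):
    """Uniform interface over an existing system; the system itself is untouched."""
    @abstractmethod
    def status(self) -> dict: ...

# Hypothetical connectors; real ones would wrap each platform's API
# or data feed.
class FleetTracker(SystemConnector):
    def status(self) -> dict:
        return {"trucks_on_road": 42, "running_late": 3}

class DispatchSystem(SystemConnector):
    def status(self) -> dict:
        return {"jobs_open": 17}

class Middleware:
    """Sits adjacent to existing systems and aggregates their state."""
    def __init__(self, connectors: list[SystemConnector]):
        self.connectors = connectors

    def operational_status(self) -> dict:
        combined: dict = {}
        for connector in self.connectors:
            combined.update(connector.status())
        return combined

mw = Middleware([FleetTracker(), DispatchSystem()])
print(mw.operational_status())
# One query spans both systems; no manual switching between them.
```

The design choice worth noting is that each connector adapts an existing system rather than replacing it, which is what keeps the change-management cost low: the old interfaces keep working while the middleware layer reads across them.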

This distinction matters enormously for change management, which is where most technology projects fail. Replacing a system requires retraining, adjustment, and a period where things get worse before they get better. 

Adding a middleware layer that preserves existing interfaces and data structures means the learning curve is close to zero. The new system looks like the old one, works alongside it, and gradually extends what it can do.

The design philosophy that follows from this is 90% best practice and 10% innovation. A transport management interface rebuilt to look familiar but with additional capability gets adopted. One rebuilt to look beautiful but different does not. Adoption is worth more than elegance.

Where the Output Difference Shows Up

The gap between a chatbot approach and a systems intelligence approach becomes stark when measured in output.

A report that takes eight hours to assemble manually takes three minutes when the data aggregation and generation are automated. Route optimisation for a truck fleet that takes one person an hour a day takes one minute with AI, a 60-fold increase in throughput on that task alone. Design files that control sheet-cutting robots, built from specifications in four hours, are generated in two minutes. These are not projections; they are in production.

When those time differentials compound across departments and workflows, the difference in what an organisation can produce and deliver in a year becomes impossible to ignore.

That compounding effect is also what makes timing relevant. Within six months, businesses that build this capability now will be operating at a fundamentally different velocity from those that do not. 

By early 2027, the output gap between organisations with genuine systems intelligence and those running a collection of SaaS tools and chatbots will be wide enough to determine competitive outcomes. It will not be a differentiator by then. It will be a baseline requirement.

How the First Conversation Goes

Most organisations come into early AI conversations already underestimating what is possible. They have seen a video, tried a tool, heard something at a conference, and they arrive with a specific idea that turns out to be a small part of a much larger picture. The first task is educational.

Understanding what AI can actually do for a business requires understanding how that business actually works, at a level of operational detail that most technology conversations never reach. When that picture comes into focus, the common response is something close to surprise. The question shifts from "can we have a chatbot" to "where do we start."

The answer to that second question is always to start with three to five projects, not eighty. The instinct when the possibilities become clear is to want to solve everything at once. That instinct reliably produces nothing. Three to five well-chosen projects get into production, prove their return, and create the foundation for everything that follows.

Current projects in this space range from warehouse picking and packing optimisation, to last-minute route adjustments based on real-time weather and road conditions, to new home build handover platforms that automatically route defect rectification jobs to the appropriate trades, to research analysis for ETF providers processing complex market data. 

The common thread is not the industry or the task. In every case, the work involves aggregating the data, connecting the systems, generating the output, and removing the human time spent on assembly.

That is what the chatbot conversation was always really about.