When Technology Runs Ahead of Responsibility

In the race to scale AI, there is a recurring moment of friction where commercial ambition meets human consequence. I encountered this recently when I was approached by a group of investors who had acquired an AI-driven HR platform.

Their mandate was clear. They believed they had purchased an "amazing" B2B product, and they wanted a CEO to lead its pivot into a high-scale B2C launch. However, as the discussions progressed, a fundamental misalignment emerged; they weren’t looking for someone to lead the product’s evolution or vet its logic. Instead, they were looking for a merchant to lead its distribution.

This situation serves as a vital case study for the industry; it highlights the growing trend of treating AI as a "finished" asset to be pushed, rather than a profound responsibility that requires ongoing, often uncomfortable, scrutiny.

The owners repeatedly pointed to a single metric as proof of the platform's success; in a client organisation of several thousand people, users averaged seven minutes a day on the platform. To the investors, this settled the question of value.

In reality, "time spent" is often a hollow metric in enterprise software. In most organisations, employees use what is rolled out to them. They don't have a choice. Seven minutes a day can reflect compliance or monitoring just as easily as it reflects genuine utility. It tells us the software is present; it says nothing about whether people are actually being supported, or if their work is improving.

When we prioritise distribution over development, we treat these shallow metrics as truth, ignoring the context of the human beings on the other side of the screen.

The platform followed a familiar HR-tech pattern. Employees were profiled, modules were assigned, and progress was tracked. On dashboards, individuals were rendered as composite scores – competency levels, job-fit indicators, and readiness markers.

What was missing, as is so often the case, was context.

A competency score implies that human capability can be captured in a single number. In truth, performance is a messy, unpredictable interplay of management quality, team dynamics, and personal circumstances. When these are flattened into "objective" metrics, the picture isn't just partial; it’s distorted.

When a system like this is pushed into a B2C context, the stakes escalate. The individual is no longer buffered by an internal HR department; they are left alone with the algorithm. The system’s "judgement" becomes a direct, unmediated influence on their career trajectory and their self-worth.

Underlying this case was a broader incentive. The owners were building with an "exit" in mind. Because they had purchased the technology rather than built it, there was little incentive to slow down and examine how the product would shape behaviour once embedded at scale.

I shared the reality of the situation: the foundations were weak, and scaling this into a B2C environment was an exercise in distributing human risk rather than human value.

The truth is uncomfortable. The appeal of AI-driven dashboards is often the distance they provide. They offer a buffer between decision-makers and the consequences of their decisions; responsibility is outsourced to the "objective" logic of the system. In this environment, a tool doesn't need to be accurate. It only needs to be plausible enough to stand in front of.

Choosing to walk away from that opportunity was a reminder of what product leadership actually entails. It is not just about driving adoption; it is also about deciding where a system’s authority should end.

If a product’s foundations are unclear and its metrics are misleading, scaling it is not growth. It is the industrialisation of human risk.

At Blue Banyan, we prefer to work with organisations prepared to think in long horizons – often founder-led or family-owned businesses that expect to live with the systems they build. They approach technology as a way to solve real problems, not as a shortcut to a valuation.

When technology outpaces responsibility, harm spreads at scale. When responsibility leads, technology has a chance to earn legitimacy. That difference is not abstract; it is felt every day by the people on the other side of the dashboard.
