I've spent most of my career in and around digital delivery. Different organisations, different sectors, different technology stacks. Across all of it, I've had a version of the same conversation more times than I care to admit.
A client comes in with a clear ask - build us a Power App, automate this process, give us a dashboard. The brief is confident, the budget is approved, and everyone's ready to go.
Then, somewhere between kickoff and go-live, it falls apart.
Not the technology - the technology is usually fine. What falls apart is that the people who were supposed to use the thing don't, or won't, or can't figure out why they'd want to.
When you trace it back, you almost always end up at the same place: a conversation that should have happened weeks earlier and didn't.
These patterns are part of why I joined Circyl two months ago. I wanted to work somewhere that was trying to solve this properly, rather than just accepting it as the cost of doing business. This article is partly what I've learned, and partly the journey we're on.
Technology-Led Sales
Sometimes it happens before a project even starts. A customer asks for a Power App, Sales writes a proposal for a Power App, everyone signs off, and three months later the Power App exists but the problem doesn't feel especially solved.
Sometimes the technology requested just isn't the best fit. Sometimes it's right for a slightly different problem than the one the client has - close enough that nobody flags it, far enough that it matters. Because the presales conversation was about what to build rather than what to achieve, nobody caught it until it was too late to change without an awkward conversation about scope.
It's an easy trap when you're trying to be responsive and get things moving. "What are you trying to achieve?" has to come before "what should we build?" every single time. When it doesn't, you're optimising a solution to a problem you haven't fully understood yet.
Technology-Led Discovery
Even when we do get that sequencing right, there's another failure mode: asking the right questions, only of the technical stakeholders.
Technical discovery is necessary - you need to know where the data lives, how the systems connect, what the constraints are. A purely technical conversation, though, shifts the frame in ways that are easy to miss until it's too late.
"Where does this data currently sit?" is a very different question from "what decision is a user expected to make when they look at this report?"
Both matter, but we've historically been much better at the first one.
I've watched technically excellent work - well-architected, genuinely impressive output - land badly because that second question never got asked properly. Reports that were complex in ways that didn't help anyone decide anything, and simple in ways that meant they missed what the user actually needed. The system worked perfectly and nobody used it.
Isolating End Users
A few years ago I worked with a client who refused to involve their end users in discovery. Their logic wasn't unreasonable - the users were known to be sceptical, and the fear was that surfacing concerns too early would kill the project before it had a chance to prove itself. The thinking was to build something first, get momentum, then bring people in.
So we built it without them.
When the users were eventually looped in - at acceptance testing, with the system essentially finished - the feedback was brutal. Not because the technology had failed, but because the complexity of their actual day-to-day roles had never made it into the brief. Edge cases that would have been obvious in a single workshop, workflows that didn't match how decisions were actually made on the ground, constraints that lived entirely in people's heads - all of it was missing, and all of it had a massive impact on the usability of the solution.
The project never recovered - what should have been a cheap, awkward discussion at the start became a very expensive lesson at the end.
I still think about that one a lot.
When the Problem Isn't Technical at All
This is the hardest one to raise and probably the most important.
Sometimes a client comes to us with a technology problem that is actually a process problem dressed in a technology costume. The current way of working is broken, and the hope is that new technology will fix it. Occasionally that's true, but putting new technology on top of a broken process doesn't fix the process - it just makes the broken thing faster, more expensive, and harder to unwind.
We've learned to have this conversation early, even when it's uncomfortable. Not to turn down work - the technology project often still happens, just in the right order - but because we've seen too many systems end up on a shelf.
A client whose underlying problem didn't get solved doesn't come back. More than that, it's just not a good outcome for anyone. At Circyl, if the core issue is process, we'll say so. It's not always a popular message.
Why AI Makes All of This More Urgent, Not Less
AI has made building cheaper and faster than it's ever been. Development costs are falling, speed is increasing, and what took months can now take just weeks. What took weeks can take days. That's genuinely exciting.
It changes the shape of the risk, though. When building was slow and expensive, there were natural points of friction that forced alignment conversations. Budgets made people pause. Long timelines created pressure to get the brief right before committing. The inefficiency was frustrating, but it also caught things.
When building is cheap and fast, those friction points disappear. You can go further in the wrong direction, faster, with less resistance before anyone realises there's a problem. AI doesn't know whether the brief was right - it will build whatever you point it at, brilliantly and at speed, without stopping to qualify. If you point it at the wrong problem, you get a brilliant solution to the wrong problem, delivered quickly, at scale.
The constraint has shifted. It's no longer how fast you can build - it's how clearly you can define what to build.
Staying Human
None of this is an argument against AI - quite the opposite, really.
The more capable the technology gets, the more valuable the human work upstream becomes. Understanding the real problem; talking to the people who'll actually live with the solution; asking what decision needs to be made, not just where the data lives; being honest when the issue is process, not technology. Making sure the brief reflects reality, rather than what someone hoped reality would be, is now fundamental to the success of the build.
It's not glamorous work, but it's the work that determines whether everything after it lands or doesn't.
In a world where the machines keep getting better, the most valuable thing we can offer is clarity about what to point them at. That's the human work, and I think it's the most important thing our industry needs to get serious about right now.
If any of this sounds familiar - or if you've got your own version of the UAT ambush story - I'd genuinely like to hear it.