Twelve months ago, the conversation in most boardrooms was about getting AI into the business. Today, the better question is whether anyone still knows what the business has got.
Copilot licences have been rolled out, developers lean on tools like ChatGPT and Claude daily to generate code and move faster, marketing departments all over the world are using AI to draft content, and finance teams are busy experimenting with automated analysis - all within a matter of months, and most of the output looks right.
The problem is that looking right and being right are not the same thing, and the gap between the two is widening faster than most organisations realise.
We’ve started to call this phenomenon AI Haze - the gradual loss of visibility that comes when AI-generated work builds up across an organisation without proper governance, understanding or ownership. It doesn't arrive as a crisis and there's no single failure that triggers an alert. It settles in slowly, one piece of ungoverned output at a time, until the people responsible for running the business can no longer see clearly through what they've built.
What AI Haze actually looks like
The difficulty with AI Haze is that it rarely presents itself as a technology problem. It shows up in the places where people make decisions.
A finance team runs its monthly board report using AI-assisted analysis. The numbers look plausible and the formatting is clean, but the person presenting it didn't design the underlying logic and doesn't fully understand the assumptions behind the figures. Nobody asks, because the output looks professional. Three months later, a variance is queried and nobody can trace how the calculation was derived. The institutional knowledge - the blend of hard-won experience and human intuition - that would have caught the error was never involved in the first place.
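To make that concrete, here is a minimal, hypothetical sketch in Python - the figures and the growth assumption are invented for illustration, not taken from any real report - showing the difference between a calculation nobody can trace and one whose assumptions are written down:

    # Hypothetical sketch: the figures and the 5% growth rate are invented.
    actuals = 1_200_000  # last year's revenue, from the system of record

    # Untraceable version: the assumption is buried in a magic number.
    forecast = actuals * 1.05

    # Traceable version: the assumption is named, sourced and reviewable.
    ASSUMED_ANNUAL_GROWTH = 0.05  # assumption: growth rate agreed in the board pack
    forecast = actuals * (1 + ASSUMED_ANNUAL_GROWTH)

    print(f"Forecast: {forecast:,.0f}")

When the variance is queried, the second version tells the reviewer exactly which assumption to challenge; the first tells them nothing.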
It's not unique to finance; that's merely an example. The same pattern plays out in development teams releasing code nobody fully understands, in operations teams running automated workflows nobody has documented - in fact, in any part of the business where AI-generated output moves faster than the understanding behind it.
These aren't edge cases anymore - they're fast becoming the normal pattern of work in organisations that assumed governance would catch up with adoption. It rarely does.
The governance gap
Most organisations have governance frameworks for their core systems - such as change control for ERP platforms, approval workflows for financial reporting or security policies for data access. These exist because businesses learned the hard way that ungoverned technology creates risk.
What's uncomfortable to admit is that many of those frameworks were already showing cracks before AI arrived. The undocumented process, the spreadsheet nobody can fully explain - these aren't AI problems, they're organisational ones that AI has simply made harder to ignore.
What AI has done is massively increase the volume of output passing through whatever governance culture already existed, and critically, it's done so in a way that feels like progress rather than risk. Shipping fast with AI has become something people take pride in. The pace becomes the story, and when pace is what gets celebrated internally, the scrutiny that should accompany it tends to get quietly dropped - not out of negligence, but because slowing down to ask "is this actually right?" starts to feel like the wrong instinct when everyone else is moving.
For organisations that already had strong foundations this hasn't been much of an issue; the existing discipline absorbed the new pace. For those where the foundations were always thinner than they looked, the problem is more fundamental. Governance gaps that previously just added some friction to delivery have become the dominant factor in how long anything actually takes - because the AI can generate far faster than a poorly governed organisation can validate, review or take ownership of what it produces.
Understanding versus output
The concept that sits at the heart of this issue is the difference between output and understanding. AI is extraordinarily good at producing output. It can generate a plausible first draft of almost anything - code, analysis, correspondence, strategy documents, process designs. The quality of that output has improved rapidly and will continue to do so.
What AI cannot do is understand the context in which that output will be used. It doesn't know why a particular business rule exists, what happened last time the process was changed, which stakeholders need to be consulted or what the commercial consequences of a wrong assumption look like in practice. That understanding must sit with people, and it is the crucial ingredient that makes the output valuable rather than merely plausible.
The organisations getting this right are the ones treating AI as a tool that makes knowledgeable people faster. A developer who understands the system architecture can use code generation to accelerate a build and still maintain control of the design. Similarly, a finance professional who knows the reporting framework can use AI to speed up analysis without losing sight of the assumptions. The expertise comes first.
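As a hypothetical illustration - the function, figures and the 50% cap below are invented, not drawn from any real codebase - "maintaining control of the design" can be as simple as the developer recording the constraint the generated code has to satisfy and proving it still holds:

    # Hypothetical sketch: a developer accepts AI-generated code but keeps
    # ownership by writing down the design constraint and testing it.

    def apply_discount(price: float, rate: float) -> float:
        """AI-generated helper. Design constraint (a human decision, assumed
        here for illustration): discounts are capped at 50% by commercial policy."""
        rate = min(rate, 0.5)  # the cap is context the AI could not have known
        return round(price * (1 - rate), 2)

    # The tests encode the understanding, so a reviewer can challenge it.
    assert apply_discount(100.0, 0.9) == 50.0  # cap applies
    assert apply_discount(100.0, 0.2) == 80.0

The generated code may well have been correct without the cap; the point is that the constraint, and the test that enforces it, belong to the developer.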
The organisations heading into trouble are the ones where AI has started to replace understanding rather than support it - where the gap between what the business thinks it has its arms around and what it’s actually in control of grows quietly wider.
What healthy AI adoption looks like
The good news is that none of this requires slowing down the pace of change, but it does require an organisation to self-reflect - to be honest about where responsibility for governance and control over output actually sits.
Using AI to write code, analyse data, draft content or design a process is entirely reasonable - the tools are already very good and only getting better - but the person who uses them is still the author. AI assistance doesn't transfer ownership any more than using a calculator makes an accountant less responsible for the numbers. If the output is wrong, incomplete or built on a flawed assumption, that sits with the person who put their name to it - or should have.
This matters because the current culture around AI tends to blur that line. The assumption, rarely stated but often present, is that if the AI produced it then the usual burden of understanding doesn't apply.
It does.
The developer who ships AI-generated code owns that code. The finance professional who presents AI-assisted analysis owns the numbers. The manager who approves an AI-drafted process is accountable for that process. The tool changes how quickly something gets produced - it must not change who is responsible for it.
Practically speaking, this means that AI-generated work should go through exactly the same review and validation as anything else - not as a bureaucratic exercise, but because the person signing off needs to genuinely understand what they're signing off on. If they can't explain the assumptions or context behind the output, challenge the results or maintain it going forward, it isn't ready. That's true regardless of how polished it looks.
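One hypothetical, concrete shape this can take - the checks, figures and assumption names below are invented for illustration - is that before AI-assisted analysis is signed off, the owner runs the same reconciliation they would apply to any other report:

    # Hypothetical sketch: validating AI-assisted output like any other work.
    source_total = 4_750_000  # pulled from the system of record
    report_total = 4_750_000  # the figure in the AI-assisted report

    # Check 1: the report reconciles to the source system.
    assert report_total == source_total, "Report does not reconcile to source"

    # Check 2: every assumption in the report is documented and owned.
    assumptions = {
        "growth_rate": "owner: FP&A, source: board pack",
        "fx_rate": "owner: Treasury, source: month-end close",
    }
    undocumented = [name for name, owner in assumptions.items() if not owner]
    assert not undocumented, f"Undocumented assumptions: {undocumented}"

Nothing about these checks is specific to AI - which is exactly the point. The output earns sign-off the same way any other work does.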
The businesses that get this right won't necessarily be the ones that adopted AI earliest or used it most. They'll be the ones where people understood that the technology was working for them, not instead of them.
Keeping the skies clear
Nobody is arguing against AI adoption - quite the opposite. The tools are genuinely impressive and the productivity gains are real - the equivalent of handing a skilled builder a nail gun when they've spent years working with a hammer. The work gets done faster, to a higher standard, with less effort. That's unambiguously a good thing.
But the nail gun doesn't know what it's building - the builder does. They understand the structure, the load, what the finished thing needs to do and why certain decisions were made along the way. That understanding is what makes the tool useful rather than just fast - and it's what ensures that when something needs changing six months down the line, someone actually knows where to start and understands why it was built that way in the first place.
AI is no different. Used well, by people who understand the work they're applying it to, it's a genuine force multiplier. The risk isn't in using it - it's in allowing the pace it enables to gradually hollow out the understanding that makes it valuable in the first place. When that happens, you don't have a faster organisation, you just have a faster way of producing things nobody fully owns.
The haze clears when people hold onto their craft - when the developer who uses code generation still understands the architecture, when the analyst who uses AI still owns the numbers and when the manager who approves AI-assisted work can actually stand behind the policies they used AI to create. The tools are there to be used - we all just need to make sure the understanding comes with them.