Why AI adoption fails when people, habits, and confidence are ignored

The tools have arrived. The value hasn’t. Most leaders know it, but few are calling it what it is.

Every leader is asking the same question

If you run a regulated SaaS business right now, you have probably been asking yourself the same question your peers are asking: where is AI actually creating value in our business, and where is it creating risk?

The licences are live and the training videos exist. People nod in the right meetings. But value isn't showing up in the numbers, and risk isn't being measured.

The data agrees. Deloitte's 2026 Global Human Capital Trends survey found that 42% of workers report their organisation rarely evaluates the impact of AI on people, 34% of organisations recognise their own culture as a direct inhibitor to AI transformation, and only 5% are making great progress on managing how AI affects culture, trust, and collaboration.

The EU AI Act becomes fully applicable on 2 August 2026. Boards, regulators, and customers are about to ask even harder questions than the ones leaders have been asking themselves. The gap between AI being deployed and AI being adopted is becoming a commercial and a regulatory issue.

Why I keep seeing the same mistake in regulated SaaS

I work on the people side of AI adoption. Fifteen years across retail transformation, fintech innovation, and partnering with leaders responsible for 65,000-person workforces. The mistake I see most often in regulated SaaS today is consistent.

Most rollouts are tech-focused, not human-centric. Deloitte's research also found that 59% of organisations are taking a tech-focused approach to AI, while human-centric organisations are 1.6 times more likely to outperform their AI ROI expectations. Layering AI on top of existing processes without redesigning how humans and machines actually work together is the most common, and most expensive, error executive teams are making right now.

The Deloitte 2026 report names this directly: cultural debt. It is the accumulating cost of failing to address questions like "is it cheating to use AI for this?", "what counts as effort now?", "who is to blame when AI is wrong?", and "if AI does my work, am I next?" Workers answer these questions on their own when leaders don't, and the answers are rarely the ones that build trust.

The numbers back it up. 80% of leaders, managers, and workers are concerned their colleagues are using AI to appear more productive than they are. 65% believe their culture needs to change significantly because of AI. Trust is eroding in both directions.

This is a culture problem, not a technology problem. In regulated SaaS, culture is now at risk of becoming the bottleneck.

What it actually takes

Most rollouts try to fix this with another tool, another training course, another framework. None of those are the answer.

What it takes is a proper diagnosis first: where is AI creating value in your business, where is it creating risk, and which initiatives should be scaled, fixed, or stopped? Those questions cannot be answered from a slide deck. They get answered with evidence and conversation.

Then a build phase that designs human and AI interactions deliberately, instead of bolting AI onto old workflows. Deloitte’s research shows organisations that prioritise this work redesign are twice as likely to exceed their expected AI ROI. A European telco that added an AI assistant without redesigning roles got a 5% productivity lift. The same company, redesigning how humans and AI worked together, got 30%.

Then an operationalise phase that embeds the behaviours, tracks the value, and keeps going until the new ways of working become routine. Save the Children doubled weekly AI usage from 36% to 71% by investing in training, leadership engagement, and an ambassador network, not by buying another tool.

This is full-service AI capability building. People, technology, governance, and commercial outcomes designed as one connected effort, not five workstreams that never meet.

Start with knowing

Before you fix anything, you need to know what is actually happening across your AI estate.

The AI Risk & Value Scorecard does that. It is a structured, evidence-based assessment of how AI is performing across your business right now, covering eight core areas: governance and risk; strategy and prioritisation; commercial alignment and value design; technology and data; culture and capability; execution and delivery; adoption and integration; and measurement and value realisation.

Inside those sit 64 sub-areas, each scored 0-4 against evidence, not opinion. Not what your CTO thinks is happening, but what is actually going on.

You walk away with three things: a breakdown of your AI Value Readiness Score, with strengths, gaps, and risk exposure visible across all eight areas; bespoke recommendations with critical next steps; and a 100-day value realisation roadmap that turns the diagnosis into action.

If you cannot honestly answer the question “where is AI creating value, and where is it creating risk,” that is the place to begin. The answer changes every conversation that comes after it.

Read more about our AI Risk & Value Scorecard.
