Decision accountability for AI use

AI decisions you can stand behind

I help organisations understand whether the way they are using AI would actually hold up if they had to explain it — to clients, regulators, or internal leadership.

Kim Stevens, Accountable AI
The problem

Most organisations aren't struggling to use AI.
They're struggling to explain it.

When something goes wrong, the questions come quickly — and most teams aren't ready for them.

Who checked this output? What data was involved? What would happen if it was wrong — and who is responsible?

These aren't hypothetical questions. They are the questions clients, regulators, and leadership ask when something doesn't go as expected.

"Human-in-the-loop" sounds like a safeguard.

In practice, it often means a quick glance and a click. That's not oversight — it's exposure you haven't yet recognised.

The gap between using AI and being able to defend those decisions is where the real risk sits.

Primary Service

AI Defensibility Review

A focused working session for organisations using AI in real decisions. Not a full audit. Not a policy exercise. A practical review of whether your AI use can be explained, justified, and defended when it matters.

We look at 2–3 live use cases from your organisation and assess:

  • what the AI is actually doing,
  • what decision or action it influences,
  • who owns the outcome,
  • what happens if it is wrong,
  • and what review or approval happens before action is taken.

Most AI governance talks about frameworks. We focus on the reality of day-to-day use.

The result is a clear view of where your AI use is defensible, where it is weak, and what needs to change.

Who it's for

This is for teams that are already using AI, or about to, in workflows where accuracy, accountability, and explainability matter. It is especially relevant for organisations that need to answer questions from clients, leadership, auditors, or regulators about how AI-supported decisions are made.

Why this matters

AI is no longer just a technology issue. It is becoming an operational and governance issue, especially where AI influences decisions that affect clients, money, compliance, or reputation.

What you get

A clear view of where your AI use is defensible — and what needs to change.

1. A review of 2–3 real AI use cases.
2. A clear view of decision points, ownership, and review steps.
3. Identification of where defensibility is strong and where it breaks down.
4. Practical recommendations for strengthening control, oversight, and accountability.

No long report. No framework overload. A clear, honest view of your exposure — and the practical steps to address it.

Introductory pricing: from £350
Arrange a Defensibility Review
The method

The Five-Question Defensibility Test

We use a simple five-question test to assess whether your AI use would stand up under scrutiny. This is not a policy review or a generic governance workshop. It is a practical way to examine real AI use cases and understand where decision-making is clear, where accountability sits, and where the process may break down.

  • Clarity: What is the AI actually doing?
  • Impact: What decision or action follows from the output?
  • Ownership: Who is accountable?
  • Risk: What happens if it is wrong?
  • Review: How is it checked before action is taken?

The result is a clearer view of what is defensible, what is exposed, and what needs to change first.

Kim Stevens

My background is in regulated environments — clinical research, data governance, and the management of complex technical projects where accountability is not optional.

That background shapes how I approach AI. I do not see it as a technology problem first, but as a decision and accountability problem. Who owns what. What happens when something is wrong. Whether the use of AI in a process would actually hold up if examined.

I have also worked directly with AI tools in professional settings — including supporting the evaluation and rollout of Microsoft Copilot — and seen firsthand how the gap between everyday use and genuine oversight develops.

Successful AI adoption is rarely just about the technology. It is about understanding workflows, ownership, data, and the expectations around how tools should be used.

That is what the Defensibility Review is built to address.

Ideal for

UK-based organisations in regulated or higher-stakes environments — biotech, healthcare, finance, professional services.
Teams using Copilot, ChatGPT, or internal AI tools in everyday work who have not yet mapped the decisions those tools are influencing.
Leaders who want clarity on how AI is being used — not just reassurance that it is.
Get started

Ready to understand your exposure?

A Defensibility Review takes 90 minutes and gives you a clear view of where your AI use would and would not hold up. Get in touch to arrange one.

Prefer to message first? WhatsApp  ·  Get in touch
Privacy

Simple, transparent handling of your information

I only collect the information you choose to share when you get in touch. Your details are used solely to respond to your enquiry and are never shared with third parties.

I do not use your information for marketing unless you explicitly ask me to stay in touch. If you have any questions about how your information is handled, feel free to ask.