I help organisations understand whether the way they are using AI would actually hold up if they had to explain it — to clients, regulators, or internal leadership.
When something goes wrong, the questions come quickly — and most teams aren't ready for them.
Who checked this output? What data was involved? What would happen if it were wrong — and who is responsible?
These aren't hypothetical questions. They are the questions clients, regulators, and leadership ask when something doesn't go as expected.
"Human-in-the-loop" sounds like a safeguard.
In practice, it often means a quick glance and a click. That's not oversight — it's exposure you haven't yet recognised.
The gap between using AI and being able to defend those decisions is where the real risk sits.
A focused working session for organisations using AI in real decisions. Not a full audit. Not a policy exercise. A practical review of whether your AI use can be explained, justified, and defended when it matters.
We look at 2–3 live use cases from your organisation and assess how each one would hold up under scrutiny.
Most discussion of AI governance centres on frameworks. We focus on the reality of day-to-day use.
The result is a clear view of where your AI use is defensible, where it is weak, and what needs to change.
Who it's for
This is for teams that are already using AI, or about to, in workflows where accuracy, accountability, and explainability matter. It is especially relevant for organisations that need to answer questions from clients, leadership, auditors, or regulators about how AI-supported decisions are made.
Why this matters
AI is no longer just a technology issue. It is becoming an operational and governance issue, especially where AI influences decisions that affect clients, money, compliance, or reputation.
No long report. No framework overload. A clear, honest view of your exposure — and the practical steps to address it.
We use a simple five-question test to assess whether your AI use would stand up under scrutiny. This is not a policy review or a generic governance workshop. It is a practical way to examine real AI use cases and understand where decision-making is clear, where accountability sits, and where the process may break down.
What is the AI actually doing?
What decision or action follows from the output?
Who is accountable?
What happens if it is wrong?
How is it checked before action is taken?
The result is a clearer view of what is defensible, what is exposed, and what needs to change first.
My background is in regulated environments — clinical research, data governance, and the management of complex technical projects where accountability is not optional.
That background shapes how I approach AI. I do not see it as a technology problem first, but as a decision and accountability problem. Who owns what. What happens when something is wrong. Whether the use of AI in a process would actually hold up if examined.
I have also worked directly with AI tools in professional settings — including supporting the evaluation and rollout of Microsoft Copilot — and seen firsthand how the gap between everyday use and genuine oversight develops.
Successful AI adoption is rarely just about the technology. It is about understanding workflows, ownership, data, and the expectations around how tools should be used.
That is what the Defensibility Review is built to address.
"I help organisations understand whether the way they are using AI would actually hold up if they had to explain it."
A Defensibility Review takes 90 minutes and gives you a clear view of where your AI use would and would not hold up. Get in touch to arrange one.
I only collect the information you choose to share when you get in touch. Your details are used solely to respond to your enquiry and are never shared with third parties.
I do not use your information for marketing unless you explicitly ask me to stay in touch. If you have any questions about how your information is handled, feel free to ask.