

A few months after Dan Hirlea’s appointment as Avantia’s Head of AI, we sat down with him to talk about progress to date and priorities for the next phase, including claims transformation, secure internal tooling, and our participation in the FCA’s AI Live Testing programme.
You joined Avantia in December 2025. What drew you to the company, and how have your first few months gone?
“The mindset. Avantia treats AI as a strategic accelerator, not a threat. That lets us move quickly on two fronts: first, building internal capability – evolving our claims platform from deterministic workflows to agentic, end‑to‑end processes with human oversight; and second, adopting best‑in‑class external tools where they compound value. AI will be embedded as a core pillar this year, with a focus on measurable customer and commercial outcomes.”
How will Avantia aim to stay ahead of competitors?
“By pointing AI at the Profit and Loss (P&L). Claims is the growth engine – tightening indemnity control, improving fraud detection and enabling straight‑through processing where it’s safe. Internally, we’re enhancing ‘Alfred’, our internal generative model, with PII protection and role‑specific ‘personas’ so teams can securely ground the model in their work and data. And we’re pragmatically piloting external tools like Claude to quantify productivity gains, especially where deep Office integrations matter.”
There’s buzz around ChatGPT apps from comparison sites and direct insurers. What’s real and what’s hype?
“Early versions look more like meta‑search front ends than fully integrated quoting journeys. Deep, secure integration with pricing engines is a bigger build, and moving too fast risks exposing pricing logic to reverse‑engineering if guardrails are weak. We’re monitoring closely while investing where we can create secure, durable value.”
Avantia was recently announced as part of the FCA’s first AI Live Testing programme. How is it going so far?
“We’re the only insurer in the cohort, working alongside much larger FS firms. The FCA’s north star is customer outcomes – the same standards apply whether a decision is human or AI‑assisted. Practically, that means robust evaluation (not just averages): looking at distribution and stability, stress‑testing edge cases, and including vulnerable customers. You can grant a level of autonomy to AI where it’s demonstrably safe to do so, with periodic reviews and clear audit trails. But keeping a human in the loop is really important. I believe this trial gives us confidence and a compliance edge as we scale.”
Which AI uses are working today, and what’s on the roadmap?
“Our claims AI has proved it can speed decisions and improve quality; next is scaling agentic capabilities with enhanced controls to directly improve loss ratio and cycle time. We’re also running a Claude pilot with 15 internal ‘AI champions’ to measure time savings across documents, slides and spreadsheets, and to inform deeper integrations.”
In your opinion, which adjacent technologies will drive AI forward?
“Hardware matters. The field is still NVIDIA‑centric. I think progress in alternative accelerators will cut costs and speed training and inference. Model quality increasingly beats sheer size – strong instruction tuning, alignment and multi‑modal capability are what move the needle. Full alignment across modalities (text, vision, code, reasoning) is the big research frontier.”
Do you think that local models have a role in the stack?
“Local hosting gives control and clear data boundaries, but maintaining performance and freshness is hard. Retrieval-Augmented Generation (RAG) helps, yet leading cloud models still outperform on most high‑stakes tasks and are evolving fastest. For now, cloud models with strong retrieval and governance typically win on quality‑to‑effort.”
What do you think is the biggest misconception about AI?
“That it fixes everything. It’s best to use the simplest effective solution. My view is to reserve advanced AI for problems where it clearly outperforms alternatives and where we can measure impact. Manage the real risks through evaluation design, reproducibility and process documentation – making sure it’s fully aligned with FCA expectations.”
Are you concerned about the data big providers use to train models?
“State‑of‑the‑art models blend public web data with curated private sources, but the differentiator is increasingly instruction tuning and alignment. Smaller challengers with better tuning can outperform larger incumbents. Quality beats quantity. Some open families that scaled quickly without robust guardrails became easier to misuse; those that invested in quality alignment deliver safer, more relevant outputs.”
Do you have a bold AI prediction for the next few years?
“AI will be embedded across most tech‑enabled businesses. Watch Anthropic – they’re moving beyond chatbots to application‑integrated services, which is where real enterprise utility compounds.”
Anything else you would like to add?
“AI is only valuable if it drives better customer outcomes and stronger unit economics. I believe our edge at Avantia is disciplined execution: agentic claims with controls, specialised internal tooling like Alfred, pragmatic use of best‑in‑class providers, and building with the regulator. That’s how we turn AI from buzz into durable advantage.”
