A bank thinks in counterparties, exposures and books of business. A hospital thinks in patients, encounters and pathways. An insurer thinks in policies, claims and lives. The entities, the relationships, the regulatory constraints: all of them are sector-specific. Below is the map we use to position every conversation.
The seductive trap in our space is the universal data model: a “customer” entity that works for everyone. In reality it works for nobody. A retail bank customer has accounts, products, exposures and a credit decision. A pharmaceutical customer is a hospital, a clinician, a payer or a patient, depending on the moment. Forcing all of those into one model loses the meaning that makes the AI useful.
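The trap is easy to see in code. A minimal sketch, with purely illustrative field names (this is not a real client schema): the same word “customer” carries different structure in different sectors, and a union of the two types would be nulls and lost meaning.

```python
from dataclasses import dataclass

@dataclass
class RetailBankCustomer:
    """What 'customer' means to a retail bank."""
    customer_id: str
    accounts: list[str]       # account numbers
    products: list[str]       # product codes held
    exposure_gbp: float       # current credit exposure
    credit_decision: str      # e.g. "approved", "declined"

@dataclass
class PharmaStakeholder:
    """What 'customer' means to a pharmaceutical company."""
    stakeholder_id: str
    role: str                 # "hospital" | "clinician" | "payer" | "patient"
    # The same organisation or person can appear under several roles,
    # so the role belongs to the relationship, not to the entity.

# A universal "Customer" carrying the union of these fields would force
# nulls everywhere and erase exactly the meaning each model encodes.
```

The point of the sketch is the asymmetry: the bank's type is field-rich and the pharma type is role-rich, and no single generic entity preserves both.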
We start every engagement with the sector ontology the industry already lives by, then fit it to your specific business. The shape of the work is similar across sectors. The vocabulary is not.
Sectors with deep data culture, real budgets, and AI in production. Their problem is not adoption. It is governance, third-party risk, and the gap between scale and transformation.
Financial services: 75% of UK regulated firms now use AI. The hard problem is the third-party model risk graph: who built the model, what data trained it, what decisions it touches, who owns the residual risk.
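Those four questions are edges in a graph. A minimal sketch, with hypothetical node names and edge labels (not a real model inventory), of how the third-party model risk graph answers them in one lookup:

```python
# Hypothetical third-party model risk graph. Nodes are namespaced
# strings ("model:", "vendor:", "dataset:", "decision:", "role:");
# edges record provenance and accountability for each model.
risk_graph = {
    "model:credit_scorer_v3": {
        "built_by": "vendor:acme_analytics",
        "trained_on": ["dataset:bureau_2023", "dataset:internal_apps"],
        "touches": ["decision:retail_lending"],
        "risk_owner": "role:head_of_model_risk",
    },
}

def trace(model: str) -> dict:
    """Answer the regulator's four questions for one model."""
    return risk_graph[model]
```

The design choice worth noting: provenance is a property of the graph, not of any one system, so the answer survives the model, the vendor and the data each changing independently.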
Insurance: 95% adoption. The ontology problem is acute: a single life can be a policyholder, a claimant, a beneficiary and a contact, often spread across multiple legacy core systems.
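The fix is to model the life once and the roles many times. A minimal sketch, assuming hypothetical system names, of one person linked to four roles with their source-system provenance, instead of four duplicate records:

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    """One natural person, resolved across legacy core systems."""
    person_id: str
    name: str
    # (role, source_system) pairs, e.g. ("policyholder", "core_life")
    roles: list[tuple[str, str]] = field(default_factory=list)

p = Person("p-001", "A. Example")
p.roles += [
    ("policyholder", "core_life"),
    ("claimant", "claims_legacy"),
    ("beneficiary", "core_pensions"),
    ("contact", "crm"),
]

# One entity, four roles: deduplication happens at the person level,
# while each role keeps its provenance in the system that owns it.
```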
Pharmaceuticals and life sciences: GxP-validated AI requires GxP-validated data. The ontology must express the full clinical-to-commercial chain: trial, study, drug, indication, patient population, market.
Telecoms: decades of OSS and BSS that were never designed to be queried together. The semantic graph between subscriber, service, network element and bill is the new battleground.
Technology: ~78% adoption and the only sector approaching genuine transformation. The pain is sovereignty stack debates, GPU access, and the talent it takes to keep an ontology current at the speed of the model market.
Sectors where pilots have multiplied but the operating model has not caught up. Foundations work pays back fastest here.
Legal and professional services: hallucinations in front of clients, and the billable hour under structural pressure. The ontology of “matter”, “engagement” and “client team” needs to be modelled before the agents can be trusted.
Retail and consumer: personalisation versus GDPR. The shopper graph (basket, lifetime value, channel, segment) is the differentiator and the compliance risk in one breath.
Media and entertainment: copyright, training data IP, deepfakes. Every AI release decision needs an ontology of rights, talent and provenance that the lawyer can sign off on.
Manufacturing: OT and IT that have never integrated. The asset-product-process-quality graph is the foundation Industry 4.0 always needed and rarely built.
Transport and logistics: legacy TMS and WMS. The shipment, vehicle, driver, route ontology is mostly still in spreadsheets. eFTI compliance is forcing it to grow up.
Energy and utilities: OT and IT data locked in systems never built for extraction. NIS2. The grid asset, customer and demand graph is what AI-driven load forecasting actually needs.
Sectors where the foundation problem is most acute and the political will to fix it is highest. We do meaningful work here.
Government and public sector: legacy IT, procurement constraints, transparency demands. The citizen-service-entitlement graph is what the AI Opportunities Action Plan is really asking for.
Healthcare: EPR integration at 30%, and clinician trust is low. The patient-encounter-pathway-outcome ontology is what every clinical AI use case actually needs.
Construction and real estate: fragmented value chains. Building safety AI needs a property-element-defect-responsibility ontology before the model can do anything useful.
Education: assessment integrity. The learner-cohort-attainment-pathway graph is the core ontology, and almost no institution has built it.
Our maturity benchmark takes about three minutes. It estimates your stage on the Gartner / IDC / BCG maturity ladder and compares you with your sector peers.
Each industry has its own foundation problem. We will share what we are seeing in yours, anonymised.
Book a meeting →