01 / The shape of it

A risk-based framework.

The EU AI Act classifies AI systems into four risk tiers: prohibited, high-risk, limited risk, and minimal risk. The compliance burden for enterprises concentrates in the high-risk and limited tiers. The obligations scale with the tier. The fines scale with the obligations.

Prohibited (already in force)

Some uses of AI are banned outright. Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions). Social scoring. Emotion recognition in workplaces and schools. AI that exploits the vulnerabilities of specific groups.

High-risk (the August 2026 deadline)

This is where most of the work lives. Annex III lists the application areas: biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration and border control, and the administration of justice. If your AI sits in one of these, you are in scope.

Limited risk

Systems that interact with humans (chatbots), generate or manipulate content (deepfakes), or use emotion recognition in non-prohibited contexts. The obligations are mostly transparency: tell the user they are interacting with AI, label generated content.

Minimal risk

Everything else. Most enterprise AI: spam filters, recommendation systems, games. No specific obligations beyond existing law.

↳ The Act, as a graph
[Diagram: the four risk tiers (prohibited, high-risk, limited, minimal) linked to Article nodes: Art. 11 tech docs, Art. 12 logging, Art. 13 transparency, Art. 14 oversight.]
Each Article maps to a node in your governance graph. This is not a metaphor. The regulator wants a graph.
02 / The deadlines

A phased rollout.

The rollout is staged. Prohibitions and AI-literacy duties have applied since 2 February 2025. General-purpose AI obligations since 2 August 2025. The bulk of the high-risk regime applies from 2 August 2026, and high-risk AI embedded in regulated products from 2 August 2027. The Digital Omnibus simplification proposal may shift parts further into 2027 and 2028. The planning horizon does not change: building a model registry and an Article 11 documentation pack takes 9 to 18 months. If you have not started, you are already behind.

03 / The thesis

The Act is asking for a knowledge graph.

Read Articles 11 to 15 in sequence and a pattern emerges. Article 11 wants technical documentation per system. Article 12 wants traceable logs of every event. Article 13 wants transparency information per system. Article 14 wants documented human oversight. Article 15 wants robustness and accuracy evidence.

What is being asked for is a typed, queryable, machine-readable inventory of every AI system in your organisation, with relationships to its inputs, decisions, owners, testing artefacts, and risk classification. That is the definition of a knowledge graph.
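In Cypher terms, a minimal sketch of that shape might look like this (the labels, property names, and relationship types are illustrative choices, not anything the Act prescribes):

    // One node per AI system, typed by risk tier, linked to owner and artefacts.
    CREATE (m:AISystem {name: 'loan-scoring-v3', riskTier: 'high'})
    CREATE (o:Owner {role: 'Head of Credit Risk'})
    CREATE (d:Artefact {type: 'Art11TechDoc', version: '1.2'})
    CREATE (m)-[:OWNED_BY]->(o)
    CREATE (m)-[:DOCUMENTED_BY]->(d)

Every query in the steps below runs against this shape.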

You can produce these artefacts as separate Word files and PDFs and call it compliance. Many will. They will struggle through every audit. Or you can build the graph once and generate every artefact from it. The work is similar. The repeatability is night and day.

04 / The penalties

What enforcement actually looks like.

Three tiers of fines:

Up to €35 million or 7% of worldwide annual turnover, whichever is higher, for prohibited practices.
Up to €15 million or 3% for non-compliance with most other obligations, including the high-risk requirements.
Up to €7.5 million or 1% for supplying incorrect or misleading information to regulators.

For most enterprise clients the financial exposure is significant but not catastrophic: a firm with €2 billion in turnover faces a cap of €60 million on a high-risk breach. The reputational exposure is the real driver. Being named in a public enforcement action is the worst outcome.

05 / What to actually do

Six steps. In this order.

STEP 01

Inventory every AI system as graph nodes

You cannot govern what you cannot see. Build the model registry as the seed of your governance graph. Most enterprises discover 2 to 3 times more AI in production than they thought.
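A sketch of the registry seed, assuming a Neo4j-style store (the system names and properties here are invented for illustration):

    // Idempotent load: MERGE creates the node once, re-runs only update it.
    MERGE (m:AISystem {name: 'cv-screening-v2'})
    SET m.vendor = 'internal',
        m.businessUnit = 'HR',
        m.deployedSince = date('2024-03-01')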

STEP 02

Classify by risk tier as a typed property

For each system node, attach a typed risk classification per Annex III. The graph engine enforces consistency. Get legal sign-off, link the legal opinion to the node.
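One way to make the tier a typed, enforced property rather than a spreadsheet column (the constraint syntax is Neo4j 5 Cypher; property existence constraints need the Enterprise edition):

    // Every AISystem node must carry a risk tier.
    CREATE CONSTRAINT risk_tier_required IF NOT EXISTS
    FOR (m:AISystem) REQUIRE m.riskTier IS NOT NULL;

    // Classify a system and link the legal sign-off to it.
    MATCH (m:AISystem {name: 'cv-screening-v2'})
    SET m.riskTier = 'high', m.annexCategory = 'employment'
    CREATE (lo:LegalOpinion {ref: 'LO-2025-014', signedOff: date('2025-06-12')})
    CREATE (m)-[:CLASSIFIED_PER]->(lo);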

STEP 03

Gap analysis as graph traversal

For every high-risk node, traverse to its required Article-aligned artefacts. Missing edges become your remediation backlog. The query is ten lines of SPARQL or Cypher.
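Here is what those ten lines can look like in Cypher, against the illustrative schema sketched earlier:

    // For each high-risk system, list the Article artefacts it is missing.
    MATCH (m:AISystem {riskTier: 'high'})
    UNWIND ['Art11TechDoc', 'Art12Logging', 'Art13Transparency',
            'Art14Oversight', 'Art15Robustness'] AS required
    OPTIONAL MATCH (m)-[:DOCUMENTED_BY]->(a:Artefact {type: required})
    WITH m, required, a
    WHERE a IS NULL
    RETURN m.name AS system, collect(required) AS missingArtefacts

The result set is the remediation backlog, ready to drop into a tracker.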

STEP 04

Build artefacts as nodes, not files

Article 11 docs, Article 13 transparency, Article 14 oversight design. Each is a node linked to the model node. The PDF is generated from the graph, never edited separately.
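A sketch of an artefact node, again with invented property names; the point is that the document lives in the graph and the PDF is a rendering of it:

    // The transparency pack is a versioned node, not a file on a share drive.
    MATCH (m:AISystem {name: 'loan-scoring-v3'})
    CREATE (a:Artefact {
      type: 'Art13Transparency',
      version: '2.0',
      generatedAt: datetime()
    })
    CREATE (m)-[:DOCUMENTED_BY]->(a)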

STEP 05

Instrument logging into the graph

Article 12 traceability. Every input, every inference, every override. Logged as triples linked to the model node. Retained, exportable, queryable.
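A per-event write might look like this (in production you would batch these from the logging pipeline rather than open a transaction per inference; the field names are illustrative):

    // One event node per inference, linked to the model that produced it.
    MATCH (m:AISystem {name: 'loan-scoring-v3'})
    CREATE (e:Event {
      kind: 'inference',
      at: datetime(),
      inputHash: 'sha256:0f3a...',   // reference to the input, not the input itself
      outcome: 'declined'
    })
    CREATE (m)-[:LOGGED]->(e)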

STEP 06

Quality management ontology

Article 17 QMS. The governance ontology that ties owners, reviewers, controls and incidents to model nodes. Owned at C-level. Reviewed quarterly through a graph-driven dashboard.
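The quarterly dashboard is then a query, not a slide deck. A sketch, under the same illustrative schema:

    // High-risk systems ranked by open incidents, with last review date.
    MATCH (m:AISystem {riskTier: 'high'})
    OPTIONAL MATCH (m)-[:HAS_INCIDENT]->(i:Incident {status: 'open'})
    OPTIONAL MATCH (m)-[:REVIEWED_IN]->(r:Review)
    WITH m, count(DISTINCT i) AS openIncidents, max(r.date) AS lastReview
    RETURN m.name AS system, openIncidents, lastReview
    ORDER BY openIncidents DESC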

2 August 2026 is the deadline

Want a readiness review?

We will look at your high-risk inventory, run the gap analysis as a graph traversal, and tell you honestly what needs to happen between now and August 2026.

Book a meeting →