Governance — how our AI actually works

Governed AI is not a marketing term. It is a ledger, a preview/apply/rollback workflow, and a refusal-to-fabricate policy enforced in code.

This page exists because a compliance or legal screener should be able to read one document, ask their questions, and walk away with answers. Every claim below maps to shipped behaviour in the platform — verifiable in a 30-minute review with the operator who wrote it.

1. Pre-flight credit metering

Every AI call runs through a workspace balance check before it executes. If the workspace balance is below the operation's pre-flight floor, the call is refused — no partial generation, no soft failure that bills you for a half-broken draft. The check happens at the API route level, not in the UI; you cannot work around it from a browser.
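A minimal sketch of what that gate looks like. All names, operations, and floor values here are illustrative assumptions, not the platform's actual API; the point is that an unknown or underfunded call is refused before any generation starts.

```typescript
// Sketch of a pre-flight credit gate. Balances are integers in EUR
// millicents so arithmetic stays exact. Names and values are assumptions.

type PreflightResult =
  | { ok: true }
  | { ok: false; reason: "insufficient_balance" };

// Each operation declares a floor: the minimum balance required
// before the call is allowed to start. Illustrative values.
const PREFLIGHT_FLOORS_MILLICENTS: Record<string, number> = {
  "seo.paraphrase": 500,
  "seo.meta_refresh": 200,
};

function preflightCheck(operation: string, balanceMillicents: number): PreflightResult {
  // Unknown operations get an infinite floor, so they are always refused.
  const floor = PREFLIGHT_FLOORS_MILLICENTS[operation] ?? Infinity;
  if (balanceMillicents < floor) {
    return { ok: false, reason: "insufficient_balance" };
  }
  return { ok: true };
}
```

Because the gate runs server-side at the route boundary, a client that skips the UI still hits the same refusal.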

2. Append-only audit ledger in millicents EUR

Every metered AI call writes a row to a per-workspace ledger: timestamp, operation, reason code, real provider price + platform fee, balance before, balance after. The ledger is append-only — entries are never edited or deleted. Admins see the full history. For an enterprise-embedded engagement, this is the document the finance team gets when they ask 'what did we spend on AI last quarter?' — not an estimate, the actual transactional record.
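To make the ledger's shape concrete, here is a hedged sketch of one row and an append helper. Field names are assumptions; the structural point from above is real: each row carries both the before and after balance, so the history audits itself.

```typescript
// Sketch of an append-only ledger row. Amounts are integers in EUR
// millicents. Field names are illustrative, not the platform's schema.

interface LedgerEntry {
  at: string;                       // ISO timestamp
  operation: string;
  reasonCode: string;
  providerCostMillicents: number;   // real provider price
  platformFeeMillicents: number;    // platform fee
  balanceBeforeMillicents: number;
  balanceAfterMillicents: number;
}

// Append-only: we never mutate existing rows, we only return a longer
// history. The after-balance is derived, never hand-entered.
function appendEntry(
  ledger: readonly LedgerEntry[],
  e: Omit<LedgerEntry, "balanceAfterMillicents">,
): LedgerEntry[] {
  const total = e.providerCostMillicents + e.platformFeeMillicents;
  const entry: LedgerEntry = {
    ...e,
    balanceAfterMillicents: e.balanceBeforeMillicents - total,
  };
  return [...ledger, entry];
}
```

Answering "what did we spend last quarter?" is then a sum over rows in a date range, not an estimate.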

3. Preview · apply · rollback on every AI edit

The Blog SEO Enhancement orchestrator stages every proposed change. Each proposal includes: a diff (what specifically changes), a rationale (why the AI suggests this), risk flags (e.g., changes a heading hierarchy, touches a published external link), an atomic apply button, and a rollback button. Internal-link recommendations, external-citation insertions, paragraph paraphrases, meta-tag refreshes, heading audits — all five proposal types pass through the same gate. There is no autonomous edit path. The senior operator approves every change.
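The lifecycle above can be sketched as a small state machine. Types and helper names are assumptions for illustration; what matters is that "applied" is only reachable from "staged" via an explicit action, and every applied change keeps enough state to roll back.

```typescript
// Sketch of the proposal lifecycle: staged -> applied -> rolled_back,
// with rejection as the other exit. Names are illustrative.

type ProposalKind =
  | "internal_link" | "external_citation" | "paraphrase"
  | "meta_refresh" | "heading_audit";

type ProposalState = "staged" | "applied" | "rolled_back" | "rejected";

interface Proposal {
  kind: ProposalKind;
  diff: { before: string; after: string };  // what specifically changes
  rationale: string;                        // why the AI suggests it
  riskFlags: string[];                      // e.g. "changes-heading-hierarchy"
  state: ProposalState;
}

function applyProposal(p: Proposal): Proposal {
  if (p.state !== "staged") throw new Error("only staged proposals can be applied");
  return { ...p, state: "applied" };
}

function rollbackProposal(p: Proposal): Proposal {
  if (p.state !== "applied") throw new Error("only applied proposals can be rolled back");
  // diff.before is retained, so restoring the original text is always possible
  return { ...p, state: "rolled_back" };
}
```

There is deliberately no function that moves a proposal to "applied" without being called: no autonomous edit path exists in the type.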

4. Tavily-verified facts + domain-trust tiering

Before generating content, the orchestrator queries Tavily for source material relevant to the topic. Sources are tiered by domain trust (an authority domain like a ministerial site outranks a commercial blog). Claims that fail to find a verified source are not generated. The platform refuses to fabricate facts that aren't in its sources — this is enforced at the prompt + retrieval layer, not as a post-hoc disclaimer.
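A hedged sketch of domain-trust tiering as a filter. The tier lists and threshold semantics here are assumptions (the real platform maintains its own per-workspace lists); the mechanism is the one described above: a claim with no source at the required tier is dropped, not generated.

```typescript
// Sketch of domain-trust tiering. Suffix and publisher lists are
// illustrative assumptions, not the platform's actual data.

type Tier = 1 | 2 | 3; // 1 = authority (e.g. ministerial), 3 = lowest

const AUTHORITY_SUFFIXES = [".gov", ".gouv.fr", ".europa.eu"]; // assumption
const KNOWN_PUBLISHERS = ["example-news.com"];                 // assumption

function trustTier(domain: string): Tier {
  if (AUTHORITY_SUFFIXES.some((s) => domain.endsWith(s))) return 1;
  if (KNOWN_PUBLISHERS.includes(domain)) return 2;
  return 3;
}

// Refusal-to-fabricate as a filter: the claim is only allowed if at
// least one retrieved source meets the required tier.
function allowClaim(sourceDomains: string[], requiredTier: Tier): boolean {
  return sourceDomains.some((d) => trustTier(d) <= requiredTier);
}
```

Enforcing this before generation, at the retrieval layer, is what distinguishes it from a post-hoc disclaimer.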

5. Role-gated mutations + RLS workspace isolation

Every write operation checks the caller's role (admin / manager / contributor / portal-client) before mutating data. Cross-workspace data access is blocked at the database layer via Supabase Row Level Security policies — not at the application layer, where a bug could leak it. A workspace's data is invisible to any other workspace, including the platform admin's own workspace, except via explicit cross-workspace operations (which are audited).
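The application-side half of this can be sketched as a role gate; the mutation names and allowed-role table below are illustrative assumptions. This gate sits in front of the RLS policies, which remain the actual isolation boundary at the database.

```typescript
// Sketch of a role gate for write operations. The mutation catalogue and
// role assignments are assumptions for illustration.

type Role = "admin" | "manager" | "contributor" | "portal-client";

const MUTATION_ALLOWED_ROLES: Record<string, Role[]> = {
  "workspace.settings.update": ["admin"],
  "content.publish": ["admin", "manager"],
  "content.draft.edit": ["admin", "manager", "contributor"],
};

function canMutate(role: Role, mutation: string): boolean {
  const allowed = MUTATION_ALLOWED_ROLES[mutation];
  // Unknown mutations are denied by default: deny-by-default, allow-by-list.
  return allowed !== undefined && allowed.includes(role);
}
```

Deny-by-default matters here: a mutation missing from the table fails closed rather than open.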

6. Anti-abuse logging on every public form

Public forms — booking submissions, newsletter signups, contact intake, popup captures — log honeypot triggers and form-start timing. A submission filled out in 12 seconds with a honeypot field touched is logged with its event signature. Operators see the anti-abuse event stream alongside their reservations and contacts; obvious bots are filtered out without manual triage.
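The two signals named above reduce to a small check. The field names and the fill-time threshold below are assumptions; the technique is standard: a hidden field humans never see, plus elapsed time between form render and submit.

```typescript
// Sketch of anti-abuse flagging on a public form submission.
// Signal names and the threshold are illustrative assumptions.

interface SubmissionSignals {
  honeypotValue: string;   // hidden field; any non-empty value is a bot tell
  formStartedAt: number;   // epoch ms, stamped when the form renders
  submittedAt: number;     // epoch ms
}

const MIN_FILL_MS = 3000;  // illustrative threshold

function abuseFlags(s: SubmissionSignals): string[] {
  const flags: string[] = [];
  if (s.honeypotValue.trim() !== "") flags.push("honeypot_triggered");
  if (s.submittedAt - s.formStartedAt < MIN_FILL_MS) flags.push("too_fast");
  return flags; // logged alongside the submission's event signature
}
```

Flagged submissions are logged rather than silently dropped, which is what lets operators review the event stream instead of trusting a black box.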

7. GDPR settings + DSR tracking + retention windows

Every workspace has its own GDPR settings page: DPO contact, privacy policy URL, sub-processor list, retention windows per data class, consent posture, data region. Data-subject requests are tracked with statuses (received / verifying / fulfilled / refused-with-reason). The platform supports your compliance posture; the workspace operator still has obligations. We do not claim 'fully GDPR compliant by default' — that phrase is meaningless. We provide the controls a serious operator needs.
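The DSR statuses listed above imply a small state machine. The four statuses come from this page; the allowed transitions between them are an assumption for illustration.

```typescript
// Sketch of data-subject-request status tracking. Transition rules are
// an illustrative assumption; the statuses are the ones named above.

type DsrStatus = "received" | "verifying" | "fulfilled" | "refused-with-reason";

const DSR_TRANSITIONS: Record<DsrStatus, DsrStatus[]> = {
  received: ["verifying"],
  verifying: ["fulfilled", "refused-with-reason"],
  fulfilled: [],               // terminal
  "refused-with-reason": [],   // terminal: a refusal must carry its reason
};

function advanceDsr(current: DsrStatus, next: DsrStatus): DsrStatus {
  if (!DSR_TRANSITIONS[current].includes(next)) {
    throw new Error(`invalid DSR transition: ${current} -> ${next}`);
  }
  return next;
}
```

Terminal states with no outgoing transitions are what make the tracking auditable: a fulfilled or refused request cannot be quietly reopened.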

8. Authenticated automation endpoints

Cron-driven jobs (Market Monitor scans, scheduled newsletter dispatch, Opportunity Engine sweeps) hit endpoints protected by static bearer-token authentication. The cron secret is stored in environment variables, never in client code, never in committed config. Webhook receivers (Resend tracking events) verify Svix signatures before accepting payloads — an unsigned or forged webhook is rejected.
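The bearer check can be sketched in a few lines. Function and parameter names are assumptions; the constant-time comparison via Node's `crypto.timingSafeEqual` is a standard precaution against timing attacks on secret comparison.

```typescript
import { timingSafeEqual } from "node:crypto";

// Sketch of a cron-endpoint bearer check. The secret is read from an
// environment variable by the caller, never shipped to clients.

function cronAuthorized(authHeader: string | undefined, secret: string): boolean {
  if (!authHeader?.startsWith("Bearer ")) return false;
  const presented = Buffer.from(authHeader.slice("Bearer ".length));
  const expected = Buffer.from(secret);
  // timingSafeEqual requires equal lengths, so check that first.
  return presented.length === expected.length && timingSafeEqual(presented, expected);
}
```

Webhook verification follows the same fail-closed pattern: the Svix signature is checked before the payload is parsed, so a forged request never reaches business logic.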

What's shipped vs. what's roadmap

Wave 1 of the SEO feedback loops is shipped: the internal-link graph persists every accepted edit, learned-authority domains accumulate per workspace, proposal events are stored. Wave 2 — read-side rankers that act on the persisted history — is roadmap, not product. The honest framing: the system already remembers; soon it'll act on what it remembers. We do not ship 'autonomous AI agents acting without review' — that language doesn't match what's built. The ledger and review workflow are the entire point.

If your compliance team wants to audit any of the above against shipped code, schedule a 30-minute review with Hossam. We open the workspace, show the ledger, walk through the orchestrator, and answer specific questions on the record.

Plan a 30-min governance review with Hossam — we walk through the ledger and the workflow on real data