# VeriGuard AI — Complete Reference

> AI Governance as a Service — runtime policy enforcement, dual EU AI Act + MDR compliance, GPAI classification, and immutable hash-chained audit trails via 13 enforcement endpoints.

## Overview

VeriGuard AI is an AI Governance as a Service (AGaaS) platform designed for enterprise and federal organizations navigating the complex landscape of AI regulation. The platform delivers governance capabilities via an MCP-compatible API, enabling both human teams and AI agents to enforce compliance programmatically — from system registration through runtime enforcement. VeriGuard is the only platform offering dual EU AI Act + MDR compliance with integrated conformity assessment, GPAI classification, QMS validation, and vigilance reporting for medical device AI.

## Target Audience

VeriGuard AI is built for:

- **CISOs & Security Engineers**: Runtime policy enforcement, kill-switch controls, cryptographic audit trails
- **Compliance Officers & Legal Teams**: Multi-framework compliance scoring, evidence management, audit-ready reports
- **AI Platform Teams & ML Engineers**: Pre-deployment testing gates, drift detection, model lifecycle management
- **Board Members & Chief Risk Officers**: Single governance posture score, board-ready dashboards
- **Procurement Officers & Vendor Risk Teams**: AI-BOM inventory, third-party risk intelligence, supply chain transparency
- **Federal Contractors & AI Vendors Selling to Government**: FedRAMP-aligned controls, evidence trails for federal procurement review
- **Medical Device Manufacturers**: Dual EU AI Act + MDR compliance, conformity assessment, QMS validation, vigilance reporting, EUDAMED readiness
- **AI Ethics Leads & Regulatory Affairs**: Human oversight workflows, approval audit trails, EU AI Act human-in-the-loop compliance
- **AI Agent Developers & Platform Engineers**: MCP-compatible API for programmatic governance, compliance scoring, and policy evaluation callable by Claude, GPT, and custom agents
- **Investors & Board Members**: Patent compliance verification reports with IP moat evidence

## Problem Statement

AI adoption is accelerating, but procurement and compliance processes have not kept pace. Enterprise legal teams, security reviewers, and compliance officers face an explosion of vendor questionnaires, FedRAMP authorization bottlenecks, and new regulatory requirements such as the EU AI Act and MDR. Meanwhile, autonomous AI agents are deploying models and making decisions without any governance checkpoint. Medical device AI faces the additional burden of dual compliance — the EU AI Act for the AI layer plus the MDR for the device layer — with no existing platform addressing both. This creates a structural bottleneck that stalls AI adoption and a governance gap where agents operate without oversight.

## Platform Architecture

### AI System Registry

Register and classify AI systems by business criticality (critical, high, medium, low). Each system can have multiple models, model versions, and deployments across environments. System owners are assigned for accountability.

### Agent Registry

Centralized lifecycle management for autonomous AI agents:

- **Self-Registration**: Agents register via MCP tool (`veriguard_register_agent`) with a unique `agent_id`
- **Trust Scoring**: 0.00–1.00 trust scores based on execution history and compliance behavior
- **Risk Classification**: Agents classified by risk level for enforcement policy tuning
- **Unregistered Detection**: Requests from unknown agents logged with null-identity warnings
- **Execution Tracking**: Full history of agent interactions with governance endpoints
- **Delegation Chain Tracking**: Multi-agent orchestration hierarchies recorded cryptographically. Each agent stores its `parent_agent_id` and `delegation_depth` (L0 = orchestrator, L1+ = sub-agents).
- **Orchestrator Detection**: Agents are auto-flagged as orchestrators when sub-agents register with them as parent.
### Risk Management

- **Risk Register**: Track identified risks with impact/likelihood scoring
- **Risk Ratings**: Critical, High, Medium, Low
- **Risk Statuses**: Identified, Mitigating, Mitigated, Accepted
- **Mitigation Tracking**: Assign owners and due dates for risk mitigations

### Compliance Engine

- **Multi-Framework Support**: EU AI Act, MDR, ISO 13485, NIST AI RMF, ISO 42001, SOC 2, SOX, FedRAMP
- **Compliance Scoring**: Automated scoring with trend analysis
- **Gap Analysis**: Identify compliance gaps across frameworks
- **Health Checks**: Automated compliance health assessments
- **Task Queue**: Prioritized compliance tasks with assignments

### Control Library

Eight control families mapped to regulatory requirements:

1. **GOV** — Governance controls (board oversight, policy management, accountability structures)
2. **RISK** — Risk management controls (risk identification, assessment, mitigation tracking)
3. **DATA** — Data governance controls (lineage, classification, residency, consent management)
4. **MODEL** — Model lifecycle controls (versioning, validation, model cards, performance benchmarks)
5. **DEPLOY** — Deployment controls (approval gates, blue/green, rollback, kill switch)
6. **MONITOR** — Monitoring controls (drift detection, performance tracking, alerting)
7. **INCIDENT** — Incident response controls (logging, triage, corrective actions, vigilance reporting)
8. **HUMAN** — Human oversight controls (review queues, approval workflows, EU AI Act human-in-the-loop)

### Evidence Management

Upload, review, and track evidence items against controls. Each evidence item has a lifecycle status (draft, submitted, approved, expired) and links to the controls it satisfies. Reviewers can approve, reject, or request changes.
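As an illustration of the Risk Register's impact/likelihood scoring described above, here is a minimal sketch. The 1-5 scales and the band cutoffs are assumptions for illustration — VeriGuard's actual scoring rules are not published here:

```python
# Illustrative impact x likelihood scoring for a risk register.
# Scales (1-5) and rating cutoffs are assumed, not VeriGuard's real rules.

def risk_rating(impact: int, likelihood: int) -> str:
    """Map impact (1-5) and likelihood (1-5) to a rating band."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must be 1-5")
    score = impact * likelihood
    if score >= 20:
        return "Critical"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"
```

A 5x5 multiplicative matrix like this is a common convention; the bands map onto the four risk ratings listed above (Critical, High, Medium, Low).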
### Change Management

Collaborative approval workflows for model and system changes:

- Multiple assigned reviewers
- Comments and decision history
- Risk-delta scoring on each change
- Emergency change flag with retroactive review

### Incident Response

- Severity levels: Critical, High, Medium, Low
- Status flow: Open → Investigating → Mitigating → Resolved → Closed
- Corrective actions with owners and due dates
- Auto-generated incident numbers (INC-YYYY-####)

### Drift Detection

- Configurable monitors per model version
- Warning and critical thresholds
- Comparison operators (>, <, =, ≠)
- Automated alerts on threshold breaches

### Pre-Deployment Testing

Pre-deployment test suites covering bias, hallucination, PII leakage, safety, and performance. Each test execution has a pass/fail status and gates the deployment.

### Runtime Enforcement

Runtime policies are evaluated at every action with deny-by-default semantics. Each evaluation is hashed (SHA-256) and chained to the previous evaluation for tamper evidence.

### Immutable Hash-Chained Audit Ledger

- Every policy evaluation and inference request is SHA-256 hashed
- Each record stores `previous_hash` linking to the prior record (per-project chain)
- DB-level triggers block UPDATE/DELETE on audit tables
- Validation functions (`validate_policy_eval_chain`, `validate_inference_audit_chain`) detect tampering

### Kill Switch

Emergency shutdown for deployed AI models:

- Per-model and global kill switches
- Response time under 2 seconds
- Full audit trail of trigger, actor, and reason
- Auto-rollback options

### AI Bill of Materials (AI-BOM)

Generate AI-BOMs in SPDX 2.3 JSON or CSV format. Each BOM records:

- Component inventory (models, datasets, libraries, hardware)
- License information
- Supplier and supplier URL
- Cryptographic checksums
- Risk notes per component

### Reporting

Templates: risk_register, control_matrix, system_description, evidence_pack, gap_analysis, executive_summary, incident_report. Output formats: json, csv, pdf.
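The Immutable Hash-Chained Audit Ledger described above can be sketched in miniature. This is an illustrative model only — the exact serialization, genesis sentinel, and field layout of VeriGuard's chain are assumptions; only the `previous_hash` linkage and SHA-256 hashing come from the description:

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed sentinel for the first record in a project chain

def record_hash(payload: dict, previous_hash: str) -> str:
    """SHA-256 over the canonicalized payload plus the prior record's hash."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(f"{previous_hash}|{body}".encode()).hexdigest()

def append(chain: list[dict], payload: dict) -> dict:
    """Append a record whose hash covers both its payload and its predecessor."""
    prev = chain[-1]["hash"] if chain else GENESIS
    rec = {"payload": payload, "previous_hash": prev,
           "hash": record_hash(payload, prev)}
    chain.append(rec)
    return rec

def validate_chain(chain: list[dict]) -> bool:
    """Walk the chain and recompute every hash, as the validation
    functions above are described to do; any edit breaks the links."""
    prev = GENESIS
    for rec in chain:
        if rec["previous_hash"] != prev:
            return False
        if rec["hash"] != record_hash(rec["payload"], prev):
            return False
        prev = rec["hash"]
    return True
```

Because each record's hash commits to the previous record's hash, modifying any historical payload invalidates every subsequent link — which is what makes the ledger tamper-evident.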
Generated reports are stored in the `reports` storage bucket and linked to the `report_generations` row.

## Medical Device AI Governance (MDR)

### Conformity Assessment

9-step EU AI Act conformity journey with an optional 7-phase MDR lifecycle for medical devices. Returns per-step PASS/FAIL/PARTIAL with gap descriptions, a dual compliance matrix, and prioritized action items.

### GPAI Classification

General-Purpose AI (GPAI) systemic-risk classification:

- Auto-detection at the 10²⁵ FLOPS threshold
- Articles 52-55 obligations
- Required transparency disclosures
- Adversarial testing checks for systemic-risk models

### QMS Validation

Quality Management System validation against:

- EU AI Act Article 17
- ISO 13485 (medical devices)

Returns a maturity score, gap analysis, requirement overlap mapping, and a prioritized remediation roadmap.

### Incident & Vigilance Reporting

Article 73 incident reports with full MDR vigilance for medical devices:

- MIR (Manufacturer Incident Report) format
- FSCA (Field Safety Corrective Action) template
- EUDAMED fields
- Severity-based deadlines: 2 days (critical), 15 days (serious), 30 days (moderate)

### Data Governance Checks

Article 10 (EU AI Act) + Article 10 (MDR) training data governance:

- Representativeness and bias examination
- PII governance and consent
- Medical device clinical evidence requirements

### Registration Readiness

EUDAMED + EU AI Database registration readiness:

- Pre-filled registration templates
- Required fields with completeness scores
- Missing-data gap report

## Supported Regulatory Frameworks

### EU AI Act

Comprehensive coverage of high-risk AI obligations including risk management, data governance, technical documentation, transparency, human oversight, and post-market monitoring.

### EU AI Act — GPAI Provisions

Articles 52-55 transparency, training data summary, copyright compliance, and systemic-risk obligations (adversarial testing, incident reporting, cybersecurity).
### Medical Device Regulation (MDR)

Articles 87-92 vigilance reporting, conformity assessment, clinical evaluation, and technical documentation requirements for medical device AI.

### ISO 13485

Quality Management Systems for Medical Devices — design controls, document control, CAPA, supplier management.

### NIST AI RMF

GOVERN, MAP, MEASURE, MANAGE functions with cross-mapped controls.

### ISO/IEC 42001

AI Management Systems standard with control mappings to risk management, monitoring, and continuous improvement.

### SOC 2 for AI Systems

Extended SOC 2 trust service criteria addressing AI-specific concerns including model governance, data lineage, and algorithmic accountability.

### SOX (Sarbanes-Oxley)

Financial controls alignment for AI systems used in financial reporting, trading, and lending decisions.

### FedRAMP

FedRAMP-aligned AI governance controls with 12 specific AI controls and evidence trails for federal procurement review.

## API & Integration

### 13 Public Enforcement Endpoints

Base URL: `https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1`

Authoritative spec: `GET /openapi` (returns OpenAPI 3.1 JSON)

| # | Endpoint | Method | Auth | Tag |
|---|----------|--------|------|-----|
| 1 | `/health` | GET | none | System |
| 2 | `/openapi` | GET | none | System |
| 3 | `/v1-bootstrap` | POST | JWT/API | Onboarding |
| 4 | `/evaluate-policy` | POST | JWT/API | Policy |
| 5 | `/calculate-compliance` | POST | JWT/API | Compliance |
| 6 | `/generate-report` | POST | JWT/API | Reports |
| 7 | `/generate-bom` | POST | JWT/API | AI-BOM |
| 8 | `/conformity-check` | POST | JWT/API | Medical & GPAI |
| 9 | `/gpai-check` | POST | JWT/API | Medical & GPAI |
| 10 | `/incident-report` | POST | JWT/API | Medical & GPAI |
| 11 | `/data-governance-check` | POST | JWT/API | Medical & GPAI |
| 12 | `/qms-check` | POST | JWT/API | Medical & GPAI |
| 13 | `/registration-readiness` | POST | JWT/API | Medical & GPAI |

Auth: `Authorization: Bearer <jwt>` (Supabase JWT) **or** `X-API-Key: vg_live_...`

### Per-Endpoint Request Schemas

Every schema below is the canonical Zod-validated request body. Field names, types, enums, and defaults are exact. Mismatches return `400 Validation error`.

#### `GET /health` — Service health & endpoint directory

- No body, no auth required.
- Returns `{ status: "healthy" | "degraded", checks: { database: { status, latency_ms } }, endpoints: {...}, mcp_compatible: bool, version, timestamp }`.

```bash
curl https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1/health
```

#### `GET /openapi` — Authoritative OpenAPI 3.1 spec

- No body, no auth required.
- Returns the full OpenAPI 3.1 JSON. Use this as the source of truth for any client codegen.

```bash
curl https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1/openapi
```

#### `POST /v1-bootstrap` — One-call onboarding (org + membership + project)

Required: `org_name` (string, 1-120).

Optional: `project_name` (string, 1-120, default `"Default Project"`), `project_description` (string, max 500), `create_environments` (boolean, default `true` — creates `development`/`staging`/`production`).

Auth: requires a real user JWT in `Authorization: Bearer <jwt>` (the anon key alone is rejected) **or** a VeriGuard API key in `x-api-key`. A service-role bypass handles the membership insert so the caller becomes `owner` of the new org without an RLS round-trip.

On success returns `200` with `{ ok: true, auth_method, org: { id, name }, project: { id, org_id, name, description }, environments: [{ id, name }], next_steps }`. On any failure the org is rolled back to avoid orphans.
```bash
curl -X POST https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1/v1-bootstrap \
  -H "Authorization: Bearer <jwt>" \
  -H "Content-Type: application/json" \
  -d '{ "org_name": "Acme Robotics", "project_name": "Production Models", "project_description": "Initial governance scope" }'
```

#### `POST /evaluate-policy` — Runtime policy gate

Required: `project_id` (uuid), `evaluation_context` (string enum), `input_data` (object).

Optional: `ai_system_id` (uuid), `model_version_id` (uuid).

`evaluation_context` enum: `"deployment" | "inference" | "drift_trigger" | "scheduled_check"`.

`input_data`: free-form key-value metrics evaluated against active policies, e.g. `{ "accuracy": 0.92, "bias_score": 0.15 }`.

Returns `200` (allow) or `403` (deny) with `{ decision, enforced, denied_by[], audit_only[], evaluation_hashes[], policies_evaluated, timestamp }`. Deny-by-default on engine failure.

```bash
curl -X POST https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1/evaluate-policy \
  -H "X-API-Key: vg_live_..." \
  -H "Content-Type: application/json" \
  -d '{ "project_id": "00000000-0000-0000-0000-000000000000", "evaluation_context": "deployment", "input_data": { "accuracy": 0.92, "bias_score": 0.15 } }'
```

#### `POST /calculate-compliance` — Org-wide compliance score

Required: `org_id` (uuid).

Evaluates evidence coverage, risk management, incident handling, change control, and framework coverage. Writes a `compliance_scores` row and creates remediation `compliance_tasks` if thresholds are breached.

Returns `{ overall: number, breakdown: { evidence_coverage, risk_management, incident_handling, change_control, framework_coverage }, trend: "improving" | "stable" | "degrading", checks_run: int }`.

```bash
curl -X POST https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1/calculate-compliance \
  -H "X-API-Key: vg_live_..." \
  -H "Content-Type: application/json" \
  -d '{ "org_id": "00000000-0000-0000-0000-000000000000" }'
```

#### `POST /generate-report` — Audit-ready report (two-step flow)

Required: `report_generation_id` (uuid). **This is a two-step flow.** It does **not** accept `report_type` or `format`.

**Step 1** — Insert a `report_generations` row via the database (or admin UI) with:

- `template_id` (uuid) — one of the seeded templates: `risk_register`, `control_matrix`, `system_description`, `evidence_pack`, `gap_analysis`, `executive_summary`, `incident_report`
- `output_format` — `"json" | "csv" | "pdf"`
- `org_id` (uuid)
- `project_id` (uuid)
- optional: `ai_system_id` (uuid)

**Step 2** — POST the resulting row's `id` to this endpoint:

```bash
curl -X POST https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1/generate-report \
  -H "X-API-Key: vg_live_..." \
  -H "Content-Type: application/json" \
  -d '{ "report_generation_id": "<uuid-from-step-1>" }'
```

Returns `{ success: true, status: "complete", path: "<storage-path>" }`. The file lands in the `reports` storage bucket.

> A single-call wrapper is planned for v2.

#### `POST /generate-bom` — AI Bill of Materials (SPDX 2.3)

Required: `model_version_id` (uuid).

Optional: `export_format` — `"spdx_json" | "csv"` (default `"spdx_json"`).

Returns `{ success: true, bom: { ...SPDX document }, path: "<storage-path>" }`. The BOM file lands in the `bom-exports` storage bucket.

```bash
curl -X POST https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1/generate-bom \
  -H "X-API-Key: vg_live_..." \
  -H "Content-Type: application/json" \
  -d '{ "model_version_id": "00000000-0000-0000-0000-000000000000", "export_format": "spdx_json" }'
```

#### `POST /conformity-check` — EU AI Act + MDR conformity assessment

Required: `system_id` (uuid), `risk_classification` — `"high" | "limited" | "minimal"`.
Optional: `is_medical_device` (bool, default `false`), `device_class` — `"Class I" | "Class IIa" | "Class IIb" | "Class III"`, `intended_use` — `"triage" | "diagnosis" | "monitoring" | "treatment_planning"`, `target_market` — `"EU" | "US" | "both"` (default `"EU"`).

Returns the 9-step EU AI Act assessment plus the 7-phase MDR lifecycle (when `is_medical_device: true`), with PASS/FAIL/PARTIAL per step, a dual compliance matrix, action items, and a readiness score.

```bash
curl -X POST https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1/conformity-check \
  -H "X-API-Key: vg_live_..." \
  -H "Content-Type: application/json" \
  -d '{ "system_id": "00000000-0000-0000-0000-000000000000", "risk_classification": "high", "is_medical_device": true, "device_class": "Class IIb", "intended_use": "diagnosis", "target_market": "EU" }'
```

#### `POST /gpai-check` — GPAI transparency & systemic-risk classification

Required: `model_id` (uuid), `model_type` — `"general_purpose" | "systemic_risk" | "fine_tuned"`.

Optional: `training_data_summary_available` (bool, default `false`), `adversarial_testing_performed` (bool, default `false`), `compute_flops` (number — `≥1e25` triggers systemic-risk classification), `deployment_context` (string).

Returns the classification (`standard_gpai` | `systemic_risk`), per-obligation results (Articles 52-55), the compute threshold check, gap analysis, required disclosures, and rationale. Writes a `gpai_checks` row.

```bash
# `model_id` below is a public demo model in the Acme tenant — use it to
# exercise the endpoint live. For your own org, pass any uuid from `models.id`.
curl -X POST https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1/gpai-check \
  -H "X-API-Key: vg_live_..." \
  -H "Content-Type: application/json" \
  -d '{ "model_id": "22222222-aaaa-bbbb-cccc-000000000001", "model_type": "systemic_risk", "compute_flops": 1.2e25, "training_data_summary_available": true, "adversarial_testing_performed": true, "deployment_context": "Public consumer chatbot" }'
```

#### `POST /incident-report` — Article 73 incident report + MDR vigilance

Required: `system_id` (uuid), `incident_type` — `"malfunction" | "safety_breach" | "fundamental_rights_violation" | "patient_harm" | "misdiagnosis" | "device_failure"`, `severity` — `"critical" | "serious" | "moderate"`, `description` (string).

Optional: `is_medical_device` (bool, default `false`), `device_class` — `"Class I" | "Class IIa" | "Class IIb" | "Class III"`, `affected_parties` (string), `jurisdiction` (string).

Returns a structured Article 73 report; for medical devices, also returns an MDR vigilance report with MIR format, FSCA template, EUDAMED fields, and severity-based reporting deadlines (2 / 15 / 30 days). Writes an `incident_reports` row.

```bash
curl -X POST https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1/incident-report \
  -H "X-API-Key: vg_live_..." \
  -H "Content-Type: application/json" \
  -d '{ "system_id": "00000000-0000-0000-0000-000000000000", "incident_type": "patient_harm", "severity": "critical", "description": "Diagnostic AI returned false negative for stage-3 melanoma in clinical trial.", "is_medical_device": true, "device_class": "Class IIb", "jurisdiction": "EU" }'
```

#### `POST /data-governance-check` — Article 10 training-data governance

Required: `system_id` (uuid).

Optional: `is_medical_device` (bool, default `false`), `dataset_description` (string), `data_sources` (array of objects), `intended_population` (string), `device_intended_use` (string).

Returns an evaluation against Article 10 (EU AI Act) and, when `is_medical_device: true`, MDR Article 10 — checking representativeness, bias examination, PII governance, and clinical evidence requirements.
Writes a `data_governance_checks` row.

```bash
curl -X POST https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1/data-governance-check \
  -H "X-API-Key: vg_live_..." \
  -H "Content-Type: application/json" \
  -d '{ "system_id": "00000000-0000-0000-0000-000000000000", "is_medical_device": true, "dataset_description": "200k dermatology images, EU + US clinics", "intended_population": "Adults 18-80, Fitzpatrick I-VI", "device_intended_use": "Skin lesion triage" }'
```

#### `POST /qms-check` — QMS validation (Article 17 + ISO 13485)

Required: `system_id` (uuid), `qms_type` — `"ai_act_only" | "medical_device" | "dual"`.

Optional: `existing_certifications` (array of strings, e.g. `["ISO 13485", "ISO 9001"]`), `organization_id` (uuid).

Returns Article 17 results, ISO 13485 results (when applicable), overlap mapping for `dual`, gap analysis, and a prioritized remediation roadmap. An ISO 13485 certification grants full credit; ISO 9001 grants partial credit.

```bash
curl -X POST https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1/qms-check \
  -H "X-API-Key: vg_live_..." \
  -H "Content-Type: application/json" \
  -d '{ "system_id": "00000000-0000-0000-0000-000000000000", "qms_type": "dual", "existing_certifications": ["ISO 13485"] }'
```

#### `POST /registration-readiness` — EUDAMED & EU AI Database readiness

Required: `system_id` (uuid).

Optional: `is_medical_device` (bool, default `false`), `device_class` — `"Class I" | "Class IIa" | "Class IIb" | "Class III"`, `registration_type` — `"eu_ai_database" | "eudamed" | "both"` (default `"eu_ai_database"`).

Returns pre-filled registration templates with required fields, completeness scores, and a missing-data gap report.

```bash
curl -X POST https://awgpbmiaoqcvcdjbkghc.supabase.co/functions/v1/registration-readiness \
  -H "X-API-Key: vg_live_..." \
  -H "Content-Type: application/json" \
  -d '{ "system_id": "00000000-0000-0000-0000-000000000000", "is_medical_device": true, "device_class": "Class IIb", "registration_type": "both" }'
```

### Internal / Non-Public Endpoints

These exist on the platform but are not part of the v1 public contract. They may change without notice:

- `/execute-tests` — Pre-deployment test execution
- `/register-agent` — Autonomous agent self-registration (also exposed via MCP)
- `/ai-assistant` — In-app governance copilot (body: `{ "message": string, "orgId": uuid, "projectId": uuid }`)
- `/patent-claim-tests` — Patent claim verification harness
- `/chatbot`, `/contact-form`, `/notify-inquiry`, `/track-visitor` — Marketing-site plumbing
- `/manage-api-keys`, `/generate-api-key`, `/invite-member` — Account management
- `/llms-txt`, `/llms-full-txt`, `/sitemap`, `/robots-txt`, `/openapi` — Documentation/discovery layer
- `/trust-metrics`, `/support-diagnostics`, `/integration-tests` — Observability
- `/auth-email-hook` — Auth email rendering

### API Authentication

Dual authentication strategy:

- **API Key**: `X-API-Key` header with `vg_live_` prefix, SHA-256 hashed at rest, rate-limited per key. **API keys are bound to the org that issued them** — calls referencing a `project_id`/`org_id` outside that org return `403 Forbidden`. To call across orgs, mint a separate key per org.
- **JWT**: `Authorization: Bearer <jwt>` for in-app frontend sessions; the org is derived from the user's active membership.
- All public endpoints validate input via Zod and return `400` with `{ error, details: [{ field, message }] }` on validation failure.

### MCP Server (Model Context Protocol)

VeriGuard AI exposes 11 governance tools via MCP for AI agent integration:

- `veriguard_evaluate_policy` — Runtime policy check with cryptographic hash
- `veriguard_calculate_compliance` — Multi-framework compliance scoring
- `veriguard_generate_report` — Audit-ready report generation
- `veriguard_generate_bom` — AI Bill of Materials in SPDX 2.3 (JSON or CSV export)
- `veriguard_register_agent` — Agent self-registration with delegation chain binding
- `veriguard_conformity_check` — EU AI Act + MDR conformity assessment
- `veriguard_gpai_check` — GPAI systemic risk classification
- `veriguard_incident_report` — Incident & vigilance reporting
- `veriguard_data_governance_check` — Article 10 data governance
- `veriguard_qms_check` — ISO 13485 + Article 17 QMS validation
- `veriguard_registration_readiness` — EUDAMED registration readiness

MCP Endpoint: `https://www.veriguard-ai.com/functions/v1/mcp-server`

Authentication: API key (`X-API-Key` header) · Protocol: JSON-RPC 2.0 · Claude Desktop ready

### OpenAPI Documentation

The full OpenAPI 3.1 specification is the authoritative source: `GET /openapi`. When in doubt, fetch the spec — it is generated from the same Zod schemas the endpoints validate against.

## Security Architecture

- Row-Level Security (RLS) on all database tables (51+ tables)
- Organization-scoped data isolation
- Cryptographic hashing (SHA-256) for all enforcement decisions
- Immutable hash-chained audit logs with DB-level mutation prevention
- Fail-closed runtime policy enforcement
- API key authentication with SHA-256 hashing (`vg_live_` prefix)
- Dual authentication: API key headers + JWT for frontend

## Resources

### Whitepapers

- **The Coming Procurement Bottleneck in AI Adoption**: Analysis of how compliance friction in procurement will stall enterprise AI adoption.
### Guides

- **EU AI Act Compliance Checklist**: Step-by-step breakdown for high-risk AI deployers
- **NIST AI RMF Implementation Guide**: Practical mapping of GOVERN, MAP, MEASURE, MANAGE functions
- **AI Bill of Materials (AI-BOM) Guide**: How to generate, review, and export AI-BOMs
- **Kill Switch Enforcement Patterns**: Architecture patterns for emergency AI shutdown
- **Normalized Cross-Framework Reporting**: How VeriGuard merges compliance data across 10 frameworks
- **Cryptographic Audit Ledger Architecture**: Technical deep-dive into SHA-256 hash chaining

## Frequently Asked Questions

Full FAQ with 20 questions available at: /faq

**Q: What is VeriGuard AI?**
A: VeriGuard AI is an AI Governance as a Service (AGaaS) platform that delivers runtime policy enforcement, dual EU AI Act + MDR compliance, and cryptographic audit trails via 13 enforcement endpoints — for enterprises, federal contractors, medical device manufacturers, and AI agents.

**Q: Who is VeriGuard AI built for?**
A: CISOs, compliance officers, AI platform teams, model risk managers, procurement officers, medical device manufacturers, and AI agent developers at enterprises and federal contractors.

**Q: What regulatory frameworks does VeriGuard AI support?**
A: EU AI Act (including GPAI), MDR, ISO 13485, NIST AI RMF, ISO/IEC 42001, SOC 2, SOX, and FedRAMP — 10 frameworks with 13 enforcement endpoints.

**Q: How does VeriGuard AI differ from other GRC tools?**
A: VeriGuard AI is governance as a service — not a static GRC tool. It enforces policies at runtime via API, offers dual EU AI Act + MDR compliance (unique in the market), provides GPAI classification, and exposes 11 governance tools to AI agents via MCP.

**Q: Can VeriGuard handle medical device AI compliance?**
A: Yes. VeriGuard is the only platform with integrated dual EU AI Act + MDR compliance.
It covers conformity assessment (9-step EU AI Act + 7-phase MDR), QMS validation (ISO 13485), vigilance reporting (MIR/FSCA), data governance (Article 10), and EUDAMED registration readiness — all via API.

**Q: Can AI agents use VeriGuard programmatically?**
A: Yes. VeriGuard exposes 11 governance tools via MCP (Model Context Protocol): policy evaluation, compliance scoring, report generation, AI-BOM generation, agent registration, conformity assessment, GPAI classification, incident reporting, data governance, QMS validation, and registration readiness.

**Q: What is GPAI classification?**
A: GPAI (General-Purpose AI) classification determines whether an AI model poses systemic risk based on compute thresholds (10²⁵ FLOPS) and deployment context. VeriGuard auto-classifies models and generates the required transparency obligations per the EU AI Act.

**Q: What is patent-pending about VeriGuard AI?**
A: VeriGuard AI's runtime multi-framework governance architecture — enabling autonomous agents to dynamically invoke normalized compliance evaluation with deterministic fail-closed enforcement and tamper-evident decision lineage — is the subject of a pending U.S. provisional patent application (No. 63/983,107).

## Patent Status

VeriGuard AI's runtime multi-framework governance architecture is the subject of a pending U.S. provisional patent application (No. 63/983,107, filed February 14, 2026). The patent covers 10 independent claims, all verified through an automated test suite.

## Contact

Website: https://www.veriguard-ai.com
Support: support@veriguard-ai.com