# VeriGuard AI — Complete Reference

> AI Governance as a Service — runtime policy enforcement, compliance scoring, and cryptographic audit trails via API.

## Overview

VeriGuard AI is an AI Governance as a Service (AGaaS) platform designed for enterprise and federal organizations navigating the complex landscape of AI regulation. The platform delivers governance capabilities via an MCP-compatible API, enabling both human teams and AI agents to enforce compliance programmatically — from system registration through runtime enforcement.

## Target Audience

VeriGuard AI is built for:

- **CISOs & Security Engineers**: Runtime policy enforcement, kill-switch controls, cryptographic audit trails
- **Compliance Officers & Legal Teams**: Multi-framework compliance scoring, evidence management, audit-ready reports
- **AI Platform Teams & ML Engineers**: Pre-deployment testing gates, drift detection, model lifecycle management
- **Board Members & Chief Risk Officers**: Single governance posture score, board-ready dashboards
- **Procurement Officers & Vendor Risk Teams**: AI-BOM inventory, third-party risk intelligence, supply chain transparency
- **Federal Contractors & AI Vendors Selling to Government**: FedRAMP-aligned controls, evidence trails for federal procurement review
- **AI Ethics Leads & Regulatory Affairs**: Human oversight workflows, approval audit trails, EU AI Act human-in-the-loop compliance
- **AI Agent Developers & Platform Engineers**: MCP-compatible API for programmatic governance, compliance scoring, and policy evaluation callable by Claude, GPT, and custom agents

## Problem Statement

AI adoption is accelerating, but procurement and compliance processes have not kept pace. Enterprise legal teams, security reviewers, and compliance officers face an explosion of vendor questionnaires, FedRAMP authorization bottlenecks, and new regulatory requirements like the EU AI Act.
Meanwhile, autonomous AI agents are deploying models and making decisions without any governance checkpoint. This creates a structural bottleneck that stalls AI adoption — not because of capability limitations, but because of compliance friction — and a governance gap where agents operate without oversight.

## Platform Architecture

### AI System Registry

Register and classify AI systems by business criticality (critical, high, medium, low). Each system can have multiple models, model versions, and deployments across environments. System owners are assigned for accountability.

### Risk Management

- **Risk Register**: Track identified risks with impact/likelihood scoring
- **Risk Ratings**: Critical, High, Medium, Low
- **Risk Statuses**: Identified, Mitigating, Mitigated, Accepted
- **Mitigation Tracking**: Assign owners and due dates for risk mitigations

### Compliance Engine

- **Multi-Framework Support**: EU AI Act, NIST AI RMF, ISO 42001, SOC 2
- **Compliance Scoring**: Automated scoring with trend analysis
- **Gap Analysis**: Identify compliance gaps across frameworks
- **Health Checks**: Automated compliance health assessments
- **Task Queue**: Prioritized compliance tasks with assignments

### Control Library

Eight control families mapped to regulatory requirements:

1. **GOV** — Governance controls (board oversight, policy management, accountability structures)
2. **RISK** — Risk management controls (risk identification, assessment, mitigation tracking)
3. **DATA** — Data governance controls (lineage, classification, residency, consent management)
4. **MODEL** — Model lifecycle controls (versioning, validation, model cards, performance benchmarks)
5. **DEPLOY** — Deployment controls (pre-deployment gates, environment management, rollback procedures)
6. **MONITOR** — Monitoring controls (drift detection, performance alerts, SLA tracking)
7. **INCIDENT** — Incident response controls (detection, containment, root cause analysis, remediation)
8. **HUMAN** — Human oversight controls (approval workflows, escalation paths, kill switch authority)

Each control maps to specific regulatory requirements with mapping strength indicators (direct, partial, supportive).

### Evidence Management

- Upload evidence items (documents, screenshots, automated reports, attestations, certifications)
- Evidence lifecycle tracking: draft → submitted → under_review → accepted/rejected/expired
- Evidence expiry tracking for ongoing compliance
- SHA-256 cryptographic hashing of all evidence for tamper detection

### Change Management

- Create change requests for model updates, data changes, configuration modifications
- Assign reviewers for collaborative approval workflows
- Emergency change fast-track process with elevated audit logging
- Impact assessment and risk delta scoring
- Full audit trail of all approval/rejection decisions

### Incident Response

- Incident types: bias, hallucination, data_leak, safety_violation, performance, privacy, security, other
- Severity levels: critical, high, medium, low
- Timeline tracking with corrective actions
- Kill switch activation tracking with reason logging
- Root cause analysis and post-incident review workflows

### Drift Detection

- Configurable drift monitors with warning and critical thresholds
- Metric types: accuracy, latency, fairness, custom
- Automated alerts with acknowledgment workflows
- Baseline comparison with configurable operators (greater_than, less_than, deviation)
- Historical drift readings with trend visualization

### Pre-Deployment Testing

- Test categories: bias, hallucination, PII detection, safety, performance
- Test execution with pass/fail rates and detailed results
- Deployment blocking for failed tests — models cannot reach production without passing gates
- Test history and trend analysis across model versions
- Customizable test thresholds per AI system

### Runtime Enforcement

- **Runtime Models**: Track deployed models with kill-switch state (active, standby, triggered)
- **Policy Evaluation**: Runtime policy checks at inference time with cryptographic hashing
- **Fail-Closed Architecture**: If policy evaluation fails, inference is blocked by default
- **Inference Audit Log**: Every inference request logged with:
  - Request ID (unique per inference call)
  - Cryptographic hash (SHA-256)
  - Token counts (prompt + completion)
  - Latency measurements (milliseconds)
  - Policy version at time of evaluation
  - Kill switch state at time of inference
  - Environment identifier

### Kill Switch

- Emergency shutdown capability for deployed AI models
- Three states: active (model serving), standby (model paused), triggered (emergency stop)
- Full audit trail of state transitions with actor, timestamp, and reason
- Cannot be overridden by automation — requires human authorization
- Supports per-model and system-wide activation

### AI Bill of Materials (AI-BOM)

- Component types: model, dataset, library, framework, api, hardware, other
- Component tracking: name, version, license, supplier, hash
- Export formats: SPDX, CycloneDX, custom JSON
- Risk notes per component for supply chain risk assessment
- Automated BOM generation from model version metadata

### Reporting

- Template-based report generation
- Report types: compliance_summary, risk_assessment, audit_package, tpra, custom
- Output formats: PDF, DOCX, XLSX, CSV
- Generation history with download tracking and expiration
- One-click audit packages mapped to specific regulatory articles

### Third-Party Risk Intelligence

- Vendor AI risk assessment with scoring
- Third-party risk analysis reports
- Supply chain transparency through AI-BOM integration

### Data Governance

- Data source management (database, api, file_system, cloud_storage, streaming)
- Dataset classification (public, internal, confidential, restricted)
- Data residency tracking with jurisdictional mapping
- Dataset versioning with lineage (training, validation, testing, fine_tuning, evaluation)
- Risk flags: bias_risk, privacy_risk, quality_risk, consent_risk, retention_risk
- Severity levels per risk flag for prioritized remediation

### Analytics & Dashboard

- Governance Posture Score: single metric answering "Am I protected?"
- Three posture pillars: Compliant, Controlled, Monitored
- Action items panel with prioritized next steps
- Change request and incident widgets with real-time counts
- My Approvals widget for pending human oversight tasks
- Trend analysis across compliance scores over time

## Deployment Model

VeriGuard AI is a cloud-hosted SaaS platform with:

- Multi-tenant organization support
- Project-based isolation within organizations
- Role-based access control (admin, member, viewer)
- Team invitation and management via email
- Environment management (development, staging, production)

## Security Architecture

- Row-Level Security (RLS) on all database tables
- Organization-scoped data isolation
- Cryptographic hashing (SHA-256) for all enforcement decisions
- Tamper-evident audit logs with immutable records
- Fail-closed runtime policy enforcement
- No plaintext storage of sensitive model data

## Regulatory Context

### EU AI Act

The EU AI Act establishes a risk-based framework for AI systems with four tiers: prohibited, high-risk, limited-risk, and minimal-risk. VeriGuard AI maps its control library to EU AI Act requirements (Articles 9-15), providing automated compliance scoring, gap analysis, and evidence management for organizations deploying AI in EU-regulated markets. Key supported articles include risk management (Art. 9), data governance (Art. 10), technical documentation (Art. 11), transparency (Art. 13), human oversight (Art. 14), and accuracy/robustness (Art. 15).

### NIST AI RMF

The NIST AI Risk Management Framework provides voluntary guidance for managing AI risks through four core functions: GOVERN, MAP, MEASURE, MANAGE.
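As a concrete illustration of how the four functions can relate to the control families described earlier, consider the sketch below. The specific function-to-family pairings are assumptions for illustration only, not VeriGuard's published mapping.

```python
# Illustrative only: a hypothetical mapping from the four NIST AI RMF
# functions onto VeriGuard control families. The pairings below are
# assumptions for demonstration, not the platform's actual mapping.
NIST_FUNCTION_TO_FAMILIES = {
    "GOVERN":  ["GOV", "HUMAN"],
    "MAP":     ["RISK", "DATA"],
    "MEASURE": ["MODEL", "MONITOR"],
    "MANAGE":  ["DEPLOY", "INCIDENT"],
}

def families_for(function: str) -> list[str]:
    """Return the control families assumed to support a NIST function."""
    return NIST_FUNCTION_TO_FAMILIES.get(function.upper(), [])
```

A mapping table like this is the basic shape behind cross-framework scoring: each framework requirement resolves to a set of controls, and control status rolls up into a per-framework score.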
VeriGuard AI implements these functions through its control families and risk register, with direct mappings to NIST AI RMF subcategories.

### ISO/IEC 42001

ISO 42001 establishes requirements for an AI management system (AIMS). VeriGuard AI's governance controls, evidence management, and audit capabilities support ISO 42001 certification readiness through continuous compliance monitoring and documentation.

### SOC 2 for AI Systems

VeriGuard AI extends traditional SOC 2 trust service criteria to address AI-specific concerns including model governance, data lineage, and algorithmic accountability.

## API & Integration

### AI Governance API (AGaaS)

VeriGuard AI exposes governance capabilities as a service via API:

- **calculate-compliance**: Compute compliance scores across frameworks
- **generate-report**: Generate audit packages and compliance reports
- **generate-bom**: Create AI Bills of Materials in SPDX/CycloneDX formats
- **evaluate-policy**: Runtime policy evaluation with cryptographic hashing
- **execute-tests**: Run pre-deployment test suites with pass/fail gates
- **ai-assistant**: Authenticated AI copilot for governance guidance

### MCP Server (Model Context Protocol)

VeriGuard AI exposes 4 governance tools via MCP for AI agent integration:

- `veriguard_evaluate_policy` — Runtime policy check with cryptographic hash
- `veriguard_calculate_compliance` — Multi-framework compliance scoring
- `veriguard_generate_report` — Audit-ready report generation
- `veriguard_generate_bom` — AI Bill of Materials in SPDX/CycloneDX

MCP Endpoint: `https://www.veriguard-ai.com/functions/v1/mcp-server`

Authentication: API key (`X-API-Key` header) · Protocol: JSON-RPC 2.0 · Claude Desktop ready

### Supported Export Formats

- SPDX (Software Package Data Exchange) for AI-BOMs
- CycloneDX for supply chain transparency
- PDF, DOCX, XLSX, CSV for reports and audit packages

## Resources

### Feature Pages

- **Pre-Deployment AI Testing** (/features/testing): Automated bias, fairness, hallucination, PII, and safety testing with deployment gates that block non-compliant models before production.
- **Model Inventory & Risk Classification** (/features/model-registry): Complete AI system inventory with automated EU AI Act risk classification, deployment governance, and per-model kill switch controls.
- **Drift & Performance Monitoring** (/features/drift-monitoring): Continuous production monitoring with configurable thresholds, automated incident creation, and kill switch integration for catastrophic drift.
- **Policy Documentation & Lifecycle** (/features/policy-lifecycle): Author, review, approve, and enforce governance policies with full version control, approval workflows, and cryptographic decision lineage.
- **Multi-Framework Compliance Exports** (/features/compliance-exports): One control layer mapped to EU AI Act, NIST AI RMF, ISO 42001, SOC 2, SOX, and FedRAMP with normalized cross-framework reports.

### Whitepapers

- **The Coming Procurement Bottleneck in AI Adoption**: Analysis of how compliance friction in procurement will stall enterprise AI adoption. Covers FedRAMP bottlenecks, EU AI Act ripple effects, vendor questionnaire explosion, and a maturity framework for enterprise buyers.
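The cryptographic decision lineage noted in the Policy Documentation feature above rests on a SHA-256 hash chain: each audit entry's hash covers the previous entry's hash, so altering any record invalidates every hash after it. A minimal sketch (with assumed record fields, not the platform's actual ledger format):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first ledger entry

def chain_hash(prev_hash: str, record: dict) -> str:
    """Hash a decision record together with the previous entry's hash,
    so tampering with any entry breaks every hash that follows it."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

# Build a tiny ledger of hypothetical policy decisions.
ledger = []
prev = GENESIS
for record in [
    {"decision": "allow", "policy_version": "1.2.0"},
    {"decision": "block", "policy_version": "1.2.0"},
]:
    h = chain_hash(prev, record)
    ledger.append({"record": record, "hash": h})
    prev = h

def verify(ledger: list[dict]) -> bool:
    """Recompute the chain from the genesis hash; any altered or
    reordered entry makes verification fail."""
    prev = GENESIS
    for entry in ledger:
        if chain_hash(prev, entry["record"]) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Serializing with `sort_keys=True` keeps the hash deterministic regardless of dictionary ordering, which matters when the same record must verify across systems.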
### Guides

- **EU AI Act Compliance Checklist**: Step-by-step breakdown for high-risk AI deployers with control mappings and evidence requirements
- **NIST AI RMF Implementation Guide**: Practical mapping of GOVERN, MAP, MEASURE, MANAGE functions to VeriGuard controls
- **AI Bill of Materials (AI-BOM) Guide**: How to generate, review, and export AI-BOMs for supply chain transparency
- **Kill Switch Enforcement Patterns**: Architecture patterns for emergency AI shutdown with audit trail requirements
- **Normalized Reporting Guide**: How VeriGuard's normalized control graph maps controls once across six regulatory frameworks
- **Cryptographic Audit Ledger**: Technical documentation of the SHA-256 hash-chained immutable decision ledger

## Frequently Asked Questions

**Q: What is VeriGuard AI?**
A: VeriGuard AI is an AI Governance as a Service (AGaaS) platform that delivers runtime policy enforcement, compliance scoring, and cryptographic audit trails via API — for enterprises, federal contractors, and AI agents.

**Q: Who is VeriGuard AI built for?**
A: CISOs, compliance officers, AI platform teams, model risk managers, procurement officers, and AI agent developers at enterprises and federal contractors who deploy or procure AI systems.

**Q: What regulatory frameworks does VeriGuard AI support?**
A: EU AI Act, NIST AI RMF, ISO/IEC 42001, SOC 2 for AI systems, and FedRAMP AI controls.

**Q: How does VeriGuard AI differ from other GRC tools?**
A: VeriGuard AI is governance as a service — not a static GRC tool. It enforces policies at runtime via API, blocks non-compliant deployments, provides cryptographic audit trails, and exposes governance tools to AI agents via MCP (Model Context Protocol).

**Q: Can AI agents use VeriGuard programmatically?**
A: Yes. VeriGuard exposes 4 governance tools via MCP (Model Context Protocol): policy evaluation, compliance scoring, report generation, and AI-BOM generation.
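A minimal sketch of how an agent might construct such a call as a JSON-RPC 2.0 request against the MCP endpoint listed above. The `tools/call` method name comes from the MCP specification; the argument names in the example are illustrative assumptions, not a documented VeriGuard schema.

```python
import json

MCP_ENDPOINT = "https://www.veriguard-ai.com/functions/v1/mcp-server"

def build_tool_call(tool: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request for an MCP server."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Example: a runtime policy check (argument names are assumptions).
payload = build_tool_call(
    "veriguard_evaluate_policy",
    {"model_id": "example-model", "environment": "production"},
)
body = json.dumps(payload)
# An agent would POST `body` to MCP_ENDPOINT with headers:
#   Content-Type: application/json
#   X-API-Key: <your API key>
```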
Any MCP-compatible agent — Claude, GPT, or custom — can call these tools with an API key.

**Q: What is an AI Bill of Materials (AI-BOM)?**
A: A comprehensive inventory of all components in an AI system with version tracking, license information, supplier details, and cryptographic hashes for supply chain transparency. Essential for procurement officers and vendor risk teams.

**Q: Can VeriGuard help customers pass federal AI procurement review?**
A: Yes. VeriGuard provides the evidence trail that government procurement officers require: risk assessments, bias testing results, drift monitoring, kill-switch logs, and cryptographic audit trails.

## Contact

Website: https://www.veriguard-ai.com
Support: support@veriguard-ai.com