Overview
Pillar 3 answers the question: How do we trust AI to operate in our business?
This pillar designs the governance, permissions, and observability structures that make trust operationally real—not theoretical.
Why Pillar 3 Matters
The Problem We're Solving
AI adoption fails when organizations cannot answer basic accountability questions:
- Who is responsible when AI makes a mistake?
- How do we know the AI is doing what we think it's doing?
- What happens when something goes wrong?
Without clear answers, any agent deployment creates organizational anxiety rather than resolving it. These questions are not obstacles to AI adoption—they are prerequisites.
What Success Looks Like
By the end of Pillar 3, the client organization has:
- Clear permission boundaries for each agent (what it can and cannot do)
- Named human accountability for every AI action
- Observability architecture for monitoring and audit
- Escalation procedures for uncertainty and failure
- A sanctioned AI policy that makes governed AI easier than shadow AI
Bringing In the Right Personas
Unlike Pillar 1 (which explicitly excluded certain personas), Pillar 3 requires voices that would have stalled earlier conversations.
Why These Personas Now
In Pillar 1, bringing Legal and Compliance into the room too early would have shifted the conversation from "where is value?" to "why we can't do this." Now, with prioritized use cases and understood data requirements, these personas have something concrete to govern rather than abstract risk to prevent.
The question is no longer "should we do AI?" but "how do we do these specific things safely?"
Required Personas
| Persona | Role | Why They're Essential Now |
|---|---|---|
| CIO | Accountable for safe AI adoption | Final authority on technology decisions |
| General Counsel | Legal liability and regulatory exposure | Defines what creates legal risk |
| Chief Compliance Officer | Regulatory requirements and audit readiness | Ensures external compliance |
| CISO | Security architecture and data protection | Ensures agent security |
| Head of QA | Operational accountability for agent outputs | Owns quality of agent work |
The OAIO Governance Methodology
Pillar 3 is delivered through two working sessions, each focused on different governance dimensions.
Session 1: Permission Boundaries
For each prioritized agent, the group defines explicit permission boundaries using a simple framework:
What can this agent DO?
- Read specified data sources
- Perform defined transformations
- Generate recommendations
- Flag issues for human review
What can this agent NEVER do?
- Approve final deliverables
- Modify protected data
- Access data outside its scope
- Take actions without human confirmation
Permission Boundary Template:
| Agent | CAN DO | CANNOT DO | REQUIRES APPROVAL |
|---|---|---|---|
| Pre-QA Validation | Check reports against rules, Flag issues | Approve reports, Modify data | None - recommendation only |
| Field Note Normalization | Suggest normalized text, Highlight changes | Change system of record | Human confirms before save |
| Exception Routing | Flag high-risk cases, Suggest expert | Assign work, Change priority | Expert reviews recommendation |
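To make these boundaries enforceable, the workshop matrix can be translated into a machine-readable specification that the implementation team checks at runtime. The sketch below is illustrative only, assuming a Python-based orchestration layer; agent and action names are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum


class Approval(Enum):
    NONE = "none"                    # recommendation only
    HUMAN_CONFIRM = "human_confirm"  # human confirms before anything is committed
    EXPERT_REVIEW = "expert_review"  # domain expert reviews the recommendation


@dataclass(frozen=True)
class PermissionSpec:
    """Machine-readable version of the CAN DO / CANNOT DO / REQUIRES APPROVAL matrix."""
    agent: str
    can_do: frozenset[str]
    cannot_do: frozenset[str]
    approval: Approval

    def check(self, action: str) -> Approval:
        """Return the approval requirement for an action, or raise if it is out of bounds."""
        if action in self.cannot_do or action not in self.can_do:
            raise PermissionError(f"{self.agent} is not permitted to perform '{action}'")
        return self.approval


# Example: the Field Note Normalization agent from the template above
field_note_agent = PermissionSpec(
    agent="field-note-normalization",
    can_do=frozenset({"suggest_normalized_text", "highlight_changes"}),
    cannot_do=frozenset({"write_system_of_record"}),
    approval=Approval.HUMAN_CONFIRM,
)
```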
AWS addresses this with Amazon Bedrock Guardrails, which provide input filtering and prompt-attack detection. Configure guardrails to detect and block prompt injection attempts and other malicious inputs.
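As a rough illustration, the boto3 snippet below sketches how a prompt-attack filter might be configured; the guardrail name, blocked-message text, and filter strength are placeholder assumptions, so consult the Bedrock documentation for the authoritative parameter set:

```python
import boto3

bedrock = boto3.client("bedrock")  # control-plane client for guardrail management

# Sketch: block prompt-injection attempts on input. The PROMPT_ATTACK filter applies
# to inputs only, so output strength is set to NONE.
response = bedrock.create_guardrail(
    name="agent-input-guardrail",  # placeholder name
    description="Blocks prompt injection and jailbreak attempts",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    blockedInputMessaging="This request was blocked by the agent's input policy.",
    blockedOutputsMessaging="This response was blocked by the agent's output policy.",
)
print(response["guardrailId"], response["version"])
```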
Session 2: Observability and Escalation
The second session focuses on making agent behavior visible and defining escalation paths.
Observability Requirements:
| Requirement | What It Means | Why It Matters |
|---|---|---|
| Action Logging | Every agent action logged with timestamp, input, output | Complete audit trail |
| Confidence Exposure | Confidence scores visible on all recommendations | Users know when to trust |
| Compliance Access | Audit trail accessible to compliance without IT intervention | Self-service compliance |
| Operational Review | Weekly summary reports for operational review | Ongoing governance |
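One way to meet the action-logging and confidence-exposure requirements is to emit a structured audit record for every agent action. The following is a minimal sketch; the field names are assumptions rather than a prescribed schema:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("agent.audit")


def log_agent_action(agent: str, action: str, inputs: dict, outputs: dict, confidence: float) -> str:
    """Write one audit record per agent action: timestamp, input, output, and confidence."""
    record = {
        "event_id": str(uuid.uuid4()),           # stable ID for audit cross-reference
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "inputs": inputs,                        # or a reference/hash if inputs are sensitive
        "outputs": outputs,
        "confidence": confidence,                # surfaced to users alongside the recommendation
    }
    logger.info(json.dumps(record))
    return record["event_id"]
```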
Escalation Triggers:
Define specific conditions that trigger human intervention:
| Trigger | Action |
|---|---|
| Confidence below threshold | Automatic human review required |
| Data anomaly detected | Flag and pause agent processing |
| Pattern outside training distribution | Escalate to domain expert |
| Any action touching protected data | Mandatory human approval |
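These triggers can be encoded as a simple post-action check that routes each case to the required response. A minimal sketch, assuming a per-agent confidence threshold and illustrative trigger inputs:

```python
from enum import Enum


class Escalation(Enum):
    NONE = "none"
    HUMAN_REVIEW = "human_review"      # confidence below threshold
    PAUSE_AGENT = "pause_agent"        # data anomaly detected
    DOMAIN_EXPERT = "domain_expert"    # pattern outside training distribution
    HUMAN_APPROVAL = "human_approval"  # action touches protected data


def evaluate_escalation(confidence: float, threshold: float,
                        anomaly_detected: bool, out_of_distribution: bool,
                        touches_protected_data: bool) -> Escalation:
    """Map the escalation triggers defined in Session 2 to a required response."""
    if touches_protected_data:
        return Escalation.HUMAN_APPROVAL
    if anomaly_detected:
        return Escalation.PAUSE_AGENT
    if out_of_distribution:
        return Escalation.DOMAIN_EXPERT
    if confidence < threshold:
        return Escalation.HUMAN_REVIEW
    return Escalation.NONE
```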
The Trust Spectrum
Not every agent action requires the same level of oversight. Pillar 3 helps organizations calibrate governance to risk.
| Risk Level | Agent Behavior | Governance Requirement |
|---|---|---|
| Low | Read-only data access, internal summaries | Basic logging, periodic review |
| Medium | Recommendations to humans, draft generation | Confidence thresholds, human confirmation |
| High | Actions affecting client deliverables | Mandatory human approval, audit trail |
| Critical | Licensed data, regulatory filings | Human execution only, AI prohibited |
Applying the Trust Spectrum:
For each agent, map every action to the appropriate risk level:
- Field Note Normalization = Medium (agent suggests, human confirms)
- Pre-QA Validation = High (affects client deliverables, all flags require acknowledgment)
- Licensed measurement modification = Critical (no AI involvement, ever)
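The classification can also be made executable so that governance requirements are enforced in code rather than only documented. A minimal sketch, with hypothetical action names mapped to the levels above:

```python
from enum import IntEnum


class RiskLevel(IntEnum):
    LOW = 1        # read-only data access, internal summaries
    MEDIUM = 2     # recommendations to humans, draft generation
    HIGH = 3       # actions affecting client deliverables
    CRITICAL = 4   # licensed data, regulatory filings


# Illustrative mapping of agent actions to risk levels (action names are hypothetical)
ACTION_RISK = {
    "suggest_normalized_text": RiskLevel.MEDIUM,
    "flag_report_issue": RiskLevel.HIGH,
    "modify_licensed_measurement": RiskLevel.CRITICAL,
}

GOVERNANCE = {
    RiskLevel.LOW: "basic logging, periodic review",
    RiskLevel.MEDIUM: "confidence thresholds, human confirmation",
    RiskLevel.HIGH: "mandatory human approval, audit trail",
    RiskLevel.CRITICAL: "human execution only, AI prohibited",
}


def governance_for(action: str) -> str:
    """Return the governance requirement for an action; unknown actions default to Critical."""
    level = ACTION_RISK.get(action, RiskLevel.CRITICAL)
    if level is RiskLevel.CRITICAL:
        raise PermissionError(f"'{action}' is classified Critical: no AI involvement permitted")
    return GOVERNANCE[level]
```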
Microsoft Azure addresses this with Azure AI Search plus Azure OpenAI On Your Data, which grounds responses in your own sources and generates citations automatically.
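As a rough sketch of that pattern, the snippet below assumes the openai Python SDK's AzureOpenAI client and an existing Azure AI Search index; endpoints, deployment names, and the API version are placeholders, and the exact citation payload shape depends on the API version in use:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<azure-openai-key>",                               # placeholder
    api_version="2024-02-15-preview",                           # placeholder API version
)

completion = client.chat.completions.create(
    model="<chat-deployment-name>",  # Azure deployment name, not a raw model ID
    messages=[{"role": "user", "content": "Summarize the QA exceptions flagged this week."}],
    extra_body={
        "data_sources": [  # grounds the response in your own indexed content
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": "https://<your-search>.search.windows.net",
                    "index_name": "<index-name>",
                    "authentication": {"type": "api_key", "key": "<search-key>"},
                },
            }
        ]
    },
)

message = completion.choices[0].message
print(message.content)
# Grounding citations are returned alongside the message; the exact shape varies by API version.
citations = message.model_dump().get("context", {}).get("citations", [])
for citation in citations:
    print(citation.get("title"), citation.get("url"))
```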
Addressing Shadow AI Directly
With formal governance taking shape, Pillar 3 addresses the elephant in the room: shadow AI usage that is already happening.
Employees are likely already using ChatGPT and similar tools for:
- Cleaning up document language
- Summarizing regulatory documents
- Drafting communications
- Analyzing data
None of this is governed. None of it is logged. Sensitive data may be flowing to external services.
The OAIO Approach: Better Path, Not Prohibition
Rather than issuing a prohibition (which rarely works), Orion proposes a different approach: make the governed path easier than the shadow path.
Sanctioned AI Policy Components:
| Component | What It Provides |
|---|---|
| Approved tools | Internal AI capabilities for common tasks (summarization, drafting, normalization) |
| Clear guidance | What data can and cannot be used with AI tools |
| Easy access | Integration into existing workflows, not separate systems |
| No punishment for past usage | Amnesty for previous shadow AI, focus on future behavior |
Key message to employees: "We know you've been using AI tools to get work done. That's resourceful. We're now providing a better way that protects you and the company."
Governance Artifacts
Pillar 3 produces concrete documentation that guides implementation and ongoing operations:
Per-Agent Artifacts
| Artifact | Content | Audience |
|---|---|---|
| Permission Specification | Can do / Cannot do / Requires approval matrix | Implementation team, Compliance |
| Data Access Boundaries | What data the agent can access and how | Security, Data stewards |
| Action Authority Limits | What actions require what approval levels | Operations, Compliance |
Organizational Artifacts
| Artifact | Content | Audience |
|---|---|---|
| Observability Architecture | Logging requirements, retention policies, dashboard specs | IT, Security, Compliance |
| Escalation Procedures | Trigger conditions, response times, incident classification | Operations, QA |
| Sanctioned AI Policy | Approved tools, use cases, data handling requirements | All employees |
Session Agendas
Session 1: Permission Boundaries (3-4 hours)
| Time | Focus | Purpose |
|---|---|---|
| 0:00–0:30 | Context Setting | Review agents, reframe from "should we" to "how safely" |
| 0:30–1:30 | Permission Boundary Design | Define can do / cannot do / requires approval per agent |
| 1:30–1:45 | Break | |
| 1:45–2:45 | Accountability Assignment | Who owns decisions, who escalates, who audits |
| 2:45–3:30 | Alignment and Documentation | Capture permissions, identify concerns, assign follow-ups |
Session 2: Observability and Escalation (3-4 hours)
| Time | Focus | Purpose |
|---|---|---|
| 0:00–0:30 | Observability Requirements | Define what must be logged and visible |
| 0:30–1:30 | Escalation Design | Define triggers, procedures, response times |
| 1:30–1:45 | Break | |
| 1:45–2:30 | Shadow AI Discussion | Surface current usage, design sanctioned alternative |
| 2:30–3:30 | Policy Drafting | Draft sanctioned AI policy, identify gaps |
Governance Design Principles
Outputs and Handoffs
Pillar 3 Deliverables
| Deliverable | Description |
|---|---|
| Agent Permission Specifications | Per-agent capability matrix |
| Observability Architecture | Logging, dashboards, audit protocols |
| Escalation Procedures | Trigger conditions, response requirements |
| Sanctioned AI Policy | Approved tools, guidance, expectations |
| Trust Spectrum Application | Risk classification per agent action |
Handoff to Pillar 4
Pillar 3 outputs flow directly into Pillar 4 (AI Experience Design):
- Permission boundaries shape what users can ask agents to do
- Escalation triggers become UX decision points
- Confidence exposure requirements drive transparency design
- Accountability structures inform workflow integration
Delivery Timeline
2 weeks total
Common Pitfalls
Facilitator Guidance
Mission & Charter
The Pillar 3 Governance Sessions exist to make trust operationally real. These are not abstract policy discussions or compliance checkbox exercises. The mission is to:
- Define explicit permission boundaries for each prioritized agent
- Assign named human accountability for every AI action
- Design observability and escalation architectures
- Draft a sanctioned AI policy that makes governed AI easier than shadow AI
What these sessions are NOT:
- A forum to re-debate whether AI should be adopted (that was Pillar 1)
- An abstract risk discussion without specific agent context
- A compliance documentation exercise divorced from operations
Session Inputs
Participants should have reviewed:
- Pillar 1 outputs: prioritized agents, value propositions, business owners
- Pillar 2 outputs: data maps, access models, named data owners
- Pre-read: summary of agents and their intended capabilities
Orion enters with prepared artifacts:
- Permission boundary templates (pre-populated with agent names)
- Trust Spectrum framework
- Example escalation procedures from similar engagements
Preparation Checklist
- Review Pillar 1 and 2 outputs — Understand each agent's value proposition and data profile
- Brief participants individually — Address concerns before the group session; surface objections early
- Prepare agent-specific scenarios — Use concrete examples to ground abstract governance discussions
- Have permission templates ready — Pre-populate with agent names and draft capabilities
- Coordinate with Legal/Compliance — Ensure they understand the goal is "how to do safely" not "whether to do"
- Prepare shadow AI discussion — Research likely current usage patterns to address proactively
Delivery Tips
Opening (Session 1):
- Reframe immediately: "We're not deciding whether to do AI. We're deciding how to do these specific agents safely."
- Acknowledge that skepticism is valuable—it makes governance better
- Review each agent briefly: what it does, what data it touches, who owns it
Managing the Room:
- When Legal says "we can't," ask "under what conditions could we?"
- When Security raises concerns, translate to specific controls needed
- Keep discussions anchored to specific agents, not abstract AI
- Capture concerns as requirements, not blockers
- Watch for the "governance theater" trap—policies that look good but don't govern
Session 2 Transition:
- Connect permission boundaries to observability: "Now that we know what agents can do, how do we see that they're doing it?"
- Frame escalation as protection for users, not restriction
Closing:
- Summarize permissions clearly—who can do what
- Confirm accountability assignments by name
- Review shadow AI discussion outcomes
- Schedule follow-up for policy drafting and stakeholder review
Output Artifacts Checklist
By session end, confirm you have:
- Permission boundary matrix for each agent (CAN DO / CANNOT DO / REQUIRES APPROVAL)
- Named accountability assignments (not roles—people)
- Observability requirements (what must be logged and visible)
- Escalation triggers and procedures
- Shadow AI discussion outcomes and sanctioned path plan
- Follow-up assignments for policy drafting
When Variations Are Required
Multiple sessions may be needed when:
- Organization has distinct business units with different risk profiles
- Regulatory requirements vary by region or function
- Agents touch multiple compliance regimes (HIPAA, PCI, etc.)
Session scope adjustments:
- If governance personas are new to AI: add 30 minutes for AI economics education
- If strong resistance emerges: pause for individual stakeholder alignment before continuing
- If shadow AI usage is extensive: may need dedicated follow-up session for sanctioned path design
Pricing and Positioning
Scope Options
| Scope | Duration | Description |
|---|---|---|
| Governance Workshop | 1-2 weeks | Two facilitated sessions with artifact documentation |
| Comprehensive Framework | 3-4 weeks | Full governance framework with policy drafting |
| Ongoing Governance | Retainer | Continuous governance review as agents evolve |
Integration with Compliance Programs
Pillar 3 artifacts integrate with existing compliance programs:
- SOC 2 — AI controls map to trust service criteria
- ISO 27001 — Agent security controls in ISMS
- GDPR/CCPA — Privacy controls for AI data processing
- Industry-specific — Financial services, healthcare, etc.
Required Collateral
- Permission Boundary Workshop Guide (TODO)
- Permission Specification Template (TODO)
- Observability Architecture Template (TODO)
- Escalation Procedure Template (TODO)
- Sanctioned AI Policy Template (TODO)
- Trust Spectrum Assessment Tool (TODO)
Reference Materials
Related Content
- NorthRidge Case Study: Pillar 3 — Story-based walkthrough
- Pillar 2: Data & AI Readiness — Prerequisites for Pillar 3
- Pillar 4: AI Experience Design — Where Pillar 3 outputs flow
External Resources
Governance Frameworks:
- NIST AI Risk Management Framework — Comprehensive framework for AI risk governance
- ISO/IEC 42001:2023 — AI management system standard
Security Guidance:
- OWASP Top 10 for Large Language Model Applications — Security risks specific to LLM deployments
- MITRE ATLAS — Adversarial threat landscape for AI systems
Compliance Standards:
- SOC 2 Trust Service Criteria — Map AI controls to SOC 2 criteria
- EU AI Act — European AI regulation requirements