Overview
Pillar 4 answers the question: How will people actually use this?
This pillar designs the interaction models, interfaces, and workflows that determine whether AI actually gets adopted into daily work—through genuine improvement, not mandates.
Why Pillar 4 Matters
The Problem We're Solving
Most AI projects fail not because the technology does not work, but because people do not use it.
Organizations have seen this pattern repeatedly: workflow automation tools that sit unused, collaboration platforms that employees route around, analytics dashboards that no one checks. The technology worked; the experience did not.
Common adoption failures:
- AI tools feel foreign to existing workflows
- Users don't trust AI recommendations
- Interaction requires too much effort
- Value is not visible to users
- Senior experts feel threatened, not empowered
What Success Looks Like
By the end of Pillar 4, the client organization has:
- Detailed interaction designs for each prioritized agent
- Trust and transparency specifications built into the architecture
- Integration requirements mapped to existing systems
- Adoption metrics defined and baseline-ready
- User-validated designs that pass "the Experience Test"
Leveraging Superintelligent Insights
The Superintelligent survey from Pillar 1 captured rich data about how employees actually work—not how they are supposed to work, but how they really operate day-to-day. This data becomes the foundation for experience design.
What Superintelligent Reveals:
| Insight Type | What It Shows | Design Implication |
|---|---|---|
| Work Rhythm | When and how people work (bursts, disconnected, etc.) | Design for actual patterns, not ideal patterns |
| Context Switching | How often people jump between tasks | AI should bring context, not require users to find it |
| Expert Concerns | What senior practitioners worry about | Position AI as amplifier, not replacement |
| Informal Workarounds | Personal systems employees have built | Integrate with existing patterns, don't disrupt them |
The OAIO Experience Design Methodology
Pillar 4 delivers through agent-specific design sessions—one session per agent, each bringing together the right stakeholders for that specific use case.
Session Participants
Unlike governance sessions (which included Legal and Compliance), experience design sessions bring together:
| Participant | Role | Why They're Essential |
|---|---|---|
| End Users | Field workers, reviewers, analysts | Know the daily reality of the work |
| Workflow Owners | Process owners | Understand how work actually flows |
| Senior Practitioners | Veterans who've "seen everything" | Know edge cases and institutional context |
| UX Facilitator | Orion experience designer | Translates insights into design |
Session Structure (3-4 hours per agent)
Each session follows a consistent three-part structure:
Part 1: Current State Mapping (60-90 minutes)
Map the current workflow in detail:
- Where does work originate?
- What triggers human action?
- Where are the friction points?
- What happens when something goes wrong?
Part 2: Interaction Model Design (60-90 minutes)
Design how human-agent interaction will work:
| Element | Question to Answer |
|---|---|
| Entry Point | How does work reach the agent? |
| Handoff Protocol | How does the agent communicate what it has done? |
| Uncertainty Handling | How does the agent surface doubt or ambiguity? |
| Escalation Trigger | When does the agent ask for human help? |
| Exit Point | How does the human confirm completion? |
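To make the model concrete, the five elements can be captured as a per-agent specification artifact. Below is a minimal TypeScript sketch; the field names and example values are illustrative assumptions, not a prescribed schema.

```typescript
// Illustrative sketch: one interaction model record per agent.
// Field names and example values are assumptions, not a prescribed schema.
interface InteractionModel {
  agent: string;
  entryPoint: string;          // How work reaches the agent
  handoffProtocol: string;     // How the agent communicates what it has done
  uncertaintyHandling: string; // How the agent surfaces doubt or ambiguity
  escalationTrigger: string;   // When the agent asks for human help
  exitPoint: string;           // How the human confirms completion
}

const qaValidationAgent: InteractionModel = {
  agent: "QA Validation",
  entryPoint: "Item uploaded to the review queue",
  handoffProtocol: "Notification with a validation summary and flagged issues",
  uncertaintyHandling: "Low-confidence checks are marked 'needs review', never auto-cleared",
  escalationTrigger: "Any rule the agent cannot evaluate from the available data",
  exitPoint: "Reviewer marks validation complete; decision is logged for audit",
};
```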
Part 3: Transparency and Trust Design (45-60 minutes)
Define what the agent shows users:
| Element | Purpose | Example |
|---|---|---|
| Confidence Indicators | Visual signals of certainty | "High confidence" badge, percentage scores |
| Reasoning Traces | Explain why agent made recommendations | "Flagged because: [specific rule]" |
| Source Attribution | Links to data agent relied on | Direct links to source documents |
| Correction Mechanisms | Easy ways to fix agent mistakes | One-click "dismiss" or "modify" |
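These four elements can travel with every recommendation an agent surfaces. A hedged sketch of what a single flagged item might carry, with all field names assumed for illustration:

```typescript
// Illustrative payload for a single agent flag or recommendation.
// All field names are assumptions for discussion, not an implemented API.
interface AgentFlag {
  summary: string;                                  // What the agent found
  confidence: { score: number; label: "High" | "Medium" | "Low" };
  reasoning: string;                                // "Flagged because: [specific rule]"
  sources: { title: string; url: string }[];        // Data the agent relied on
  corrections: ("accept" | "modify" | "dismiss")[]; // One-click correction options
}
```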
Agent Interaction Model Templates
Pattern 1: Pre-Processing Agent (QA Validation)
The Challenge: Human reviewers spend hours on mechanical checks before applying judgment.
| Stage | Human Action | Agent Action |
|---|---|---|
| Entry | Upload item to queue | Automatic validation begins |
| Processing | Continue other work | Check against validation rules |
| Handoff | Receive notification | Present validation summary with issues flagged |
| Review | Examine flagged items | Provide context for each flag (source, rule, confidence) |
| Resolution | Accept, modify, or dismiss flags | Learn from corrections |
| Exit | Mark validation complete | Log decision for audit |
Trust Design: The agent never says "this is good." Instead: "I checked 47 items. Here are 3 that need your attention."
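That handoff can be expressed as a summary object rather than a verdict. A minimal sketch, with hypothetical field names:

```typescript
// Illustrative handoff for the pre-processing pattern: a summary, not a verdict.
interface ValidationSummary {
  itemsChecked: number;
  flaggedItems: { item: string; reason: string; confidence: number }[];
  auditLogId: string; // Reference to the logged decision trail
}

function handoffMessage(s: ValidationSummary): string {
  return `I checked ${s.itemsChecked} items. Here are ${s.flaggedItems.length} that need your attention.`;
}
```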
Pattern 2: Normalization Agent (Field Note Processing)
The Challenge: Inconsistent inputs (terminology, abbreviations, formats) require tedious standardization.
| Stage | Human Action | Agent Action |
|---|---|---|
| Entry | Submit raw content (text, photos) | Parse and interpret content |
| Processing | See real-time progress indicator | Normalize terminology, structure data |
| Handoff | Review normalized output | Present side-by-side comparison |
| Review | Accept or edit suggestions | Highlight changes for easy scanning |
| Exit | Confirm final version | Store normalized version with audit trail |
Trust Design: Show every change, color-coded:
- Blue = Terminology normalization
- Yellow = Inferred data (with confidence score)
- Normal = Unchanged content
One-click options: Accept All, Review Individually, Revert to Original.
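The side-by-side review depends on a change-level record the interface can color-code. A small sketch that mirrors the categories above; everything else is an assumption:

```typescript
// Illustrative change record for the normalization pattern.
type ChangeKind = "terminology" | "inferred" | "unchanged"; // blue, yellow, normal

interface NormalizationChange {
  field: string;
  original: string;
  normalized: string;
  kind: ChangeKind;
  confidence?: number; // Only present for inferred data
}

type ReviewAction = "acceptAll" | "reviewIndividually" | "revertToOriginal";
```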
Pattern 3: Routing Agent (Exception Handler)
The Challenge: Experts waste time on routine cases while genuinely complex cases sometimes get inadequate attention.
| Stage | Human Action | Agent Action |
|---|---|---|
| Entry | None (automatic monitoring) | Continuously analyze incoming work |
| Detection | Receive priority alert | Identify exception based on risk patterns |
| Context | Review exception summary | Provide case context, similar precedents, risk factors |
| Decision | Assign routing or handling | Learn from routing decisions |
| Exit | Confirm action taken | Log decision and outcome |
Trust Design: The agent surfaces information, not decisions. Each alert includes:
- Why this was flagged (specific risk indicators)
- Similar past cases (how they were handled)
- Recommended expert (based on expertise matching)
- Urgency assessment (time-sensitive or routine)
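Those four components can be assembled into a single alert payload. A sketch under the same caveats, with hypothetical names and no particular expertise-matching service implied:

```typescript
// Illustrative priority alert for the routing pattern: information, not a decision.
interface RoutingAlert {
  caseId: string;
  riskIndicators: string[];                                // Why this was flagged
  similarCases: { caseId: string; resolution: string }[];  // How past cases were handled
  recommendedExpert: { name: string; matchReason: string };
  urgency: "time-sensitive" | "routine";
}
```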
Confidence Design
Confidence indicators require careful calibration. Users tend to ignore vague labels such as a bare "high confidence" because they do not know what threshold the label represents.
Effective Confidence Communication:
| Approach | Example | When to Use |
|---|---|---|
| Comparative | "Similar to 94% of cases I've seen" | When historical context helps |
| Range | "Between 75-85% confident" | When precision matters |
| Categorical | "High / Medium / Low" with definitions | For quick scanning |
| Contextual | "Lower than usual for this document type" | When deviation matters |
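One way to make these approaches concrete is a small formatter that turns a raw score into a categorical label with comparative context. The thresholds and wording below are placeholders that a design session would calibrate with real users:

```typescript
// Illustrative confidence formatter. Thresholds and wording are placeholders
// that each design session would calibrate with real users and real data.
function formatConfidence(score: number, similarCaseRate?: number): string {
  const label = score >= 0.85 ? "High" : score >= 0.6 ? "Medium" : "Low";
  const base = `${label} confidence (${Math.round(score * 100)}%)`;
  return similarCaseRate !== undefined
    ? `${base}; similar to ${Math.round(similarCaseRate * 100)}% of cases I've seen`
    : base;
}
```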
Progressive Disclosure: Power users want to move fast; new users want to understand. Solution: Collapsed explanations with "Why?" links.
AWS supports this with Amazon Bedrock: model confidence indicators and guardrail evaluation scores can be surfaced to users as uncertainty signals.
Design Validation
Before finalizing designs, Orion conducts lightweight validation with representative users.
Method: Walk-Through Sessions with Prototypes
What We Test:
- Can users understand what the agent did?
- Do they trust the confidence indicators?
- Can they efficiently review and correct?
- Does it feel faster than the current approach?
Common Validation Findings:
| Finding | Solution |
|---|---|
| Confidence indicators ignored | Show confidence as comparative range with context |
| Explanations slow power users | Make explanations optional (collapsed by default) |
| Mobile experience critical | Mobile-first design for field-facing features |
| Expert matching sometimes wrong | Easy reassignment with feedback mechanism |
Design Artifacts
Pillar 4 produces detailed specifications for implementation:
Interaction Specifications
| Artifact | Content | Audience |
|---|---|---|
| Screen-by-Screen Wireframes | Visual designs for each agent interface | Development team |
| State Diagrams | Workflow transitions and decision points | Development, QA |
| Error Handling Designs | Edge cases and exception flows | Development team |
Trust and Transparency Specifications
| Artifact | Content | Audience |
|---|---|---|
| Confidence Display Standards | How to show certainty levels | Development, UX |
| Explanation Templates | Standard formats for agent reasoning | Development, UX |
| Correction Mechanisms | How users fix mistakes | Development, QA |
Integration Requirements
| Artifact | Content | Audience |
|---|---|---|
| Entry Point Mapping | Where agents connect to existing systems | Architecture, Development |
| Notification Design | Alerting and messaging specs | Development, UX |
| Mobile/Offline Requirements | Disconnected operation needs | Development, Architecture |
Adoption Metrics
| Metric | Definition | Target |
|---|---|---|
| Activation | % of users who try the agent | 80%+ within 30 days |
| Engagement | Frequency of agent use | Daily for target users |
| Satisfaction | User perception of value | NPS > 30 |
| Efficiency | Time savings achieved | 50%+ reduction in task time |
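For baselining, the activation and efficiency metrics can be computed from simple usage logs and task timings. A sketch with hypothetical input shapes; real sources would be product analytics and time studies:

```typescript
// Illustrative adoption metric calculations against the targets above.
// Input shapes are hypothetical; real sources would be analytics and time studies.
interface UsageLog { userId: string; date: string }

function activationRate(targetUsers: string[], logs: UsageLog[]): number {
  const active = new Set(logs.map((l) => l.userId));
  const tried = targetUsers.filter((u) => active.has(u)).length;
  return tried / targetUsers.length; // Target: 0.8+ within 30 days
}

function efficiencyGain(baselineMinutes: number, currentMinutes: number): number {
  return (baselineMinutes - currentMinutes) / baselineMinutes; // Target: 0.5+
}
```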
Session Agenda
Per-Agent Design Session (3-4 hours):
| Time | Focus | Purpose |
|---|---|---|
| 0:00–0:15 | Context Setting | Review agent purpose, Superintelligent insights |
| 0:15–1:15 | Current State Mapping | Map workflow, identify friction, document workarounds |
| 1:15–1:30 | Break | |
| 1:30–2:30 | Interaction Model Design | Define entry, handoff, escalation, exit |
| 2:30–3:15 | Transparency and Trust Design | Confidence, explanations, corrections |
| 3:15–3:45 | Validation Planning | Identify test users, define success criteria |
Design Principles
Outputs and Handoffs
Pillar 4 Deliverables
| Deliverable | Description | Example |
|---|---|---|
| Interaction Specifications | Screen flows, state diagrams, wireframes | View Example |
| Trust Specifications | Confidence, explanation, correction designs | — |
| Integration Requirements | System touchpoints, API needs, offline requirements | — |
| Adoption Metrics | KPIs with targets and measurement plan | View Example |
| Validation Report | User feedback and design refinements | — |
Handoff to Pillar 5
Pillar 4 outputs flow directly into Pillar 5 (FinOps):
- Usage projections inform cost forecasting (see the sketch below)
- Interaction complexity affects token consumption estimates
- Training requirements impact operational cost model
- Adoption targets inform business case validation
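As a rough illustration of how these outputs feed the Pillar 5 cost model, a back-of-the-envelope usage projection; every figure is a hypothetical placeholder, not a benchmark:

```typescript
// Back-of-the-envelope usage projection feeding Pillar 5 cost forecasting.
// All figures are hypothetical placeholders, not benchmarks.
const users = 120;                    // From adoption targets
const interactionsPerUserPerDay = 8;  // From interaction model complexity
const tokensPerInteraction = 3_000;   // Prompt + response; depends on context the agent carries
const workdaysPerMonth = 21;
const pricePerMillionTokens = 5;      // USD, placeholder blended rate

const monthlyTokens = users * interactionsPerUserPerDay * tokensPerInteraction * workdaysPerMonth;
const monthlyCost = (monthlyTokens / 1_000_000) * pricePerMillionTokens;
// ≈ 60.5M tokens/month, ≈ $302/month under these placeholder assumptions
```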
Delivery Timeline
2 weeks total.
Common Pitfalls
Facilitator Guidance
Mission & Charter
The Pillar 4 Design Sessions exist to create AI experiences that earn adoption. These are not UI polish exercises or technology demonstrations. The mission is to:
- Design interaction models that fit how people actually work
- Build transparency and trust into every agent touchpoint
- Create experiences that are genuinely better than current workflows
- Validate designs with real users before development begins
What these sessions are NOT:
- UI mockup reviews divorced from workflow context
- Technology demonstrations to impress stakeholders
- Manager-driven design without actual end users in the room
- Abstract interaction patterns without specific agent grounding
Session Inputs
Participants should have reviewed:
- Superintelligent survey insights relevant to their agent
- Pillar 3 permission boundaries (what the agent can and cannot do)
- Current workflow documentation (if it exists)
Orion enters with prepared artifacts:
- Superintelligent insights summary for this user population
- Current state mapping templates
- Interaction model canvas templates
- Trust design checklist
Preparation Checklist
- Review Superintelligent data — Understand how this user population actually works (rhythms, pain points, workarounds)
- Study Pillar 3 outputs — Know the permission boundaries and trust requirements that constrain design
- Recruit the right participants — End users, not managers describing what users do; include skeptics
- Prepare current state templates — Have workflow mapping tools ready (whiteboard, digital, or paper)
- Identify senior practitioners — Veterans who can surface edge cases and institutional knowledge
- Review mobile/offline needs — Understand connectivity constraints for field workers
Delivery Tips
Opening:
- Start with Superintelligent insight: "Your colleagues told us X about how this work actually happens"
- Frame the goal: "We're designing something you'll actually want to use—not something you're told to use"
- Acknowledge past failed tools—show you understand their skepticism
Managing the Room:
- Draw out the quiet users—they often know the real workflows
- Capture workarounds without judgment—they're design opportunities, not compliance failures
- When experts express concern, validate and design for amplification: "How do we make this help you do more of what you're good at?"
- Watch for managers answering for users—redirect to actual practitioners
- Keep the interaction model grounded in the permission boundaries from Pillar 3
Part Transitions:
- Current State → Interaction Model: "Now that we see how work flows today, let's design where the agent fits"
- Interaction Model → Trust Design: "The interaction is clear. Now let's make sure you can trust what the agent tells you"
Closing:
- Walk through the interaction model end-to-end with a realistic scenario
- Confirm the "Experience Test" answer: Why would someone use this instead of their current approach?
- If the answer isn't compelling, the design isn't ready—iterate before leaving
- Schedule validation session with representative users (different from design participants)
Output Artifacts Checklist
By session end, confirm you have:
- Current state workflow map with pain points and workarounds identified
- Interaction model for the agent (entry, handoff, escalation, exit)
- Transparency specifications (confidence display, explanations, corrections)
- Integration requirements (where agent connects to existing systems)
- Mobile/offline requirements (if applicable)
- Validation session scheduled with representative users
- Compelling answer to "The Experience Test"
When Variations Are Required
Session adjustments:
- If users work in disconnected environments: prioritize mobile-first, offline-capable design
- If senior experts are highly skeptical: dedicate extra time to "amplification not replacement" framing
- If workflows vary significantly by region/team: may need multiple sessions with different user groups
Validation considerations:
- Always validate with users different from those who designed (avoid confirmation bias)
- For high-stakes agents: consider clickable prototypes, not just walk-throughs
- If validation reveals fundamental issues: pause for redesign rather than force-fitting
Pricing and Positioning
Scope Options
| Scope | Duration | Description |
|---|---|---|
| Experience Workshop | 1-2 weeks | Design sessions for 2-3 agents with basic validation |
| Comprehensive Design | 3-4 weeks | Full design package with extensive validation |
| Design + Prototyping | 4-6 weeks | Interactive prototypes for user testing |
Integration with UX Programs
Pillar 4 designs integrate with existing UX practices:
- Design system alignment
- Accessibility requirements
- Brand guidelines
- Existing interaction patterns
Required Collateral
- Experience Design Session Guide (TODO)
- Current State Mapping Template (TODO)
- Interaction Model Canvas (TODO)
- Trust Design Checklist (TODO)
- Validation Session Guide (TODO)
- Adoption Metrics Framework (TODO)
Reference Materials
Related Content
- NorthRidge Case Study: Pillar 4 — Story-based walkthrough
- Pillar 3: AI Protection & Operational Trust — Prerequisites for Pillar 4
- Pillar 5: AI FinOps & Operational Economics — Where Pillar 4 outputs flow
External Resources
Human-AI Interaction:
- Google People + AI Guidebook — Design patterns for human-AI collaboration
- Microsoft HAX Toolkit — Human-AI experience design guidelines
- IBM Design for AI — Enterprise AI design principles
Accessibility:
- WCAG 2.1 Guidelines — Web content accessibility standards
- Inclusive Design at Microsoft — Inclusive design methodology
AI Transparency:
- Anthropic Constitutional AI — Approach to AI alignment and transparency
- Google Model Cards — Documenting AI model capabilities and limitations