
Pillar 4: AI Experience Design

How to design human-AI interactions that earn adoption through better workflows, not mandates.

Overview

Pillar 4 answers the question: How will people actually use this?

This pillar designs the interaction models, interfaces, and workflows that determine whether AI actually gets adopted into daily work—through genuine improvement, not mandates.


Why Pillar 4 Matters

The Problem We're Solving

Most AI projects fail not because the technology does not work, but because people do not use it.

Organizations have seen this pattern repeatedly: workflow automation tools that sit unused, collaboration platforms that employees route around, analytics dashboards that no one checks. The technology worked; the experience did not.

Common adoption failures:

  • AI tools feel foreign to existing workflows
  • Users don't trust AI recommendations
  • Interaction requires too much effort
  • Value is not visible to users
  • Senior experts feel threatened, not empowered

What Success Looks Like

By the end of Pillar 4, the client organization has:

  • Detailed interaction designs for each prioritized agent
  • Trust and transparency specifications built into the architecture
  • Integration requirements mapped to existing systems
  • Adoption metrics defined and baseline-ready
  • User-validated designs that pass "the Experience Test"

Leveraging Superintelligent Insights

The Superintelligent survey from Pillar 1 captured rich data about how employees actually work—not how they are supposed to work, but how they really operate day-to-day. This data becomes the foundation for experience design.

What Superintelligent Reveals:

| Insight Type | What It Shows | Design Implication |
| --- | --- | --- |
| Work Rhythm | When and how people work (bursts, disconnected, etc.) | Design for actual patterns, not ideal patterns |
| Context Switching | How often people jump between tasks | AI should bring context, not require users to find it |
| Expert Concerns | What senior practitioners worry about | Position AI as amplifier, not replacement |
| Informal Workarounds | Personal systems employees have built | Integrate with existing patterns, don't disrupt them |

The OAIO Experience Design Methodology

Pillar 4 delivers through agent-specific design sessions—one session per agent, each bringing together the right stakeholders for that specific use case.

Session Participants

Unlike governance sessions (which included Legal and Compliance), experience design sessions bring together:

| Participant | Role | Why They're Essential |
| --- | --- | --- |
| End Users | Field workers, reviewers, analysts | Know the daily reality of the work |
| Workflow Owners | Process owners | Understand how work actually flows |
| Senior Practitioners | Veterans who've "seen everything" | Know edge cases and institutional context |
| UX Facilitator | Orion experience designer | Translates insights into design |

Session Structure (3-4 hours per agent)

Each session follows a consistent three-part structure:

Part 1: Current State Mapping (60-90 minutes)

Map the current workflow in detail:

  • Where does work originate?
  • What triggers human action?
  • Where are the friction points?
  • What happens when something goes wrong?

Part 2: Interaction Model Design (60-90 minutes)

Design how human-agent interaction will work:

| Element | Question to Answer |
| --- | --- |
| Entry Point | How does work reach the agent? |
| Handoff Protocol | How does the agent communicate what it has done? |
| Uncertainty Handling | How does the agent surface doubt or ambiguity? |
| Escalation Trigger | When does the agent ask for human help? |
| Exit Point | How does the human confirm completion? |
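To make the output of this part concrete, here is a minimal sketch of how the five elements might be captured as a typed specification. The type and field names are illustrative assumptions, not part of the OAIO toolkit; the example values are placeholders for a hypothetical QA validation agent.

```typescript
// Sketch: the five interaction model elements as a typed specification.
// All names and values are illustrative assumptions.

type EscalationTrigger = {
  condition: string; // e.g. "confidence below 0.6" or "conflicting source data"
  routeTo: string;   // role or queue that receives the escalation
};

interface InteractionModel {
  agentName: string;
  entryPoint: string;          // how work reaches the agent
  handoffProtocol: string;     // how the agent communicates what it has done
  uncertaintyHandling: string; // how doubt or ambiguity is surfaced
  escalationTriggers: EscalationTrigger[];
  exitPoint: string;           // how the human confirms completion
}

// Placeholder example for a QA validation agent.
const qaValidationModel: InteractionModel = {
  agentName: "qa-validation",
  entryPoint: "Item uploaded to review queue",
  handoffProtocol: "Validation summary with flagged issues",
  uncertaintyHandling: "Per-flag confidence shown alongside the triggering rule",
  escalationTriggers: [{ condition: "confidence below 0.6", routeTo: "senior-reviewer" }],
  exitPoint: "Reviewer marks validation complete",
};
```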

Part 3: Transparency and Trust Design (45-60 minutes)

Define what the agent shows users:

| Element | Purpose | Example |
| --- | --- | --- |
| Confidence Indicators | Visual signals of certainty | "High confidence" badge, percentage scores |
| Reasoning Traces | Explain why the agent made a recommendation | "Flagged because: [specific rule]" |
| Source Attribution | Links to the data the agent relied on | Direct links to source documents |
| Correction Mechanisms | Easy ways to fix agent mistakes | One-click "dismiss" or "modify" |
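One way to keep these four elements together is to attach them to every recommendation the agent emits, so the interface can always render confidence, reasoning, sources, and corrections side by side. The payload below is a hedged sketch; its field names are assumptions made for illustration.

```typescript
// Sketch: the payload an agent might attach to each recommendation so the UI
// can render trust elements together. Field names are illustrative.

interface AgentRecommendation {
  summary: string;                            // what the agent suggests
  confidence: number;                         // 0..1, rendered as a badge or percentage
  reasoning: string[];                        // e.g. ["Flagged because: missing signature field"]
  sources: { label: string; url: string }[];  // direct links to documents the agent relied on
  corrections: ("accept" | "modify" | "dismiss")[]; // one-click actions offered to the user
}
```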

Agent Interaction Model Templates

Pattern 1: Pre-Processing Agent (QA Validation)

The Challenge: Human reviewers spend hours on mechanical checks before applying judgment.

| Stage | Human Action | Agent Action |
| --- | --- | --- |
| Entry | Upload item to queue | Automatic validation begins |
| Processing | Continue other work | Check against validation rules |
| Handoff | Receive notification | Present validation summary with issues flagged |
| Review | Examine flagged items | Provide context for each flag (source, rule, confidence) |
| Resolution | Accept, modify, or dismiss flags | Learn from corrections |
| Exit | Mark validation complete | Log decision for audit |

Trust Design: The agent never says "this is good." Instead: "I checked 47 items. Here are 3 that need your attention."
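A minimal sketch of that handoff message, assuming a hypothetical flag structure; the function and field names are illustrative, not a prescribed implementation.

```typescript
// Sketch: the QA validation handoff reports what was checked and what needs
// attention, never a blanket "this is good". Names are illustrative.

interface ValidationFlag {
  itemId: string;
  rule: string;       // which validation rule fired
  source: string;     // where the supporting data came from
  confidence: number; // 0..1
}

function buildHandoffSummary(totalChecked: number, flags: ValidationFlag[]): string {
  if (flags.length === 0) {
    return `I checked ${totalChecked} items and found nothing that triggered a rule. Please confirm completion.`;
  }
  return `I checked ${totalChecked} items. Here are ${flags.length} that need your attention.`;
}

// buildHandoffSummary(47, threeFlags) -> "I checked 47 items. Here are 3 that need your attention."
```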


Pattern 2: Normalization Agent (Field Note Processing)

The Challenge: Inconsistent inputs (terminology, abbreviations, formats) require tedious standardization.

| Stage | Human Action | Agent Action |
| --- | --- | --- |
| Entry | Submit raw content (text, photos) | Parse and interpret content |
| Processing | See real-time progress indicator | Normalize terminology, structure data |
| Handoff | Review normalized output | Present side-by-side comparison |
| Review | Accept or edit suggestions | Highlight changes for easy scanning |
| Exit | Confirm final version | Store normalized version with audit trail |

Trust Design: Show every change, color-coded:

  • Blue = Terminology normalization
  • Yellow = Inferred data (with confidence score)
  • Normal = Unchanged content

One-click options: Accept All, Review Individually, Revert to Original.
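A hedged sketch of how each change might be represented so the interface can color-code it and support the bulk actions; the types, names, and behavior are assumptions for illustration only.

```typescript
// Sketch: normalization changes carry a category (for color-coding) and an
// optional confidence score, plus simple bulk actions. Names are illustrative.

type ChangeKind = "terminology" | "inferred" | "unchanged"; // blue, yellow, normal

interface NormalizationChange {
  field: string;
  original: string;
  normalized: string;
  kind: ChangeKind;
  confidence?: number; // populated only for inferred data
}

type BulkAction = "accept-all" | "review-individually" | "revert-to-original";

function applyBulkAction(changes: NormalizationChange[], action: BulkAction): NormalizationChange[] {
  // Reverting restores the original values; the other actions leave the data
  // untouched and only change what the UI asks the user to do next.
  if (action === "revert-to-original") {
    return changes.map((c) => ({ ...c, normalized: c.original, kind: "unchanged" }));
  }
  return changes;
}
```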


Pattern 3: Routing Agent (Exception Handler)

The Challenge: Experts waste time on routine cases while genuinely complex cases sometimes get inadequate attention.

| Stage | Human Action | Agent Action |
| --- | --- | --- |
| Entry | None (automatic monitoring) | Continuously analyze incoming work |
| Detection | Receive priority alert | Identify exception based on risk patterns |
| Context | Review exception summary | Provide case context, similar precedents, risk factors |
| Decision | Assign routing or handling | Learn from routing decisions |
| Exit | Confirm action taken | Log decision and outcome |

Trust Design: The agent surfaces information, not decisions. Each alert includes:

  • Why this was flagged (specific risk indicators)
  • Similar past cases (how they were handled)
  • Recommended expert (based on expertise matching)
  • Urgency assessment (time-sensitive or routine)
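The alert can be treated as a structured payload that carries exactly those four pieces of information and nothing that looks like a decision. The shape below is a sketch; every field name is an assumption.

```typescript
// Sketch: an exception alert that surfaces information, not decisions.
// Structure and field names are illustrative assumptions.

interface ExceptionAlert {
  caseId: string;
  riskIndicators: string[];                                // why this was flagged
  similarCases: { caseId: string; resolution: string }[];  // how past cases were handled
  recommendedExpert: { name: string; expertiseMatch: string };
  urgency: "time-sensitive" | "routine";
}
```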

Confidence Design

Confidence indicators require careful calibration. Users initially ignore vague indicators ("high confidence") because they don't know what the threshold means.

Effective Confidence Communication:

| Approach | Example | When to Use |
| --- | --- | --- |
| Comparative | "Similar to 94% of cases I've seen" | When historical context helps |
| Range | "Between 75-85% confident" | When precision matters |
| Categorical | "High / Medium / Low" with definitions | For quick scanning |
| Contextual | "Lower than usual for this document type" | When deviation matters |

Progressive Disclosure: Power users want to move fast; new users want to understand. Solution: Collapsed explanations with "Why?" links.

💡 AWS accomplishes this with Amazon Bedrock confidence scores and guardrail assessments: access model confidence indicators and guardrail evaluation scores to surface uncertainty to users.


Design Validation

Before finalizing designs, Orion conducts lightweight validation with representative users.

Method: Walk-Through Sessions with Prototypes

What We Test:

  • Can users understand what the agent did?
  • Do they trust the confidence indicators?
  • Can they efficiently review and correct?
  • Does it feel faster than the current approach?

Common Validation Findings:

| Finding | Solution |
| --- | --- |
| Confidence indicators ignored | Show confidence as a comparative range with context |
| Explanations slow power users | Make explanations optional (collapsed by default) |
| Mobile experience critical | Mobile-first design for field-facing features |
| Expert matching sometimes wrong | Easy reassignment with feedback mechanism |

Design Artifacts

Pillar 4 produces detailed specifications for implementation:

Interaction Specifications

| Artifact | Content | Audience |
| --- | --- | --- |
| Screen-by-Screen Wireframes | Visual designs for each agent interface | Development team |
| State Diagrams | Workflow transitions and decision points | Development, QA |
| Error Handling Designs | Edge cases and exception flows | Development team |

Trust and Transparency Specifications

| Artifact | Content | Audience |
| --- | --- | --- |
| Confidence Display Standards | How to show certainty levels | Development, UX |
| Explanation Templates | Standard formats for agent reasoning | Development, UX |
| Correction Mechanisms | How users fix mistakes | Development, QA |

Integration Requirements

| Artifact | Content | Audience |
| --- | --- | --- |
| Entry Point Mapping | Where agents connect to existing systems | Architecture, Development |
| Notification Design | Alerting and messaging specs | Development, UX |
| Mobile/Offline Requirements | Disconnected operation needs | Development, Architecture |
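These three artifacts can be handed off as one structured record per agent. The shape below is a sketch under assumed names; the actual systems, channels, and sync strategies come from the client environment.

```typescript
// Sketch: per-agent integration requirements as a single structured handoff.
// Systems, channels, and field names are illustrative assumptions.

interface IntegrationRequirements {
  agentName: string;
  entryPoints: { system: string; mechanism: "api" | "event" | "file-drop" | "manual-upload" }[];
  notifications: { channel: "email" | "mobile-push" | "in-app"; trigger: string }[];
  offline: { required: boolean; syncStrategy?: string }; // for disconnected field work
}
```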

Adoption Metrics

| Metric | Definition | Target |
| --- | --- | --- |
| Activation | % of users who try the agent | 80%+ within 30 days |
| Engagement | Frequency of agent use | Daily for target users |
| Satisfaction | User perception of value | NPS > 30 |
| Efficiency | Time savings achieved | 50%+ reduction in task time |
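As an illustration of "baseline-ready" measurement, here is a sketch of computing the activation metric from simple usage events. The event schema and function names are assumptions; real telemetry will differ.

```typescript
// Sketch: activation = % of target users who tried the agent within 30 days
// of launch. Event schema and names are illustrative assumptions.

interface UsageEvent {
  userId: string;
  timestamp: Date;
}

function activationRate(targetUsers: string[], events: UsageEvent[], launch: Date): number {
  const windowEnd = new Date(launch.getTime() + 30 * 24 * 60 * 60 * 1000);
  const activeUsers = new Set(
    events
      .filter((e) => e.timestamp >= launch && e.timestamp <= windowEnd)
      .map((e) => e.userId)
  );
  const tried = targetUsers.filter((u) => activeUsers.has(u)).length;
  return targetUsers.length === 0 ? 0 : (tried / targetUsers.length) * 100; // target: 80%+
}
```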

Session Agenda

Per-Agent Design Session (3-4 hours):

| Time | Focus | Purpose |
| --- | --- | --- |
| 0:00–0:15 | Context Setting | Review agent purpose, Superintelligent insights |
| 0:15–1:15 | Current State Mapping | Map workflow, identify friction, document workarounds |
| 1:15–1:30 | Break | |
| 1:30–2:30 | Interaction Model Design | Define entry, handoff, escalation, exit |
| 2:30–3:15 | Transparency and Trust Design | Confidence, explanations, corrections |
| 3:15–3:45 | Validation Planning | Identify test users, define success criteria |

Design Principles


Outputs and Handoffs

Pillar 4 Deliverables

| Deliverable | Description |
| --- | --- |
| Interaction Specifications | Screen flows, state diagrams, wireframes |
| Trust Specifications | Confidence, explanation, correction designs |
| Integration Requirements | System touchpoints, API needs, offline requirements |
| Adoption Metrics | KPIs with targets and measurement plan |
| Validation Report | User feedback and design refinements |

Handoff to Pillar 5

Pillar 4 outputs flow directly into Pillar 5 (FinOps):

  • Usage projections inform cost forecasting
  • Interaction complexity affects token consumption estimates
  • Training requirements impact operational cost model
  • Adoption targets inform business case validation

Delivery Timeline

2 weeks total:

| Week | Focus | Client Effort |
| --- | --- | --- |
| Week 1 | Design Sessions | 3-4 hrs per agent |
| Week 2 | Validation & Documentation | Half day |

Common Pitfalls


Facilitator Guidance

Mission & Charter

The Pillar 4 Design Sessions exist to create AI experiences that earn adoption. These are not UI polish exercises or technology demonstrations. The mission is to:

  • Design interaction models that fit how people actually work
  • Build transparency and trust into every agent touchpoint
  • Create experiences that are genuinely better than current workflows
  • Validate designs with real users before development begins

What these sessions are NOT:

  • UI mockup reviews divorced from workflow context
  • Technology demonstrations to impress stakeholders
  • Manager-driven design without actual end users in the room
  • Abstract interaction patterns without specific agent grounding

Session Inputs

Participants should have reviewed:

  • Superintelligent survey insights relevant to their agent
  • Pillar 3 permission boundaries (what the agent can and cannot do)
  • Current workflow documentation (if it exists)

Orion enters with prepared artifacts:

  • Superintelligent insights summary for this user population
  • Current state mapping templates
  • Interaction model canvas templates
  • Trust design checklist

Preparation Checklist

  1. Review Superintelligent data — Understand how this user population actually works (rhythms, pain points, workarounds)
  2. Study Pillar 3 outputs — Know the permission boundaries and trust requirements that constrain design
  3. Recruit the right participants — End users, not managers describing what users do; include skeptics
  4. Prepare current state templates — Have workflow mapping tools ready (whiteboard, digital, or paper)
  5. Identify senior practitioners — Veterans who can surface edge cases and institutional knowledge
  6. Review mobile/offline needs — Understand connectivity constraints for field workers

Delivery Tips

Opening:

  • Start with Superintelligent insight: "Your colleagues told us X about how this work actually happens"
  • Frame the goal: "We're designing something you'll actually want to use—not something you're told to use"
  • Acknowledge past failed tools—show you understand their skepticism

Managing the Room:

  • Draw out the quiet users—they often know the real workflows
  • Capture workarounds without judgment—they're design opportunities, not compliance failures
  • When experts express concern, validate and design for amplification: "How do we make this help you do more of what you're good at?"
  • Watch for managers answering for users—redirect to actual practitioners
  • Keep the interaction model grounded in the permission boundaries from Pillar 3

Part Transitions:

  • Current State → Interaction Model: "Now that we see how work flows today, let's design where the agent fits"
  • Interaction Model → Trust Design: "The interaction is clear. Now let's make sure you can trust what the agent tells you"

Closing:

  • Walk through the interaction model end-to-end with a realistic scenario
  • Confirm the "Experience Test" answer: Why would someone use this instead of their current approach?
  • If the answer isn't compelling, the design isn't ready—iterate before leaving
  • Schedule validation session with representative users (different from design participants)

Output Artifacts Checklist

By session end, confirm you have:

  • Current state workflow map with pain points and workarounds identified
  • Interaction model for the agent (entry, handoff, escalation, exit)
  • Transparency specifications (confidence display, explanations, corrections)
  • Integration requirements (where agent connects to existing systems)
  • Mobile/offline requirements (if applicable)
  • Validation session scheduled with representative users
  • Compelling answer to "The Experience Test"

When Variations Are Required

Session adjustments:

  • If users work in disconnected environments: prioritize mobile-first, offline-capable design
  • If senior experts are highly skeptical: dedicate extra time to "amplification not replacement" framing
  • If workflows vary significantly by region/team: may need multiple sessions with different user groups

Validation considerations:

  • Always validate with users different from those who designed (avoid confirmation bias)
  • For high-stakes agents: consider clickable prototypes, not just walk-throughs
  • If validation reveals fundamental issues: pause for redesign rather than force-fitting

Pricing and Positioning

Scope Options

| Scope | Duration | Description |
| --- | --- | --- |
| Experience Workshop | 1-2 weeks | Design sessions for 2-3 agents with basic validation |
| Comprehensive Design | 3-4 weeks | Full design package with extensive validation |
| Design + Prototyping | 4-6 weeks | Interactive prototypes for user testing |

Integration with UX Programs

Pillar 4 designs integrate with existing UX practices:

  • Design system alignment
  • Accessibility requirements
  • Brand guidelines
  • Existing interaction patterns

Required Collateral

Pillar 4 Collateral Status

  • Experience Design Session Guide: TODO
  • Current State Mapping Template: TODO
  • Interaction Model Canvas: TODO
  • Trust Design Checklist: TODO
  • Validation Session Guide: TODO
  • Adoption Metrics Framework: TODO
