Overview
Pillar 1 answers the question: Where is AI worth applying?
This pillar moves organizations from scattered experimentation to evidence-based prioritization. By the end of Pillar 1, the client has a prioritized portfolio of AI opportunities with clear ownership, documented rationale, and a path forward.
Why Pillar 1 Matters
The Problem We're Solving
Most organizations approaching AI face one of two failure modes:
- Paralysis by Analysis – Endless strategy sessions, no action. Leadership debates AI in abstract terms while competitors move forward.
- Chaos by Experimentation – Shadow AI proliferates. Individual teams adopt tools without governance, creating compliance risk and duplicated effort.
What separates leaders isn't the number of AI tools they've deployed; it's the discipline of enterprise-wide integration. As one recent study put it: "Successful businesses will move from isolated experiments to enterprise transformation, weaving AI into how the business runs."
Pillar 1 breaks both patterns by providing structured discovery that leads to owned decisions within 4 weeks.
The Four Mental Shifts
Research from enterprise AI deployments reveals four critical mental shifts that separate leaders from laggards:
1. From Tools to Systems
The difference in organizational success will have almost nothing to do with whether you choose OpenAI, Microsoft, or Google. Success depends on how good the systems around AI are: the governance, the learning loops, the prioritization frameworks. Pillar 1 builds systems, not just tool recommendations.
This extends to how AI itself is deployed. Apps scale with the number of people using them. Agents scale with compute. That single shift changes everything about what you're building. The design question moves from "How do humans navigate this?" to "How does the system orchestrate this on their behalf?"
2. New Velocity of Change
In 2024, OpenAI alone shipped a new feature approximately every 3 days. The capability set of AI tools vastly outstrips the ability of business users to put them into practice, and that gap is expanding. Organizations need systematic approaches to learning and adaptation, not one-time training.
3. Solutions from Anywhere
AI innovation can come from any team, any level. A marketing analyst who automates reporting might discover use cases that scale across the entire company. There's no seniority prerequisite for figuring out how to use AI better, only time on task and a willingness to experiment.
4. Compounding ROI
Think of AI value as cumulative and linked, not isolated wins. Time savings enable capability building, which enables new revenue streams, which funds deeper AI investment. Pillar 1 initiates this flywheel with evidence-based prioritization.
What Success Looks Like
By the end of Pillar 1, the client organization has:
- 3-5 prioritized AI opportunities with clear business rationale
- Executive alignment on risk tolerance and guardrails
- Named owners for each priority workstream
- Evidence-based foundation (not gut-feel or vendor hype)
- Documented decisions that can withstand board scrutiny
The Four-Step OAIO Methodology
Pillar 1 is not a traditional interview-heavy discovery exercise. Broad interviews are slow, biased (toward whoever gets interviewed), disruptive, and expensive. Instead, Pillar 1 surfaces operational truth at scale while protecting client time and Orion economics.
Step 1: Lightweight Virtual Alignment
See Virtual Alignment Session Guide
The engagement begins with a small number of targeted virtual sessions with the executive team and a handful of functional leaders. These sessions are intentionally narrow and time-bound.
The purpose is NOT to catalogue use cases. The purpose is to establish:
- Strategic priorities and fears
- Risk tolerance for AI
- Regulatory constraints
- Where leadership believes value and friction exist
Session Structure (2 hours):
| Time | Focus | Purpose |
|---|---|---|
| 0–15 min | Framing | Reiterate OAIO philosophy, establish adoption as the north star |
| 15–45 min | Leadership Perspective | Where do leaders believe friction exists? |
| 45–75 min | Guardrails | What cannot be automated? What failures are unacceptable? |
| 75–100 min | Hypotheses | Capture assumptions to be tested (not confirmed) |
| 100–120 min | Superintelligent Setup | Survey scope, populations, communications |
Target Personas:
- CIO (accountable for safe AI adoption)
- CEO or Business Sponsor (optional but recommended)
- Select functional leaders (signal sample, not representative sample)
Explicitly excluded: Data, security, legal, finance, architecture, vendors. Introducing constraints before value is defined reliably stalls progress.
Outputs:
- Executive Guardrails Brief
- Leadership Hypotheses Register
- Superintelligent Deployment Plan
Step 2: Superintelligent Survey (Primary Discovery Engine)
See Example Survey and Survey Summary
Rather than expanding into dozens of interviews, Orion deploys the Superintelligent survey as the core discovery mechanism.
The Methodology:
Using leadership guardrails, stated anxieties, and hypotheses from the virtual sessions, Orion configures the survey to test what leaders believe against how work is actually experienced across the organization. This ensures the survey is focused, relevant, and decision-orientedβnot academic.
Survey Design Principles:
- Lightweight and fast – Respects employee time (10-15 minutes max)
- Plain-language – No AI jargon or leading questions
- Scenario- and workflow-oriented – Grounded in day-to-day reality
- Asynchronous – Participation without meetings or disruption
What the Survey Captures:
The survey focuses on lived reality rather than hypothetical AI ideas:
- Where work slows down
- Where decisions are repeated mechanically
- Where senior judgment is overused
- Where errors cause rework
- Where informal AI usage is already occurring (shadow AI detection)
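To make the hypothesis-testing loop concrete, here is a minimal sketch of how survey items could be tagged against leadership hypotheses so that responses confirm or contradict what leaders believe. The schema, field names, and agreement scale are illustrative assumptions, not the Superintelligent product's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """A leadership belief captured in Step 1, to be tested (not confirmed)."""
    id: str
    statement: str  # e.g. "Senior QA review is our biggest bottleneck"

@dataclass
class SurveyItem:
    """A plain-language, workflow-grounded question."""
    id: str
    prompt: str
    friction_category: str  # e.g. "slowdown", "rework", "shadow_ai"
    tests: list[str] = field(default_factory=list)  # ids of hypotheses it informs

def evidence_for(hyp: Hypothesis, items: list[SurveyItem],
                 agreement: dict[str, float]) -> float:
    """Mean agreement (0.0-1.0) across all items that test this hypothesis.

    `agreement` maps item id -> average respondent agreement with that item.
    """
    scores = [agreement[i.id] for i in items
              if hyp.id in i.tests and i.id in agreement]
    return sum(scores) / len(scores) if scores else float("nan")
```

A score far from what leadership expected flags that hypothesis for the pre-read, so no one meets the finding for the first time in the workshop room.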
Distribution:
Orion works with the client to ensure a broad and credible cross-section of responses across roles, functions, and locations. Participation is framed clearly: this is not an evaluation of performance, but an opportunity to surface friction and protect the business from unmanaged risk.
Why This Works:
Because the survey is short, relevant, and clearly explained, response rates are high and signal quality is strong. Within days, Orion has a data-backed, organization-wide view of the client's workflows that would have taken months to assemble through interviews, without the travel, disruption, or fatigue.
Step 3: Pre-Read & Cognitive Preparation
See Example Pre-Read
Before convening the in-person workshop, Orion delivers a curated synthesis of Superintelligent findings to selected participants.
The Process:
- Orion prepares synthesis document with key findings
- Selected participants receive the pre-read one week before the workshop
- Participants review and provide feedback or corrections
- Orion incorporates input before the on-site session
Why This Matters:
No one encounters the insights for the first time in the room. This prevents:
- Defensive reactions to surprising findings
- Time wasted on "that can't be right" discussions
- Decision paralysis from information overload
Participants arrive primed for decision-making, not discovery.
Step 4: In-Person Decision Workshop
The on-site session is explicitly framed as a decision-making forum, not a discovery workshop. Discovery has already occurred. This is where evidence becomes commitment.
Persona Selection (Strictly Enforced):
Included:
- The CIO – positioned as accountable for safe AI adoption
- Line-of-business leaders – with P&L ownership and budget authority
- Veteran practitioners – who have "seen everything" and understand institutional realities
Explicitly Excluded (For Now):
- Data
- Security
- Legal
- Finance
Identifying Champions:
Additionally, the workshop should surface AI championsβemployees at any level who have already demonstrated initiative with AI tools. These individuals:
- Have more "time on task" with AI than their peers
- Have translated general AI capabilities to specific organizational contexts
- Can accelerate peer adoption through practical guidance
Day 1 Agenda (Full Day):
| Time | Focus |
|---|---|
| 10:00–10:30 | Context setting, reconfirm mission: decisions, not discovery |
| 10:30–11:30 | Structured readout of Superintelligent findings (top 10 opportunities) |
| 11:30–11:50 | Break |
| 11:50–12:45 | Validation: What aligns with lived experience? What needs context? |
| 12:45–1:45 | Lunch (unstructured discussion, Orion observes themes) |
| 1:45–2:45 | Priority evaluation using the 5-Lens Framework |
| 2:45–3:05 | Break |
| 3:05–4:00 | Final selection (2-3 priorities), assign owners |
| 4:00–4:30 | Wrap-up, determine if Day 2 needed |
Optional Day 2 (Half-Day):
- Deep dive on selected workstreams
- Propagation planning across Pillars 2-5
- Final alignment and close
The 5-Lens Prioritization Framework
During the workshop, opportunities are evaluated through five lenses:
| Lens | Question | What We're Assessing |
|---|---|---|
| Value | What's the business impact? | Revenue, cost reduction, risk mitigation |
| Adoption | Will people actually use it? | Change burden, workflow fit, user willingness, depth potential |
| Data | Is the data ready? | Availability, accuracy, accessibility |
| Risk | What could go wrong? | Compliance, security, reputation, errors |
| Feasibility | Can we build it? | Technical complexity, skills, timeline |
Beyond the 5 Lenses: Design for Reuse
When prioritizing opportunities, look for recurring patterns that can support multiple use cases:
- Data assets that serve multiple workstreams
- Orchestration flows that can be templated
- Integration points that unlock future capabilities
- Governance frameworks that scale across initiatives
See 5-Lens Prioritization Framework for detailed scoring guidance.
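Where a workshop wants a numeric tiebreaker alongside discussion, a simple weighted score can rank candidates. The sketch below is illustrative only: it assumes a 1-5 scale per lens with equal default weights, and scores Risk as risk manageability (5 = well-bounded) so that higher is always better. The authoritative rubric lives in the 5-Lens Prioritization Framework, and the candidate scores here are hypothetical.

```python
# Illustrative 5-Lens scoring: 1-5 per lens, equal weights by default.
LENSES = ("value", "adoption", "data", "risk", "feasibility")

def lens_score(opportunity, weights=None):
    """Weighted average across the five lenses (risk scored as manageability)."""
    weights = weights or {lens: 1.0 for lens in LENSES}
    total = sum(weights[lens] for lens in LENSES)
    return sum(opportunity[lens] * weights[lens] for lens in LENSES) / total

# Hypothetical scores for two NorthRidge-style candidates:
candidates = {
    "Pre-QA validation":        {"value": 4, "adoption": 4, "data": 5, "risk": 4, "feasibility": 5},
    "Field note normalization": {"value": 4, "adoption": 5, "data": 3, "risk": 3, "feasibility": 4},
}
for name in sorted(candidates, key=lambda n: -lens_score(candidates[n])):
    print(f"{lens_score(candidates[name]):.2f}  {name}")
```

Treat the number as a conversation starter, not a verdict; the documented rationale and dissent matter more than two decimal places.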
Example Prioritized Outcomes
In the NorthRidge case study, the workshop produced three priorities:
- Pre-QA validation of survey reports – Deterministic checks consuming senior QA time. High-frequency, clear data, bounded risk.
- Field note normalization and interpretation – A major friction point and active shadow-AI hotspot. Surveyors were already using ChatGPT informally.
- Exception handling for high-risk cases – Ensuring expert judgment was applied where it mattered most. Lower frequency but high consequence.
These priorities were chosen by NorthRidge, not Orion. The evidence surfaced them; the client decided their relative importance.
Outputs and Handoffs
Pillar 1 Deliverables
| Deliverable | Description | Example |
|---|---|---|
| Executive Guardrails Brief | Strategic intent, risk tolerance, non-negotiables | View Example |
| Superintelligent Analysis Report | Pattern analysis, opportunity heat map, shadow AI findings | View Example |
| Prioritized Agent Portfolio | 2-3 AI agents with rationale and 5-Lens scores | View Example |
| Decision Records | Documented decisions with rationale and dissent | Decision Workshop Guide |
| Workstream Definitions | Named owners, scope, success criteria per agent | View Example |
| Propagation Plan | How each agent flows through Pillars 2-5 | View Example |
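As a hypothetical illustration of the Decision Records deliverable (the authoritative template is in the Decision Workshop Guide), the fields below show the minimum a record needs to withstand later scrutiny:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """Illustrative structure; the Decision Workshop Guide defines the real template."""
    decision: str                                        # what was decided
    owner: str                                           # named workstream owner
    rationale: str                                       # evidence and 5-Lens reasoning
    dissent: list[str] = field(default_factory=list)     # objections, recorded not erased
    guardrails: list[str] = field(default_factory=list)  # executive non-negotiables that apply
    decided_on: date = field(default_factory=date.today)
```

Capturing dissent alongside rationale is what lets a decision survive board scrutiny: reviewers can see what was weighed, not just what won.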
Handoff to Pillar 2
Pillar 1 outputs flow directly into Pillar 2 (Data Readiness):
- Prioritized use cases become the focus for data assessment
- Technical profiles inform data mapping exercises
- Risk assessments shape data governance requirements
The AI Value Flywheel
Pillar 1 isn't just about finding use cases; it's about initiating a compounding advantage loop that accelerates over time.
```text
                    THE AI VALUE FLYWHEEL

 ┌──────────────┐      ┌──────────────┐      ┌──────────────┐
 │ PILLAR 1     │      │ EARLY WINS   │      │ REINVEST     │
 │ Prioritized  │ ───▶ │ Productivity │ ───▶ │ in AI        │
 │ Use Cases    │      │ Gains        │      │ Capabilities │
 └──────────────┘      └──────────────┘      └──────────────┘
        ▲                                           │
        │                                           ▼
 ┌──────────────┐                            ┌──────────────┐
 │ STRUCTURAL   │ ◀────────────────────────  │ COMPLEX      │
 │ ADVANTAGE    │                            │ USE CASES    │
 └──────────────┘                            └──────────────┘
```
How the Flywheel Works:
- Pillar 1 identifies high-value, adoptable use cases – Not random experiments, but evidence-based priorities with clear ownership.
- Early wins generate productivity gains – Research shows 96% of organizations with structured AI programs see measurable improvements.
- Gains get reinvested into AI capabilities – Leading organizations put 47% of gains back into expanding AI capabilities, 42% into new capabilities, and 39% into R&D.
- Investment enables more complex use cases – Moving from simple "time savings" to decision-making, new capabilities, and revenue generation, where ROI is significantly higher.
- Complex use cases create structural advantages – Data becomes organized, workflows become AI-native, and the organization builds capabilities competitors cannot quickly replicate.
- Human touchpoints become system intelligence – Every correction, review, and intervention teaches the system. Organizations that treat human oversight as instruction, not just validation, see their AI systems require fewer interventions over time. This is where productivity truly compounds: effort shifts from repeatedly doing work to permanently improving how work gets done.
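The arithmetic behind the compounding claim is easy to sanity-check. The sketch below assumes a purely illustrative 10% gain per cycle and uses the 47% reinvestment share cited above; every other number is made up for the example.

```python
# Toy flywheel arithmetic: gains are proportional to current AI capability,
# and a fixed share of each cycle's gains is reinvested into more capability.
capability = 1.0        # starting capability (arbitrary units, illustrative)
gain_rate = 0.10        # illustrative gains per cycle, as a fraction of capability
reinvest_share = 0.47   # reinvestment share cited in the research above

cumulative = 0.0
for cycle in range(1, 9):
    gains = capability * gain_rate
    cumulative += gains
    capability += gains * reinvest_share  # the flywheel step
    print(f"cycle {cycle}: gains={gains:.3f}  cumulative={cumulative:.3f}")

# With no reinvestment, eight cycles yield 0.800 in cumulative gains; with
# reinvestment the same eight cycles yield roughly 0.94, and the gap widens
# every cycle. That is the compounding-not-additive point.
```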
Delivery Timeline
3 weeks total.
Facilitator Guidance
Preparation
- Know the client β Industry, competitors, recent news, organizational structure
- Review all inputs β Virtual alignment outputs, survey results, pre-read feedback
- Prepare artifacts β All materials ready, no last-minute scrambling
- Anticipate objections β What findings might be challenged? How will you respond?
Delivery Tips
Opening:
- Start with a provocative observation, not credentials
- Establish stakes immediately: the cost of inaction
- Frame as "decisions, not discovery"
Managing the Room:
- Read body languageβadjust pace accordingly
- Balance dominant voices with quieter participants
- Park tangents visibly (whiteboard "parking lot")
- Name tensions directly: "I sense some skepticism here..."
Closing:
- Summarize decisions made, owners assigned
- Confirm next steps with dates
- End with energy: "You've accomplished in one day what most organizations take months to achieve."
Pricing and Positioning
Cloud Partner Subsidies
Pillar 1 is typically positioned as a cloud service provider-subsidized engagement:
- AWS – Eligible for AWS Partner funding programs
- Microsoft – Covered under Azure consumption commitments
Key message: "Your investment is minimal, and in many cases, the initial assessment is fully funded through our cloud partnerships."
Pricing Guidance
| Client Size | Duration | Range |
|---|---|---|
| Mid-market (500-2000 employees) | 4 weeks | Subsidized / Low five figures |
| Enterprise (2000-10000 employees) | 6 weeks | Mid five figures |
| Large Enterprise (10000+) | 6-8 weeks | High five figures |
Required Collateral
- Virtual Alignment Facilitator Guide – PLACEHOLDER
- Superintelligent Deployment Playbook – PLACEHOLDER
- Standard Survey Prompt Library – TODO
- Decision Workshop Facilitator Guide – PLACEHOLDER
- 5-Lens Prioritization Framework – COMPLETE
- Pre-Read Template – TODO
- Decision Record Template – TODO
Reference Materials
Facilitator Guides
- Virtual Alignment Session Guide – Full session design and facilitation
- Decision Workshop Guide – Workshop design with detailed agenda
Example Deliverables
- Example Survey – Sample Superintelligent survey
- Survey Summary – Example analysis and prioritization
- Customer Communication – Pre-read and prep templates
Related Content
- NorthRidge Case Study: Pillar 1 – Story-based walkthrough
- The OAIO Pitch – How to earn commitment to Pillar 1
External Resources
Enterprise AI Research (2024-2025):
- McKinsey: The State of AI – 62% still in experimentation/piloting; only 7% fully scaled
- KPMG 2025 CEO Outlook – 67% expect AI ROI in 1-3 years (up from 20% in 2024)
- OpenAI: From Experiments to Deployments – Four-phase scaling framework; the "tools to systems" mental shift
- Wharton AI ROI Study – 74% report positive ROI; 82% weekly AI use in enterprises
- UKG Frontline Worker Survey – 24% lower burnout among AI users (41% vs. 54%)
AI Strategy & Adoption:
- Gartner AI Adoption Framework – Enterprise AI maturity models
- MIT Sloan AI Research – Academic perspective on AI business value
- Matt Wood: AI Gifts for 2026 – PwC CTIO on depth over breadth, agents vs. apps, learning systems
Change Management:
- Prosci ADKAR Model – Change management framework
- Kotter's 8-Step Process – Leading organizational change
Survey & Research:
- Superintelligent – AI-powered organizational surveys (OAIO partner)