Before scaling OAIO across the sales organization, validate the methodology with real customer engagements. This framework defines what we're testing, proving, and learning through 2-3 pilot engagements.
🎯 Pilot Philosophy: We're not just delivering engagements—we're building the evidence base that makes OAIO credible, repeatable, and scalable. Every pilot is both a customer win and a methodology proof point.
| Hypothesis | Test Method | Success Criteria |
| --- | --- | --- |
| Superintelligent delivers actionable insights | Deploy survey to 50+ employees | Synthesis surfaces 8-12 opportunities with clear signal strength |
| Virtual alignment sessions work remotely | Conduct 2-hour sessions via video | Executives engage, hypotheses documented, no "we need to meet in person" |
| Decision workshop drives commitment | Facilitate 1-day onsite | 2-3 agents selected with named owners and budget discussion |
| 9-week timeline is realistic | Track actual vs. planned | Complete Pillars 1-5 within 20% of estimated timeline |
| Deliverables are useful | Customer feedback on artifacts | Customers reference deliverables in subsequent planning |
| Claim | Evidence Needed |
| --- | --- |
| "Evidence-based prioritization beats committee debates" | Customer quotes comparing the OAIO process to prior AI discussions |
| "Superintelligent surfaces what leadership wouldn't ask" | 2-3 specific examples of insights that surprised leadership |
| "Named human accountability prevents governance paralysis" | Governance sessions complete in 2 weeks, not months |
| "Experience design earns adoption" | User validation sessions show positive reception |
| "CFO-legible economics enables investment" | Finance approves budget for the implementation phase |
| Experiment | Variations to Try | Learning Goal |
| --- | --- | --- |
| Survey deployment timing | Before vs. during alignment sessions | Does survey-first improve hypothesis quality? |
| Virtual vs. hybrid workshops | Full virtual, hybrid, full onsite | What drives the best engagement and decisions? |
| Pillar sequencing | Standard (1-5) vs. compressed (1, 3, 5, then 2, 4) | Can governance and economics run in parallel? |
| Deliverable formats | PDF vs. interactive vs. system-integrated | Which format actually gets used? |
| Podcast consumption | Provide before sessions vs. as follow-up | Does audio prep improve session quality? |
| Question | How We'll Learn |
| --- | --- |
| What's the real client effort? | Track hours per pillar, compare to estimates |
| Where do sessions run long? | Note agenda vs. actual timing, identify friction points |
| What questions come up repeatedly? | Catalog FAQs to improve collateral and training |
| What objections emerge? | Document and develop responses for seller enablement |
| What do facilitators need more of? | Post-engagement facilitator debriefs |
| How do customers describe OAIO to others? | Capture the language they use (for our messaging) |
| Criterion | Why It Matters |
| --- | --- |
| Existing relationship | Trust enables honest feedback; reduces sales cycle risk |
| AI interest but not commitment | Demonstrates the value of an evidence-based approach |
| Accessible leadership | CEO/CIO participation is critical for Pillar 1 |
| 100-500 employees | Large enough for a meaningful survey, small enough for agility |
| Pragmatic culture | Will give direct feedback, not polite silence |
| Potential case study permission | Willing to be referenced (with approval) |
Disqualifiers:

Active RFP or competitive evaluation (too transactional)
Recent failed AI initiative (baggage complicates signal)
Leadership not aligned on participating (creates friction)
Unrealistic timeline expectations (sets up failure)
Unwilling to deploy Superintelligent (blocks core methodology)
Scope A: Pillar 1 Only

Timeline: 3-4 weeks
Client Effort: ~20 hours total
Orion Effort: ~60 hours
| Activity | Week | Outputs |
| --- | --- | --- |
| Superintelligent survey deployment | 1-2 | Survey responses from 50+ employees |
| Virtual alignment sessions (2x 2 hrs) | 2 | Executive hypotheses, guardrails |
| Survey synthesis and pre-read | 2-3 | Prioritized opportunity analysis |
| In-person decision workshop | 3-4 | 2-3 selected agents with owners |
What this proves: Core methodology works—survey delivers signal, workshops drive decisions.
Scope B: Full Engagement (P1-5)

Timeline: 9-10 weeks
Client Effort: ~60 hours total
Orion Effort: ~200 hours
| Pillar | Timeline | Key Outputs |
| --- | --- | --- |
| P1: Value & Adoption | Weeks 1-3 | Prioritized agents, named owners |
| P2: Data Readiness | Weeks 3-4 | Data maps, readiness scores |
| P3: AI Protection | Weeks 5-6 | Permission specs, governance framework |
| P4: Experience Design | Weeks 6-7 | Interaction models, adoption metrics |
| P5: FinOps | Weeks 8-9 | Economic models, CFO sign-off |
What this proves: End-to-end methodology produces implementation-ready blueprint.
| Scope | List Price | Pilot Price | Rationale |
| --- | --- | --- | --- |
| Pillar 1 Only | $75,000 | $35,000-45,000 | Validate core methodology |
| Full Engagement (P1-5) | $150,000 | $85,000-100,000 | Prove complete value chain |
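For negotiation context, the implied discounts can be read straight off the table; a quick arithmetic check (the `discount_pct` helper below is hypothetical, not existing tooling):

```python
def discount_pct(list_price: float, pilot_price: float) -> float:
    """Effective discount as a percentage of list price."""
    return round((1 - pilot_price / list_price) * 100, 1)

# Pillar 1 only: $35k-45k against the $75k list price
print(discount_pct(75_000, 45_000), discount_pct(75_000, 35_000))     # 40.0 53.3
# Full engagement: $85k-100k against the $150k list price
print(discount_pct(150_000, 100_000), discount_pct(150_000, 85_000))  # 33.3 43.3
```

In other words, pilot pricing concedes roughly a one-third to one-half discount, which the feedback and case-study commitments below are meant to offset.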
Pilot discount justification:
Customer agrees to provide detailed feedback
Customer permits anonymized case study usage
Customer participates in methodology refinement
Customer provides testimonial quote (if satisfied)
For AWS or Microsoft partner-funded pilots (a worked split is sketched after this list):
Target 50-70% partner funding
Customer pays remaining 30-50%
Position as "co-investment in AI strategy"
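To make the co-investment split concrete, a minimal sketch at the $100,000 full-engagement pilot price (the `cofunding_split` helper is hypothetical, not partner tooling):

```python
def cofunding_split(pilot_price: float, partner_pct: float) -> dict:
    """Split a pilot price between cloud-partner funding and customer co-investment."""
    partner_funds = pilot_price * partner_pct
    return {"partner_funds": partner_funds, "customer_pays": pilot_price - partner_funds}

# 50-70% partner funding leaves the customer paying $30,000-50,000:
print(cofunding_split(100_000, 0.50))  # {'partner_funds': 50000.0, 'customer_pays': 50000.0}
print(cofunding_split(100_000, 0.70))  # {'partner_funds': 70000.0, 'customer_pays': 30000.0}
```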
| Metric | Collection Method | Target |
| --- | --- | --- |
| Survey response rate | Superintelligent dashboard | 60%+ |
| Survey completion time | Superintelligent analytics | Under 15 min average |
| Session NPS | Post-session survey | 40+ |
| Timeline adherence | Project tracking | Within 20% of estimate |
| Deliverable usage | Follow-up interview | Referenced in planning |
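To keep data collection identical across pilots, these targets could travel as a small shared config that facilitators fill in per engagement; a minimal sketch (the `MetricTarget` structure is illustrative, not an established schema):

```python
from dataclasses import dataclass

@dataclass
class MetricTarget:
    name: str
    collection_method: str
    target: str  # kept as text; some targets are qualitative

PILOT_METRICS = [
    MetricTarget("Survey response rate", "Superintelligent dashboard", "60%+"),
    MetricTarget("Survey completion time", "Superintelligent analytics", "under 15 min average"),
    MetricTarget("Session NPS", "post-session survey", "40+"),
    MetricTarget("Timeline adherence", "project tracking", "within 20% of estimate"),
    MetricTarget("Deliverable usage", "follow-up interview", "referenced in planning"),
]

# Print a debrief checklist for facilitators to annotate with observed values:
for m in PILOT_METRICS:
    print(f"{m.name} ({m.collection_method}): target {m.target}")
```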
| Touchpoint | Questions to Ask |
| --- | --- |
| Post-survey | "What surprised you in the results?" |
| Post-workshop | "How did this compare to other strategic planning?" |
| Post-engagement | "What would you tell a peer considering this?" |
| 30-day follow-up | "How are you using the deliverables?" |
After each pilot, facilitators document:
What worked better than expected?
What was harder than expected?
What questions kept coming up?
What would you change for next time?
What collateral/training would have helped?
Throughout the engagement, collect raw material for the case study:

Photograph key moments (with permission): whiteboard sessions, workshop discussions
Record memorable quotes as they happen
Note specific numbers: survey responses, opportunities identified, time saved
Capture the "before state": How were they approaching AI before OAIO?
Post-Pilot Case Study Structure
The Challenge — Why they engaged (in their words)
The Approach — OAIO methodology summary
Key Findings — What Superintelligent revealed
The Outcome — Agents selected, governance established, economics modeled
Customer Quote — Their assessment of the value
What's Next — Implementation plans
Permission and review:

Obtain written case study permission before engagement
Share draft for customer review before publication
Offer anonymization option if preferred
Provide final case study to customer for their use
Pre-Pilot

| Task | Owner | Due (relative to kick-off week W) |
| --- | --- | --- |
| Customer agreement signed | AE | W-2 |
| Kick-off call scheduled | Facilitator | W-2 |
| Superintelligent instance configured | Ops | W-1 |
| Survey distribution list obtained | Customer | W-1 |
| Case study permission confirmed | AE | W-1 |
During Pilot

| Week | Key Activities | Data Collection |
| --- | --- | --- |
| 1 | Survey launch, alignment sessions | Response rate, session feedback |
| 2 | Survey closes, synthesis begins | Completion rate, time-to-complete |
| 3 | Decision workshop | Workshop NPS, decisions made |
| 4+ | Pillars 2-5 (if full pilot) | Session timing, facilitator notes |
Post-Pilot (2 weeks after)
| Task | Owner | Due |
| --- | --- | --- |
| Customer feedback interview | Facilitator | W+1 |
| Facilitator debrief completed | Facilitator | W+1 |
| Case study draft created | Marketing | W+2 |
| Methodology refinements documented | Ops | W+2 |
| 30-day follow-up scheduled | AE | W+4 |
A pilot is successful if:
Customer selects 2-3 agents with named owners
Customer provides NPS 30+ for overall engagement
Customer agrees to provide testimonial quote
Timeline completed within 25% of estimate (see the tolerance check after this list)
Facilitator rates engagement quality 7/10 or higher
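The timeline criterion is the only one that needs arithmetic; a minimal sketch of the 25% tolerance check (the function below is a hypothetical helper, not existing tooling):

```python
def timeline_within_tolerance(planned_weeks: float, actual_weeks: float,
                              tolerance: float = 0.25) -> bool:
    """True if the actual duration stayed within the stated tolerance of the plan."""
    return actual_weeks <= planned_weeks * (1 + tolerance)

# A 9-week full engagement can stretch to 11.25 weeks and still meet the bar:
print(timeline_within_tolerance(9, 11))  # True
print(timeline_within_tolerance(9, 12))  # False
```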
The pilot program is successful if:
2+ pilots complete with case study permission
Methodology refinements documented and incorporated
Seller enablement materials updated with real objections/responses
Facilitator training updated with lessons learned
Pricing validated (customers perceive fair value)
Cloud partner funding model tested
| Risk | Impact | Mitigation |
| --- | --- | --- |
| Customer disengages mid-pilot | Incomplete data, no case study | Strong exec sponsorship, weekly check-ins |
| Survey response rate low | Weak signal, less credible synthesis | CEO communication, response incentives |
| Methodology takes longer than planned | Cost overrun, customer frustration | Buffer time built in, transparent tracking |
| Customer unhappy with results | No case study, negative reference | Early feedback loops, adjust in real time |
| Facilitator unavailable | Delays, inconsistent experience | Cross-train backup facilitators |
💡 Immediate Actions:
Identify 3-5 potential pilot customers from existing relationships
Qualify against selection criteria
Socialize pilot economics with leadership
Prepare pilot-specific SOW template
Brief facilitators on data collection requirements