Chapter 2

Pillar 1: Agent Value & Adoption

From anxiety to evidence-based prioritization. Defining where intelligence is worth applying.

From Anxiety to Evidence-Based Prioritization

Orion explained that the first phase could not be a traditional interview-heavy discovery exercise. NorthRidge's workforce was too distributed, and broad interviews would be slow, biased, disruptive, and expensive. Instead, Pillar 1 was designed to surface operational truth at scale while protecting client time and Orion's engagement economics.

Step 1: Lightweight Virtual Alignment

The engagement began with a small number of targeted virtual sessions with the executive team and a handful of functional leaders (see Virtual Alignment Session Guide). These sessions were intentionally narrow and time-bound. Their purpose was not to catalogue use cases, but to establish:

  • Strategic priorities and fears
  • Risk tolerance for AI
  • Regulatory constraints
  • Where leadership believed value and friction existed

Orion was explicit that these perspectives were necessary — but insufficient.

Step 2: Superintelligent Survey (Primary Discovery Engine)

See the Pillar 1 Methodology Guide for example survey and analysis documents.

Rather than expanding into dozens of interviews, Orion deployed the Superintelligent survey as the core discovery mechanism. This survey was not generic and not off-the-shelf. It was deliberately crafted and curated based on the inputs gathered during Step 1 virtual alignment.

Using leadership guardrails, stated anxieties, and hypotheses captured in the virtual sessions, Orion configured the survey to test what leaders believed against how work was actually experienced across the organization. This ensured the survey was focused, relevant, and decision-oriented — not academic.

The survey was intentionally designed to be:

  • Lightweight and fast to complete, respecting employee time
  • Plain-language, avoiding AI jargon or leading questions
  • Scenario- and workflow-oriented, grounded in day-to-day reality
  • Asynchronous, allowing participation without meetings or disruption

Orion worked with NorthRidge to manage distribution carefully, ensuring a broad and credible cross-section of responses across field surveyors, office staff, QA reviewers, compliance specialists, and project managers. Participation was framed clearly: this was not an evaluation of performance, but an opportunity to surface friction and protect the business from unmanaged risk.

The survey focused on lived reality rather than hypothetical AI ideas:

  • Where work slowed down
  • Where decisions were repeated mechanically
  • Where senior judgment was overused
  • Where errors caused rework
  • Where informal AI usage was already occurring

Because the survey was short, relevant, and clearly explained, response rates were high and signal quality was strong. Within days, Orion had a data-backed, organization-wide view of NorthRidge's workflows that would have taken months to assemble through interviews — without travel, disruption, or fatigue.

Step 3: Pre-Read & Cognitive Preparation

See the Pillar 1 Methodology Guide for customer communication templates.

Before convening an in-person session, Orion delivered a curated synthesis of the Superintelligent findings to selected participants. This was not raw data or a slide dump, but a narrative summary of key patterns, quantified friction points, and implications for value and risk.

Participants were expected to review this material in advance and provide feedback or corrections. Orion incorporated this input before the on-site session, ensuring no one encountered the insights for the first time in the room.

Step 4: In-Person Decision Workshop

See Decision Workshop Guide

The workshop convened at NorthRidge's Denver headquarters on a Tuesday morning. Michael Santos arrived early to set up the war room—a large conference space with whiteboards covering three walls and the pre-read materials printed and waiting at each seat.

The on-site session was explicitly framed as a decision-making forum, not a discovery workshop. Orion tightly constrained the personas in the room:

Included personas:

  • The CIO, positioned as accountable for safe AI adoption
  • Line-of-business leaders with P&L ownership and budget authority
  • Deeply experienced company veterans who had "seen everything" and understood institutional realities

Explicitly excluded (for now):

  • Data
  • Security
  • Legal
  • Finance

Orion explained that bringing those personas in too early would prematurely shift the discussion from value to constraints.

The Workshop Opens

Michael opened with a reminder of why they were there.

"We're not here to brainstorm AI ideas. The survey already surfaced those—over forty potential opportunities from your own people. We're here to make decisions. By end of day, you'll have selected 2-3 AI agents that NorthRidge will actually build."

Marcus glanced at the thick pre-read binder. "I'll be honest—some of these findings surprised me. I didn't realize how much time our senior QA people were spending on basic validation checks."

Lisa nodded. "The shadow AI numbers were worse than I expected. Seventeen percent of respondents admitted to using ChatGPT for client-facing work. That's not a technology problem—that's a governance gap we created by not giving them anything better."

Surfacing the Real Friction

Michael walked the group through the Superintelligent analysis, projected on the main screen. The data told a story that challenged some assumptions.

"Your field surveyors identified 'field note cleanup' as their number one time sink. But here's what's interesting—it's not just about time. Look at the variance."

He pulled up a chart showing wildly inconsistent time-per-report across regions.

David leaned forward. "That's the Denver office versus the Gulf Coast teams. Different training, different tools, different everything. We've never been able to standardize it."

"Exactly," Michael said. "This isn't a training problem. Your people are solving the same problem dozens of different ways because the work itself is ambiguous. That's exactly where AI can help—not replacing judgment, but normalizing the inputs so judgment can focus on what matters."

Sarah raised her hand. "What's the cost of that inconsistency? In dollars."

"Based on the survey data, we estimate 12-15 hours per week per surveyor spent on normalization tasks. Across your 340 field staff, that's roughly $2.8 million annually in labor applied to work that doesn't require human judgment."

The room went quiet.
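The figure Michael quoted follows the shape of a simple capacity calculation, which is easier to see written out. The sketch below is illustrative only, not Orion's model: the staff count and weekly hours come from the survey findings above, while the working weeks, recoverable share, and loaded hourly cost are assumptions added here for illustration.

```python
# Illustrative sketch of the normalization-cost estimate. Only the staff count
# and the weekly hours come from the survey findings quoted above; the working
# weeks, recoverable share, and loaded hourly cost are hypothetical
# placeholders, so the result will not exactly reproduce the $2.8M figure.

def annual_normalization_cost(
    staff_count: int,
    hours_per_week: float,      # survey-reported time on normalization tasks
    working_weeks: int,         # assumed field weeks per year
    recoverable_share: float,   # assumed fraction needing no human judgment
    loaded_hourly_cost: float,  # assumed fully loaded labor cost per hour
) -> float:
    annual_hours = staff_count * hours_per_week * working_weeks * recoverable_share
    return annual_hours * loaded_hourly_cost

# Placeholder inputs for illustration only.
estimate = annual_normalization_cost(340, 13.5, 46, 0.25, 55.0)
print(f"Estimated annual cost of low-judgment normalization work: ${estimate:,.0f}")
```

In the engagement itself, this modeling is deferred to Pillar 5 (FinOps), where the actual rates and the recoverable share would be validated rather than assumed.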

The Debate: Which Opportunities First?

Michael moved to the prioritization framework—the 5-Lens Scoring system that evaluated each opportunity against Value, Feasibility, Risk, Adoption, and Strategic Fit.
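As a rough illustration of how a 5-Lens ranking can be operationalized, the sketch below scores each opportunity on a 1-5 scale per lens and ranks by weighted total. The equal weights, the scale, and the example scores are assumptions made here for illustration; the chapter does not specify how Orion weights or scores the lenses.

```python
# Minimal sketch of a 5-Lens style ranking. The equal weights, the 1-5 scale,
# and the example scores are hypothetical; risk is scored so higher means safer.

LENSES = ("value", "feasibility", "risk", "adoption", "strategic_fit")

def total_score(scores: dict[str, int], weights: dict[str, float]) -> float:
    return sum(weights[lens] * scores[lens] for lens in LENSES)

weights = {lens: 1.0 for lens in LENSES}  # equal weighting, for illustration

backlog = {
    "Pre-QA validation":             {"value": 5, "feasibility": 5, "risk": 4, "adoption": 3, "strategic_fit": 5},
    "Field note normalization":      {"value": 4, "feasibility": 4, "risk": 4, "adoption": 5, "strategic_fit": 4},
    "Client communication drafting": {"value": 4, "feasibility": 4, "risk": 2, "adoption": 3, "strategic_fit": 4},
}

for name, scores in sorted(backlog.items(), key=lambda kv: total_score(kv[1], weights), reverse=True):
    print(f"{name}: {total_score(scores, weights):.1f}")
```

The dialogue that follows maps onto the lenses: Marcus's hesitation about Pre-QA validation is an adoption-lens concern, and field note normalization ranks well largely because its adoption risk is low.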

Marcus pointed to the top-ranked item. "Pre-QA validation scores highest. But I'm looking at adoption risk—our QA team has been doing this for twenty years. Will they trust an AI to check their work?"

Lisa jumped in. "That's backwards. The AI wouldn't check their work—it would check the surveyor's work before it even reaches QA. It's giving QA back their time, not threatening their expertise."

David wasn't convinced. "My field teams are the ones who'll actually use this. They're already frustrated. If we give them another tool that creates more work, we'll lose them."

"That's exactly why we're being deliberate," Michael said. "Look at the adoption scores. Field note normalization has the highest adoption likelihood because your people are already trying to solve it—with consumer AI tools, with macros, with workarounds. We're not introducing a new behavior. We're sanctioning and improving one that already exists."

Sarah pulled out her calculator. "What's the investment for each of these? I need to understand the cost-benefit before we prioritize."

"That's Pillar 5—FinOps. We'll model the economics in detail. But for now, think of it this way: these three opportunities all address high-frequency, high-volume work. The economics will favor them over lower-volume, high-complexity alternatives."*

The Uncomfortable Conversation

An hour in, the discussion hit a friction point. Marcus had been quiet, staring at one item on the list: "Client communication drafting."

"This one scored well," he said slowly. "But I'm not comfortable with it. We're a professional services firm. Our reputation is built on the expertise in our deliverables. If clients find out AI wrote their reports..."

The room tensed.

Lisa spoke carefully. "They might already know. Seventeen percent shadow AI usage, remember? The question isn't whether AI is touching client work—it's whether we govern it."

Michael stepped in. "This is exactly the kind of decision Pillar 1 is designed to surface. You don't have to pursue every opportunity. The value of this process is that you're making conscious choices—including conscious choices to not do something."

Marcus nodded slowly. "Then let's set that one aside. Not a no forever, but not in the first wave."

David looked relieved. "Good. My teams trust me. I need to be able to look them in the eye and say we're doing this right."

The Final Decisions

By late afternoon, the whiteboard had been reorganized three times. Post-it notes had migrated, been debated, and found their final positions.

Michael summarized what the group had decided:

"You've selected three AI agents for development:"

Marcus stood up and stretched. "I came in here expecting to approve a strategy deck. Instead, we made real decisions with real owners. That's... different."

"That's the point," Michael said. "Strategy decks sit on shelves. Assigned owners with success metrics—those get built."

Sarah gathered her notes. "I still need to understand the economics before we commit budget."

"Pillar 5 will give you that. But notice what you didn't do today—you didn't let cost uncertainty stop you from identifying value. Most organizations do that backwards. They ask 'what can we afford?' before asking 'what's worth doing?' You now know what's worth doing. The economics come next."*

The Prioritized Outcomes

The workshop produced clear, owner-assigned priorities backed by organizational evidence:

  1. Pre-QA validation of survey reports — deterministic checks consuming senior QA time
  2. Field note normalization and interpretation — a major friction point and shadow-AI hotspot
  3. Exception handling for high-risk cases — ensuring expert judgment was applied where it mattered most

These priorities were chosen by NorthRidge, not Orion. The Superintelligent data provided evidence. The workshop provided decisions. The owners provided accountability.