
Data Readiness Session Guide

Facilitator guide for Pillar 2 agent-specific data readiness working sessions.

Agent-Specific Data Readiness Working Sessions – Facilitator Guide

Mission & Charter

These sessions exist to translate selected agent ideas into concrete, testable data assumptions. They are not architecture design meetings and not implementation planning forums. The purpose is to make implicit data dependencies explicit and to surface risk early, before build decisions are made.

Each session focuses on one agent use case only. Critically, the business owner responsible for one agent use case is not in the room for another. This prevents scope bleed, political compromise, and cross-contamination of requirements.

Depending on the number of prioritized agents, these sessions are conducted over sequential days.


Required Personas (Per Agent Session)

1. Agent Business Owner (LOB Leader)

Accountable for the problem being solved and the business outcome. Owns prioritization, tradeoffs, and adoption success.

2. Veteran Practitioners (2–3)

Deep domain experts who understand how data is actually created, corrected, and interpreted in practice. These participants surface informal workflows, edge cases, and historical context.

3. System Owners / IT Application Owners

Individuals responsible for the systems that house the relevant data (e.g., survey systems, document repositories, GIS platforms). Their role is to explain:

  • How data is stored today
  • How it is accessed in practice (not just on paper)
  • Operational constraints and failure modes

4. Data Stewards / Data Custodians (e.g., DBA, BI, Analytics Leads)

Participants who understand data structure, lineage, quality, and lifecycle. Their role is not to redesign data architecture, but to validate feasibility and clarify how existing data can be safely used.

5. Orion AI Outcomes Data & Agent Experts

Facilitate the session, challenge assumptions, and translate domain and system knowledge into agent-relevant data constructs.

Explicitly excluded: other agent owners, security, legal, finance, enterprise architecture, and platform modernization teams.

Important framing: These sessions are not a precursor to a data consolidation or platform transformation effort. The intent is to assess whether the prioritized agents can operate effectively within how data is housed, accessed, and controlled today, with minimal disruption.


Core Working Method: Data Sketching

The heart of each session is a literal sketching exercise. Orion facilitates whiteboard or digital sketching to map:

  • Upstream and downstream data sources
  • Clear separation of internal vs. external data sources
    • Internal (systems of record, operational databases, document stores)
    • External – public (regulations, standards, public records)
    • External – private (partners, clients, licensed third-party data)
  • Human touchpoints where data is created or corrected
  • Decision points that rely on data interpretation

This visual approach forces clarity and quickly exposes hidden complexity, hidden dependencies on external data, and implicit assumptions about data availability.
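When the sketch is captured digitally rather than on a physical whiteboard, a lightweight structure can preserve the same categories for later reference. The sketch below is a minimal illustration only; the type and field names (`DataMapNode`, `humanTouchpoints`, and so on) are hypothetical and not part of the session method itself.

```ts
// Hypothetical capture format for a whiteboard data sketch.
// Field names are illustrative; the session only requires the visual map.

type SourceCategory =
  | "internal"           // systems of record, operational databases, document stores
  | "external-public"    // regulations, standards, public records
  | "external-private";  // partners, clients, licensed third-party data

interface DataMapNode {
  name: string;                // e.g. a survey system or document repository
  category: SourceCategory;
  direction: "upstream" | "downstream";
  humanTouchpoints: string[];  // where people create or correct this data
  decisionPoints: string[];    // decisions that rely on interpreting this data
}

// Example entry from a sketching session (illustrative values only):
const exampleNode: DataMapNode = {
  name: "Field survey system",
  category: "internal",
  direction: "upstream",
  humanTouchpoints: ["Surveyor corrects coordinates before sign-off"],
  decisionPoints: ["Permit boundary determination"],
};
```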


Key Questions Answered (Per Data Source)

For each identified data source, the group explicitly tags:

  • Data location (system of record, file store, tool)
  • Access method (API, batch, manual export, read-only view)
  • Data sensitivity (public, internal, confidential, regulated)
  • Existing controls (permissions, approvals, audit logs)
  • Agent interaction model:
    • Does the agent only read this data?
    • Or does the agent propose changes?
    • If changes are proposed, who approves them?

These questions are non-negotiable. If they cannot be answered, the agent scope is adjusted.
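One way to record these tags consistently across sessions is a simple per-source record. The following is a sketch under assumed names (`DataSourceTag`, `AccessMethod`, `AgentInteraction`); the guide does not prescribe a format, only that every question be answered before the session closes.

```ts
// Hypothetical per-source tagging record; names are illustrative.

type AccessMethod = "api" | "batch" | "manual-export" | "read-only-view";
type Sensitivity = "public" | "internal" | "confidential" | "regulated";

interface AgentInteraction {
  mode: "read-only" | "propose-change"; // the agent never writes directly
  approver?: string;                    // named role required when changes are proposed
}

interface DataSourceTag {
  location: string;            // system of record, file store, or tool
  access: AccessMethod;
  sensitivity: Sensitivity;
  controls: string[];          // permissions, approvals, audit logs
  interaction: AgentInteraction;
}

// An unanswerable field is a signal to adjust agent scope, not to guess.
const exampleTag: DataSourceTag = {
  location: "Document repository (system of record)",
  access: "read-only-view",
  sensitivity: "confidential",
  controls: ["Role-based permissions", "Access audit log"],
  interaction: { mode: "propose-change", approver: "LOB reviewing manager" },
};
```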


Example Agenda (Per Agent | 4–5 Hours)

0:00–0:30 | Agent Context & Success Definition

  • Reconfirm the problem statement
  • Define what successful adoption looks like

0:30–2:00 | Data Source Identification & Sketching

  • Map all relevant data sources
  • Identify human touchpoints and corrections

2:00–2:15 | Break

2:15–3:30 | Data Tagging & Risk Surfacing

  • Apply location, access, sensitivity, and control tags
  • Explicitly discuss agent read vs write behavior

3:30–4:30 | Feasibility & Scope Adjustment

  • Identify unacceptable risk areas
  • Adjust agent scope accordingly

4:30–5:00 | Session Wrap-Up

  • Summarize assumptions and open questions
  • Confirm what must be validated next

Outputs & Collateral Generated (Per Agent)

Each session produces durable artifacts:

  • Agent Data Map (visual)
  • Tagged Data Inventory (location, access, sensitivity, controls)
  • Named Data Access Owners Register — For each data source, explicitly identifies the human owner responsible for granting, revoking, and auditing access
  • Agent Access Model (read vs propose-change), including:
    • Which actions are automated
    • Which actions require human approval
    • Which named role provides that approval
  • Open Risk & Validation Log

These artifacts become the entry criteria for Pillar 3 (Trust & Guardrails) and directly inform experience design and FinOps modeling.
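To make the Agent Access Model and the Named Data Access Owners Register listed above concrete, a minimal illustration follows. The shapes and names (`AgentAccessModel`, `DataAccessOwner`, the example agent and roles) are assumptions for the sketch, not a required schema.

```ts
// Hypothetical shapes for two of the session artifacts; names are illustrative.

interface DataAccessOwner {
  dataSource: string;
  owner: string;               // human responsible for granting, revoking, auditing access
}

interface AgentAction {
  description: string;
  automated: boolean;          // true if the agent performs it without review
  approverRole?: string;       // named role when human approval is required
}

interface AgentAccessModel {
  agent: string;
  actions: AgentAction[];
}

// Example: one automated read, one proposed change that requires approval.
const accessModel: AgentAccessModel = {
  agent: "Permit intake assistant",
  actions: [
    { description: "Read survey records", automated: true },
    {
      description: "Propose correction to parcel record",
      automated: false,
      approverRole: "Records supervisor",
    },
  ],
};

const ownersRegister: DataAccessOwner[] = [
  { dataSource: "Field survey system", owner: "GIS application owner" },
];
```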


Why This Step Matters

Most AI initiatives fail not because models are weak, but because data assumptions are wrong. These sessions ensure that by the time Orion moves forward:

  • Data scope is intentional
  • Risk is surfaced early
  • Agents are constrained by reality, not aspiration

This is how "slow is smooth, smooth is fast" is operationalized in data readiness.
