What Is the Resolution Architecture Audit?
The Resolution Architecture Audit (RAA) is the Crown Point Advisory Group five-step diagnostic that any SaaS CEO or Chief Product Officer can run against their platform to identify which workflows qualify as viable Atomic Resolutions before outcome-based pricing is attempted. It is Step 3A of Phase 1 in the Three-Phase RaaS Transition Roadmap and the prerequisite for building a credible Atomic Resolution Catalogue.
The Atomic Resolution Catalogue template tells a product team how to document their resolutions once they know what those resolutions are. The Resolution Architecture Audit tells them what those resolutions actually are. These are not the same exercise.
Why the Audit Exists
Most product teams, when first asked to define their Atomic Resolutions, produce descriptions of what the AI does rather than what the AI achieves. “The agent drafts a response” is activity. “The ticket was resolved without human intervention within the defined SLA” is an outcome. The RAA forces the transition from activity description to outcome definition by running each candidate workflow through a five-step test sequence.
The RAA also produces the most commercially valuable finding in Phase 1: the identification of workflows that cannot survive the transition to Resolution as a Service (RaaS) because a general-purpose AI agent can achieve the same outcome without touching the vendor’s platform. Identifying those workflows before 18 months of transition effort has been invested is among the highest-value outputs of the entire Phase 1 process.
Step 1: The Bypass Test
For each major workflow the platform supports, ask honestly: can an AI agent achieve this outcome without touching your product?
Map every workflow into one of three buckets.
Fully bypassable. An agent can achieve the outcome via a direct API or serverless function without the vendor’s platform. These workflows do not qualify as Atomic Resolutions and must be escalated to product strategy review. A product that can be fully bypassed by AI agents has no RaaS future because it has no future in any pricing model. The honest response is to use the RAA findings to inform what the company should build next, not to force-fit outcome pricing onto workflows that do not survive the bypass test.
Partially bypassable. An agent can handle the routine portion of the workflow but requires the vendor’s data, logic, or institutional context for the judgment-sensitive portion. These are conditional resolution candidates. The RaaS pricing applies to the autonomous portion. The human-assisted portion must be excluded from resolution billing and priced separately or as part of a platform fee.
Repository-dependent. The agent cannot achieve the outcome without the vendor’s data or institutional logic. These are the strongest Atomic Resolution candidates. The vendor’s High-Fidelity Repository is the moat. General-purpose agents cannot replicate it without years of domain-specific data access.
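One way to encode the three buckets is as two yes/no questions per workflow. This is an illustrative sketch, not CPAG terminology; the question names and enum values are our own.

```python
from enum import Enum

class BypassStatus(Enum):
    FULLY_BYPASSABLE = "fully_bypassable"          # escalate to product strategy
    PARTIALLY_BYPASSABLE = "partially_bypassable"  # conditional resolution candidate
    REPOSITORY_DEPENDENT = "repository_dependent"  # strongest candidate

def bypass_bucket(full_outcome_without_platform: bool,
                  routine_portion_without_platform: bool) -> BypassStatus:
    """Map the two Bypass Test questions onto the three buckets.

    full_outcome_without_platform: can a general-purpose agent achieve the
        entire outcome via a direct API or serverless function?
    routine_portion_without_platform: can the agent at least handle the
        routine portion without the vendor's data or institutional logic?
    """
    if full_outcome_without_platform:
        return BypassStatus.FULLY_BYPASSABLE
    if routine_portion_without_platform:
        return BypassStatus.PARTIALLY_BYPASSABLE
    return BypassStatus.REPOSITORY_DEPENDENT
```

The ordering matters: full bypassability is checked first, because a workflow an agent can complete end to end is disqualified regardless of what the routine portion looks like.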
Step 2: The Attribution Test
For each workflow that survives the Bypass Test, ask: if the outcome is achieved, can the vendor prove that the AI executed it rather than a human operator?
Attribution is the second of the three Atomic Resolution criteria. A resolution that cannot be traced to the platform’s AI execution cannot be billed at the autonomous resolution rate. The Attribution Test requires the vendor to examine their current platform logging and ask whether the audit trail distinguishes autonomous AI execution from human-initiated or human-completed actions.
Most platforms in Phase 1 fail this test for a meaningful fraction of their workflows. The audit log captures what happened, not who or what caused it. The Phase 2 instrumentation work is built directly on the Attribution Test results: the workflows that failed attribution logging in Phase 1 are the first to be instrumented in Phase 2.
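A minimal sketch of what attribution-aware logging could look like. The field names are our own invention for illustration; the playbook does not prescribe a log schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResolutionEvent:
    """One audit-trail entry. Field names are illustrative, not a CPAG schema."""
    workflow_id: str
    outcome: str                        # e.g. "ticket_resolved"
    executor: str                       # "ai_autonomous", "human_assisted", or "human"
    agent_run_id: Optional[str] = None  # ties the outcome to a specific AI run

def attributable(event: ResolutionEvent) -> bool:
    # Billable at the autonomous resolution rate only when the log both says
    # the AI executed the outcome AND can point at the run that did it.
    return event.executor == "ai_autonomous" and event.agent_run_id is not None
```

The second condition is the one most Phase 1 audit logs miss: they record that a ticket was resolved, but not which agent run (if any) caused it.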
Step 3: The Finite Boundary Test
For each workflow that passes the Attribution Test, ask: does the workflow have a defined endpoint that prevents an open-ended agentic loop?
Finiteness is the third Atomic Resolution criterion. An agent that continues processing without a defined completion state generates unlimited compute cost under a fixed resolution fee. The Finite Boundary Test requires the product team to specify, in plain language, exactly what state constitutes completion for each resolution type and what event triggers the resolution count.
The most common failure mode here is workflows with conditional completion: a resolution is considered complete unless certain conditions are met, at which point additional processing is triggered. These workflows are not finite in the Atomic Resolution sense unless the conditional branch is scoped and priced separately. Unbounded escalation paths within a single resolution type destroy the cost-predictability that makes outcome pricing viable.
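The scoping described above can be sketched as a loop with an explicit completion predicate and a hard pass cap. The function names and the cap value are illustrative assumptions, not CPAG-specified parameters.

```python
def execute_with_finite_boundary(advance, is_complete, state, max_passes=3):
    """Run an agentic workflow until its defined completion state, with a cap.

    `is_complete` encodes the plain-language completion state as a predicate;
    `max_passes` scopes any conditional branch so it cannot loop unboundedly.
    Both the names and the cap of 3 are illustrative.
    """
    for _ in range(max_passes):
        if is_complete(state):
            return ("resolved", state)   # this event triggers the resolution count
        state = advance(state)
    # Completion never reached within the cap: exit the resolution type and
    # escalate, so the conditional branch is scoped and priced separately.
    return ("escalated", state)
```

Without the cap, a conditional branch that keeps triggering additional processing would accrue compute indefinitely under a fixed resolution fee.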
Step 4: The Quality Gate Specification
For each workflow that passes the first three tests, specify the quality gate: the post-completion monitoring window and the reopen trigger that determines whether a completed resolution holds.
The CPAG standard is a 48-hour quality gate: a resolution that is reopened by the end user within 48 hours is a presumptive failure and is not billable at the full resolution rate. The Quality Gate Specification for each resolution type must define the reopen trigger precisely: what user action constitutes a reopen, what system event logs it, and what the credit policy is when a reopen occurs.
This specification is not legal boilerplate. It is the operational definition of what the vendor is promising when they sell an Atomic Resolution. A vendor who cannot specify the quality gate has not finished defining the resolution.
Step 5: The Cost-to-Serve Estimate
For each resolution type that passes all four prior tests, estimate the cost to serve: the compute, orchestration, and model hosting cost per resolution execution.
This estimate does not need to be precise at the RAA stage. It needs to be directionally accurate enough to confirm that the resolution type can be priced above the 1-to-4 Rule threshold before Phase 2 instrumentation begins. A resolution type where the estimated cost-to-serve is $2.00 and the customer’s willingness to pay is $3.00 fails the 1-to-4 Rule and should not be included in the Phase 2 pilot without a pricing model redesign.
The most common error at this step is using platform-average AI cost rather than resolution-specific cost. A support ticket and a complex legal document review have materially different compute signatures. Averaging them produces a number that conceals the margin erosion risk on the complex resolution types.
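Assuming the 1-to-4 Rule requires the customer's willingness to pay to be at least four times the resolution-specific cost-to-serve (an interpretation consistent with the $2.00/$3.00 example above; the rule's exact form is defined elsewhere in the playbook), the check is a one-liner:

```python
def passes_one_to_four_rule(cost_to_serve: float, willingness_to_pay: float) -> bool:
    # Interpretation: the price must cover at least 4x the cost-to-serve.
    return willingness_to_pay >= 4 * cost_to_serve

# The example from the text: $2.00 cost against $3.00 willingness to pay fails.
print(passes_one_to_four_rule(2.00, 3.00))   # False
print(passes_one_to_four_rule(0.50, 3.00))   # True
```

The check must be run per resolution type, not on a platform-average cost: a $0.10 support ticket and a $6.00 document review can average to a number that passes while the review itself fails.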
The RAA Output
The Resolution Architecture Audit produces a classified list of every candidate workflow with its bypass status, attribution status, finite boundary specification, quality gate definition, and cost-to-serve estimate. This classified list is the input to the Atomic Resolution Catalogue.
Workflows that pass all five steps become catalogue entries. Workflows that fail Step 1 are escalated to product strategy. Workflows that pass Step 1 but fail a later step are flagged for Phase 2 instrumentation with the specific gap identified.
The RAA also produces the bypass-adjusted Resolution-to-Seat Ratio, stripping out the workflows that fail the bypass test from the RSR numerator. This adjusted figure is the defensible RSR: the fraction of ARR that can be repriced as resolutions that the vendor’s platform is genuinely required to deliver.
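As a sketch, assuming ARR can be attributed per workflow (the mapping structure is ours, not a playbook schema), the bypass adjustment is a filtered sum over the numerator:

```python
def bypass_adjusted_rsr(workflow_arr: dict, bypass_status: dict, seat_arr: float) -> float:
    """Resolution-eligible ARR over total seat-based ARR, with fully
    bypassable workflows stripped from the numerator. Illustrative only."""
    defensible = sum(arr for wf, arr in workflow_arr.items()
                     if bypass_status.get(wf) != "fully_bypassable")
    return defensible / seat_arr

arr = {"triage": 50_000, "drafting": 30_000, "review": 20_000}
status = {"triage": "repository_dependent",
          "drafting": "fully_bypassable",
          "review": "partially_bypassable"}
print(bypass_adjusted_rsr(arr, status, 100_000))   # 0.7
```

In this example 30% of seat ARR sits on a fully bypassable workflow, so the defensible RSR is 0.7 rather than 1.0.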
The Resolution Architecture Audit is defined in Step 3A of Phase 1 of the Crown Point Advisory Group Vendor Transition Playbook. The Atomic Resolution standard that each workflow must satisfy is covered in What Is Atomic Resolution?, and the broader Phase 1 context in What Is the Three-Phase RaaS Transition Roadmap?