Strategy & Architecture • November 27, 2025

Agentic PRD — Designing Requirements for AI-Driven Engineering

How product teams move from prose-based PRDs to machine-first, executable specifications that enable AI-assisted development, eliminate feature slippage, and make requirements verifiable at the speed of code.

Introduction — Why the PRD Must Evolve

Product Requirements Documents (PRDs) historically existed to communicate product intent between humans: PM → Engineer → QA. That model worked when humans were the only coders and release cadence was slow. Today, the landscape has changed: AI agents and IDE copilots routinely generate large portions of production code, and agentic systems plan, modify, and verify code autonomously. These tools do not interpret fuzzy prose. They require deterministic, structured inputs. An Agentic PRD is the machine-first evolution of the PRD designed for a world where AI is a first-class contributor to engineering work.

What is an Agentic PRD?

An Agentic PRD is a PRD intentionally authored for both machines and humans. It is not merely “better writing”; it is a different artifact — machine-readable, versioned, and executable. Its key attributes map to the five pillars described below.

Why Traditional PRDs Fail in AI-Driven Development

Traditional PRDs fail in five ways, each amplified when AI participates in the build process:

  1. Ambiguity is multiplied: AI models will invent defaults when a requirement is vague.
  2. Context gaps: AI agents lack organizational memory unless the PRD is programmatically available.
  3. IDE invisibility: developers shouldn't need to Alt+Tab to read the single source of truth.
  4. Specification decay: clarifications in chat or meetings rarely update the canonical PRD.
  5. Lack of schema: models need predictable structure — freeform prose fails machine-readability.

The Five Pillars of an Agentic PRD

To be useful for agents while remaining useful for humans, an Agentic PRD must satisfy five pillars:

Pillar 1 — Declarative Acceptance Criteria

Acceptance criteria (ACs) must be atomic, verifiable predicates. Example (JSON representation):

{
  "type": "predicate",
  "expr": "image.size_mb <= 5"
}
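
To make such a predicate executable, a tool can parse and evaluate it against a candidate artifact. A minimal sketch in JavaScript, assuming a simple `path operator value` grammar (the grammar is an illustration, not a ProdMoh-defined format):

```javascript
// Sketch: evaluating a declarative AC predicate against a candidate
// artifact. The "path operator value" grammar is illustrative only.
function evalPredicate(expr, context) {
  const [path, op, raw] = expr.split(/\s+(<=|>=|==|<|>)\s+/); // capture keeps the operator
  const value = path.split('.').reduce((obj, key) => obj[key], context);
  const target = Number(raw);
  switch (op) {
    case '<=': return value <= target;
    case '>=': return value >= target;
    case '<':  return value < target;
    case '>':  return value > target;
    case '==': return value === target;
    default: throw new Error(`unsupported operator: ${op}`);
  }
}

console.log(evalPredicate('image.size_mb <= 5', { image: { size_mb: 4.2 } })); // true
console.log(evalPredicate('image.size_mb <= 5', { image: { size_mb: 7.9 } })); // false
```

Because the predicate is data rather than prose, the same AC can drive a human review checklist, a generated test, and a CI gate without reinterpretation.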

Pillar 2 — Input/Output Examples as Ground Truth

Examples anchor behavior and reduce hallucination. Example:

{
  "input": { "image": "photo.png", "dimensions": "800x800" },
  "output": { "status": "accepted" }
}
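
One way to use such an example is as a golden fixture: run the implementation on the example input and compare against the recorded output. A sketch, where `uploadImage` is a hypothetical implementation under test, not part of any real API:

```javascript
// Sketch: a PRD example used as a golden fixture. uploadImage is a
// hypothetical implementation under test.
const example = {
  input: { image: 'photo.png', dimensions: '800x800' },
  output: { status: 'accepted' },
};

function uploadImage({ image, dimensions }) {
  const [w, h] = dimensions.split('x').map(Number);
  return { status: w <= 800 && h <= 800 ? 'accepted' : 'rejected' };
}

const actual = uploadImage(example.input);
console.log(JSON.stringify(actual) === JSON.stringify(example.output)); // true
```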

Pillar 3 — NFRs as First-Class Citizens

Non-functional requirements must be explicit and machine-validated:

{
  "nfr": "latency",
  "max_ms": 300
}
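
A sketch of how such an NFR might be enforced as an automated pass/fail check; the budget comparison is the point, and the measured operation below is a stand-in for a real endpoint call:

```javascript
// Sketch: enforcing a latency NFR as a pass/fail check. The measured
// operation is a stand-in for a real endpoint call.
const nfr = { nfr: 'latency', max_ms: 300 };

function withinBudget(elapsedMs, budget) {
  return elapsedMs <= budget.max_ms;
}

const start = Date.now();
JSON.parse(JSON.stringify({ items: Array.from({ length: 1000 }, (_, i) => i) })); // stand-in work
const elapsedMs = Date.now() - start;

console.log(withinBudget(elapsedMs, nfr));
console.log(withinBudget(450, nfr)); // false: over the 300 ms budget
```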

Pillar 4 — Semantic Metadata & Glossary

Provide domain definitions to remove ambiguous terms:

{
  "glossary": {
    "paid_badge": "Displayed for items with price > 0"
  }
}

Pillar 5 — Versioned Immutable Snapshots

Use immutable PRD versions and include the version in PR metadata:

refs prd:prod-2025-payment-v1#1.3
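
A CI job can parse that reference out of PR metadata before allowing a merge. A sketch, assuming the `refs prd:<id>#<version>` line format shown above; the CI wiring around it is an assumption:

```javascript
// Sketch: a CI-side check that a PR pins a PRD snapshot, following the
// "refs prd:<id>#<version>" format above.
const PRD_REF = /refs\s+prd:([a-z0-9-]+)#(\d+\.\d+)/i;

function parsePrdRef(prDescription) {
  const m = prDescription.match(PRD_REF);
  return m ? { prdId: m[1], version: m[2] } : null;
}

const ref = parsePrdRef('Fix badge rendering\n\nrefs prd:prod-2025-payment-v1#1.3');
console.log(ref); // { prdId: 'prod-2025-payment-v1', version: '1.3' }
```

A PR whose description yields `null` here would fail the gate, forcing every change to declare which PRD snapshot it implements.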

Agentic PRD Workflow — Humans + Machines

A practical workflow looks like this:

  1. PM authors structured PRD in ProdMoh and publishes a canonical version.
  2. ProdMoh exposes PRD fragments through MCP and issues a scoped token.
  3. Developer configures MCP token in the IDE plugin (Cursor / VS Code / JetBrains).
  4. IDE agent calls get_story_context(story_id) to fetch acceptance criteria, examples, and NFRs.
  5. Agent generates code + test scaffolds; developer reviews, runs, and commits tests + code.
  6. CI validates PRD version, runs tests, and enforces policy gates before merge.
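
Step 4 can be sketched as follows; the transport is injected so the sketch runs without a live ProdMoh server, and the request and response shapes are assumptions (only the `get_story_context` name comes from the workflow above):

```javascript
// Sketch of step 4: fetching story context over MCP. The transport is
// injected so this runs without a live server; shapes are assumptions.
async function getStoryContext(storyId, token, transport) {
  const res = await transport('get_story_context', { story_id: storyId, token });
  return {
    acceptance: res.acceptance ?? [],
    examples: res.examples ?? [],
    nfrs: res.nfrs ?? [],
  };
}

// Stub transport standing in for the real MCP connection.
const stubTransport = async (_method, _params) => ({
  acceptance: [{ type: 'predicate', expr: "response.json.items[0].badges includes 'paid'" }],
  examples: [{ query: 'red shoes', product: { id: 'p-123', price: 199 } }],
});

getStoryContext('S-100', 'scoped-token', stubTransport)
  .then((ctx) => console.log(ctx.acceptance.length)); // prints 1
```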

Concrete example — paid badge

Example PRD fragment for a paid-badge story:

{
  "id": "S-100",
  "title": "Display paid badge on priced items",
  "acceptance": [
    { "type": "predicate", "expr": "response.json.items[0].badges includes 'paid'" }
  ],
  "examples": [
    { "query": "red shoes", "product": { "id": "p-123", "price": 199 } }
  ],
  "meta": { "version": "2025.11.1", "author": "pm@company.com" }
}

From this, an IDE agent can generate a Jest test, a mock response, and inline implementation suggestions.
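
For instance, a generated mock response for this story might look like the following, with the acceptance predicate checked directly against it (the response shape is an assumption):

```javascript
// A mock response an agent might generate for S-100, plus a direct check
// of the acceptance predicate against it. The response shape is assumed.
const mockResponse = {
  json: { items: [{ id: 'p-123', price: 199, badges: ['paid'] }] },
};

// "response.json.items[0].badges includes 'paid'":
console.log(mockResponse.json.items[0].badges.includes('paid')); // true
```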

Mechanics: Turning ACs into Tests

Test generation is core to the Agentic PRD. Steps:

  1. Canonicalize — convert prose ACs to predicate forms.
  2. Map — a mapping layer translates predicates to test templates per language.
  3. Score — confidence scoring based on example completeness and parse certainty.
  4. Propose — IDE proposes tests as drafts; developers review and commit.
  5. Gate — CI enforces presence and passing of PRD-derived tests for changed stories.


An example entry from the Map step pairs a canonical predicate with a per-language test template:

{
  "predicate": "response.json.items[0].badges contains 'paid'",
  "template": "test('<title>', async () => { const res = await <call>; expect(res.json.items[0].badges).toContain('paid'); });"
}
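
Instantiating that template is plain string substitution over the `<title>` and `<call>` slots; the concrete call below is a hypothetical, not defined by the PRD:

```javascript
// Sketch: filling the mapping template's <title> and <call> slots.
// searchItems is a hypothetical call, not defined by the PRD.
const mapping = {
  predicate: "response.json.items[0].badges contains 'paid'",
  template:
    "test('<title>', async () => { const res = await <call>; expect(res.json.items[0].badges).toContain('paid'); });",
};

function instantiate(template, slots) {
  return Object.entries(slots).reduce(
    (out, [name, value]) => out.split(`<${name}>`).join(value),
    template
  );
}

const testSource = instantiate(mapping.template, {
  title: 'Display paid badge on priced items',
  call: "searchItems('red shoes')",
});
console.log(testSource);
```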

Confidence Scoring & Human-in-the-loop

Not every generated artifact should be auto-approved. Use a confidence score computed from example completeness and parse certainty, the same signals the Score step already measures.

Low confidence → PM signoff required before tests are considered authoritative by CI.
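
A sketch of such a score; the weights and the 0.7 signoff threshold are assumptions, not ProdMoh policy:

```javascript
// Sketch: confidence from example completeness and parse certainty.
// The weights and 0.7 threshold are assumptions, not ProdMoh policy.
function confidence(story, parseCertainty) {
  const exampleCompleteness = Math.min(story.examples.length / 2, 1); // treat 2+ examples as complete
  return 0.5 * exampleCompleteness + 0.5 * parseCertainty;
}

function needsPmSignoff(story, parseCertainty, threshold = 0.7) {
  return confidence(story, parseCertainty) < threshold;
}

console.log(needsPmSignoff({ examples: [] }, 0.9));       // true: no examples, signoff required
console.log(needsPmSignoff({ examples: [{}, {}] }, 0.9)); // false
```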

CI Gating & Policy

Enforce these checks in CI: the PR references a valid PRD version in its metadata, PRD-derived tests exist and pass for every changed story, and policy gates (including NFR suites) succeed before merge.

Security, Governance & Token Design

MCP tokens are sensitive credentials. Recommendations: scope each token to the stories and PRD fragments a team actually needs, require elevated scopes for PII and financial flows, store tokens as secrets outside the repository, and rotate or revoke them on a regular schedule.

Store PRD version and story IDs in PR metadata, ProdMoh audit logs, and CI build metadata for full traceability.

Organizational adoption — rollout plan

A pragmatic phased approach reduces friction:

Phase 0 — Foundations: adopt the canonical PRD schema and train PMs on predicate-based ACs and example-driven design.

Phase 1 — Pilot: provision scoped MCP tokens, configure the IDE client for one team, and add PRD version checks to CI.

Phase 2 — Scale: roll out to additional teams, enforce CI policy gates by default, and track cycle time, clarification threads, and escaped defects.

Operational edge cases

Underspecified stories

ProdMoh should mark stories with requires-clarification when acceptance criteria lack predicates or examples. The IDE surfaces this flag and blocks test generation until clarified.
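
The guard an IDE plugin might apply before generating tests can be sketched like this; the story shape is an assumption, only the `requires-clarification` flag comes from the text:

```javascript
// Sketch: the guard an IDE plugin might apply before generating tests.
// The requires-clarification flag comes from the text; the story shape
// is an assumption.
function canGenerateTests(story) {
  if (story.flags?.includes('requires-clarification')) return false;
  const hasPredicates = (story.acceptance ?? []).some((a) => a.type === 'predicate');
  return hasPredicates && (story.examples ?? []).length > 0;
}

console.log(canGenerateTests({ flags: ['requires-clarification'] })); // false
console.log(canGenerateTests({
  flags: [],
  acceptance: [{ type: 'predicate', expr: 'price > 0' }],
  examples: [{ product: { price: 199 } }],
})); // true
```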

Non-functional requirements

NFRs convert into CI smoke tests (latency budgets, throughput thresholds) and are enforced as policy suites rather than unit tests.

Security-sensitive flows

Features touching PII or financial flows require an elevated token scope and automatic security gating in CI; they should also route to a security review queue before generation is permitted.

Business impact & metrics

Pilots using ProdMoh + MCP typically show measurable gains: shorter cycle times, fewer clarification threads, and fewer escaped defects.

These improvements come primarily from better specification and earlier verification rather than “smarter AI” alone.

Agentic PRD vs Traditional PRD — quick comparison

Dimension           | Traditional PRD  | Agentic PRD
Audience            | Humans           | Humans + AI agents
Format              | Prose            | Schema-driven JSON / structured fields
Acceptance Criteria | Bulleted prose   | Machine-parsable predicates
Discoverability     | External to IDE  | Pullable into IDE via MCP
Testability         | Manual           | Auto-generated
Traceability        | Weak             | Strong (versioned)

How to start — practical checklist

  1. Adopt ProdMoh canonical PRD schema.
  2. Train PMs on predicate-based ACs and example-driven design.
  3. Provision MCP tokens and configure an IDE client for one pilot team.
  4. Add PRD version checks to CI and require PR metadata that references PRD versions.
  5. Measure cycle time, clarification threads, and escaped defects — report outcomes in 30–60 days.

Conclusion — Agentic PRDs are the future of product intent

As AI becomes an active contributor to engineering, requirements must become machine-first. Agentic PRDs represent a practical, production-ready approach to transforming product intent into executable, verifiable, versioned artifacts. Platforms like ProdMoh, exposing those artifacts through MCP, make this transition feasible today: they reduce ambiguity, shift verification left, and ensure that the code being written aligns to product intent.