Introduction — Why the PRD Must Evolve
Product Requirements Documents (PRDs) historically existed to communicate product intent between humans: PM → Engineer → QA. That model worked when humans were the only coders and release cadence was slow. Today, the landscape has changed: AI agents and IDE copilots routinely generate large portions of production code, and agentic systems plan, modify, and verify code autonomously. These tools do not interpret fuzzy prose. They require deterministic, structured inputs. An Agentic PRD is the machine-first evolution of the PRD designed for a world where AI is a first-class contributor to engineering work.
What is an Agentic PRD?
An Agentic PRD is a PRD intentionally authored for both machines and humans. It is not merely “better writing”; it is a different artifact — machine-readable, versioned, and executable. Key attributes:
- Structured acceptance criteria: predicates, invariants, and example-driven assertions.
- Versioned artifacts: semver/timestamps for deterministic generation.
- Explicit decision boundaries: defaults, edge cases, error paths, and thresholds.
- Semantic metadata & glossary: domain labels and definitions for agents.
- Downstream automation readiness: test scaffolds, mocks, CI recipes generated from the PRD.
Why Traditional PRDs Fail in AI-Driven Development
Traditional PRDs fail in five ways, each amplified when AI participates in the build process:
- Ambiguity is multiplied: AI models will invent defaults when a requirement is vague.
- Context gaps: AI agents lack organizational memory unless the PRD is programmatically available.
- IDE invisibility: developers shouldn't need to Alt+Tab to read the single source of truth.
- Specification decay: clarifications in chat or meetings rarely update the canonical PRD.
- Lack of schema: models need predictable structure — freeform prose fails machine-readability.
The Five Pillars of an Agentic PRD
To be useful for agents while remaining useful for humans, an Agentic PRD must satisfy five pillars:
Pillar 1 — Declarative Acceptance Criteria
ACs must be atomic, verifiable predicates. Example (JSON representation):
```json
{
  "type": "predicate",
  "expr": "image.size_mb <= 5"
}
```
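As a sketch of what makes such predicates machine-checkable, here is a minimal evaluator for simple comparison predicates. The grammar and the `evaluatePredicate` helper are illustrative assumptions, not a ProdMoh API.

```javascript
// Minimal sketch: evaluate a comparison predicate such as
// "image.size_mb <= 5" against a context object.
// The predicate grammar here is an illustrative assumption.
function evaluatePredicate(expr, context) {
  const match = expr.match(/^([\w.]+)\s*(<=|>=|<|>|==)\s*(\S+)$/);
  if (!match) throw new Error(`Unparseable predicate: ${expr}`);
  const [, path, op, rawValue] = match;
  // Resolve a dotted path like "image.size_mb" on the context object.
  const actual = path.split(".").reduce((obj, key) => obj?.[key], context);
  const expected = Number(rawValue);
  switch (op) {
    case "<=": return actual <= expected;
    case ">=": return actual >= expected;
    case "<":  return actual < expected;
    case ">":  return actual > expected;
    case "==": return actual === expected;
  }
}

// The acceptance criterion from above, checked against a candidate upload.
console.log(evaluatePredicate("image.size_mb <= 5", { image: { size_mb: 4.2 } })); // true
```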
Pillar 2 — Input/Output Examples as Ground Truth
Examples anchor behavior and reduce hallucination. Example:
```json
{
  "input": { "image": "photo.png", "dimensions": "800x800" },
  "output": { "status": "accepted" }
}
```
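In practice, examples like this become table-driven checks. A minimal sketch, assuming a hypothetical `validateImageUpload` system under test that accepts square images:

```javascript
// Sketch: treat each PRD example as a table-driven test case.
const examples = [
  { input: { image: "photo.png", dimensions: "800x800" }, output: { status: "accepted" } },
];

// Run every example against the system under test and report pass/fail.
function runExamples(examples, systemUnderTest) {
  return examples.map(({ input, output }) => {
    const actual = systemUnderTest(input);
    return JSON.stringify(actual) === JSON.stringify(output);
  });
}

// Stub implementation that accepts square images (assumption for illustration).
const validateImageUpload = ({ dimensions }) => {
  const [w, h] = dimensions.split("x").map(Number);
  return { status: w === h ? "accepted" : "rejected" };
};

console.log(runExamples(examples, validateImageUpload)); // [ true ]
```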
Pillar 3 — NFRs as First-Class Citizens
Non-functional requirements must be explicit and machine-validated:
```json
{
  "nfr": "latency",
  "max_ms": 300
}
```
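A minimal sketch of validating such an NFR record. In CI the measured value would come from a real latency probe; here it is passed in directly for illustration.

```javascript
// Sketch: convert a latency NFR into a machine check.
// The NFR record shape mirrors the JSON above.
function checkNfr(nfr, measuredMs) {
  if (nfr.nfr === "latency") return measuredMs <= nfr.max_ms;
  throw new Error(`Unknown NFR type: ${nfr.nfr}`);
}

console.log(checkNfr({ nfr: "latency", max_ms: 300 }, 240)); // true
console.log(checkNfr({ nfr: "latency", max_ms: 300 }, 450)); // false
```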
Pillar 4 — Semantic Metadata & Glossary
Provide domain definitions to remove ambiguous terms:
```json
{
  "glossary": {
    "paid_badge": "Displayed for items with price > 0"
  }
}
```
Pillar 5 — Versioned Immutable Snapshots
Use immutable PRD versions and include the version in PR metadata:
refs prd:prod-2025-payment-v1#1.3
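To make such references enforceable, tooling needs to parse them. A minimal sketch for the ref portion of the line above; the exact ref grammar is an assumption based on that example:

```javascript
// Sketch: parse a PRD version reference like "prd:prod-2025-payment-v1#1.3"
// into a document id and version, so CI can compare against the canonical PRD.
function parsePrdRef(ref) {
  const match = ref.match(/^prd:([\w-]+)#([\d.]+)$/);
  if (!match) return null; // malformed refs are rejected, not guessed at
  return { document: match[1], version: match[2] };
}

console.log(parsePrdRef("prd:prod-2025-payment-v1#1.3"));
// { document: 'prod-2025-payment-v1', version: '1.3' }
```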
Agentic PRD Workflow — Humans + Machines
A practical workflow looks like this:
- PM authors structured PRD in ProdMoh and publishes a canonical version.
- ProdMoh exposes PRD fragments through MCP and issues a scoped token.
- Developer configures MCP token in the IDE plugin (Cursor / VS Code / JetBrains).
- IDE agent calls get_story_context(story_id) to fetch acceptance criteria, examples, and NFRs.
- Agent generates code + test scaffolds; developer reviews, runs, and commits tests + code.
- CI validates PRD version, runs tests, and enforces policy gates before merge.
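The fetch step in this workflow can be sketched as follows. The tool name get_story_context matches the workflow above, but the client wrapper, call signature, and response shape are assumptions, not a documented MCP server contract:

```javascript
// Sketch of the fetch step: an IDE agent pulling story context over MCP.
async function getStoryContext(mcpClient, storyId) {
  const result = await mcpClient.callTool("get_story_context", { story_id: storyId });
  // Keep only the fields the code generator needs.
  const { acceptance, examples, nfrs } = result;
  return { acceptance, examples, nfrs };
}

// A stubbed client standing in for a real MCP connection.
const stubClient = {
  async callTool(name, args) {
    return { acceptance: [], examples: [], nfrs: [], story_id: args.story_id };
  },
};

getStoryContext(stubClient, "S-100").then((ctx) => console.log(ctx));
// logs { acceptance: [], examples: [], nfrs: [] }
```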
Concrete example — paid badge
Example PRD fragment for a paid-badge story:
```json
{
  "id": "S-100",
  "title": "Display paid badge on priced items",
  "acceptance": [
    { "type": "predicate", "expr": "response.json.items[0].badges includes 'paid'" }
  ],
  "examples": [
    { "query": "red shoes", "product": { "id": "p-123", "price": 199 } }
  ],
  "meta": { "version": "2025.11.1", "author": "pm@company.com" }
}
```
From this, an IDE agent can generate a Jest test, a mock response, and inline implementation suggestions.
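One way to picture that generation step is a small sketch that turns the fragment above into Jest source. The search() call name and the simplistic predicate parsing are assumptions for illustration, not how a production agent would work:

```javascript
// Sketch: generate Jest source from the PRD fragment above.
function generateJestTest(story) {
  const pred = story.acceptance[0].expr; // "response.json.items[0].badges includes 'paid'"
  const [lhs, , rhs] = pred.split(/ (includes) /); // split on the predicate keyword
  return [
    `test(${JSON.stringify(story.title)}, async () => {`,
    `  const response = await search(${JSON.stringify(story.examples[0].query)});`,
    `  expect(${lhs}).toContain(${rhs.replace(/'/g, '"')});`,
    `});`,
  ].join("\n");
}

const story = {
  id: "S-100",
  title: "Display paid badge on priced items",
  acceptance: [{ type: "predicate", expr: "response.json.items[0].badges includes 'paid'" }],
  examples: [{ query: "red shoes", product: { id: "p-123", price: 199 } }],
};

console.log(generateJestTest(story));
```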
Mechanics: Turning ACs into Tests
Test generation is core to the Agentic PRD. Steps:
- Canonicalize — convert prose ACs to predicate forms.
- Map — a mapping layer translates predicates to test templates per language.
- Score — confidence scoring based on example completeness and parse certainty.
- Propose — IDE proposes tests as drafts; developers review and commit.
- Gate — CI enforces presence and passing of PRD-derived tests for changed stories.
A mapping entry pairs a canonical predicate with a language-specific test template, for example:

```json
{
  "predicate": "response.json.items[0].badges includes 'paid'",
  "template": "test('<title>', async () => { const res = await <call>; expect(res.json.items[0].badges).toContain('paid'); });"
}
```
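Rendering such a template is plain placeholder substitution. A minimal sketch, where the call expression supplied for the `<call>` slot is an assumption about how the agent names the API under test:

```javascript
// Sketch: substitute <title> and <call> placeholders in a test template.
const template =
  "test('<title>', async () => { const res = await <call>; expect(res.json.items[0].badges).toContain('paid'); });";

function renderTemplate(template, slots) {
  // Unknown placeholders are left intact so gaps stay visible to reviewers.
  return template.replace(/<(\w+)>/g, (_, name) => slots[name] ?? `<${name}>`);
}

const rendered = renderTemplate(template, {
  title: "Display paid badge on priced items",
  call: "search('red shoes')",
});
console.log(rendered);
```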
Confidence Scoring & Human-in-the-loop
Not every generated artifact should be auto-approved. Use a confidence score computed from:
- Completeness of examples
- Semantic match across stories
- Availability of mock data
Low confidence → PM signoff required before tests are considered authoritative by CI.
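The signals above can be combined into a single score. A minimal sketch; the weights and the 0.7 approval threshold are illustrative, not a ProdMoh default:

```javascript
// Sketch: weighted confidence score from the three signals listed above.
function confidenceScore({ exampleCompleteness, semanticMatch, hasMockData }) {
  return (
    0.5 * exampleCompleteness + // examples are the strongest anchor
    0.3 * semanticMatch +       // cross-story consistency
    0.2 * (hasMockData ? 1 : 0) // mocks make generated tests runnable
  );
}

const score = confidenceScore({ exampleCompleteness: 0.9, semanticMatch: 0.8, hasMockData: false });
console.log(score >= 0.7 ? "auto-approve candidate" : "route to PM signoff");
```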
CI Gating & Policy
Enforce these checks in CI:
- PRD version in commit/PR metadata matches ProdMoh canonical version
- All changed stories have PRD-derived tests present
- NFR checks (latency/security) added as separate policy suites
- Audit metadata stored in CI logs
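The first of these checks can be sketched as a small gate script. The PR metadata shape and the ref format in the description are assumptions based on the earlier example:

```javascript
// Sketch: CI gate comparing the PRD version referenced in a PR
// against the canonical version published by ProdMoh.
function checkPrdVersionGate(prMetadata, canonicalVersion) {
  const ref = (prMetadata.description.match(/refs prd:[\w-]+#([\d.]+)/) || [])[1];
  if (!ref) return { pass: false, reason: "missing PRD reference in PR metadata" };
  if (ref !== canonicalVersion) {
    return { pass: false, reason: `stale PRD version ${ref}, canonical is ${canonicalVersion}` };
  }
  return { pass: true };
}

console.log(
  checkPrdVersionGate({ description: "Fix badge. refs prd:prod-2025-payment-v1#1.3" }, "1.3")
);
// { pass: true }
```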
Security, Governance & Token Design
MCP tokens are sensitive credentials. Recommendations:
- Scoped tokens per purpose: read:stories, annotate, generate-tests.
- Short lifetimes for developer tokens; longer for CI but with minimal scope.
- Repository binding to reduce cross-project blast radius.
- Audit logging for token usage, client id, and PRD version.
Store PRD version and story IDs in PR metadata, ProdMoh audit logs, and CI build metadata for full traceability.
Organizational adoption — rollout plan
A pragmatic phased approach reduces friction:
Phase 0 — Foundations
- Create canonical PRD templates in ProdMoh emphasizing predicate-based ACs.
- Train PMs with linting and pre-publish validators.
- Define token governance and a platform ticket for token issuance.
Phase 1 — Pilot
- Choose one team and one medium-complexity feature.
- Install IDE MCP client, generate tests, add CI gating for that feature branch.
- Measure PR cycle time, clarification threads per PR, and escaped defects.
Phase 2 — Scale
- Roll out templates, standardize training, automate token issuance.
- Integrate NFR enforcement into CI and expand policy automation.
Operational edge cases
Underspecified stories
ProdMoh should mark stories with requires-clarification when acceptance criteria lack predicates or examples.
The IDE surfaces this flag and blocks test generation until clarified.
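That gate can be sketched as a simple eligibility check. The flag name matches the text above; the story shape and `flags` field are assumptions:

```javascript
// Sketch: block test generation for underspecified or flagged stories.
function canGenerateTests(story) {
  const hasPredicates = (story.acceptance || []).some((ac) => ac.type === "predicate");
  const flagged = (story.flags || []).includes("requires-clarification");
  return hasPredicates && !flagged;
}

console.log(canGenerateTests({ acceptance: [{ type: "predicate", expr: "x > 0" }], flags: [] })); // true
console.log(canGenerateTests({ acceptance: [], flags: ["requires-clarification"] })); // false
```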
Non-functional requirements
NFRs convert into CI smoke tests (latency budgets, throughput thresholds) and are enforced as policy suites rather than unit tests.
Security-sensitive flows
Features touching PII or financial flows require an elevated token scope and automatic security gating in CI; they should also route to a security review queue before generation is permitted.
Business impact & metrics
Pilots using ProdMoh + MCP typically show measurable gains:
- Clarification threads in PRs — down by ~50–70%
- PR cycle time — down by ~15–25%
- Escaped defects — down by ~30–70%
These improvements come primarily from better specification and earlier verification rather than “smarter AI” alone.
Agentic PRD vs Traditional PRD — quick comparison
| Dimension | Traditional PRD | Agentic PRD |
|---|---|---|
| Audience | Humans | Humans + AI agents |
| Format | Prose | Schema-driven JSON / structured fields |
| Acceptance Criteria | Bulleted prose | Machine-parsable predicates |
| Discoverability | External to IDE | Pullable into IDE via MCP |
| Testability | Manual | Auto-generated |
| Traceability | Weak | Strong (versioned) |
How to start — practical checklist
- Adopt ProdMoh canonical PRD schema.
- Train PMs on predicate-based ACs and example-driven design.
- Provision MCP tokens and configure an IDE client for one pilot team.
- Add PRD version checks to CI and require PR metadata that references PRD versions.
- Measure cycle time, clarification threads, and escaped defects — report outcomes in 30–60 days.
Conclusion — Agentic PRDs are the future of product intent
As AI becomes an active contributor to engineering, requirements must become machine-first. Agentic PRDs offer a practical, production-ready way to turn product intent into executable, verifiable, versioned artifacts. Platforms like ProdMoh, exposing those artifacts through MCP, make this transition feasible today: they reduce ambiguity, shift verification left, and ensure that the code being written aligns with product intent.