Advanced Developer Guide
Stop treating requirements as text. Treat them as data. This guide covers how to use ProdMoh MCP to automate architecture decisions.
Under the Hood: Architecture
Unlike standard REST APIs where your IDE polls for data, ProdMoh MCP uses Server-Sent Events (SSE) for a persistent, unidirectional stream from server to client; client requests travel over separate HTTP POSTs.
Fundamentals & AI Native Concepts
To get the most out of ProdMoh MCP, it helps to understand the AI concepts driving this technology. It is more than just an API connection; it is a new way to manage LLM Context.
1. What is MCP? (The Protocol)
The Model Context Protocol (MCP) is an open standard backed by Anthropic. It solves the "Data Silo" problem for AI.
Without MCP: You manually copy text from a website and paste it into ChatGPT.
❌ Stale Data
❌ Limited by Clipboard
❌ No Security Boundary
With MCP: The AI "asks" the server for data only when needed.
✓ Real-time Sync
✓ Unlimited Knowledge Base
✓ Tokenized Access Control
2. AI-Native Concept: "Just-in-Time" RAG
You may know RAG (Retrieval-Augmented Generation) as "vector databases." ProdMoh MCP introduces Dynamic RAG.
- Context Window Management: LLMs (like Claude 3.5 or GPT-4o) have a limit on how much text they can read (the context window). If you paste an entire 50-page PRD, you fill the window with noise. MCP allows the AI to fetch only the specific user story relevant to the code you are currently writing.
- Grounding: By forcing the AI to reference a specific Resource URI (e.g., prodmoh://req/102), we "ground" the model. This significantly reduces hallucinations because the AI is restricted to facts provided by the server.
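The retrieval idea behind Dynamic RAG can be sketched as a toy function: score each PRD section against the developer's query and surface only the best match, keeping the rest out of the context window. This is an illustration only, not ProdMoh's actual implementation; the section names and keyword scoring are invented for the example.

```typescript
// Toy "just-in-time" retrieval: instead of pasting a whole PRD into the
// context window, score each section against the query and return only the
// best match. Section names and scoring are invented for illustration.
interface PrdSection {
  title: string;
  body: string;
}

function fetchRelevantSection(sections: PrdSection[], query: string): PrdSection {
  const terms = query.toLowerCase().split(/\s+/);
  const score = (s: PrdSection) =>
    terms.filter((t) => (s.title + " " + s.body).toLowerCase().includes(t)).length;
  // Keep only the highest-scoring section; everything else stays out of the prompt.
  return sections.reduce((best, s) => (score(s) > score(best) ? s : best));
}

const prd: PrdSection[] = [
  { title: "Security", body: "Passwords must be 12 characters minimum." },
  { title: "Checkout", body: "Orders over $100 ship free." },
];
console.log(fetchRelevantSection(prd, "password rules").title); // "Security"
```

A production retriever would use embeddings rather than keyword overlap, but the principle is the same: the model sees one relevant section, not fifty pages.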
3. Strategic Advantages for Teams
Because MCP is a standard, your ProdMoh integration works seamlessly whether your team uses Claude, OpenAI, or open-source models via Ollama. You don't need to build separate integrations for each AI tool.
In traditional dev, the "truth" is lost in Slack messages and verbal calls. With MCP, the code generation pipeline is directly connected to the signed-off Requirement Document. If the doc changes, the code generation changes.
The MCP Primitives
To understand how ProdMoh interacts with your IDE, we must define the three atomic primitives of the Model Context Protocol. In Computer Science terms, these represent State (Resources), Execution (Tools), and Instruction (Prompts).
Resources
Definition: Data that can be read by the client (IDE). Think of these like GET endpoints or file handles.
In ProdMoh: Every User Story, PRD, and Research Document is exposed as a Resource.
URI Scheme: prodmoh://stories/{id}/read
Tools
Definition: Functions the AI can call to perform an action or retrieve dynamic calculations. This is often called "Tool Use" or "Function Calling" in LLM literature.
In ProdMoh: We expose tools to search your knowledge base dynamically.
Function: search_requirements(query: string)
Prompts
Definition: Pre-defined instructions that orchestrate complex interactions between the user, the context, and the model.
In ProdMoh: We provide "System Prompts" that automatically format your PRDs into code-ready instructions (e.g., "Scaffold Component from PRD").
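The shapes of the three primitives can be sketched in TypeScript. These are conceptual types for this guide, not the official MCP SDK definitions; the point is how State, Execution, and Instruction differ in form.

```typescript
// Conceptual model of the three MCP primitives — illustrative types, not the
// real SDK. Resources are State, Tools are Execution, Prompts are Instruction.
interface Resource {               // State: readable data, addressed by URI
  uri: string;
  name: string;
  mimeType: string;
}

type Tool = {                      // Execution: a callable function
  name: string;
  call: (args: Record<string, string>) => string;
};

interface Prompt {                 // Instruction: a reusable prompt template
  name: string;
  render: (vars: Record<string, string>) => string;
}

const story: Resource = {
  uri: "prodmoh://stories/102/read",
  name: "Checkout Flow",
  mimeType: "text/markdown",
};

const search: Tool = {
  name: "search_requirements",
  call: ({ query }) => `results for "${query}"`,
};

const scaffold: Prompt = {
  name: "Scaffold Component from PRD",
  render: ({ prd }) => `Generate a component implementing: ${prd}`,
};

console.log(search.call({ query: "checkout" })); // results for "checkout"
```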
Protocol Anatomy: JSON-RPC
At the transport layer, MCP uses JSON-RPC 2.0. This stateless, lightweight protocol allows the IDE and the ProdMoh Server to exchange messages over the SSE stream.
When you ask your IDE to read a requirement, the following handshake occurs on the wire:
1. The Request (Client → Server)
The IDE requests a list of available resources to populate its context menu.
{
"jsonrpc": "2.0",
"id": 1,
"method": "resources/list",
"params": {}
}
2. The Response (Server → Client)
ProdMoh responds with a structured list of your documents.
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"resources": [
{
"uri": "prodmoh://prd/102/checkout-flow",
"name": "PRD: Checkout Flow V2",
"mimeType": "text/markdown",
"description": "Updated requirements for stripe integration"
}
]
}
}
Because the server declares a mimeType (text/markdown), the AI model immediately understands how to parse the structure (headers, bullet points) of your document without any custom parsers on your machine.
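The JSON-RPC exchange above can be sketched with two small helpers. The helper names are our own, for illustration; in practice the MCP client library inside your IDE builds and parses these envelopes for you.

```typescript
// Minimal sketch of the JSON-RPC 2.0 envelope shown above. Helper names are
// our own; in practice the MCP client library builds these for you.
let nextId = 0;

function buildRequest(method: string, params: object = {}) {
  return { jsonrpc: "2.0" as const, id: ++nextId, method, params };
}

// Parse a server reply, throwing on a JSON-RPC error object.
function parseResponse<T>(raw: string): T {
  const msg = JSON.parse(raw);
  if (msg.jsonrpc !== "2.0") throw new Error("not a JSON-RPC 2.0 message");
  if (msg.error) throw new Error(`RPC error ${msg.error.code}: ${msg.error.message}`);
  return msg.result as T;
}

const req = buildRequest("resources/list");
// req → { jsonrpc: "2.0", id: 1, method: "resources/list", params: {} }

const reply =
  `{"jsonrpc":"2.0","id":1,"result":{"resources":[{"uri":"prodmoh://prd/102/checkout-flow"}]}}`;
const { resources } = parseResponse<{ resources: { uri: string }[] }>(reply);
console.log(resources[0].uri); // prodmoh://prd/102/checkout-flow
```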
- Handshake: Your IDE initiates a connection to /mcp/sse via GET.
- Transport: ProdMoh upgrades the connection to an EventStream.
- Query: When you type @ProdMoh, the IDE sends a JSON-RPC payload via POST to /mcp/messages.
- Context Injection: The server parses the specific PRD section relevant to your query and injects it directly into the LLM's system prompt context window.
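On the wire, the EventStream delivers `data:` lines separated by blank lines. A minimal frame parser (our own sketch of what the transport layer does, not ProdMoh client code) looks like this:

```typescript
// Minimal parser for Server-Sent Events frames — a sketch of what the IDE's
// transport layer does with the stream, not ProdMoh's actual client code.
function parseSseChunk(chunk: string): object[] {
  return chunk
    .split("\n\n")                         // events are separated by a blank line
    .filter((frame) => frame.startsWith("data: "))
    .map((frame) => JSON.parse(frame.slice("data: ".length)));
}

const stream =
  'data: {"method":"notifications/resources/updated"}\n\n' +
  'data: {"method":"ping"}\n\n';
const events = parseSseChunk(stream);
console.log(events.length); // 2
```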
Configuration
Add this configuration to your global or project-level settings file.
File Path: ~/.cursor/mcp.json or ~/.vscode/mcp.json
{
"mcpServers": {
"prodmoh": {
"type": "sse",
"url": "https://prodmoh.com/sse",
"headers": {
"x-prodmoh-token": "YOUR_TOKEN_HERE"
}
}
}
}
ChatGPT Custom GPT Integration
Connect ProdMoh to your Custom GPT in the ChatGPT GPT Store. This allows ChatGPT to access your PRDs, user stories, and product context during conversations.
Users authenticate seamlessly via ProdMoh login. No manual API key copying needed.
For GPT Developers
In the GPT Builder → Configure → Actions → Authentication:
- Select OAuth
- Client ID: chatgpt-prodmoh
- Authorization URL: https://prodmoh.com/oauth/authorize
- Token URL: https://prodmoh.com/oauth/token
- Scope: read
Alternatively, use API Key authentication: users manually copy their API key from the ProdMoh dashboard.
Step 1 — Generate Your API Key
- Open your ProdMoh Dashboard
- Click IDE Assistant in the sidebar
- Click Generate Access Token
- Copy the ChatGPT API Key shown at the top
Step 2 — Configure ChatGPT Actions
In the GPT Builder:
- Go to Configure → Actions
- Click Authentication
- Select API Key
- Set Auth Type to Bearer
- Paste your API key from ProdMoh
Step 3 — Add the OpenAPI Schema
In the Actions section, add this server URL:
https://prodmoh.com
OAuth is recommended for GPTs you'll publish — users get a seamless login experience.
API Key is faster for personal testing — just copy and paste your key.
Available Actions for ChatGPT
Once connected, your Custom GPT can:
- list_prds — Get all your PRDs
- list_userstories — Get all user stories
- get_prd_context — Read a specific PRD with full details
- get_userstory_context — Read a specific user story
- generate_coding_prompt — Generate AI coding prompts from requirements
- analyze_customer_feedback — Analyze feedback files (CSV, JSON, PDF)
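When ChatGPT invokes one of these actions, it issues an authenticated HTTP request against the server URL you configured. The sketch below shows the rough shape of such a request; the `/actions/list_prds` path is hypothetical — the real routes are defined by the OpenAPI schema served from https://prodmoh.com.

```typescript
// Sketch of the HTTP request ChatGPT issues when it invokes an action.
// The /actions/list_prds path is hypothetical; real routes come from the
// OpenAPI schema at https://prodmoh.com.
function buildActionRequest(action: string, apiKey: string) {
  return {
    method: "GET",
    url: `https://prodmoh.com/actions/${action}`, // hypothetical route
    headers: { Authorization: `Bearer ${apiKey}` }, // Bearer auth, as configured above
  };
}

const req = buildActionRequest("list_prds", "YOUR_TOKEN_HERE");
console.log(req.headers.Authorization); // Bearer YOUR_TOKEN_HERE
```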
Step-by-Step: Connect ProdMoh MCP to Your IDE
This guide walks you through installing, configuring, and verifying your MCP connection so your IDE can fetch PRDs, user stories, and acceptance criteria in real-time.
1. Open your ProdMoh Dashboard
2. Go to Share with Developers
3. Click Enable IDE Assistant
4. Copy the token (keep it private)
Create or update the following file:
~/.cursor/mcp.json
# or
~/.vscode/mcp.json
Add this content:
{
"mcpServers": {
"prodmoh": {
"type": "sse",
"url": "https://prodmoh.com/sse",
"headers": {
"x-prodmoh-token": "YOUR_TOKEN_HERE"
}
}
}
}
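A malformed mcp.json silently fails to load, so it can be worth sanity-checking the file's shape before reloading. Here is a small validator sketch; it mirrors the config snippet in this guide and is not an official schema.

```typescript
// Sanity check for the mcp.json shape used in this guide — an informal
// sketch, not an official schema.
function validateMcpConfig(json: string): string[] {
  const errors: string[] = [];
  let cfg: any;
  try {
    cfg = JSON.parse(json);
  } catch {
    return ["file is not valid JSON"];
  }
  if (!cfg.mcpServers) errors.push("missing top-level mcpServers key");
  for (const [name, server] of Object.entries<any>(cfg.mcpServers ?? {})) {
    if (server.type !== "sse") errors.push(`${name}: type should be "sse"`);
    if (!server.url?.startsWith("https://")) errors.push(`${name}: url must be https`);
  }
  return errors;
}

const sample = `{
  "mcpServers": {
    "prodmoh": {
      "type": "sse",
      "url": "https://prodmoh.com/sse",
      "headers": { "x-prodmoh-token": "YOUR_TOKEN_HERE" }
    }
  }
}`;
console.log(validateMcpConfig(sample)); // []
```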
MCP servers are loaded on startup, so after editing mcp.json you must reload your IDE:
- VS Code → Developer: Reload Window
- Cursor → CMD + R
- Zed, Windsurf, etc. → Restart
Inside your IDE, open the Command Palette:
@ProdMoh list resources
@ProdMoh search "checkout flow"
@ProdMoh read prd 102
You should now see:
Connected to ProdMoh MCP (SSE)
Resources Loaded: ✓ PRDs, ✓ User Stories, ✓ Research Docs
Start coding with PRD-aware guidance. Examples:
// Generate a Zod schema
@ProdMoh Read the Signup PRD. Build a Zod schema for POST /register.
// Generate a DB model
@ProdMoh Analyze "Data Requirements" section. Create a Prisma model for Order.
// Improve tests
@ProdMoh Audit my test file payment.test.ts using PRD 145.
ProdMoh will fetch only the relevant doc fragments and inject them as grounded context into your LLM.
Advanced: Generating DB Schemas
One of the highest-leverage uses of MCP is translating business data requirements directly into database models. This reduces the risk of field mismatch.
The Workflow
1. Ensure your PRD contains a section on "Data Requirements" or "User Attributes".
2. Use the following prompt pattern:
User: @ProdMoh Reference the "User Profile" PRD. Generate the Prisma Schema (schema.prisma) for the User model. Ensure all field constraints (unique, optional, defaults) match the document exactly.
The Output (Example):
model User {
id String @id @default(uuid())
email String @unique // Derived from PRD Req #2.1
role Role @default(USER)
// PRD Req #2.4: "Users must have a bio, max 500 chars"
bio String? @db.VarChar(500)
createdAt DateTime @default(now())
}
Advanced: API Contracts
Prevent frontend-backend desync by generating Zod schemas or TypeScript interfaces directly from the acceptance criteria.
Prompt Engineering for Zod
User: @ProdMoh Read the "Signup Flow" requirements. Create a Zod validation schema for the POST /register endpoint. Pay attention to password complexity rules defined in the "Security" section.
The MCP server will retrieve the specific regex rules for passwords defined by your PM and enforce them in the generated code:
const RegisterSchema = z.object({
email: z.string().email(),
// Rules pulled from ProdMoh Doc #102
password: z.string()
.min(12, "Password must be 12 chars per Sec-Req-01")
.regex(/[A-Z]/, "Must contain uppercase per Sec-Req-02"),
});
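If you want to sanity-check those rules without installing Zod, the same constraints can be expressed in plain TypeScript — a hand-rolled equivalent for illustration only, using the Sec-Req identifiers from the example above:

```typescript
// Dependency-free equivalent of the generated Zod rules above, for
// illustration. Sec-Req identifiers come from the example PRD.
function validateRegister(input: { email: string; password: string }): string[] {
  const errors: string[] = [];
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input.email)) {
    errors.push("Invalid email");
  }
  if (input.password.length < 12) {
    errors.push("Password must be 12 chars per Sec-Req-01");
  }
  if (!/[A-Z]/.test(input.password)) {
    errors.push("Must contain uppercase per Sec-Req-02");
  }
  return errors;
}

console.log(validateRegister({ email: "a@b.com", password: "Longenough123" })); // []
console.log(validateRegister({ email: "a@b.com", password: "short" }).length); // 2
```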
Advanced TDD & Mocking
Moving beyond simple unit tests, ProdMoh MCP can orchestrate complex testing scenarios by understanding the intent behind your requirements.
1. The "QA Critic" Workflow
Before you commit, ask the MCP agent to audit your test suite against the PRD. It acts as an automated QA engineer.
User: @ProdMoh Read "Payment Flow Requirements". Now look at my current file payment.test.ts.
List 3 edge cases defined in the PRD that I have NOT covered in these tests.
Common catch: "You tested for successful payments, but the PRD Section 4.2 specifies that a '402 Payment Required' error must occur if the card balance is below $10. Your test suite misses this."
2. Generating Realistic Mock Data
Hardcoding test data is brittle. Use MCP to generate Mock Data Factories that strictly adhere to your data validation rules.
// Prompt: "@ProdMoh Generate a Faker.js factory for the 'Order' object based on PRD constraints."
export const createMockOrder = () => ({
id: faker.string.uuid(),
// PRD Req: "Order total cannot be negative"
total: faker.number.float({ min: 0.01, max: 9999 }),
// PRD Req: "Status must be PENDING, SHIPPED, or DELIVERED"
status: faker.helpers.arrayElement(['PENDING', 'SHIPPED', 'DELIVERED']),
items: []
});
3. Integration Test Scenarios
For end-to-end (E2E) testing with tools like Playwright or Cypress, you can generate the entire user journey script.
@ProdMoh Read the "New User Onboarding" User Story. Generate a Cypress E2E test file that steps through the entire flow described in the "Happy Path" section.
Security & Token Management
Your ProdMoh MCP token is a sensitive credential. It grants read access to your organization's internal product documentation. Treat it like an API Key.
Best Practices
- Do not commit to Git: Your mcp.json is often in your global home directory (~/.cursor/), which is safe. If you use a project-level config (.vscode/mcp.json), ensure it is added to your .gitignore file immediately.
- Read-Only Architecture: The MCP token currently provides Read-Only access. The AI agent in your IDE cannot modify, delete, or overwrite your official requirements documents.
Revoking Access
If a token is compromised or a developer leaves the team, you can secure your data instantly:
Go to the ProdMoh Dashboard > "Share with Developers" > Click Regenerate. This immediately invalidates the old string and severs all active IDE connections.
Troubleshooting
Problem: The connection fails.
Cause: Invalid token or firewall.
Fix: Regenerate the token in the dashboard. Ensure port 443 is open.
Problem: A document returns no content.
Cause: The PRD is empty or lacks text content.
Fix: Ensure the document has saved content in the ProdMoh editor.
After changing the mcp.json config file, you typically need to restart the IDE (or reload the window) for the changes to take effect.