PM Skills

Product thinking for Claude Code.

The product expertise your AI is missing, and the hard questions it won't ask.

What's included
Core PM knowledge skill + anti-slop patterns
16 command skills: /teach-pm, /setup, /discover, /position, /strategy, /prioritise, /brief, /spec, /metrics, /stories, /review, /decide, /translate, /stakeholders, /audit, /retro
What this is

Established PM frameworks, wired into your workflow.

AI writes fast but ships bloated docs and vague specs. Every skill here draws on established product thinking: asking questions before generating, flagging what you missed, and cutting what doesn't need to be there.

01

Asks hard questions first

/pm:decide doesn't produce a decision doc from a one-liner. It pushes back: "What's the constraint? What did you rule out? Why not the simpler option?"

02

Finds what you missed

/pm:review is adversarial. It finds what you missed: vague acceptance criteria, unstated assumptions, missing edge cases. The questions engineering would ask, before they ask them.

03

Context is the quality gate

Every skill checks for product context before generating. Run /pm:teach-pm once and your product, users, and constraints are baked into every output.

16 skills

What you get.

Get started
/pm:teach-pm One-time setup. Explores your codebase, asks about your product, users, and strategy, then writes the context file that every other skill uses.
/pm:setup Generate a CLAUDE.md for your product team. Interviews you about team structure, norms, and ways of working. Makes AI useful for the whole team.
Discover
/pm:discover Plan customer conversations that get truth, not politeness. Debrief after to extract signal from noise. Maps the four forces driving switching behaviour.
/pm:position Five-component positioning from Obviously Awesome. Competitive alternatives, unique attributes, value, target customers, market category. Tests positioning and checks for common pitfalls.
/pm:strategy Walk through the Playing to Win choice cascade. Winning aspiration, where to play, how to win, capabilities, management systems. Tests for coherence, flags broken links, forces "what we're NOT doing."
/pm:prioritise Stack-rank work against outcomes and strategy. Forces trade-offs, checks for drift, flags when you're shipping features instead of moving metrics.
Create
/pm:brief Generate an engineering brief from a feature description, design, or screenshot. Structured, gap-free, and slop-tested. Anticipates the questions engineering will ask.
/pm:spec Full product specification with success metrics, risks, rollout plan, and pre-mortem. Forces you to define what's NOT in scope. Runs the four risks framework.
/pm:metrics Define primary (one only), secondary (2–3), guardrail, and counter-metrics. Forces baselines, specific targets, and measurement plans. Pushes back on vanity metrics.
/pm:stories Break features into JTBD-framed user stories with testable acceptance criteria. Flags hidden dependencies. Each story is independently valuable and sprint-sized.
Sharpen
/pm:review Adversarial review of any spec, PRD, or brief. Finds gaps, contradictions, vague criteria, and slop. Delivers the questions engineering would ask, ranked by severity.
/pm:decide Structure a decision with options, weighted criteria, and trade-offs. Checks for cognitive biases. Runs a pre-mortem. Won't let you skip the hard parts.
/pm:translate Turn any content into a deck, email, doc, or talking points for any audience. Asks who's in the room, what they care about, and what you need them to walk away with before it writes a word.
/pm:stakeholders Get the right message to the right person. Handle tough conversations. Builds stakeholder profiles and engagement plans; doesn't persist sensitive data to files.
Assess
/pm:audit Challenge whether you're doing the right work. Checks evidence quality, strategic alignment, discovery gaps, and priority drift. The question review won't ask: should you be building this at all?
/pm:retro Evaluate what shipped against what you expected. Reconstructs the original hypothesis to prevent hindsight bias. Separates decision quality from outcome quality. Ends with: double down, pivot, investigate, or kill.

Missing a skill? Suggest one on GitHub

Built-in quality gate

The PM Slop Test.

Every skill runs this before delivering. If you showed this output to engineering and they came back with clarifying questions within the hour, it's slop.

Audience specified

Not "users." Which users, in what context, with what constraints?

Problem stated

Not "better experience." What's broken, for whom, and what's the evidence?

Success measurable

Not "positive feedback." What metric, what target, measured how?

Edge cases covered

Not just the happy path. What happens when it fails, when data's empty, when users race?

Scope bounded

Not infinite. At least three things explicitly NOT in scope.

Trade-offs explicit

Not "obviously correct." What are you giving up? What's the cost of this choice?

Concise enough to read

Could this be half as long? If a section exists only to sound thorough, cut it.

Get started

Up and running in three minutes.

Step 1

Install

Add the marketplace and install the plugin. Two commands, no dependencies, no config.

Step 2

Teach

Run /pm:teach-pm once. It explores your codebase, asks about your product, and writes the context that makes everything work.

Step 3

Work

Use /pm:brief, /pm:spec, /pm:decide, or any skill. They inherit your product context. No copy-pasting. No prompt engineering.

# Add the marketplace and install the plugin
/plugin marketplace add smonggliddery/pm-skills
/plugin install pm@pm-skills

# Run once to set up product context
/pm:teach-pm

# Then use any skill
/pm:brief "User can filter dashboard by date range"
/pm:review path/to/spec.md
/pm:decide "Should we build SSO or focus on onboarding?"
Ready?

Stop shipping vague specs.

Free and open source. Install it, run /pm:teach-pm, and try /pm:brief on something you're working on.