
Prompt Quality (PQ)

Weight: 25% — the highest-weighted dimension. Prompt quality has the most direct impact on token efficiency.

Does the prompt give the AI enough context to act precisely?

Scoring signals:

| Signal | Impact |
| --- | --- |
| File paths (`src/auth/validate.ts`) | Strong positive |
| Function/class names (`handleSubmit()`) | Strong positive |
| Line numbers (`line 42`, `:42`) | Moderate positive |
| Error messages (quoted text, stack traces) | Moderate positive |
| Expected behavior ("should return 200") | Moderate positive |
| Code snippets (inline code blocks) | Mild positive |
| No context at all | Strong negative |

Examples:

| Score | Prompt |
| --- | --- |
| 9 | "Fix the TypeError in `src/auth/validate.ts:42`; `user.email` is undefined when OAuth callback lacks profile scope. Should return 401 instead of crashing." |
| 5 | "Fix the auth bug — users can't log in with Google OAuth" |
| 2 | "fix the login bug" |
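Context signals like these lend themselves to simple pattern matching. The sketch below is a minimal illustration, not the rubric's actual implementation: the regexes and the weights (strong = 2.0, moderate = 1.0, mild = 0.5) are assumptions chosen to mirror the table.

```python
import re

# Illustrative patterns and weights; the real scorer's internals
# are not specified by the rubric above.
CONTEXT_SIGNALS = [
    (r"\b[\w./-]+\.(ts|tsx|js|py|go|rs)\b", 2.0),  # file path -> strong positive
    (r"\b\w+\(\)", 2.0),                           # function/class name -> strong positive
    (r"\bline \d+\b|:\d+\b", 1.0),                 # line number -> moderate positive
    (r"Error|Exception|Traceback", 1.0),           # error message -> moderate positive
    (r"\bshould\b", 1.0),                          # expected behavior -> moderate positive
    (r"`[^`]+`", 0.5),                             # inline code snippet -> mild positive
]

def context_score(prompt: str) -> float:
    """Sum the weights of the context signals present in the prompt."""
    return sum(w for pattern, w in CONTEXT_SIGNALS if re.search(pattern, prompt))
```

On the examples above, the score-9 prompt triggers the file-path, line-number, and error-message patterns, while "fix the login bug" triggers none.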

Is the prompt focused on a single, clear task?

Scoring signals:

| Signal | Impact |
| --- | --- |
| Single verb with clear scope | Strong positive |
| Single verb, vague scope | Moderate positive |
| Multiple verbs ("fix X and refactor Y") | Negative (-1.5 each) |
| Bundling phrases ("and also", "while you're at it") | Strong negative (-2.0) |
| List items (numbered/bulleted tasks) | Strong negative (-2.5) |

Examples:

| Score | Prompt |
| --- | --- |
| 9 | "Add input validation to the email field in `SignupForm.tsx` — reject malformed addresses and show an inline error" |
| 4 | "Fix the signup form validation and also add rate limiting to the API and update the tests" |
| 1 | "Fix all the bugs in the auth system, refactor the database layer, add caching, and write docs" |
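The penalties in the table can be sketched as deductions from a base score. Only the penalty values (-1.5 per verb, -2.0, -2.5) come from the table; the verb list, the base score of 10, and the detection regexes are illustrative assumptions.

```python
import re

# Hypothetical task-verb list; the rubric does not specify one.
TASK_VERBS = ["fix", "add", "refactor", "update", "write", "implement"]

def focus_score(prompt: str, base: float = 10.0) -> float:
    """Apply the table's focus penalties to a base score (illustrative sketch)."""
    text = prompt.lower()
    verbs = sum(1 for v in TASK_VERBS if re.search(rf"\b{v}\b", text))
    score = base
    if verbs > 1:
        score -= 1.5 * verbs          # multiple verbs: -1.5 each
    if "and also" in text or "while you're at it" in text:
        score -= 2.0                  # bundling phrases
    if re.search(r"^\s*(\d+\.|[-*])\s", prompt, re.MULTILINE):
        score -= 2.5                  # numbered/bulleted task lists
    return max(score, 0.0)
```

The bundled score-4 example above hits both the multiple-verbs penalty (fix, add, update) and the "and also" penalty, while the single-task score-9 example takes no deduction.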

The fastest wins:

  1. Add file paths — always reference the specific file you’re working on
  2. Name the function — “fix parseConfig” beats “fix the parser”
  3. One task per prompt — split “fix X and add Y” into two prompts
  4. Include the error — paste the actual error message or stack trace
  5. State expected behavior — "should return 200 with { ok: true }"
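The five tips above can double as a pre-flight checklist. A toy checker, assuming simplified detection patterns (the "single task" check only looks for "and also", so it is deliberately partial):

```python
import re

def missing_tips(prompt: str) -> list[str]:
    """Report which of the five tips a prompt has not applied (illustrative checks)."""
    checks = {
        "file path": r"\b[\w./-]+\.\w{1,4}\b",
        "function name": r"\b\w+\(\)",
        "error message": r"Error|Exception|Traceback",
        "expected behavior": r"\bshould\b",
    }
    missing = [name for name, pattern in checks.items()
               if not re.search(pattern, prompt)]
    if "and also" in prompt.lower():   # crude single-task check
        missing.append("single task")
    return missing
```

For example, `missing_tips("fix the login bug")` flags every context tip, while a prompt that names the file and function, quotes the error, and states the expected result comes back clean.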