# Core Concepts
Optra Prism is built on a few core ideas.
## The problem

AI coding agents are powerful but opaque. Developers face:
- No prompt feedback — you don’t know if your prompts are efficient or wasteful
- Invisible throttling — rate limits silently slow you down with no visibility
- No cost visibility — token spend accumulates with no breakdown by session, model, or pattern
- No coaching — you repeat the same mistakes because nothing tells you what to improve
- No guardrails — no DLP, no budget caps, no access control for teams
Prism solves these by instrumenting the AI coding workflow and surfacing insights.
## PRISM Score

The PRISM score is a composite quality metric for AI-assisted coding sessions. It measures five dimensions:
| Dimension | Weight | What it measures |
|---|---|---|
| Prompt Quality (PQ) | 25% | Specificity and decomposition of your prompts |
| Iteration Efficiency (IE) | 20% | How quickly you converge and recover from errors |
| Verification Discipline (VD) | 20% | Whether you review and validate AI output |
| Tool Use (TU) | 10% | Selection and context of tool usage |
| Advanced Features (AF) | 10% | Delegation to subagents and configuration of AI behavior |
Each dimension has two metrics, each scored 0–10. The composite PRISM score is a weighted average of the dimension scores.
See PRISM Score for the full breakdown.
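The weighted average described above can be sketched as a small calculation. This is illustrative only: the dimension keys and weights come from the table, but the example scores and the choice to normalize by the sum of the listed weights are assumptions, not Prism's documented formula.

```python
# Weights from the table above (as listed, they sum to 0.85).
WEIGHTS = {"PQ": 0.25, "IE": 0.20, "VD": 0.20, "TU": 0.10, "AF": 0.10}

def prism_score(dimension_scores: dict) -> float:
    """Weighted average of 0-10 dimension scores, normalized by the
    total weight present (an assumption for this sketch)."""
    total_weight = sum(WEIGHTS[d] for d in dimension_scores)
    weighted = sum(WEIGHTS[d] * s for d, s in dimension_scores.items())
    return weighted / total_weight

# Hypothetical session: strong prompts and tool use, weaker AF.
scores = {"PQ": 8.0, "IE": 6.5, "VD": 7.0, "TU": 9.0, "AF": 5.0}
print(round(prism_score(scores), 2))  # → 7.18
```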
## Telemetry pipeline

Data flows through four stages:
- Capture — the plugin captures OTEL telemetry and prompt text during your session
- Ingest — the ingest service (port 9005) receives OTLP data and publishes to NATS
- Store — the engine’s S3 writer consumes from NATS and writes Parquet files to S3
- Analyze — the engine scores sessions, detects patterns, and serves queries via DataFusion
## Intelligence loop

Prism creates a feedback loop:

Code with AI → Capture telemetry → Score & analyze → Surface insights → Improve → repeat

- Real-time: the prompt advisor scores each prompt before submission
- Session-level: the engine scores completed sessions and detects waste patterns
- Trend-level: the dashboard shows improvement over days and weeks
- Recommendations: data-driven suggestions for model rightsizing, prompt patterns, and budgets
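To make the real-time step concrete, here is a toy prompt-advisor heuristic in the spirit of pre-submission scoring. Prism's actual scoring model is not shown in this section; every rule and threshold below is invented for the sketch.

```python
def advise(prompt: str) -> list:
    """Return suggestions for a prompt before it is submitted.
    The checks are illustrative placeholders, not Prism's rules."""
    tips = []
    if len(prompt.split()) < 8:
        tips.append("Add more detail: short prompts often need follow-ups.")
    if "file" not in prompt.lower() and "/" not in prompt:
        tips.append("Name the files or paths the change should touch.")
    if " and " in prompt and len(prompt) > 200:
        tips.append("Consider splitting this into smaller tasks.")
    return tips

print(advise("fix the bug"))  # two suggestions: too short, no files named
```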