Welcome to promptctl! This guide will walk you through the philosophy and practical steps of managing your prompts as code.
At 360labs.dev, we believe that prompts are software. They should not live in database columns, sticky notes, or loose playground links. They should be:
- Version Controlled: Track changes over time.
- Tested: Verified against deterministic datasets.
- Measurable: Latency, Cost, and Accuracy should be visible.
promptctl is built to enable this workflow.
```
git clone https://github.com/360labs/promptctl.git
cd promptctl
pnpm install
pnpm run build
```

Verify the installation:

```
./packages/cli/bin/run --help
```

For convenience, link it globally:

```
cd packages/cli && pnpm link --global
```

Create a dedicated workspace for your AI artifacts:
```
mkdir demo-project
cd demo-project
promptctl init
```

This creates the scaffolding:

- `.promptctl/`: Do not commit this directory if it contains secrets or heavy logs.
- `prompts/`: Place your logic here.
- `tests/`: Place your truth data here.
Prompts are written as frontmatter Markdown (`.md`) files: the body gets proper syntax highlighting in editors such as VS Code, while the metadata stays structured.
`prompts/joke.md`:

```markdown
---
name: joke-generator
model: gemini-1.5-flash
temperature: 0.9
description: Generates dad jokes based on topics.
---
Write a short, pun-based dad joke about {{topic}}.
Keep it under 20 words.
```

The `{{topic}}` placeholder is a variable slot; a prompt can declare as many slots as it needs.
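To make the format concrete, here is a rough sketch of how a frontmatter prompt file could be parsed and its slots filled. This is our own illustration, not promptctl's actual implementation; the `parse_prompt` and `render` names are ours, while the file layout and `{{var}}` syntax match the example above:

```python
import re

def parse_prompt(text: str) -> tuple[dict, str]:
    """Split a frontmatter Markdown file into (metadata, body)."""
    # Frontmatter is delimited by '---' lines at the top of the file.
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    meta_block, body = match.group(1), match.group(2)
    meta = {}
    for line in meta_block.splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

def render(body: str, variables: dict) -> str:
    """Fill each {{slot}} with the matching input value."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: variables[m.group(1)], body)

raw = """---
name: joke-generator
model: gemini-1.5-flash
---
Write a short, pun-based dad joke about {{topic}}."""

meta, body = parse_prompt(raw)
print(meta["model"])                     # gemini-1.5-flash
print(render(body, {"topic": "fruit"}))  # ...dad joke about fruit.
```

The key design point is that the body is plain text until eval time, so the same prompt file can be rendered with any set of inputs.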
Tests are defined as a JSON array; each object is a test case.
`tests/jokes.json`:

```json
[
  {
    "id": "t1",
    "input": { "topic": "fruit" },
    "assert": [
      { "type": "regex", "pattern": "(apple|banana|orange|pear|fruit)", "flags": "i" }
    ]
  },
  {
    "id": "t2",
    "input": { "topic": "atoms" },
    "assert": [
      { "type": "contains", "value": "trust" }
    ]
  }
]
```

Run the eval:

```
promptctl eval prompts/joke.md --tests tests/jokes.json
```

If a test fails, promptctl returns a non-zero exit code, which makes it a natural fit for CI/CD pipelines (GitHub Actions, Jenkins, etc.).
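To show the pass/fail mechanics, here is a rough sketch (again our own illustration, not promptctl internals) of how the `regex` and `contains` assertion types from the test file could be evaluated, and how a failing case maps to the non-zero exit code that CI systems key on:

```python
import re
import sys

def check(output: str, assertion: dict) -> bool:
    """Evaluate one assertion object against a model's output."""
    if assertion["type"] == "regex":
        flags = re.IGNORECASE if "i" in assertion.get("flags", "") else 0
        return re.search(assertion["pattern"], output, flags) is not None
    if assertion["type"] == "contains":
        return assertion["value"] in output
    raise ValueError(f"unknown assertion type: {assertion['type']}")

def run_case(output: str, case: dict) -> bool:
    """A case passes only if every assertion in it passes."""
    return all(check(output, a) for a in case["assert"])

case = {
    "id": "t1",
    "input": {"topic": "fruit"},
    "assert": [{"type": "regex", "pattern": "(apple|banana)", "flags": "i"}],
}

# Pretend this came back from the model:
output = "Why did the Apple stop? It ran out of juice."
if not run_case(output, case):
    sys.exit(1)  # a non-zero exit fails the CI job
```

Note that `"flags": "i"` makes the regex case-insensitive, which is usually what you want when matching free-form model output.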
- Learn about advanced Evaluations.
- Connect different Providers.
- Visualize results with the `promptctl dashboard` command.