## What is this?
`smriti context` generates a compact project summary (~200-300 tokens) from your session history and injects it into `.smriti/CLAUDE.md`, which Claude Code auto-discovers. The idea is that new sessions start with awareness of recent work — hot files, git activity, recent sessions — instead of re-discovering everything from scratch.

We don't know yet if this actually saves tokens. Our initial tests show mixed results, and we need data from real projects to understand where context injection matters.
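To make the experiment concrete, the injected file is just a short summary that Claude Code reads like any other `CLAUDE.md`. Purely as a hypothetical illustration of the kind of content described above (the format, numbers, and file names below are invented, not real `smriti` output):

```
# Project context (generated by smriti)

Recent sessions: 4 this week, mostly touching the search module
Hot files: src/search.py, src/index.py, tests/test_search.py
Git activity: 12 commits on feature/ranking since Monday
```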
## How to test
### Prerequisites

```shell
smriti ingest claude  # make sure sessions are ingested
```
### Step 1: Baseline session (no context)

```shell
mv .smriti/CLAUDE.md .smriti/CLAUDE.md.bak
```
Start a new Claude Code session, give it a task, let it finish, exit.
### Step 2: Context session

```shell
mv .smriti/CLAUDE.md.bak .smriti/CLAUDE.md
smriti context
```
Start a new Claude Code session, give the exact same task, let it finish, exit.
### Step 3: Compare

```shell
smriti ingest claude
smriti compare --last
```
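The three steps above can be wrapped into a single sketch. This is a minimal `sh` script under two assumptions: `smriti` is on your PATH (every `smriti` call is guarded so the backup/restore shuffling still works without it), and you run the two interactive Claude Code sessions yourself at the prompts:

```shell
#!/bin/sh
# Sketch of the A/B test loop described above. Assumes smriti is
# installed; if it is not, each smriti call just prints what it
# would have run, so the file moves can still be exercised.
set -eu

CTX=.smriti/CLAUDE.md

run_smriti() {
  if command -v smriti >/dev/null 2>&1; then
    smriti "$@"
  else
    echo "smriti not installed; would run: smriti $*" >&2
  fi
}

# Step 1: hide the injected context for the baseline session.
[ -f "$CTX" ] && mv "$CTX" "$CTX.bak"
echo ">> Run your baseline Claude Code session now, then continue."
# read _   # uncomment for interactive pauses

# Step 2: restore and regenerate the context for the second session.
[ -f "$CTX.bak" ] && mv "$CTX.bak" "$CTX"
run_smriti context
echo ">> Run the exact same task in a fresh session, then continue."
# read _

# Step 3: ingest both sessions and compare them.
run_smriti ingest claude
run_smriti compare --last
```

Run it from the project root; whichever path is taken, `.smriti/CLAUDE.md` ends up restored rather than left as a `.bak` file.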
## What to share
Post a comment here with:

- The task prompt you used (same for both sessions)
- The `smriti compare` output (copy-paste the table)
- Project size — rough number of files, whether you have a detailed `CLAUDE.md` in the repo
- Your observations — did the context-aware session behave differently? Fewer exploratory reads? Better first attempt?
## What we've found so far

| Task Type | Context Impact | Notes |
|---|---|---|
| Knowledge questions ("how does X work?") | Minimal | Both sessions found the right files immediately from project `CLAUDE.md` |
| Implementation tasks ("add `--since` flag") | Minimal | Small, well-scoped tasks don't need exploration |
| Ambiguous/exploration tasks | Untested | Expected sweet spot — hot files guide Claude to the right area |
| Large codebases (no project `CLAUDE.md`) | Untested | Expected sweet spot — context replaces missing documentation |
## Good task prompts to try
These should stress-test whether context helps:
- Ambiguous bug fix: "There's a bug in the search results, fix it" (forces exploration)
- Cross-cutting feature: "Add logging to all database operations" (needs to find all DB touchpoints)
- Continuation task: "Continue the refactoring we started yesterday" (tests session memory)
- Large codebase, no `CLAUDE.md`: Any implementation task on a project without a detailed `CLAUDE.md`
## Tips

- Use `smriti compare --json` for machine-readable output
- You can compare any two sessions: `smriti compare <id-a> <id-b>` (supports partial IDs)
- Run `smriti context --dry-run` to see what context your sessions will get