feat: agentic smoke test for AI developer experience #20

@Keinberger

Description
Context

The project template ships AI context resources (CLAUDE.md, skills, build hook) that guide AI agents through Miden smart contract development. Currently there is no automated way to verify that these resources actually produce working results end-to-end.

Raised in the PR #18 review as a follow-up to the testing plan.

Proposal

Create an automated test that:

  1. Runs a Claude Code (or equivalent) session against the project template
  2. Gives the agent a known task (e.g., "build a token transfer contract with tests")
  3. Verifies the resulting code compiles and tests pass

This validates the full AI developer experience end-to-end, not just that individual code blocks compile.
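The three steps above could be driven by a small harness along these lines. This is a minimal sketch, not a finalized design: the `claude -p` headless invocation, the task wording, and the use of `cargo build` / `cargo test` for verification are all assumptions about how the eventual test would work.

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

# Known task given to the agent (step 2); exact wording is an assumption.
TASK = "Build a token transfer contract with tests"


def run_smoke_test(template_dir: str, agent_cmd=("claude", "-p")) -> bool:
    """Copy the project template, run a headless agent session against it,
    then verify the result compiles and its tests pass (steps 1-3)."""
    workdir = Path(tempfile.mkdtemp(prefix="ai-smoke-"))
    shutil.copytree(template_dir, workdir, dirs_exist_ok=True)

    # Skip gracefully when the agent CLI is not on PATH (e.g. local runs).
    if shutil.which(agent_cmd[0]) is None:
        print(f"{agent_cmd[0]} not found; skipping agentic smoke test")
        return True  # a skip, not a failure

    # Steps 1 + 2: headless agent session with the known task.
    subprocess.run([*agent_cmd, TASK], cwd=workdir, check=True)

    # Step 3: verify the generated contract builds and its tests pass.
    # Assumes a cargo-based template; swap in the template's real commands.
    for step in (["cargo", "build"], ["cargo", "test"]):
        if subprocess.run(step, cwd=workdir).returncode != 0:
            return False
    return True
```

In CI this would run on a fresh checkout of the template and fail the job when either the build or the tests break, giving a single pass/fail signal for the whole AI workflow.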

Notes

  • Could run on a schedule (weekly) rather than on every PR
  • Start simple: verify a basic contract + test can be generated and compiled
  • Longer term: expand to cover more complex patterns (multi-contract, note flows)
