Aztec State Migration Benchmark #51

Merged
JanKuczma merged 2 commits into main from jk-minimal-benchmark
Mar 5, 2026

Conversation


@JanKuczma JanKuczma commented Mar 5, 2026

Summary by CodeRabbit

  • New Features

    • Added comprehensive benchmarking infrastructure to measure and analyze circuit complexity
    • Introduced benchmark contract for evaluating migration performance paths
    • Added npm script to automate benchmark generation and documentation
  • Documentation

    • Added BENCHMARKS.md documenting circuit complexity metrics and comparative analysis results

coderabbitai bot commented Mar 5, 2026

Walkthrough

A benchmarking system for the aztec-state-migration library is introduced, consisting of a MinimalBenchmark Noir contract with multiple test functions measuring migration complexity, a Python utility script that compiles contracts and extracts circuit metrics, and benchmark documentation. Configuration updates add the new contract to the workspace.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Documentation & Configuration**<br>`BENCHMARKS.md`, `noir/Nargo.toml`, `package.json` | New benchmark results documentation, workspace configuration for the minimal-benchmark contract, and an npm script entry point for the benchmark utility. |
| **Benchmark Contract**<br>`noir/test-contracts/minimal-benchmark/Nargo.toml`, `noir/test-contracts/minimal-benchmark/src/main.nr` | New MinimalBenchmark contract with 11 public functions organized across Mode A (locking and claiming notes) and Mode B (emergency snapshot and public state) migration paths, for circuit complexity measurement. |
| **Benchmark Analysis Tool**<br>`scripts/benchmark.py` | New Python utility to compile Noir contracts, extract ACIR bytecode metrics from artifacts, compute marginal costs per note, and generate markdown benchmark reports comparing against a reference transfer function. |

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant Python as benchmark.py
    participant Nargo as Nargo Compiler
    participant Contracts as Noir Contracts
    participant Artifacts as Artifact JSONs
    participant MarkdownGen as Markdown Generator
    participant Output as BENCHMARKS.md

    User->>Python: npm run benchmark
    alt --skip-compile not set
        Python->>Nargo: Compile minimal_benchmark
        Nargo->>Contracts: Process contract code
        Contracts->>Artifacts: Generate JSON artifacts
    end
    Python->>Artifacts: Extract function metrics
    Artifacts->>Python: ACIR sizes, function names, parameters
    Python->>Python: Calculate marginal costs per note
    Python->>MarkdownGen: Pass extracted functions & reference
    MarkdownGen->>MarkdownGen: Generate results table & sections
    MarkdownGen->>Output: Write BENCHMARKS.md
    Output->>User: Display benchmark report

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Suggested reviewers

  • DamianStraszak

Poem

🐰 Benchmarks hopping, circuits measured clear,
Mode A locks notes, Mode B migrates near,
Python counts bytes, ACIR sizes bloom,
A rabbit's metrics banish the gloom! 🎯✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 10.00%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title Check | ✅ Passed | The title 'Aztec State Migration Benchmark' directly summarizes the main addition to the pull request: a comprehensive benchmarking system for the aztec-state-migration library, including a benchmark contract, a Python utility script, and documentation. |



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@noir/test-contracts/minimal-benchmark/Nargo.toml`:
- Line 5: The Nargo.toml entry compiler_version currently reads ">=0.18.0" which
violates the repo policy requiring Noir v1.0.0-beta.x; update the
compiler_version field in Nargo.toml to the beta toolchain used for these
benchmarks (e.g., set compiler_version to "1.0.0-beta.18") so it matches the
repo's Noir beta version policy and the nargo version used to generate the
benchmarks.
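The fix suggested above amounts to pinning the benchmark package to the repo's Noir beta toolchain. A minimal sketch of the corrected `Nargo.toml` entry follows; the `1.0.0-beta.18` value is taken from the comment's example, and the `name` and `type` fields shown are illustrative:

```toml
[package]
name = "minimal_benchmark"
type = "contract"
# Pin to the repo's Noir beta toolchain instead of the open-ended ">=0.18.0".
compiler_version = "1.0.0-beta.18"
```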

In `@scripts/benchmark.py`:
- Around line 172-177: The ratio calculation can raise ZeroDivisionError when
reference["acir_bytes"] is zero; in the for loop over constrained (where
acir_kb, ratio, f are computed and lines.append is called) guard the division by
checking reference["acir_bytes"] first and set ratio to a safe sentinel (e.g.,
float('inf') or a string like "N/A") if it's zero, otherwise compute ratio =
f["acir_bytes"] / reference["acir_bytes"]; then use that guarded ratio in the
lines.append call.
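The guard described above can be sketched as a small helper. The field names (`acir_bytes`) follow the review comment; the surrounding loop and the exact sentinel choice are assumptions:

```python
def safe_ratio(acir_bytes, reference_bytes):
    """Return the size ratio versus the reference, or None when the reference is zero."""
    if reference_bytes == 0:
        return None  # sentinel instead of raising ZeroDivisionError
    return acir_bytes / reference_bytes


def format_ratio(ratio):
    # Render a table cell: "N/A" for the zero-reference sentinel.
    return "N/A" if ratio is None else f"{ratio:.2f}x"
```

Inside the loop over constrained functions, `format_ratio(safe_ratio(f["acir_bytes"], reference["acir_bytes"]))` would then feed the `lines.append` call safely.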
- Around line 94-104: Replace the broad/silent exception handling in
get_nargo_version and the nearby try/except blocks (the generic "except
Exception:" handlers around lines 102 and 126) with targeted error handling that
logs or re-raises the caught exception so failures during parse/process don't
get swallowed; in get_nargo_version capture the subprocess exception (and
parsing errors) and include the exception message in a log or exception before
returning "unknown". Also guard the benchmark ratio computation that divides by
reference["acir_bytes"] by checking that reference["acir_bytes"] is non‑zero
(handle zero by skipping the ratio, using None/inf, or emitting a clear error)
to avoid ZeroDivisionError. Ensure the changes reference the get_nargo_version
function and the code that computes the ratio from reference["acir_bytes"] so
reviewers can locate the fixes.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: bdf539fc-e10f-45a7-950d-70ffbd2225fc

📥 Commits

Reviewing files that changed from the base of the PR and between 42a96f1 and 038d0b3.

📒 Files selected for processing (6)
  • BENCHMARKS.md
  • noir/Nargo.toml
  • noir/test-contracts/minimal-benchmark/Nargo.toml
  • noir/test-contracts/minimal-benchmark/src/main.nr
  • package.json
  • scripts/benchmark.py


@DamianStraszak DamianStraszak left a comment


Tested and it works. Looking good.

@JanKuczma JanKuczma merged commit cd0b8c3 into main Mar 5, 2026
3 checks passed
@JanKuczma JanKuczma deleted the jk-minimal-benchmark branch March 5, 2026 14:45