Tools used to:

- Accumulate Plutus script evaluation events by replaying the blockchain, folding over the Ledger state and extracting `PlutusScriptEvaluationEvents`.
- Record accumulated events:
  - On the file system as "dump" files.
  - In the PostgreSQL database.
- Initialise the PostgreSQL database and connection using files in the `database` folder:
  - There is a pgModeler (open-source tool) project for it.
  - As well as the DDL statements.
- Create `.envrc.local` with the following content (adjust the paths as needed):

  ```bash
  export CARDANO_NODE_SOCKET_PATH="/home/projects/cardano/node/node-state/mainnet/node.sock"
  export CARDANO_NODE_CONFIG_PATH="/home/projects/cardano/playground/docs/environments/mainnet/config.json"
  export DB_CONN_STRING="dbname=mainnet_plutus_events"
  ```
- Enter the `nix` shell using either the `nix develop` command or `direnv` hooked to your shell.
- See available commands by entering `info` in the shell.
- Run the script dump job using the `dump` command or the script upload job with the `load` command (a combined sketch of these steps follows this list).
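A minimal end-to-end sketch of the local setup, assuming a local PostgreSQL instance and a DDL file name under `database/` (the actual file names may differ); the `info`, `dump`, and `load` commands are the ones provided by the nix shell:

```bash
# Create the database and apply the schema (the DDL file name is an assumption;
# use the files actually present in the database/ folder).
createdb mainnet_plutus_events
psql -d mainnet_plutus_events -f database/ddl.sql

# Load .envrc.local and enter the nix shell.
direnv allow          # or: nix develop

# Inside the shell:
info                  # list the available commands
dump                  # run the script dump job
load                  # run the script upload job
```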
The database contains Plutus script evaluation events from Mainnet, which can be replayed locally to re-evaluate the scripts.
There is less value in re-evaluating scripts without any changes, as one would simply re-obtain results that are already known. However, this can be useful when the script evaluation logic has changed, and one wants to compare results produced by the new logic with the old results.
The repository contains a program that can be used to re-evaluate the scripts locally. You can use this program as a basis for your own re-evaluation, where you can modify various parameters to suit your needs:
- The `Main` module of the `run-script-evaluations` executable.
- The main workhorse, the `evaluateScripts` function in the `Evaluation` module, does the boring parts (aggregating the relevant script evaluation inputs, streaming the data from the DB to the local machine, decoding CBOR, forking worker threads) so that you can do the interesting part: traverse script evaluations from Mainnet, with access to all of the original evaluation inputs, and re-interpret them according to your task (a run sketch follows this list).
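A minimal sketch of a local run, assuming the program is exposed as a cabal target with the same name as the executable and that the variables from `.envrc.local` are exported; the actual options (for example, the starting record) should be taken from the `Main` module:

```bash
# From inside the nix shell (nix develop); DB_CONN_STRING must point at the
# database holding the Mainnet script evaluation events.
cabal run run-script-evaluations
```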
A GitHub Actions workflow re-evaluates the Plutus script events from Mainnet periodically (once a week) to test compatibility with the latest Plutus development.
The workflow is defined in the `.github/workflows/evaluate.yml` file.
The workflow runs:

- Scheduled: Every Saturday at midnight (UTC)
- Manual: Via workflow dispatch with the optional `startFrom` parameter (see the example after this list)
- Environment: Self-hosted runner on the `plutus-node` server with database access
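For example, a manual run can be triggered from the command line with the GitHub CLI (assuming `gh` is installed and authenticated; the `startFrom` value below is only illustrative):

```bash
# Dispatch the workflow manually, starting from a chosen record.
gh workflow run evaluate.yml -f startFrom=0
```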
The workflow performs the following steps:

- Checkout Repository: Uses the latest code from this repository
- Update Plutus Dependency: Executes `run-script-evaluations/add_srp.sh` to:
  - Fetch the latest commit from the `IntersectMBO/plutus` master branch
  - Generate a `source-repository-package` entry with the latest commit hash and nix SHA
  - Update `cabal.project` with the new dependency (removing any existing entries for idempotency)
- Run Evaluations: Builds and executes the `run-script-evaluations` program:
  - Uses the `nix develop` environment for reproducible builds
  - Connects to the local PostgreSQL database containing the Mainnet script events
  - Processes scripts starting from the specified record (default: 0)
  - Outputs evaluation results to `evaluation.log` (an approximate sketch of this step follows the list)
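A rough approximation of the Run Evaluations step is sketched below; the authoritative commands live in `.github/workflows/evaluate.yml`, and the cabal target name and build invocation here are assumptions:

```bash
# Build and run inside the reproducible nix environment, capturing the output.
nix develop --command bash -c '
  cabal build run-script-evaluations &&
  cabal run run-script-evaluations 2>&1 | tee evaluation.log
'
```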
The `add_srp.sh` script provides robust dependency management with:

- Command-line options: `--branch <branch>` (default: `master`), `--quiet` for CI
- Error handling: Validates dependencies (`jq`, `nix-prefetch-git`) and fetched data
- Idempotent operation: Safely removes existing entries before adding new ones
- Atomic updates: Uses a backup/restore mechanism to prevent file corruption
- Exit codes: Returns specific codes for different error conditions (0-5)
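The core idea is roughly as sketched below; this is a simplification, not the script itself, which additionally validates its inputs, removes existing entries, and updates `cabal.project` atomically:

```bash
# Sketch only: fetch the latest commit of IntersectMBO/plutus and append a
# source-repository-package stanza to cabal.project. The real logic lives in
# run-script-evaluations/add_srp.sh.
json=$(nix-prefetch-git --quiet https://github.com/IntersectMBO/plutus refs/heads/master)
rev=$(jq -r .rev    <<< "$json")
sha=$(jq -r .sha256 <<< "$json")

# (The real entry may also pin specific subdirectories of the plutus repository.)
cat >> cabal.project <<EOF

source-repository-package
  type: git
  location: https://github.com/IntersectMBO/plutus
  tag: $rev
  --sha256: $sha
EOF
```

Typical invocations of the actual script: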
```bash
# Default behavior (master branch)
./run-script-evaluations/add_srp.sh

# Use specific branch
./run-script-evaluations/add_srp.sh --branch develop

# Quiet mode for CI
./run-script-evaluations/add_srp.sh --quiet

# Combined options
./run-script-evaluations/add_srp.sh --branch main --quiet
```

This automated testing ensures that script evaluations remain compatible as Plutus evolves, providing early detection of breaking changes or performance regressions.