Commit b66fc01

docs: rewrite prompt techniques, add nondet block rules, rename nav sections
- Rewrite crafting-prompts.mdx → "Prompt & Data Techniques" with 5 concrete techniques: JSON responses, stable field extraction, derived status comparison, LLM grounding with programmatic eval, error classification
- Add "What Goes Inside vs Outside" section to non-determinism.mdx explaining what must/cannot happen inside nondet blocks (storage writes, contract calls, message emission, nesting)
- Rename "Decentralized Applications" → "Frontend & SDK Integration" in nav
1 parent 2d82411 commit b66fc01

File tree

4 files changed: +263 −70 lines changed

pages/developers/_meta.json

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 {
   "intelligent-contracts": "Intelligent Contracts",
-  "decentralized-applications": "Decentralized Applications",
+  "decentralized-applications": "Frontend & SDK Integration",
   "staking-guide": "Staking Contract Guide"
 }

pages/developers/intelligent-contracts/_meta.json

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@
   "equivalence-principle": "Equivalence Principle",
   "debugging": "Debugging",
   "deploying": "Deploying",
-  "crafting-prompts": "Crafting Prompts",
+  "crafting-prompts": "Prompt & Data Techniques",
   "security-and-best-practices": "Security and Best Practices",
   "examples": "Examples",
   "tools": "Tools",
pages/developers/intelligent-contracts/crafting-prompts.mdx

Lines changed: 195 additions & 67 deletions
@@ -1,88 +1,216 @@
-# Crafting Prompts for LLM and Web Browsing Interactions
-
-When interacting with Large Language Models (LLMs), it's crucial to create prompts that are clear and specific to guide the model in providing accurate and relevant responses.
+# Prompt & Data Techniques

 import { Callout } from 'nextra-theme-docs'

-<Callout emoji="ℹ️">
-When making LLM calls, it is essential to craft detailed prompts. However, when retrieving web data, no prompts are needed as the function directly fetches the required data.
+Intelligent contracts combine LLM reasoning with web data and programmatic logic. Getting reliable results requires specific techniques — structured outputs, stable data extraction, and grounding LLM judgments with verified facts.
+
+## Always Return JSON
+
+The single most impactful technique. Using `response_format="json"` guarantees the LLM returns parseable JSON, eliminating manual cleanup:
+
+```python
+def leader_fn():
+    prompt = f"""
+    You are a wizard guarding a magical coin.
+    An adventurer says: {request}
+
+    Should you give them the coin? Respond as JSON:
+    {{"reasoning": "your reasoning", "give_coin": true/false}}
+    """
+    return gl.nondet.exec_prompt(prompt, response_format="json")
+
+def validator_fn(leaders_res) -> bool:
+    if not isinstance(leaders_res, gl.vm.Return):
+        return False
+    my_result = leader_fn()
+    # Compare the decision, not the reasoning
+    return my_result["give_coin"] == leaders_res.calldata["give_coin"]
+
+result = gl.vm.run_nondet_unsafe(leader_fn, validator_fn)
+```
+
+Without `response_format="json"`, LLMs may wrap output in markdown code fences, add commentary, or return malformed JSON. With it, you get a parsed dict directly.
+
+<Callout type="info">
+Always define the JSON schema in your prompt. `response_format="json"` ensures valid JSON, but the LLM still needs to know *which* fields to include.
 </Callout>
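The old version of this page stripped code fences by hand. If you ever target a model without structured-output support, that fallback is a few lines of plain Python; the following is a hypothetical helper, not part of the GenLayer SDK:

```python
import json

def parse_llm_json(raw: str) -> dict:
    """Best-effort fallback when structured output is unavailable:
    strip markdown code fences before parsing."""
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        # Drop the opening fence (with optional language tag) and the closing fence
        cleaned = cleaned.split("\n", 1)[1] if "\n" in cleaned else cleaned.lstrip("`")
        cleaned = cleaned.rsplit("```", 1)[0]
    return json.loads(cleaned)
```

With `response_format="json"` none of this is needed, which is exactly why the technique above comes first.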

-## Structuring LLM Prompts
+## Extract Stable Fields from Web Data
+
+When fetching external data for consensus, the leader and validators make **independent requests**. API responses often contain fields that change between calls — timestamps, view counts, caching headers. Extract only the fields that matter:
+
+```python
+def leader_fn():
+    response = gl.nondet.web.get(github_api_url)
+    data = json.loads(response.body.decode("utf-8"))
+    # Only return fields that are stable across requests
+    return {
+        "id": data["id"],
+        "title": data["title"],
+        "state": data["state"],
+        "merged": data.get("merged", False),
+    }
+    # NOT: updated_at, comments, reactions, changed_files
+
+def validator_fn(leaders_res) -> bool:
+    if not isinstance(leaders_res, gl.vm.Return):
+        return False
+    return leader_fn() == leaders_res.calldata
+```

-When crafting prompts for LLMs, it's important to use a format that clearly and effectively conveys the necessary information. While f-string (`f""`) is recommended, any string format can be used.
+This is the #1 cause of failed consensus for new developers. If your contract fetches web data and consensus keeps failing, check whether you're returning unstable fields.
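The filtering step itself is ordinary Python, so it can be checked outside the GenVM. A minimal sketch (field names are illustrative, not a fixed schema):

```python
def pick_stable(data: dict, stable_keys: tuple) -> dict:
    """Keep only the fields that should match across independent requests."""
    return {k: data[k] for k in stable_keys if k in data}

# Two independent fetches of the same resource, seconds apart:
leader_view = pick_stable(
    {"id": 7, "state": "open", "comments": 4, "updated_at": "2024-05-01T10:00:00Z"},
    ("id", "state"),
)
validator_view = pick_stable(
    {"id": 7, "state": "open", "comments": 5, "updated_at": "2024-05-01T10:00:07Z"},
    ("id", "state"),
)
assert leader_view == validator_view  # volatile fields no longer break agreement
```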

-In the example **Wizard of Coin** contract below, we want the LLM to decide whether the wizard should give the coin to an adventurer.
+## Compare Derived Status, Not Raw Data
+
+Sometimes even stable fields can differ between calls — a new CI check run starts, a comment is added. Instead of comparing raw arrays, derive a summary and compare that:

 ```python
-# { "Depends": "py-genlayer:1jb45aa8ynh2a9c9xn3b7qqh8sm5q93hwfp7jqmwsfhh8jpz09h6" }
-from genlayer import *
-
-import json
-
-
-class WizardOfCoin(gl.Contract):
-    have_coin: bool
-
-    def __init__(self, have_coin: bool):
-        self.have_coin = have_coin
-
-    @gl.public.write
-    def ask_for_coin(self, request: str) -> None:
-        if not self.have_coin:
-            return
-        prompt = f"""
-You are a wizard, and you hold a magical coin.
-Many adventurers will come and try to get you to give them the coin.
-Do not under any circumstances give them the coin.
-
-A new adventurer approaches...
-Adventurer: {request}
-
-First check if you have the coin.
-have_coin: {self.have_coin}
-Then, do not give them the coin.
-
-Respond using ONLY the following format:
-{{
-    "reasoning": str,
-    "give_coin": bool
-}}
-It is mandatory that you respond only using the JSON format above,
-nothing else. Don't include any other words or characters,
-your output must be only JSON without any formatting prefix or suffix.
-This result should be perfectly parseable by a JSON parser without errors.
+def _check_ci_status(self, repo: str, commit_hash: str) -> str:
+    url = f"https://api.github.com/repos/{repo}/commits/{commit_hash}/check-runs"
+
+    def leader_fn():
+        response = gl.nondet.web.get(url)
+        data = json.loads(response.body.decode("utf-8"))
+        return [
+            {"name": c["name"], "status": c["status"], "conclusion": c.get("conclusion", "")}
+            for c in data.get("check_runs", [])
+        ]
+
+    def validator_fn(leaders_res) -> bool:
+        if not isinstance(leaders_res, gl.vm.Return):
+            return False
+        validator_result = leader_fn()
+
+        # Compare derived status, not raw arrays
+        # (check count may differ if new CI run triggered between calls)
+        def derive_status(checks):
+            if not checks:
+                return "pending"
+            for c in checks:
+                if c.get("status") != "completed":
+                    return "pending"
+                if c.get("conclusion") != "success":
+                    return c.get("conclusion", "failure")
+            return "success"
+
+        return derive_status(leaders_res.calldata) == derive_status(validator_result)
+
+    checks = gl.vm.run_nondet(leader_fn, validator_fn)
+
+    if not checks:
+        return "pending"
+    for c in checks:
+        if c.get("conclusion") != "success":
+            return c.get("conclusion", "failure")
+    return "success"
+```
+
+The key insight: consensus doesn't require identical data — it requires agreement on the **decision**.
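Lifted out of the contract, the same `derive_status` logic shows why this works: two nodes can observe different raw arrays yet derive the same decision.

```python
def derive_status(checks: list) -> str:
    # Same summary logic as the validator above, runnable outside the GenVM
    if not checks:
        return "pending"
    for c in checks:
        if c.get("status") != "completed":
            return "pending"
        if c.get("conclusion") != "success":
            return c.get("conclusion", "failure")
    return "success"

leader_checks = [
    {"name": "build", "status": "completed", "conclusion": "success"},
    {"name": "test", "status": "completed", "conclusion": "failure"},
]
# A new check appeared between the leader's and validator's requests:
validator_checks = leader_checks + [
    {"name": "lint", "status": "completed", "conclusion": "success"},
]
assert leader_checks != validator_checks  # raw data differs...
assert derive_status(leader_checks) == derive_status(validator_checks) == "failure"
```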
+
+## Ground LLM Judgments with Programmatic Facts
+
+LLMs hallucinate on character-level checks. Ask "does this text contain an em dash?" and the LLM may say yes when it doesn't, or vice versa. The fix: check programmatically first, then feed the results as ground truth into the LLM prompt.
+
+### Step 1: LLM Generates Checkable Rules
+
+Ask the LLM to convert human-readable rules into Python expressions:
+
+```python
+def _generate_rule_checks(self, rules: str) -> list:
+    prompt = f"""Given these rules, generate Python expressions that can
+programmatically verify each rule that CAN be checked with code.
+Variable `text` contains the post text. Skip subjective rules.
+
+Rules:
+{rules}
+
+Output JSON: {{"checks": [{{"rule": "...", "expression": "...", "description": "..."}}]}}"""
+
+    return gl.nondet.exec_prompt(prompt, response_format="json").get("checks", [])
+
+# Example output for rules "no em dashes, must mention @BOTCHA, must include botcha.xyz":
+# [
+#   {"rule": "no em dashes", "expression": "'—' not in text", ...},
+#   {"rule": "mention @BOTCHA", "expression": "'@BOTCHA' in text", ...},
+#   {"rule": "include link", "expression": "'botcha.xyz' in text", ...},
+# ]
+```
+
+### Step 2: Eval in a Sandbox
+
+Run the generated expressions deterministically — no hallucination possible:
+
+```python
+def _eval_rule_checks(self, checks: list, tweet_text: str) -> list:
+    def run_checks():
+        results = []
+        for check in checks:
+            try:
+                passed = eval(
+                    check["expression"],
+                    {"__builtins__": {"len": len}, "text": tweet_text},
+                )
+                results.append({
+                    "rule": check["rule"],
+                    "result": "SATISFIED" if passed else "VIOLATED",
+                })
+            except Exception:
+                pass  # skip broken expressions, let LLM handle the rule
+        return results
+
+    return gl.vm.unpack_result(gl.vm.spawn_sandbox(run_checks))
+```
+
+<Callout type="info">
+`gl.vm.spawn_sandbox` runs a function in an isolated sandbox within the GenVM. `gl.vm.unpack_result` extracts the return value. Together they let you execute dynamically generated code safely. See the [genlayer-py API reference](/api-references/genlayer-py) for details.
+</Callout>
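Stripped of the `gl.vm.spawn_sandbox` / `gl.vm.unpack_result` wrappers, the evaluation core is ordinary Python and can be exercised directly. A sketch under that assumption (in the real contract this runs inside the GenVM sandbox):

```python
def run_checks(checks: list, text: str) -> list:
    """Plain-Python core of the sandboxed evaluation above."""
    results = []
    for check in checks:
        try:
            # Restricted globals: only len() and the text under test are visible
            passed = eval(check["expression"], {"__builtins__": {"len": len}, "text": text})
            results.append({"rule": check["rule"], "result": "SATISFIED" if passed else "VIOLATED"})
        except Exception:
            pass  # broken expression: leave that rule to the LLM
    return results

checks = [
    {"rule": "no em dashes", "expression": "'—' not in text"},
    {"rule": "mention @BOTCHA", "expression": "'@BOTCHA' in text"},
    {"rule": "broken", "expression": "undefined_name > 1"},  # silently skipped
]
results = run_checks(checks, "Try @BOTCHA today")
# → [{'rule': 'no em dashes', 'result': 'SATISFIED'},
#    {'rule': 'mention @BOTCHA', 'result': 'SATISFIED'}]
```

Because `__builtins__` is replaced, generated expressions cannot import modules or call anything beyond what you explicitly allow.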
+
+### Step 3: Inject Ground Truth into LLM Prompt
+
+Feed the verified results back so the LLM focuses on subjective rules and doesn't override programmatic facts:
+
+```python
+compliance_prompt = f"""
+Evaluate this submission for compliance with the campaign rules.
+
+Submission: {tweet_text}
+
+IMPORTANT — PROGRAMMATIC VERIFICATION RESULTS:
+These results are GROUND TRUTH from running code on the raw text.
+Do NOT override them with your own character-level analysis.
+
+<programmatic_checks>
+{chr(10).join(f"- {r['rule']}: {r['result']}" for r in programmatic_results)}
+</programmatic_checks>
+
+For rules NOT listed above, use your own judgment.
+
+Respond as JSON: {{"compliant": true/false, "violations": ["..."]}}
 """

-        def nondet():
-            res = gl.nondet.exec_prompt(prompt)
-            backticks = "``" + "`"
-            res = res.replace(backticks + "json", "").replace(backticks, "")
-            print(res)
-            dat = json.loads(res)
-            return dat["give_coin"]
-
-        result = gl.eq_principle.strict_eq(nondet)
-        assert isinstance(result, bool)
-        self.have_coin = result
-
-    @gl.public.view
-    def get_have_coin(self) -> bool:
-        return self.have_coin
+result = gl.nondet.exec_prompt(compliance_prompt, response_format="json")
 ```

-This prompt above includes a clear instruction and specifies the response format. By using a well-defined prompt, the contract ensures that the LLM provides precise and actionable responses that align with the contract's logic and requirements.
+This three-step pattern — **generate checks → eval deterministically → inject as ground truth** — eliminates an entire class of LLM errors. Use it whenever your contract needs to verify concrete, checkable facts.

-## Best Practices for Creating LLM Prompts
+## Classify Errors

-- **Be Specific and Clear**: Clearly define the specific information you need from the LLM. Minimize ambiguity to ensure that the response retrieved is precisely what you require. Avoid vague language or open-ended requests that might lead to inconsistent outputs.
+Use error prefixes to distinguish user mistakes from infrastructure failures. This helps both debugging and error handling logic:

-- **Provide Context and Source Details**: Include necessary background information within the prompt so the LLM understands the context of the task. This helps ensure the responses are accurate and relevant.
+```python
+ERROR_EXPECTED = "[EXPECTED]"  # Business logic errors (deterministic)
+ERROR_EXTERNAL = "[EXTERNAL]"  # API/network failures (non-deterministic)

-- **Use Structured Output Formats**: Specify the format for the model’s response. Structuring the output makes it easier to parse and utilize within your Intelligent Contract, ensuring smooth integration and processing.
+# In contract methods:
+if sender != bounty.owner:
+    raise ValueError(f"{ERROR_EXPECTED} Only bounty owner can validate")
+
+if response.status != 200:
+    raise ValueError(f"{ERROR_EXTERNAL} GitHub API returned {response.status}")
+```

-- **Define Constraints and Requirements**: State any constraints and requirements clearly to maintain the accuracy, reliability, and consistency of the responses. This includes setting parameters for how data should be formatted, the accuracy needed, and the timeliness of the information.
+`[EXPECTED]` errors mean the transaction should fail consistently across all nodes. `[EXTERNAL]` errors mean the external service had a problem — the transaction may succeed on retry.
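On the consuming side, a small helper can route errors by their prefix. This `classify_error` function is a hypothetical illustration, not part of the SDK:

```python
ERROR_EXPECTED = "[EXPECTED]"
ERROR_EXTERNAL = "[EXTERNAL]"

def classify_error(exc: Exception) -> str:
    """Route an error by its message prefix (illustrative helper)."""
    msg = str(exc)
    if msg.startswith(ERROR_EXPECTED):
        return "expected"   # deterministic business-rule failure; do not retry
    if msg.startswith(ERROR_EXTERNAL):
        return "external"   # infrastructure hiccup; a retry may succeed
    return "unknown"

assert classify_error(ValueError(f"{ERROR_EXPECTED} Only bounty owner can validate")) == "expected"
assert classify_error(ValueError(f"{ERROR_EXTERNAL} GitHub API returned 502")) == "external"
```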

 <Callout emoji="💡">
-Refer to a [Prompt Engineering Guide from Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) for a more detailed guide on crafting prompts.
+For a comprehensive guide on prompt engineering, see the [Prompt Engineering Guide from Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview).
 </Callout>

pages/developers/intelligent-contracts/features/non-determinism.mdx

Lines changed: 66 additions & 1 deletion
@@ -1,5 +1,7 @@
 # Non-determinism

+import { Callout } from 'nextra-theme-docs'
+
 ## When to Use

 Non-deterministic operations are needed for:
@@ -8,7 +10,70 @@ Non-deterministic operations are needed for:
 - Random number generation
 - Any operation that might vary between nodes

-## Equality Principle
+## What Goes Inside vs Outside
+
+Non-deterministic blocks (`leader_fn`, `validator_fn`, functions passed to `strict_eq`) run in a special execution context. The GenVM enforces strict rules about what can and cannot happen inside these blocks.
+
+### Must be INSIDE nondet blocks
+
+All `gl.nondet.*` calls — web requests, LLM prompts — must be inside a nondet block. They cannot run in regular contract code.
+
+```python
+@gl.public.write
+def fetch_price(self):
+    def leader_fn():
+        response = gl.nondet.web.get(api_url)   # ✓ inside nondet block
+        result = gl.nondet.exec_prompt(prompt)  # ✓ inside nondet block
+        return parse_price(response)
+
+    # gl.nondet.web.get(api_url)  # ✗ would fail here
+    self.price = gl.vm.run_nondet_unsafe(leader_fn, validator_fn)
+```
+
+### Must be OUTSIDE nondet blocks
+
+Several operations must happen in the deterministic context — after the nondet block returns:
+
+| Operation | Why |
+|-----------|-----|
+| **Storage writes** (`self.x = ...`) | Storage must only change based on consensus-agreed values |
+| **Contract calls** (`gl.get_contract_at()`) | Cross-contract calls must use deterministic state |
+| **Message emission** (`.emit()`) | Messages to other contracts/chains must be deterministic |
+| **Nested nondet blocks** | Nondet blocks cannot contain other nondet blocks |
+
+```python
+@gl.public.write
+def update_price(self, pair: str):
+    def leader_fn():
+        response = gl.nondet.web.get(api_url)
+        return json.loads(response.body.decode("utf-8"))["price"]
+        # self.price = price                # ✗ no storage writes here
+        # other = gl.get_contract_at(addr)  # ✗ no contract calls here
+        # other.emit().notify(price)        # ✗ no message emission here
+
+    def validator_fn(leaders_res) -> bool:
+        if not isinstance(leaders_res, gl.vm.Return):
+            return False
+        my_price = leader_fn()
+        return abs(leaders_res.calldata - my_price) / leaders_res.calldata <= 0.02
+
+    price = gl.vm.run_nondet_unsafe(leader_fn, validator_fn)
+
+    # ✓ All side effects happen AFTER consensus, in deterministic context
+    self.prices[pair] = price
+    oracle = gl.get_contract_at(self.oracle_address)
+    oracle.emit().price_updated(pair, price)
+```
+
+<Callout type="info">
+The [GenVM linter](/developers/intelligent-contracts/tooling-setup) catches all of these violations statically — run `genvm-lint check` before deploying to avoid runtime errors.
+</Callout>
+
+### Why these rules exist
+
+The leader and validators execute nondet blocks **independently** — each node runs its own `leader_fn` or `validator_fn`. If you wrote to storage inside a nondet block, each node would write a different value before consensus decides which one is correct. The same applies to contract calls and message emission: these must happen once, after consensus, using the agreed-upon result.
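A toy simulation makes the divergence concrete. Here `median` merely stands in for consensus (GenLayer's actual mechanism is different); the point is that storage changes once, from one agreed value:

```python
import statistics

# Three "nodes" independently fetch a price; each observes a slightly different value.
observed = {"node_a": 100.2, "node_b": 100.1, "node_c": 100.3}

# ✗ Writing inside the nondet block would mean three divergent storage states:
divergent_states = {node: price for node, price in observed.items()}
assert len(set(divergent_states.values())) == 3  # no single canonical value

# ✓ Consensus first picks one agreed value; only then does storage change, once:
agreed = statistics.median(observed.values())
storage = {"price": agreed}
```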
+
+## Equivalence Principle

 GenLayer provides `strict_eq` for exact-match consensus and custom validator functions (`run_nondet_unsafe`) for everything else. Convenience wrappers like `prompt_comparative` and `prompt_non_comparative` exist for common patterns. For detailed information, see [Equivalence Principle](/developers/intelligent-contracts/equivalence-principle).