docs: add examples for tool calling and llm_task (fixes #26) #29
gandli wants to merge 3 commits into openclaw:main from
Conversation
Added comprehensive documentation for Lobster workflows:
- openclaw.invoke examples with message tool and --each flag
- llm_task.invoke examples with prompts, schemas, and artifacts
- Detailed section on passing data between steps
- Cookbook with 5 common patterns:
  - Fetch → LLM → Notify pipeline
  - Approval workflow with LLM recommendation
  - Batch processing with --each
  - Conditional steps based on LLM output
  - Retry with different models
- Daily standup example from issue openclaw#26
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a155e6b81f
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
README.md (outdated)
```yaml
- id: handle-urgent
  command: openclaw.invoke --tool message --action send --args-json '{"provider":"discord","channel":"urgent","message":"Urgent item detected!"}'
  condition: 'test "$classify.json.priority" = "urgent"'
```
Use supported condition syntax in cookbook examples
Workflow file conditions are not shell commands: evaluateCondition in src/workflows/file.ts only accepts true/false or $<step>.approved|skipped, and throws Unsupported condition for other strings. This test ... condition (and the similar one in Pattern 5) will cause the documented workflows to fail at runtime instead of branching when users copy the example.
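Under that grammar, one hedged way to sketch the branch is an approval gate instead of a shell test (step ids follow the example; whether a human approval fits the flow is a judgment call):

```yaml
- id: classify
  command: openclaw.invoke --tool llm_task --action invoke --args-json '{"prompt":"Classify the item priority"}'
  approval: required              # a human confirms when the item is urgent
- id: handle-urgent
  command: openclaw.invoke --tool message --action send --args-json '{"provider":"discord","channel":"urgent","message":"Urgent item detected!"}'
  condition: $classify.approved   # supported grammar: true/false or $<step>.approved|skipped
```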
README.md (outdated)
```yaml
env:
  MSG: "$analyze.stdout"
command: >
  openclaw.invoke --tool message --action send --args-json "{\"provider\":\"discord\",\"channel\":\"$LOBSTER_ARG_CHANNEL\",\"message\":\"$MSG\"}"
```
Escape LLM output before embedding into args-json
This command interpolates $MSG directly inside a JSON literal passed to --args-json; if the LLM output contains quotes or newlines, shell expansion produces invalid JSON and openclaw.invoke fails with --args-json must be valid JSON. Because arbitrary model text frequently includes these characters, this example is unreliable unless the payload is JSON-escaped first (for example via jq --arg).
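A minimal sketch of the jq-based escaping the review suggests (variable names are illustrative, and the final openclaw.invoke call is shown only as a comment):

```shell
# LLM output often contains quotes and newlines that break naive
# interpolation into a JSON literal.
MSG='Summary: the service said "OK"
then restarted.'

# jq --arg JSON-escapes the value, so the payload is always valid JSON.
PAYLOAD=$(jq -n --arg msg "$MSG" \
  '{provider: "discord", channel: "general", message: $msg}')

# The escaped payload can then be passed on, e.g.:
#   openclaw.invoke --tool message --action send --args-json "$PAYLOAD"
echo "$PAYLOAD"
```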
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a155e6b81f
README.md (outdated)
**Basic syntax:**
```bash
llm_task.invoke --prompt 'Your prompt here'
```
Use an installed executable for llm_task.invoke examples
These examples present llm_task.invoke as a shell command, but Lobster only installs openclaw.invoke and clawd.invoke shims in package.json while workflow steps run under /bin/sh; on a normal install this command is not found, so users following this snippet will hit llm_task.invoke: not found unless they manually add their own wrapper.
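A hedged sketch of the replacement: route the call through the installed openclaw.invoke shim rather than a nonexistent llm_task.invoke executable (the tool name and prompt here are illustrative):

```bash
openclaw.invoke --tool llm_task --action invoke --args-json '{"prompt":"Your prompt here"}'
```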
README.md (outdated)
```yaml
  command: echo '[{"user":"alice"},{"user":"bob"}]'
- id: notify-each
  command: >
    lobster 'exec --json --stdin json | openclaw.invoke --tool message --action send --each --item-key to --args-json "{\"provider\":\"discord\",\"message\":\"Hello!\"}"'
```
Remove bare exec stage from pipeline examples
This pipeline starts with exec --json --stdin json but provides no command to run; src/commands/stdlib/exec.ts explicitly throws when exec has no command (exec requires a command), so the documented pattern fails before openclaw.invoke is reached.
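A sketch of the corrected pattern without the bare exec stage, assuming `stdin: $stepId.stdout` piping between steps as these docs describe elsewhere:

```yaml
- id: users
  command: echo '[{"user":"alice"},{"user":"bob"}]'
- id: notify-each
  stdin: $users.stdout
  command: >
    openclaw.invoke --tool message --action send --each --item-key to
    --args-json "{\"provider\":\"discord\",\"message\":\"Hello!\"}"
```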
README.md (outdated)
```yaml
- id: handle-urgent
  command: openclaw.invoke --tool message --action send --args-json '{"provider":"discord","channel":"urgent","message":"Urgent item detected!"}'
  condition: 'test "$classify.json.priority" = "urgent"'
```
Use supported workflow condition syntax in branching example
This condition uses a shell test expression, but workflow conditions are not shell-evaluated: evaluateCondition in src/workflows/file.ts only accepts true, false, or $<step>.(approved|skipped) and throws on other strings, so this branching example errors at runtime instead of routing by priority.
- Replace llm_task.invoke with openclaw.invoke (lobster only installs openclaw.invoke and clawd.invoke)
- Remove invalid exec --stdin without command patterns
- Fix condition syntax: only true/false or $step.approved|skipped are supported
- Add jq-based JSON escaping for LLM output in message payloads

Fixes Codex review feedback on PR openclaw#29
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a7e90e1e48
README.md (outdated)
```yaml
  PROJECT: "$LOBSTER_ARG_PROJECT"
  LIMIT: "$LOBSTER_ARG_LIMIT"
command: >
  jq -n --argjson tickets "$cat" --arg team "$TEAM" --arg project "$PROJECT" --arg limit "$LIMIT" '{"prompt":("Summarize the top " + $limit + " most urgent tickets for the daily standup. Team: " + $team + ", Project: " + $project),"context":$tickets}' |
```
Read stdin data instead of the undefined `$cat` variable
In the daily-standup example, jq -n --argjson tickets "$cat" ... reads from a shell variable named cat, but that variable is never set; in /bin/sh this expands to an empty string, so jq --argjson errors with invalid JSON and the step fails before openclaw.invoke runs. This makes the showcased workflow non-runnable as written.
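A sketch of the fix, assuming the step receives the tickets JSON on stdin (simulated here with printf; in the workflow it would be `tickets=$(cat)` fed by the prior step):

```shell
# "$(cat)" captures the step's stdin, whereas "$cat" is an unset shell
# variable that expands to "" and makes jq --argjson fail.
tickets=$(printf '%s' '[{"id":1,"priority":"urgent"}]')  # stand-in for: tickets=$(cat)

# Build the prompt payload; --argjson keeps the tickets as structured JSON.
payload=$(jq -n --argjson tickets "$tickets" \
  --arg team "core" --arg limit "5" \
  '{prompt: ("Summarize the top " + $limit + " most urgent tickets for team " + $team),
    context: $tickets}')
echo "$payload"
```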
README.md (outdated)
```yaml
  LIMIT: "$LOBSTER_ARG_LIMIT"
command: >
  jq -n --argjson tickets "$cat" --arg team "$TEAM" --arg project "$PROJECT" --arg limit "$LIMIT" '{"prompt":("Summarize the top " + $limit + " most urgent tickets for the daily standup. Team: " + $team + ", Project: " + $project),"context":$tickets}' |
  openclaw.invoke --tool llm_task --action invoke --args-json @/dev/stdin
```
Pass a valid JSON string to --args-json
This passes @/dev/stdin to --args-json, but openclaw.invoke parses that flag with JSON.parse (see src/commands/stdlib/openclaw_invoke.ts), so non-JSON tokens are rejected with openclaw.invoke --args-json must be valid JSON. As written, the standup example fails at runtime even if the prior jq command succeeds.
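One hedged way to satisfy JSON.parse is to capture the jq output in a shell variable and expand it into the flag, quoted so the JSON survives word-splitting (the llm-task tool name and step shape are illustrative):

```yaml
command: >
  payload=$(jq -n --argjson tickets "$(cat)" --arg team "$TEAM"
    '{prompt: ("Summarize urgent tickets for " + $team), context: $tickets}');
  openclaw.invoke --tool llm-task --action invoke --args-json "$payload"
```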
```yaml
    else
      echo '{"success": true}'
    fi
approval: required
```
Remove unconditional approval from retry control step
Marking check-error as approval: required forces an approval pause on every run, including the success path that emits {"success": true}; workflow execution then sets $check-error.approved from the human response, so approving that fallback prompt can trigger the fallback model call even when the primary call succeeded. This breaks the intended “retry only on failure” behavior of the pattern.
- Fix daily-standup example: use proper stdin handling instead of undefined $cat variable
- Fix --args-json usage: pass valid JSON string instead of @/dev/stdin
- Fix retry example: clarify approval behavior and add automatic retry alternative
- Add comments explaining the approval mechanism limitations
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ad39be28a1
### Calling the LLM task tool via openclaw.invoke

Use `openclaw.invoke` with `--tool llm_task` (or your configured LLM tool name) to call the LLM:
Use default llm-task tool name in LLM examples
These docs tell users to call OpenClaw with --tool llm_task, but Lobster’s built-in LLM invocation path uses tool: 'llm-task' (src/commands/stdlib/llm_task_invoke.ts:454). In environments that only expose the default tool name, copying these examples will fail with an unknown-tool error, so the new LLM workflow guidance is not runnable as written.
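Assuming only the default tool name is exposed, the snippet would read (prompt is illustrative):

```bash
openclaw.invoke --tool llm-task --action invoke --args-json '{"prompt":"Summarize this repo"}'
```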
```yaml
# Only run fallback if error was detected AND user approved retry
condition: $on-error.approved
```
Gate fallback step on primary failure
This condition checks only whether the approval step was accepted, not whether the primary call failed. As written, approving after a successful primary invocation still runs the fallback model, which contradicts the inline comment and triggers an unnecessary second LLM request (extra cost/latency and potentially conflicting outputs).
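A sketch of one possible fix under the supported condition grammar: make the approval step itself conditional on the failure path, so its approval can only exist after a failure. The `primary-check` step id is hypothetical, and whether the success path exposes `skipped` this way depends on the runner:

```yaml
- id: on-error
  approval: required                  # pause for a human only on the failure path
  condition: $primary-check.skipped   # illustrative: assumes success skips primary-check
- id: fallback
  condition: $on-error.approved       # reachable only when on-error ran and was approved
  command: openclaw.invoke --tool llm-task --action invoke --args-json '{"prompt":"Retry with fallback model"}'
```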
Summary
This PR adds comprehensive documentation for Lobster workflows, addressing issue #26 which requested better examples for tool calling and llm_task.
Changes
- openclaw.invoke examples (--each flag)
- llm_task.invoke examples
- Data passing between steps (`stdin: $stepId.stdout` to pipe data, `.json`)
- Cookbook: Common Patterns (--each for multiple items)

Testing
All examples follow the existing Lobster YAML syntax and are based on the test cases in the repository.
Fixes #26