Merged
192 changes: 65 additions & 127 deletions EXAMPLES.md
@@ -1,174 +1,112 @@
# Examples

First of all, please see help for additional information on what can be done:
Dive deeper into the subject you're interested in by browsing the files under [./architecture](./architecture); see the links below.

```bash
clai help `# For more info about the available commands (and shorthands)`
```

### Queries
## Query + reply + directory reply

```bash
clai query My favorite color is blue, tell me some facts about it
clai query "Explain the design"
clai -re query "Now give the trade-offs"
clai -dre query "Apply it to this repo"
```

```bash
clai -re `# Use the -re flag to use the previous query as context for some next query` \
q Write a poem about my favorite colour
```

Personally I have `alias ask=clai q` and then `alias rask=clai -re q`.
This way I can `ask` -> `rask` -> `rask` for a temporary conversation.

Every 'temporary conversation' is also saved as a chat, so it's possible to continue it later, see below on how to list chats.

Piping into queries is also very useful:

<div align="center">
<img src="img/piping.gif" alt="Banner">
</div>
- `query` saves reply context to `<clai-config>/conversations/globalScope.json`.
- Non-reply queries also bind CWD → chat ID (dir-scope).
- `-re` loads `globalScope.json` as context.
- `-dre` first copies the directory-bound chat into `globalScope.json`, then uses normal `-re` plumbing.
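The mechanics above can be poked at directly, since the reply context is just a JSON file. The transcript shape below is a guess for illustration (inspect a real file under `<clai-config>/conversations/` for the actual schema), and it assumes `jq` is installed:

```shell
# Hypothetical transcript layout -- the real schema may differ.
cat > /tmp/globalScope.json <<'EOF'
{"id":"globalScope","messages":[{"role":"user","content":"Explain the design"},{"role":"assistant","content":"It is a pipeline."}]}
EOF

# Print the last message: roughly what `-re` would pick up as context.
jq -r '.messages[-1].content' /tmp/globalScope.json
```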

With this, any data may be piped into Clai -> some LLM.
See: [`QUERY.md`](./architecture/QUERY.md), [`CHAT.md`](./architecture/CHAT.md), [`DRE.md`](./architecture/DRE.md).

Some tips on what to pipe into it:

- `git diff | clai query Please analyze this diff and give me a review` (tip: you may also use the [git tool](../internal/tools/programming_tool_git.go))
- `cat /tmp/some-well-prompted-specialized-task | clai query Do this task:` (tip: you may also use the [cat tool](../internal/tools/bash_tool_cat.go))
- `dig lorentz.app | clai query Please tell me what this arcane output means:`

### Tooling

Many vendors support function calling/tooling.
This means that the AI model will ask _your local machine_ to run some command, then analyze the output of that command.

See all the currently available tools [here](./internal/tools/).
## Inspect “what did it say last time?”

```bash
clai -t q `# Specify you wish to enable tools with -t/-tools` \
Analyze the project found at ~/Projects/clai and give me a brief summary of what it does
clai replay # last message from globalScope.json
clai dre # last message from directory-bound chat
clai -r replay # raw (no pretty/glow)
```

There's also support for [MCP servers](https://modelcontextprotocol.io/examples).

Easiest way to integrate them is to paste them using the setup command: `clai setup -> 3 -> 'p'`, following this format:

```json
{
"mcpServers": {
"everything": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-everything"
],
"envfile": "/path/to/mcp.env"
}
}
}
```
- `replay` and `dre` do not call any LLM.
- They load a chat transcript and pretty-print the last message.

Use `envfile` to load credentials from a separate file so you can check profiles and mcpServers into version control without secrets.
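A minimal sketch of such an envfile -- the variable name here is hypothetical; use whatever names your MCP server actually expects:

```shell
# mcp.env -- keep this file out of version control.
cat > /tmp/mcp.env <<'EOF'
MCP_API_KEY=replace-me
EOF
chmod 600 /tmp/mcp.env   # credentials should not be world-readable
```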
See: [`REPLAY.md`](./architecture/REPLAY.md), [`DRE.md`](./architecture/DRE.md).

### Conversations

```bash
clai -chat-model claude-sonnet-4-20250514 `# Using some other model` \
chat new Lets have a conversation about Hegel
```

The `-cm`/`-chat-model` flag works for any text-like command.
Meaning: you can start a conversation with one chat model, then continue it with another.

Continue a previous conversation with:
## Bind a previous conversation to the current directory

```bash
clai chat list
clai chat continue 3
clai -dre query "Continue from that context"
```

<div align="center">
<img src="img/chats.gif" alt="Banner">
</div>
- `chat continue <index|id>` selects an existing transcript and stores a directory binding.
- After that, `-dre` in this directory uses that conversation as context.

or by using the chat's filename, which is generated from the first five tokens in the chat.
See: [`CHAT.md`](./architecture/CHAT.md).

```bash
clai c continue Lets_have_a_conversation_about
```
## Profiles = workflow presets

```bash
clai c continue 1 kant is better `# Continue a previous chat with a new message`
clai profiles list
clai -p ops query "Find the owners of this subsystem"
```

Within [os.GetConfigDir()](https://pkg.go.dev/os#UserConfigDir)`/.clai/conversations` you'll find all the conversations.
You can also modify the chats here as a way to prompt, or create entirely new ones as you see fit.
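For example, assuming a transcript is a JSON object with a `messages` array (a guess -- inspect a real conversation file first), you could append a steering message with `jq`:

```shell
# Hypothetical transcript shape; the real schema may differ.
conv=/tmp/demo_chat.json
cat > "$conv" <<'EOF'
{"id":"demo","messages":[{"role":"user","content":"Explain the design"}]}
EOF

# Append a message; a later `chat continue` would then see it as history.
jq '.messages += [{"role":"user","content":"Keep answers under 100 words."}]' "$conv" > "$conv.tmp" && mv "$conv.tmp" "$conv"
jq '.messages | length' "$conv"
```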

### Profiles
- Profiles live in `<clai-config>/profiles/*.json`.
- They can override model, prompts, and requested tools.
- Colloquially, these are "agent configurations".
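A sketch of what a profile might look like -- the field names below are guesses, so check [./examples/profiles](./examples/profiles/) for the real schema:

```shell
# Hypothetical profile fields: model, prompt, tools.
cat > /tmp/terry.json <<'EOF'
{
  "model": "claude-sonnet-4-20250514",
  "prompt": "You are a terraform expert. Answer with code first.",
  "tools": ["rg", "cat"]
}
EOF
jq -r '.model' /tmp/terry.json
```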

1. `clai setup -> 2 -> n`
1. Write some profile, example [gopher](./examples/profiles/gopher.json)
1. `cat main.go | clai -profile gopher q Fix the tests in this file: `
See: [`PROFILES.md`](./architecture/PROFILES.md), [`CONFIG.md`](./architecture/CONFIG.md).

See some examples of profiles [here](./examples/profiles/) and then try them out with `go run . -profile-path ./examples/profiles/cody.json query What is your purpose\?`
Also see examples:

Profiles allow you to preconfigure certain fields passed to the LLMs, most notably the prompt and which tools to use.
This, in turn, enables you to quickly swap between different 'LLM-modes'.
- [ops](./examples/profiles/ops.json) - This agent can answer any question about your company's systems and customers
- [tradebot](./examples/profiles/trade-bot.json) - Fully functional Polymarket trade bot, defined as JSON. Swap the prompt and model and try it out!

For instance, you may have one profile prompted for Golang programming tasks, "gopher", with the tools `write_file`, `rg` (ripgrep) and `go` enabled, and another profile for Terraform named "terry".
With these, you don't have to 'pre-prompt' with `clai q _in terraform_ ...` or `clai q _in golang_ ...`; instead you can use `clai -p terry q ...`/`clai -p gopher q ...`, and also restrict which tools are allowed, as opposed to enabling _all_ tools (with `-t`).
## Tools: inspect vs enable

These profiles are saved as json at [os.GetConfigDir()](https://pkg.go.dev/os#UserConfigDir)`/.clai/profiles`.
This means that you can sync them across all of your machines and tweak your prompts wherever you code.

Here I've personally utilized aliases once more.
```bash
clai tools
clai tools rg
clai -t "rg,cat" query "Search for parsing logic and show me the file"
```

- `ask` -> generic profile-less prompt
- `gask` -> `clai -p gopher q`, `grask` -> `clai -re -p gopher q`, and then `task` -> `clai -p terry q`, etc.
- `clai tools` is inspection only.
- `-t` enables tool calling for that _run_; without it, tool calls are disabled.
- `-t "*"` allows all registered tools.

These aliases are later synced with the rest of my dotfiles and clai profiles, so they're shared on all my development machines.
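Spelled out as shell aliases (following the scheme above):

```shell
# Profile-less query, with and without reply context.
alias ask='clai q'
alias rask='clai -re q'
# Profile-scoped variants; "gopher" and "terry" are the example profiles above.
alias gask='clai -p gopher q'
alias grask='clai -re -p gopher q'
alias task='clai -p terry q'
```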
See: [`TOOLS.md`](./architecture/TOOLS.md), [`TOOLING.md`](./architecture/TOOLING.md), [`CONFIG.md`](./architecture/CONFIG.md).

### Photos
## MCP tools (external tool servers)

```bash
printf "flowers" | clai -i ` # stdin replacement works for photos also` \
--photo-prefix=flowercat ` # Sets the prefix for local photo` \
--photo-dir=/tmp ` # Sets the output dir` \
photo A cat made out of {}
clai setup # stage 3: MCP server configs
clai -t "mcp_linear*" query "List open incidents assigned to me"
```

Since `-N` alternatives are disabled for many newer OpenAI models, you can use [repeater](https://github.com/baalimago/repeater) to generate several responses from the same prompt:

```bash
NO_COLOR=true repeater -n 10 -w 3 -increment -file out.txt -output BOTH \
clai -pp flower_INC p A cat made of flowers
```
- MCP server configs are stored in `<clai-config>/mcpServers/*.json`.
- MCP tool names are typically `mcp_<server>_<tool>` (and can be globbed).
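Assuming that naming convention, the glob matching can be sketched with plain shell patterns (the tool name below is made up):

```shell
tool="mcp_linear_list_issues"   # hypothetical MCP tool name
case "$tool" in
  mcp_linear*) echo "enabled: $tool" ;;
  *)           echo "skipped: $tool" ;;
esac
```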

## Configuration
See: [`TOOLING.md`](./architecture/TOOLING.md), [`SETUP.md`](./architecture/SETUP.md).

`clai` will create configuration files at [os.GetConfigDir()](https://pkg.go.dev/os#UserConfigDir)`/.clai/`.
First time you run `clai`, two default command-related configurations will be created: `textConfig.json` and `photoConfig.json`.
These command config files may also be modified with `clai setup -> 0`.
## Multimodal: photo + video

Then, one configuration file will be created for each specific model (for instance `chat-gpt-4.1` -> `openai_chat-gpt_chat-gpt-4.1`).
The model configuration files may be modified with `clai setup -> 1`.
The configuration precedence is as follows (from lowest to highest):

1. Default hard-coded configurations [such as this](./internal/text/conf.go), these get written to file the first time you run `clai`
1. `textConfig.json` or `photoConfig.json`
1. Model-specific configs (`openai_chat-gpt_chat-gpt-4`, and similar)
1. Profiles
1. Flags

```bash
clai photo "A minimal architecture diagram"
clai -re photo "Now simplify it further"
clai video "A slow pan across a terminal showing streaming output"
```

### Models
- `photo`/`video` have separate mode configs: `photoConfig.json`, `videoConfig.json`.
- Output can be saved locally or printed as a URL, depending on config.

There are three ways to configure the models:
See: [`PHOTO.md`](./architecture/PHOTO.md), [`VIDEO.md`](./architecture/VIDEO.md), [`CONFIG.md`](./architecture/CONFIG.md).

1. Set flag `-chat-model` or `-photo-model`
1. Set the `model` field in the `textConfig.json` or `photoConfig.json` file. This will make it default, if not overwritten by flags.
1. [Profiles](#profiles)
## Streaming: one normalized event loop

Then, for each model, a new configuration file will be created.
Since each vendor's model supports quite different configurations, the model configurations aren't exposed as flags.
Instead, modify the model by adjusting its configuration file, found in [os.GetConfigDir()](https://pkg.go.dev/os#UserConfigDir)`/.clai/<vendor>_<model-type>_<model-name>.json`.
This config JSON will in effect be unmarshaled into a request sent to the model's vendor.
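For instance, lowering the sampling temperature for an OpenAI-style model could look like this. The file below is a stand-in for the real config under `<clai-config>`, and the `temperature` field is illustrative -- which fields apply depends on the vendor's request schema:

```shell
conf=/tmp/openai_chat-gpt_chat-gpt-4.1.json   # stand-in for <clai-config>/<vendor>_<model-type>_<model-name>.json
echo '{"model":"chat-gpt-4.1","temperature":1.0}' > "$conf"

# Edit the field in place with jq; clai would unmarshal this into the vendor request.
jq '.temperature = 0.2' "$conf" > "$conf.tmp" && mv "$conf.tmp" "$conf"
jq '.temperature' "$conf"
```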
- All vendors map their streaming responses to a small set of normalized events:
- `string` text deltas
- tool call events
- stop/no-op/error events
- The querier loop is vendor-agnostic: it prints deltas, executes tools, and finalizes output.

The model-specific configs may be modified with `clai setup -> 1`.
See: [`STREAMING.md`](./architecture/STREAMING.md), [`QUERY.md`](./architecture/QUERY.md).
4 changes: 4 additions & 0 deletions README.md
@@ -38,6 +38,10 @@ Install [Glow](https://github.com/charmbracelet/glow) for formatted markdown out

## Features

<div align="center">
<img src="img/showcase.gif" alt="Showcase">
</div>

- **[MCP client support](./EXAMPLES.md#Tooling)** - Add any MCP server you'd like by simply pasting their configuration.
- **Vendor agnosticism** - Use any functionality in Clai with [most LLM vendors](#supported-vendors) interchangeably.
- **[Conversations](./EXAMPLES.md#Conversations)** - Create, manage and continue conversations.
12 changes: 6 additions & 6 deletions architecture/CHAT.md
@@ -8,7 +8,7 @@ There are two related mechanisms:

1. **Conversation transcripts** stored on disk as JSON (so they can be replayed/inspected/edited later).
2. **Reply context pointers**:
- `prevQuery.json` (global reply context)
- `globalScope.json` (global reply context)
- directory-scoped bindings under `conversations/dirs/` (per-CWD reply context)

Important behavioral change vs older versions:
@@ -105,7 +105,7 @@ This makes chats composable with normal shell tooling (pipes, redirects, history
`clai query <prompt>`:

- creates/updates a transcript
- updates the global previous query (`prevQuery.json`)
- updates the global previous query (`globalScope.json`)
- updates the directory binding for the current working directory (CWD)

Subsequent queries can be threaded using:
@@ -171,30 +171,30 @@ After selecting a chat, `actOnChat()` prints a details view and offers actions:

- edit messages (via `$EDITOR`)
- delete messages
- save as `prevQuery.json`
- save as `globalScope.json`

No interactive chat session is started from this UI.

## “Previous query” capture and replay

A special chat file is used for the global reply context:

- `<clai-config>/conversations/prevQuery.json`
- `<clai-config>/conversations/globalScope.json`

Implemented in `internal/chat/reply.go`.

### SaveAsPreviousQuery

`SaveAsPreviousQuery(claiConfDir, msgs)` writes:

- always: `prevQuery.json` with ID `prevQuery`
- always: `globalScope.json` with ID `globalScope`
- additionally (when `len(msgs) > 2`): saves a *new conversation file* derived from the first user message using `HashIDFromPrompt(firstUserMsg.Content)`

This preserves one-off queries and optionally promotes richer exchanges into normal conversations.

### LoadPrevQuery

Loads `prevQuery.json` (printing a warning if absent).
Loads `globalScope.json` (printing a warning if absent).

## Directory-scoped replies

87 changes: 0 additions & 87 deletions architecture/CMD.md

This file was deleted.
