Merged
2 changes: 1 addition & 1 deletion dream-server/README.md
@@ -139,7 +139,7 @@ Both tiers use `qwen2.5:7b` as a bootstrap model for instant startup. The full m
|------|-------------|-------|-------|---------|-----------------|
| 1 (Entry) | 8–24GB | qwen3.5-9b | GGUF Q4_K_M | 16K | M1/M2 base, M4 Mac Mini (16GB) |
| 2 (Prosumer) | 32GB | qwen3.5-9b | GGUF Q4_K_M | 32K | M4 Pro Mac Mini, M3 Max MacBook Pro |
-| 3 (Pro) | 48GB | gpt-oss-20b | GGUF Q4_K_M | 32K | M4 Pro (48GB), M2 Max (48GB) |
+| 3 (Pro) | 48GB | qwen3.5-27b | GGUF Q4_K_M | 32K | M4 Pro (48GB), M2 Max (48GB) |
| 4 (Enterprise) | 64GB+ | qwen3-30b-a3b (30B MoE) | GGUF Q4_K_M | 131K | M2 Ultra Mac Studio, M4 Max (64GB+) |

Override with: `./install.sh --tier 3`
8 changes: 4 additions & 4 deletions dream-server/agents/templates/README.md
@@ -3,7 +3,7 @@
**Mission:** M7 (OpenClaw Frontier Pushing)
**Status:** 5 templates created, awaiting validation

-Validated agent templates that work reliably on local GPT-OSS-20B.
+Validated agent templates that work reliably on local Qwen3.5-27B.

## Templates

@@ -29,12 +29,12 @@ Validated agent templates that work reliably on local GPT-OSS-20B.
agent:
template: code-assistant
override:
-model: local-llama/gpt-oss-20b
+model: local-llama/qwen3.5-27b
```

## Validation Results (2026-02-11)

-Tested on: GPT-OSS-20B-Instruct-AWQ (local)
+Tested on: Qwen3.5-27B-Instruct-AWQ (local)
Test command: `python3 tests/validate-agent-templates.py`

| Template | Tests | Passed | Status |
@@ -55,7 +55,7 @@ Test command: `python3 tests/validate-agent-templates.py`

## Design Principles

-1. **Local-first:** Templates optimized for GPT-OSS-20B (free, fast, private)
+1. **Local-first:** Templates optimized for Qwen3.5-27B (free, fast, private)
2. **Fallback-aware:** Creative tasks route to Kimi; technical tasks stay local
3. **Tool-appropriate:** Each template gets only the tools it needs
4. **Safety-conscious:** Dangerous operations flagged (system-admin)
4 changes: 2 additions & 2 deletions dream-server/agents/templates/code-assistant.yaml
@@ -1,13 +1,13 @@
# Code Assistant Agent Template
# Mission: M7 (OpenClaw Frontier Pushing)
-# Validated on: GPT-OSS-20B
+# Validated on: Qwen3.5-27B
# Purpose: Programming help, debugging, code review

agent:
name: code-assistant
description: "Programming assistant for code generation, debugging, and review"

-model: local-llama/gpt-oss-20b
+model: local-llama/qwen3.5-27b
# Qwen Coder excels at programming tasks - no fallback needed

system_prompt: |
4 changes: 2 additions & 2 deletions dream-server/agents/templates/data-analyst.yaml
@@ -1,13 +1,13 @@
# Data Analyst Agent Template
# Mission: M7 (OpenClaw Frontier Pushing)
-# Validated on: GPT-OSS-20B
+# Validated on: Qwen3.5-27B
# Purpose: CSV/JSON analysis, data processing, visualization guidance

agent:
name: data-analyst
description: "Data analysis assistant for processing CSV, JSON, and structured data"

-model: local-llama/gpt-oss-20b
+model: local-llama/qwen3.5-27b
# Coder model excels at data manipulation tasks

system_prompt: |
4 changes: 2 additions & 2 deletions dream-server/agents/templates/research-assistant.yaml
@@ -1,13 +1,13 @@
# Research Assistant Agent Template
# Mission: M7 (OpenClaw Frontier Pushing)
-# Validated on: GPT-OSS-20B
+# Validated on: Qwen3.5-27B
# Purpose: Web research, summarization, fact-checking

agent:
name: research-assistant
description: "Research assistant for web search, summarization, and analysis"

-model: local-llama/gpt-oss-20b
+model: local-llama/qwen3.5-27b
# Falls back to Kimi for complex synthesis if needed
fallback_model: moonshot/kimi-k2-0711-preview

4 changes: 2 additions & 2 deletions dream-server/agents/templates/system-admin.yaml
@@ -1,13 +1,13 @@
# System Admin Assistant Agent Template
# Mission: M7 (OpenClaw Frontier Pushing)
-# Validated on: GPT-OSS-20B
+# Validated on: Qwen3.5-27B
# Purpose: Docker management, server administration, troubleshooting

agent:
name: system-admin
description: "System administration assistant for Docker, Linux, and server management"

-model: local-llama/gpt-oss-20b
+model: local-llama/qwen3.5-27b
# Coder model excels at system commands and scripting

system_prompt: |
4 changes: 2 additions & 2 deletions dream-server/agents/templates/writing-assistant.yaml
@@ -1,14 +1,14 @@
# Writing Assistant Agent Template
# Mission: M7 (OpenClaw Frontier Pushing)
-# Validated on: GPT-OSS-20B
+# Validated on: Qwen3.5-27B
# Purpose: Creative writing, editing, style improvement
# NOTE: Local Qwen has limitations on creative tasks - use with fallback

agent:
name: writing-assistant
description: "Writing assistant for drafting, editing, and improving text"

-model: local-llama/gpt-oss-20b
+model: local-llama/qwen3.5-27b
# IMPORTANT: Qwen3 is NOT optimized for creative writing
# This template uses fallback for creative generation tasks
fallback_model: moonshot/kimi-k2-0711-preview
10 changes: 5 additions & 5 deletions dream-server/installers/lib/tier-map.sh
@@ -98,10 +98,10 @@ resolve_tier_config() {
;;
3)
TIER_NAME="Pro"
-LLM_MODEL="gpt-oss-20b"
-GGUF_FILE="gpt-oss-20b-Q4_K_M.gguf"
-GGUF_URL="https://huggingface.co/unsloth/gpt-oss-20b-GGUF/resolve/main/gpt-oss-20b-Q4_K_M.gguf"
-GGUF_SHA256="c27536640e410032865dc68781d80a08b98f8db5e93575919af8ccc0568aeb4f"
+LLM_MODEL="qwen3.5-27b"
+GGUF_FILE="Qwen3.5-27B-Q4_K_M.gguf"
+GGUF_URL="https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/resolve/main/Qwen3.5-27B-Q4_K_M.gguf"
+GGUF_SHA256="84b5f7f112156d63836a01a69dc3f11a6ba63b10a23b8ca7a7efaf52d5a2d806"
MAX_CONTEXT=32768
;;
4)
@@ -132,7 +132,7 @@ tier_to_model() {
0|T0) echo "qwen3.5-2b" ;;
1|T1) echo "qwen3.5-9b" ;;
2|T2) echo "qwen3.5-9b" ;;
-3|T3) echo "gpt-oss-20b" ;;
+3|T3) echo "qwen3.5-27b" ;;
4|T4) echo "qwen3-30b-a3b" ;;
*) echo "" ;;
esac
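The tier entries above pair each GGUF download with a pinned SHA-256 (`GGUF_SHA256`). A minimal sketch of how an installer could check a downloaded file against that pin follows; `verify_gguf` is a hypothetical helper for illustration, not a function in tier-map.sh, and it assumes either `sha256sum` (Linux) or `shasum` (macOS) is on PATH:

```shell
# Hypothetical checksum guard for a downloaded GGUF (not part of tier-map.sh).
# Usage: verify_gguf <file> <expected-sha256>
verify_gguf() {
  local file="$1" expected="$2" actual
  # Both tools print "<hash>  <file>"; keep only the hash field.
  actual="$( (sha256sum "$file" 2>/dev/null || shasum -a 256 "$file") | awk '{print $1}')"
  if [ "$actual" = "$expected" ]; then
    echo "checksum ok: $file"
  else
    echo "checksum mismatch: $file" >&2
    echo "  expected: $expected" >&2
    echo "  actual:   $actual" >&2
    return 1
  fi
}
```

In `resolve_tier_config` terms the call would look like `verify_gguf "$GGUF_FILE" "$GGUF_SHA256"` after the download completes, aborting the install on mismatch.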
8 changes: 4 additions & 4 deletions dream-server/installers/macos/lib/tier-map.sh
@@ -39,10 +39,10 @@ resolve_tier_config() {
;;
3)
TIER_NAME="Pro"
-LLM_MODEL="gpt-oss-20b"
-GGUF_FILE="gpt-oss-20b-Q4_K_M.gguf"
-GGUF_URL="https://huggingface.co/unsloth/gpt-oss-20b-GGUF/resolve/main/gpt-oss-20b-Q4_K_M.gguf"
-GGUF_SHA256="c27536640e410032865dc68781d80a08b98f8db5e93575919af8ccc0568aeb4f"
+LLM_MODEL="qwen3.5-27b"
+GGUF_FILE="Qwen3.5-27B-Q4_K_M.gguf"
+GGUF_URL="https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/resolve/main/Qwen3.5-27B-Q4_K_M.gguf"
+GGUF_SHA256="84b5f7f112156d63836a01a69dc3f11a6ba63b10a23b8ca7a7efaf52d5a2d806"
MAX_CONTEXT=32768
;;
2)
2 changes: 1 addition & 1 deletion dream-server/installers/phases/13-summary.sh
@@ -262,7 +262,7 @@ if ! $DRY_RUN; then
if [[ -x "$INSTALL_DIR/scripts/repair/repair-perplexica.sh" ]]; then
bash "$INSTALL_DIR/scripts/repair/repair-perplexica.sh" \
"http://localhost:${SERVICE_PORTS[perplexica]:-3004}" \
-"${LLM_MODEL:-gpt-oss-20b}" >> "$LOG_FILE" 2>&1 && \
+"${LLM_MODEL:-qwen3.5-27b}" >> "$LOG_FILE" 2>&1 && \
ai_ok "Perplexica configured" || \
ai_warn "Perplexica may need manual config at :${SERVICE_PORTS[perplexica]:-3004}"
fi
10 changes: 5 additions & 5 deletions dream-server/installers/windows/lib/tier-map.ps1
@@ -88,10 +88,10 @@ function Resolve-TierConfig {
"3" {
return @{
TierName = "Pro"
-LlmModel = "gpt-oss-20b"
-GgufFile = "gpt-oss-20b-Q4_K_M.gguf"
-GgufUrl = "https://huggingface.co/unsloth/gpt-oss-20b-GGUF/resolve/main/gpt-oss-20b-Q4_K_M.gguf"
-GgufSha256 = "c27536640e410032865dc68781d80a08b98f8db5e93575919af8ccc0568aeb4f"
+LlmModel = "qwen3.5-27b"
+GgufFile = "Qwen3.5-27B-Q4_K_M.gguf"
+GgufUrl = "https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/resolve/main/Qwen3.5-27B-Q4_K_M.gguf"
+GgufSha256 = "84b5f7f112156d63836a01a69dc3f11a6ba63b10a23b8ca7a7efaf52d5a2d806"
MaxContext = 32768
}
}
@@ -158,7 +158,7 @@ function ConvertTo-ModelFromTier {
"^(0|T0)$" { return "qwen3.5-2b" }
"^(1|T1)$" { return "qwen3.5-9b" }
"^(2|T2)$" { return "qwen3.5-9b" }
-"^(3|T3)$" { return "gpt-oss-20b" }
+"^(3|T3)$" { return "qwen3.5-27b" }
"^(4|T4)$" { return "qwen3-30b-a3b" }
default { return "" }
}
1 change: 1 addition & 0 deletions dream-server/installers/windows/phases/02-detection.ps1
@@ -138,6 +138,7 @@ Write-InfoBox " Capacity:" "$_usersEst"
$_modelGB = $(
if ($tierConfig.GgufFile -match "80B|Coder-Next") { 50 }
elseif ($tierConfig.GgufFile -match "30B") { 20 }
+elseif ($tierConfig.GgufFile -match "27B") { 18 }
elseif ($tierConfig.GgufFile -match "20b") { 12 }
elseif ($tierConfig.GgufFile -match "14B") { 12 }
elseif ($tierConfig.GgufFile -match "9B|8B") { 8 }
2 changes: 1 addition & 1 deletion dream-server/scripts/repair/repair-perplexica.sh
@@ -5,7 +5,7 @@ set -euo pipefail
# Requires: Perplexica container running, python3 available

PERPLEXICA_URL="${1:-http://localhost:3004}"
-LLM_MODEL="${2:-gpt-oss-20b}"
+LLM_MODEL="${2:-qwen3.5-27b}"

SCRIPT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
PYTHON_CMD="python3"
2 changes: 1 addition & 1 deletion dream-server/tests/test-hardware-compatibility.sh
@@ -150,7 +150,7 @@ if [[ -f "$TIER_MAP" ]]; then

# Verify tier progression (higher tiers = larger models)
if grep -A5 "^[[:space:]]*1)" "$TIER_MAP" | grep -q "qwen3.5-9b" && \
-grep -A5 "^[[:space:]]*3)" "$TIER_MAP" | grep -q "gpt-oss-20b"; then
+grep -A5 "^[[:space:]]*3)" "$TIER_MAP" | grep -q "qwen3.5-27b"; then
pass "Tier progression validated (tier 1 < tier 3 model size)"
else
fail "Tier progression should increase model size"
4 changes: 2 additions & 2 deletions dream-server/tests/test-tier-map.sh
@@ -73,8 +73,8 @@ echo ""
echo "Tier 3 (Pro):"
run_tier 3
assert_eq "TIER_NAME" "Pro" "$TIER_NAME"
-assert_eq "LLM_MODEL" "gpt-oss-20b" "$LLM_MODEL"
-assert_eq "GGUF_FILE" "gpt-oss-20b-Q4_K_M.gguf" "$GGUF_FILE"
+assert_eq "LLM_MODEL" "qwen3.5-27b" "$LLM_MODEL"
+assert_eq "GGUF_FILE" "Qwen3.5-27B-Q4_K_M.gguf" "$GGUF_FILE"
assert_eq "MAX_CONTEXT" "32768" "$MAX_CONTEXT"
echo ""
