Conversation

@mariano mariano commented Oct 31, 2025

Summary

  • Add environment prerequisites, installation, and CLI verification guidance to the getting-started notebook
  • Extend the tutorial with sections on model installation, caching, memory indexing, agent scaffolding, and optional AWS deployment

Testing

  • poetry run pytest --verbose -s

https://chatgpt.com/codex/tasks/task_e_690425178bc0832395a7cd656e07063d

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Comment on lines +114 to +118
"When you're ready to try a model locally, download it to the cache or a custom directory:\n",
"\n",
"```\n",
"poetry run avalan model install <model-id> --revision main --local-dir ./models\n",
"```\n",


P1: Remove unsupported `--revision` flag from the model install example

The getting-started notebook shows `poetry run avalan model install <model-id> --revision main --local-dir ./models`, but the `model install` command only accepts `model`, `--workers`, `--local-dir`, and `--local-dir-symlinks` as arguments (defined in `src/avalan/cli/__main__.py`). Because `--revision` is not recognized, running the snippet fails with an argparse error, so the tutorial currently leads readers to a dead end.
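
Based on the flags this review lists as supported, a corrected version of the snippet might look like the following sketch (the model id stays a placeholder, and `./models` is carried over from the original example):

```shell
# Sketch of a corrected install command: drop the unrecognized --revision
# flag and keep only arguments the CLI reportedly accepts
# (model, --workers, --local-dir, --local-dir-symlinks).
poetry run avalan model install <model-id> --local-dir ./models
```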

Comment on lines +120 to +124
"For a quick quality check, stream tokens directly from the model without building an agent:\n",
"\n",
"```\n",
"poetry run avalan model run <model-id> --prompt \"Hello there\" --max-new-tokens 64\n",
"```"

P1: Replace nonexistent `--prompt` option in the model run snippet

The token streaming example recommends `poetry run avalan model run <model-id> --prompt "Hello there" --max-new-tokens 64`, but the CLI does not define a `--prompt` flag; input is provided interactively or via stdin. Running the command as written results in `error: unrecognized arguments: --prompt`, preventing users from reproducing the quick quality check.
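
Since the review notes that input is provided interactively or via stdin, a corrected quick check might pipe the prompt in instead (the model id remains a placeholder; `--max-new-tokens` is carried over from the original snippet):

```shell
# Sketch of a corrected quality check: supply the prompt on stdin,
# since the CLI reportedly has no --prompt flag.
echo "Hello there" | poetry run avalan model run <model-id> --max-new-tokens 64
```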

@mariano mariano closed this Nov 3, 2025
@mariano mariano deleted the improve-getting_started.ipynb branch November 3, 2025 15:12