A multipurpose deep learning research agent
This repository started as a combination of learning process and prototyping. It will remain public as proof of work and for others to use in their own journey if they want.
```
├── .github/          # Contains CODEOWNERS and GitHub Actions workflows
├── app/              # Main application files
│   ├── agents/       # Agent creation, tool use, and prompts
│   ├── api/          # Main application files for the server
│   └── shared/       # Shared code modules used throughout the server
├── helm/             # Helm charts
│   ├── templates/    # Helm template files
│   ├── Chart.yaml    # Chart metadata
│   └── values.yaml   # Default chart values
├── tests/            # All testing files for the server application
└── main.py           # Root file for running the server application
```

The following installs are required for running this codebase locally:
First, ensure the proper Python version is installed locally. Install pyenv to manage non-system-level Python installations. This can be done with:
```shell
# Install on macOS
brew install pyenv

# Install on Linux
curl https://pyenv.run | bash

# Install on Windows
winget install pyenv.pyenv-win
```

Next, use pyenv to install Python:
```shell
# Install on macOS, Linux, and Windows
pyenv install 3.12.4
pyenv global 3.12.4

# Check the install afterwards
python --version
```

Next, install uv for package management and virtual environment setup for running the server:
```shell
# On macOS and Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# On Windows
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```

After setting up Python and uv, run the following commands to fully configure local dev dependencies and git hooks for this codebase:
```shell
make sync-dev
make install-hooks
```
Finally, Docker is required to run this codebase locally. The easiest way is to install and set up Docker Desktop. To do that, use one of the following links:
Create a new .env file in the root of the repository and set the required environment variables (using the .env.example file as reference).
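For example, the new file can be created by copying the provided example and then filling in the values. (This is a self-contained sketch: a stand-in `.env.example` is created in a temp directory so the commands run anywhere; in the real repository, just run the `cp` from the repo root, where `.env.example` already exists.)

```shell
# Stand-in example file so this snippet is runnable on its own
cd "$(mktemp -d)"
printf 'SUPERVISOR_MODEL_API_KEY=\nSUPERVISOR_MODEL_NAME=\n' > .env.example

# Start from the documented template, then edit .env to set real values
cp .env.example .env
```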
This repository supports both cloud-based models and locally run models (such as with Ollama or vLLM).
Note that setting an Anthropic/OpenAI key and model name does not require a base URL or provider name, as those are set automatically by the underlying dependencies. Locally run models, however, require all of these environment variables to be filled out.
Here are two example configurations, for cloud-based and local models respectively. Each agent in the environment variable list can be customized individually:
```shell
# Leave the SUPERVISOR_MODEL_BASE_URL and SUPERVISOR_MODEL_PROVIDER blank for Anthropic usage as those are set
# automatically by LangChain.
SUPERVISOR_MODEL_API_KEY=api_key_here
SUPERVISOR_MODEL_BASE_URL=
SUPERVISOR_MODEL_NAME=claude-sonnet-4-5-20250929
SUPERVISOR_MODEL_PROVIDER=
```
```shell
# Setup for a locally run LLM. The SUPERVISOR_MODEL_API_KEY likely won't matter here (unless it's set on the
# process serving the model) and is just required by LangChain's package. For a list of all possible providers that
# LangChain allows, see: https://python.langchain.com/api_reference/langchain/chat_models/langchain.chat_models.base.init_chat_model.html
SUPERVISOR_MODEL_API_KEY=ollama
SUPERVISOR_MODEL_BASE_URL=http://localhost:11434
SUPERVISOR_MODEL_NAME=gpt-oss:20b
SUPERVISOR_MODEL_PROVIDER=ollama
```

At the time of this writing, gpt-oss:20b is an open-source model that performs well with tool calls, thinking, and the overall tasks for this repository's agent.
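To illustrate how such a group of variables could be consumed, here is a minimal Python sketch. The `load_model_config` helper is hypothetical and not the repository's actual loading code; it simply shows the pattern of collecting one agent's `*_API_KEY`/`*_BASE_URL`/`*_NAME`/`*_PROVIDER` variables and dropping the blank ones, so that cloud providers like Anthropic can be inferred automatically downstream.

```python
import os


def load_model_config(prefix: str = "SUPERVISOR_MODEL") -> dict:
    """Collect one agent's model settings from environment variables.

    Hypothetical helper for illustration only. Blank values are dropped,
    mirroring the convention above: for cloud models, base URL and
    provider are left empty and inferred automatically by LangChain.
    """
    config = {
        "api_key": os.getenv(f"{prefix}_API_KEY", ""),
        "base_url": os.getenv(f"{prefix}_BASE_URL", ""),
        "model": os.getenv(f"{prefix}_NAME", ""),
        "model_provider": os.getenv(f"{prefix}_PROVIDER", ""),
    }
    return {key: value for key, value in config.items() if value}


# Example: the local Ollama-style configuration shown above
os.environ.update({
    "SUPERVISOR_MODEL_API_KEY": "ollama",
    "SUPERVISOR_MODEL_BASE_URL": "http://localhost:11434",
    "SUPERVISOR_MODEL_NAME": "gpt-oss:20b",
    "SUPERVISOR_MODEL_PROVIDER": "ollama",
})
print(load_model_config())
```

Because each agent's variables share a distinct prefix, the same helper could be reused per agent by passing a different `prefix`, matching the note above that each agent can be customized individually.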
After configuring all the environment variables above, running `make dev` will automatically use the local `.env` file and start the server in Docker.
Formatting for the codebase can run on save. It's recommended to add the following IDE settings.json setup (in Cursor or VS Code, although an analogous setup may be possible in another IDE):

```json
"[python]": {
    "editor.defaultFormatter": "ms-python.python",
    "editor.formatOnType": true,
    "editor.formatOnSave": true,
    "editor.codeActionsOnSave": {
        "source.organizeImports": true
    }
}
```

Claude Code is the recommended choice for AI-assisted coding within this repository. There's already a CLAUDE.md file to govern how the model interacts with this codebase.
For more on Claude Code, see the Anthropic Documentation.
🎯 To see upcoming work for this repository, see this Trello board.
💬 If you want a feature, found a bug, or just want to contribute, read the Contributing Guidelines and then open a new GitHub issue.
🔓 Found a security vulnerability? Responsible and private disclosures are greatly appreciated. See Security for next steps.
- The LangChain team for their exceptional documentation and LangChain Academy courses.