
# Lasagna AI


- 🥞 **Layered agents!**
  - Agents for your agents!
  - Tool-use, structured output ("extraction"), and layering FTW 💪
  - Ever wanted a recursive agent? Now you can have one! 🤯
  - Parallel tool-calling by default.
  - Fully asyncio.
  - 100% Python type hints.
  - Functional-style 😎
  - (optional) Easy & pluggable caching! 🏦
- 🚣 **Streamable!**
  - Event streams for everything.
  - Asyncio generators are awesome.
- 🗃️ **Easy database integration!**
  - Don't rage when trying to store raw messages and token counts. 😡 🤬
  - Yes, you can have both streaming and easy database storage.
- ↔️ **Provider/model agnostic and interoperable!**
  - Native support for OpenAI, Anthropic, NVIDIA NIM/NGC (+ more to come).
  - Message representations are canonized. 😇
  - Supports vision!
  - Easily build committees!
  - Swap providers or models mid-conversation.
  - Delegate tasks among model providers or model sizes.
  - Parallelize all the things.
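The "event streams for everything" idea boils down to consuming asyncio generators with `async for`. The sketch below shows only the general streaming pattern; the generator and its token values are placeholders, not Lasagna's actual event types or API.

```python
import asyncio
from typing import AsyncIterator

async def fake_token_stream() -> AsyncIterator[str]:
    """Yield tokens one at a time, as a streaming model would.
    (Illustrative stand-in; not Lasagna's real event stream.)"""
    for token in ["Hello", ",", " world", "!"]:
        await asyncio.sleep(0)   # simulate waiting on the network
        yield token

async def main() -> str:
    chunks = []
    async for token in fake_token_stream():
        chunks.append(token)     # e.g. render each token as it arrives
    return "".join(chunks)

print(asyncio.run(main()))       # Hello, world!
```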

## Table of Contents

- [Installation](#installation)
- [Used By](#used-by)
- [Quickstart](#quickstart)
- [Debug Logging](#debug-logging)
- [Special Thanks](#special-thanks)
- [License](#license)
- [Joke Acronym](#joke-acronym)

## Installation

```shell
pip install -U lasagna-ai[openai,anthropic]
```

If you want to easily run all the `./examples`, then you can install the extra dependencies used by those examples:

```shell
pip install -U lasagna-ai[openai,anthropic,example-deps]
```

## Used By

Lasagna is used in production by:

AutoAuto

## Quickstart

Here is the simplest possible agent (it doesn't add anything to the underlying model). More complex agents would add tools and/or use layers of agents, but not this one! Run it in your terminal and you can chat interactively with the model. 🤩

(taken from `./examples/quickstart.py`)

```python
from lasagna import (
    known_models,
    build_simple_agent,
)

from lasagna.tui import (
    tui_input_loop,
)

from typing import List, Callable

import asyncio

from dotenv import load_dotenv; load_dotenv()


MODEL_BINDER = known_models.BIND_OPENAI_gpt_4o_mini()


async def main() -> None:
    system_prompt = "You are grumpy."
    tools: List[Callable] = []
    my_agent = build_simple_agent(name = 'agent', tools = tools)
    my_bound_agent = MODEL_BINDER(my_agent)
    await tui_input_loop(my_bound_agent, system_prompt)


if __name__ == '__main__':
    asyncio.run(main())
```

Want to add your first tool? LLMs can't reliably do arithmetic (beyond simple operations on small numbers), so let's give our model a tool for doing arithmetic! 😎

(full example at `./examples/quickstart_with_math_tool.py`)

```python
import sympy as sp

...

def evaluate_math_expression(expression: str) -> float:
    """
    This tool evaluates a math expression and returns the result.
    Pass the math expression as a string, for example:
     - "3 * 6 + 1"
     - "cos(2 * pi / 3) + log(8)"
     - "(4.5/2) + (6.3/1.2)"
     - ... etc
    :param: expression: str: the math expression to evaluate
    """
    expr = sp.sympify(expression)
    result = float(expr.evalf())
    return result

...

    ...
    tools: List[Callable] = [
        evaluate_math_expression,
    ]
    my_agent = build_simple_agent(name = 'agent', tools = tools)
    ...

...
```
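If you'd rather not pull in the sympy dependency, here is a rough stdlib-only variant of the same tool idea, using a restricted `eval` over the `math` module's namespace. This is our illustrative sketch, not part of Lasagna (the name `evaluate_math_expression_stdlib` is invented here), and eval-based parsing is not hardened for untrusted input:

```python
import math

def evaluate_math_expression_stdlib(expression: str) -> float:
    """
    Evaluate a math expression using only the standard library.
    Handles the same style of input as the sympy tool above,
    e.g. "3 * 6 + 1" or "cos(2 * pi / 3) + log(8)".
    CAUTION: eval-based; illustrative only, not safe for untrusted input.
    """
    # Expose only the math module's functions/constants; no builtins.
    allowed = {
        name: getattr(math, name)
        for name in dir(math)
        if not name.startswith('_')
    }
    return float(eval(expression, {'__builtins__': {}}, allowed))

print(evaluate_math_expression_stdlib("3 * 6 + 1"))   # 19.0
print(evaluate_math_expression_stdlib("cos(2 * pi / 3) + log(8)"))
```

The sympy version in the example above is the better choice in practice: it parses expressions symbolically instead of executing them as Python code.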

**Simple RAG:** Everyone's favorite tool: Retrieval Augmented Generation (RAG). Let's GO! 📚💨
See: `./examples/demo_rag.py`

## Debug Logging

This library logs using Python's built-in `logging` module. Most messages are logged at the INFO level, so here's a snippet of code you can put in your app to see those traces:

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
)

# ... now use Lasagna as you normally would, but you'll see extra log traces!
```
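Setting the root logger to INFO can make the rest of your app noisy. An alternative is to keep the global level higher and opt in only to this library's loggers. Assumption: the library names its loggers under a `lasagna` namespace (the common `logging.getLogger(__name__)` convention); check the actual logger names in the traces if this doesn't match your installed version.

```python
import logging

# Keep the rest of the app quiet...
logging.basicConfig(
    level=logging.WARNING,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
)

# ...but opt in to this library's INFO traces.
# NOTE: "lasagna" as the logger namespace is an assumption here.
logging.getLogger('lasagna').setLevel(logging.INFO)
```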

## Special Thanks

Special thanks to those who inspired this library:

## License

`lasagna-ai` is distributed under the terms of the MIT license.

## Joke Acronym

Layered Agents with toolS And aGeNts and Ai