Interoperate between AI providers using the OpenAI Responses API as a common interface.

InteropRouter

Seamlessly call major LLMs and image generation models through a unified interface.


InteropRouter is designed to seamlessly interoperate between the most common AI providers at a high level of quality. It uses the OpenAI Responses API types as a common denominator for inputs and outputs, allowing you to switch between providers with minimal code changes. See examples/ for detailed notebooks covering interoperability, function calling, image generation, and more.

Getting Started

Installation

# With uv.
uv add interop-router

# With pip.
pip install interop-router

Usage

from anthropic import AsyncAnthropic
from google import genai
from openai import AsyncOpenAI
from openai.types.responses import EasyInputMessageParam

from interop_router.router import Router
from interop_router.types import ChatMessage

router = Router()
router.register("openai", AsyncOpenAI())
router.register("gemini", genai.Client())
router.register("anthropic", AsyncAnthropic())

# InteropRouter is strictly typed, so be sure to use OpenAI's Responses API types for inputs.
# See https://platform.openai.com/docs/guides/migrate-to-responses and the library source for more details on typing.
messages = [ChatMessage(message=EasyInputMessageParam(role="user", content="Hello!"))]

response = await router.create(input=messages, model="gpt-5.2")
response = await router.create(input=messages, model="gemini-3-flash-preview")
response = await router.create(input=messages, model="claude-sonnet-4-5-20250929")
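Note that the top-level `await` calls above assume a notebook or async REPL; in a plain script, wrap them in an `async def main()` and run it with `asyncio.run(main())`. It also helps to know that the Responses API input types are TypedDicts in the OpenAI SDK, so at runtime a message is just a plain dictionary, which is part of what makes them a practical common denominator. A stdlib-only sketch of that shape (the helper name is hypothetical, for illustration only):

```python
# EasyInputMessageParam is a TypedDict in the OpenAI SDK, so constructing one
# at runtime produces an ordinary dict with "role" and "content" keys.
def make_user_message(content: str) -> dict:
    # Hypothetical helper mirroring EasyInputMessageParam(role="user", content=...)
    return {"role": "user", "content": content}

messages = [make_user_message("Hello!")]
```

Because the inputs are plain dictionaries under the hood, the same `messages` list can be passed unchanged to any registered provider.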

Count input tokens before making a request using each provider's native token counting endpoint:

token_count = await router.count_tokens(input=messages, model="gpt-5.2")
token_count = await router.count_tokens(input=messages, model="gemini-3-flash-preview")
token_count = await router.count_tokens(input=messages, model="claude-sonnet-4-5-20250929")
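A common use for token counting is a pre-flight budget check before sending a request. A minimal sketch, assuming `count_tokens` returns an integer count (the exact return type may differ; check the library source):

```python
import asyncio

async def within_budget(router, messages, model: str, limit: int) -> bool:
    # Hypothetical pre-flight guard: only proceed with the request if the
    # prompt fits within the given token budget.
    tokens = await router.count_tokens(input=messages, model=model)
    return tokens <= limit
```

Because counting uses each provider's native endpoint, the same guard works regardless of which model string you pass.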

InteropRouter Design Philosophy

The only goal of InteropRouter is to interoperate between the most common AI providers. To make this goal achievable, we make several trade-offs:

  • Only support OpenAI (including Azure OpenAI), Gemini, and Anthropic. Each additional provider multiplies the permutations of features that must interoperate, so we limit the number of providers to maintain high-quality interoperability.
  • Only support async, non-streaming APIs. Token-by-token streaming adds significant complexity, and for agent workloads per-token latency matters less.
  • Avoid stateful features where possible. Such features contradict the goal of seamlessly swapping between providers.
  • Use the OpenAI Responses API types as the common denominator for pivoting between providers. The reason is two-fold: a) the Responses API supports most features, and b) by adopting an existing API we avoid designing and maintaining our own schema, and Responses API support comes for "free".
  • Rigorously test the supported features to ensure seamless swapping between providers within a single conversation.

Development

Prerequisites

The setup below uses uv for environment and dependency management and prek for git hooks.

Setup

Create the uv virtual environment and install dependencies:

uv sync --frozen --all-extras --all-groups

Set up git hooks:

prek install

To update dependencies (updates the lock file):

uv sync --all-extras --all-groups

Run formatting, linting, type checking, and tests in one command:

uv run ruff format && uv run ruff check --fix && uv run ty check && uv run pytest

Further Information

docs/DEVELOPMENT.md

Compatibility and Roadmap

docs/COMPATIBILITY_AND_ROADMAP.md
