feat(tests): make integration test providers configurable via env vars #1494
📝 Walkthrough: Adds environment-driven provider configuration to integration tests via two env-var-backed helper functions.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/icontracts/conftest.py`:
- Around line 14-15: Replace all EN DASH characters (U+2013) in the docstrings
with standard hyphen-minus '-' to satisfy Ruff RUF002; specifically update the
occurrences in the docstring lines that read "TEST_PROVIDER – provider name
(default: openai)", "TEST_PROVIDER_MODEL – model name (default: gpt-4o)"
and the later docstring lines (the other four instances) so they use "-" instead
of "–". Ensure only the punctuation is changed and spacing remains consistent so
tests and linting pass.
- Around line 9-23: Add explicit return type annotations to the config helper
functions: annotate get_provider_config and get_mock_provider_config to return a
mapping of string keys to string values (e.g., Dict[str, str] or Mapping[str,
str]); also add the necessary typing import (from typing import Dict or Mapping)
at the top of the file so the annotations are valid. Ensure the function
signatures (get_provider_config and get_mock_provider_config) include these
return types and that the returned dict shapes remain unchanged.
- Around line 79-87: The test currently always requests creation of 5 validators
via post_request_localhost(payload("sim_createRandomValidators", 5, ...)) even
when existing_validators already has some entries; change the logic to compute
missing = max(0, 5 - len(existing_validators)) and only call
post_request_localhost when missing > 0, passing missing instead of the
hardcoded 5; update any surrounding non-mock check so this applies only in the
non-mock branch and keep references to existing_validators,
post_request_localhost, payload, and "sim_createRandomValidators" to locate the
code.
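As a sketch of the annotation suggestion above, the config helper might look like this (the dict keys and exact body are assumptions; the env var names, defaults, and `dict[str, str]` return type come from this review thread):

```python
import os


def get_provider_config() -> dict[str, str]:
    """Non-mock provider settings read from the environment.

    Falls back to the previously hardcoded defaults.
    """
    return {
        "provider": os.getenv("TEST_PROVIDER", "openai"),
        "model": os.getenv("TEST_PROVIDER_MODEL", "gpt-4o"),
    }
```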
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/icontracts/conftest.py`:
- Around line 129-131: In pytest_configure
(tests/integration/icontracts/conftest.py) change the load_dotenv call to not
overwrite existing environment variables: replace load_dotenv(override=True)
with load_dotenv(override=False) so CLI/CI-provided TEST_* and other env vars
keep precedence; verify the pytest_configure function still runs at session
start and loads .env only for missing keys.
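The `override=False` semantics can be illustrated with a stdlib-only sketch (the helper is hypothetical; `python-dotenv` parses the `.env` file itself, but applies defaults the same way):

```python
import os


def apply_env_defaults(defaults: dict[str, str]) -> None:
    """Set each variable only if absent, mirroring load_dotenv(override=False).

    CLI/CI-provided values keep precedence over .env defaults.
    """
    for key, value in defaults.items():
        os.environ.setdefault(key, value)
```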
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/icontracts/conftest.py`:
- Around line 48-50: Update the docstring for the pytest fixture in conftest.py
that currently says "exactly 5 validators" to accurately reflect its behavior:
indicate it tops up to 5 validators (e.g., "ensures at least 5 validators by
topping up to 5" or "provisions up to 5 additional validators to reach 5 total")
rather than guaranteeing exactly five; modify the text surrounding the phrase
"exactly 5 validators" in that fixture's docstring to match the topping-up logic
implemented in the fixture.
- Around line 81-82: The code currently replaces any falsy mock_response with {}
by using "mock_response if mock_response else {}", which discards intentional
falsy values like "", 0, False, or [], so change the conditional to only treat
None as missing (e.g., use "mock_response if mock_response is not None else {}")
so that explicit falsy responses are preserved; update the expression that
builds the dict entry for "mock_response" accordingly where the dict is created.
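The None-vs-falsy distinction can be isolated in a tiny sketch (function name hypothetical):

```python
from typing import Any


def normalize_mock_response(mock_response: Any) -> Any:
    # Only None means "no response provided"; "", 0, False, and []
    # are intentional mock payloads and must pass through unchanged.
    return mock_response if mock_response is not None else {}
```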
🧹 Nitpick comments (1)
tests/integration/icontracts/conftest.py (1)
123-126: Consider case-insensitive comparison for the mock mode check.

The current check requires exactly `"true"` (lowercase). Users might set `TEST_WITH_MOCK_LLMS=True` or `TRUE`, which would unexpectedly disable mock mode.

♻️ Suggested improvement

```diff
 def mock_llms() -> bool:
     """Return True when mock LLM mode is enabled via TEST_WITH_MOCK_LLMS=true."""
     env_var = os.getenv("TEST_WITH_MOCK_LLMS", "false")  # default no mocking
-    return env_var == "true"
+    return env_var.lower() == "true"
```
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/integration/icontracts/conftest.py`:
- Around line 123-126: The mock_llms function currently compares
os.getenv("TEST_WITH_MOCK_LLMS", "false") to the literal "true" which is
case-sensitive; change the comparison to be case-insensitive (e.g., call
.lower() or .casefold() on the env_var before comparing) so values like "True",
"TRUE", or "true" enable mock mode; update the mock_llms function to use the
case-normalized env var when returning the boolean.
Providers and models were hardcoded in conftest.py, making it
impossible to run the backend e2e tests against a different LLM
provider (e.g. local Ollama) without editing source files.
Introduces two helper functions that read provider settings from
environment variables with the same defaults that were previously
hardcoded:
Non-mock mode (TEST_WITH_MOCK_LLMS != true):
TEST_PROVIDER – provider name (default: openai)
TEST_PROVIDER_MODEL – model name (default: gpt-4o)
Mock mode (TEST_WITH_MOCK_LLMS=true):
TEST_MOCK_PROVIDER (default: openrouter)
TEST_MOCK_MODEL (default: @preset/rally-testnet-gpt-5-1)
TEST_MOCK_API_KEY_ENV_VAR (default: OPENROUTERAPIKEY)
TEST_MOCK_API_URL (default: https://openrouter.ai/api)
Closes genlayerlabs#340
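Those mock-mode defaults suggest a helper along these lines (the dict keys and body are assumptions; the variable names and defaults are from the commit message above):

```python
import os


def get_mock_provider_config() -> dict[str, str]:
    """Mock-mode provider settings read from the environment."""
    return {
        "provider": os.getenv("TEST_MOCK_PROVIDER", "openrouter"),
        "model": os.getenv("TEST_MOCK_MODEL", "@preset/rally-testnet-gpt-5-1"),
        "api_key_env_var": os.getenv("TEST_MOCK_API_KEY_ENV_VAR", "OPENROUTERAPIKEY"),
        "api_url": os.getenv("TEST_MOCK_API_URL", "https://openrouter.ai/api"),
    }
```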
- Add return type annotations to all functions (dict[str, str], bool, None) and import Any for dynamic-typed parameters
- Replace EN dash characters with hyphen-minus in docstrings (Ruff RUF002)
- Create only the missing validators in non-mock mode instead of always requesting 5, preventing over-provisioning
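The validator top-up change in that commit can be sketched as a pure helper (name hypothetical), with the non-mock branch then calling `post_request_localhost(payload("sim_createRandomValidators", missing, ...))` only when `missing > 0`:

```python
def missing_validator_count(existing_validators: list, target: int = 5) -> int:
    """Number of validators still needed to reach the target count."""
    # Never negative: an already fully (or over-) provisioned setup requests nothing.
    return max(0, target - len(existing_validators))
```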
7e840a8 to 34d75ab
Rebased on latest.
- mock_llms(): use .lower() so TRUE/True/true all work
- mock_response: use `is not None` to preserve intentional falsy values
🎉 This PR is included in version 0.112.0 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀
Summary
Closes #340.
Providers and models in `tests/integration/icontracts/conftest.py` were hardcoded, making it impossible to run the backend e2e tests against a different LLM provider (e.g. local Ollama) without editing source files.

This PR introduces two helper functions, `get_provider_config()` and `get_mock_provider_config()`, that read provider settings from environment variables while preserving the existing defaults:

Non-mock mode (`TEST_WITH_MOCK_LLMS` not set):
- `TEST_PROVIDER` (default: `openai`)
- `TEST_PROVIDER_MODEL` (default: `gpt-4o`)

Mock mode (`TEST_WITH_MOCK_LLMS=true`):
- `TEST_MOCK_PROVIDER` (default: `openrouter`)
- `TEST_MOCK_MODEL` (default: `@preset/rally-testnet-gpt-5-1`)
- `TEST_MOCK_API_KEY_ENV_VAR` (default: `OPENROUTERAPIKEY`)
- `TEST_MOCK_API_URL` (default: `https://openrouter.ai/api`)

Example: run tests with local Ollama
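A minimal invocation sketch for that example (the env var values appear in the test plan; the exact pytest command and path are assumptions):

```shell
# Run the backend e2e tests against a local Ollama provider
TEST_PROVIDER=ollama \
TEST_PROVIDER_MODEL=llama3 \
pytest tests/integration/icontracts
```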
Test plan

- `TEST_PROVIDER=ollama TEST_PROVIDER_MODEL=llama3` creates validators with the specified provider
- Setting `TEST_MOCK_PROVIDER`/`TEST_MOCK_MODEL` in mock mode uses the overridden values