
feat: add Google Gemini provider support#36

Open
do420 wants to merge 1 commit into AmrDab:main from do420:feat/gemini-provider-support

Conversation


@do420 do420 commented Mar 11, 2026

Summary

Adds first-class Google Gemini support as an AI provider using Gemini's OpenAI-compatible endpoint.

Changes

src/providers.ts

src/openclaw-credentials.ts

  • Added Gemini URL pattern to inferProviderFromBaseUrl() so the provider is correctly identified from the base URL
  • Fixed resolveApiConfig() to propagate localProvider in both return paths (it was being read to find the right env key but then dropped from the returned object)
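The two fixes above can be sketched roughly as follows. This is a minimal illustration, not the actual project code: the function names come from the PR description, but the types, signatures, and the non-Gemini branches are assumptions.

```typescript
// Hypothetical sketch of the provider inference + config resolution fix.
// Names (inferProviderFromBaseUrl, resolveApiConfig, localProvider) come from
// the PR description; everything else is illustrative.
type Provider = "openai" | "anthropic" | "gemini";

// Identify the provider from the configured base URL.
function inferProviderFromBaseUrl(baseUrl: string): Provider | undefined {
  if (baseUrl.includes("generativelanguage.googleapis.com")) return "gemini";
  if (baseUrl.includes("api.openai.com")) return "openai";
  if (baseUrl.includes("api.anthropic.com")) return "anthropic";
  return undefined;
}

interface ApiConfig {
  provider?: Provider;
  apiKey?: string;
  baseUrl: string;
}

function resolveApiConfig(
  baseUrl: string,
  env: Record<string, string | undefined>
): ApiConfig {
  const localProvider = inferProviderFromBaseUrl(baseUrl);
  const envKey = localProvider === "gemini" ? "GEMINI_API_KEY" : "AI_API_KEY";
  const apiKey = env[envKey];
  if (apiKey) {
    // The bug: localProvider was used to pick envKey above but was
    // missing from this returned object (and the one below).
    return { provider: localProvider, apiKey, baseUrl };
  }
  return { provider: localProvider, baseUrl };
}
```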

src/ai-brain.ts

  • Added gemini to AIBrain.BASE_URLS fallback map so the correct endpoint is used when baseUrl isn't explicitly set
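The fallback behavior can be sketched like this. The map name BASE_URLS and the Gemini URL are from the PR; the helper function and the other entries are illustrative assumptions.

```typescript
// Hypothetical sketch of the BASE_URLS fallback; the real AIBrain class
// shape may differ.
const BASE_URLS: Record<string, string> = {
  openai: "https://api.openai.com/v1",
  gemini: "https://generativelanguage.googleapis.com/v1beta/openai",
};

// An explicitly configured baseUrl wins; otherwise fall back to the
// per-provider default from the map.
function effectiveBaseUrl(provider: string, explicit?: string): string {
  return explicit ?? BASE_URLS[provider] ?? BASE_URLS["openai"];
}
```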

src/index.ts

  • Fixed visionModel/model to fall back to the pipeline config's layer3/layer2 model names when resolveApiConfig() returns no model, preventing an empty model string from being sent to the API and causing silent failures in the vision loop
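The model fallback reads roughly like this sketch. The layer2/layer3 terminology is from the PR description; the config interfaces and helper name are assumptions for illustration.

```typescript
// Hypothetical sketch of the model-name fallback in src/index.ts.
interface PipelineConfig {
  layer2Model: string; // text model
  layer3Model: string; // vision model
}

interface ResolvedApi {
  model?: string;
  visionModel?: string;
}

function pickModels(api: ResolvedApi, pipeline: PipelineConfig) {
  return {
    // `||` (not `??`) so an empty string also triggers the fallback;
    // previously "" was sent to the API and failed silently.
    model: api.model || pipeline.layer2Model,
    visionModel: api.visionModel || pipeline.layer3Model,
  };
}
```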

Configuration

Set these env vars (or add to .env):

```
# .env
GEMINI_API_KEY=your_key_here
AI_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai
AI_TEXT_MODEL=gemini-2.0-flash
AI_VISION_MODEL=gemini-2.0-flash
```

Or start with explicit flags:
```
clawdcursor start --provider gemini --api-key your_key_here
```

Testing

Tested end-to-end on Windows with gemini-2.0-flash:

  • YouTube search task (navigate + type + click) ✅
  • Gmail compose + send flow ✅

- Add gemini to PROVIDERS map with OpenAI-compatible endpoint
  (https://generativelanguage.googleapis.com/v1beta/openai)
- Add gemini to PROVIDER_ENV_VARS (reads GEMINI_API_KEY from env)
- Add gemini URL detection in inferProviderFromBaseUrl()
- Add gemini to AIBrain.BASE_URLS fallback map
- Fix localProvider not propagated in resolveApiConfig() return values
- Fix visionModel/textModel falling back to pipeline config layer3/layer2
  models when resolvedApi provides no model name (prevented empty model
  string being sent to the API)
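The first two commit bullets describe the new map entries. A minimal sketch of what they might look like, assuming map shapes that are not shown in the PR:

```typescript
// Hypothetical shapes for the PROVIDERS and PROVIDER_ENV_VARS maps named
// in the commit message; only the gemini entries and URL are from the PR.
const PROVIDERS: Record<string, { baseUrl: string }> = {
  gemini: { baseUrl: "https://generativelanguage.googleapis.com/v1beta/openai" },
};

const PROVIDER_ENV_VARS: Record<string, string> = {
  gemini: "GEMINI_API_KEY",
};

// Look up the API key for a provider from the environment.
function apiKeyFor(
  provider: string,
  env: Record<string, string | undefined>
): string | undefined {
  const envVar = PROVIDER_ENV_VARS[provider];
  return envVar ? env[envVar] : undefined;
}
```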

No API keys are stored in source files.
Configure via env: GEMINI_API_KEY=your_key AI_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai
AmrDab (Owner) commented Mar 12, 2026

hey @do420 , these problems have been addressed in the V0.7.0 update on branch, check it out. also what OS are you operating on?

do420 (Author) commented Mar 12, 2026

> hey @do420 , these problems have been addressed in the V0.7.0 update on branch, check it out. also what OS are you operating on?

Thanks! Any plans on merging v0.7.0 to main? I am operating on Windows.
