feat(ai): add MiniMax as third LLM provider #445

Closed
octo-patch wants to merge 1 commit into kite-org:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax (https://www.minimaxi.com) as a first-class AI provider alongside OpenAI and Anthropic. MiniMax models (M2.7, M2.5, M2.5-highspeed) are fully compatible with the OpenAI Chat Completions API, so the existing openai-go SDK is reused with MiniMax's base URL (https://api.minimax.io/v1).

Changes

Backend (Go)

  • Add GeneralAIProviderMiniMax = "minimax" constant and MiniMax-M2.7 default model
  • Add NewMiniMaxClient() factory that returns an OpenAI-compatible client configured with MiniMax's API endpoint
  • Route MiniMax provider through the existing OpenAI conversation flow (streaming, tool calling, thinking extraction all work out-of-the-box)
  • Update NormalizeGeneralAIProvider(), IsGeneralAIProviderSupported(), DefaultGeneralAIModelByProvider(), and providerLabel() to handle MiniMax
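The normalization and default-model helpers listed above can be sketched as follows. This is a minimal, self-contained sketch: the function and constant names come from the PR description, but the bodies are illustrative, and the OpenAI and Anthropic default-model strings are placeholders not taken from this PR.

```go
package main

import (
	"fmt"
	"strings"
)

// Provider constants as described in pkg/model/general_setting.go.
const (
	GeneralAIProviderOpenAI    = "openai"
	GeneralAIProviderAnthropic = "anthropic"
	GeneralAIProviderMiniMax   = "minimax"
)

// NormalizeGeneralAIProvider trims and lower-cases the stored value,
// falling back to OpenAI when the setting is empty (assumed behavior).
func NormalizeGeneralAIProvider(p string) string {
	p = strings.ToLower(strings.TrimSpace(p))
	if p == "" {
		return GeneralAIProviderOpenAI
	}
	return p
}

// IsGeneralAIProviderSupported reports whether the normalized provider
// is one of the three supported backends.
func IsGeneralAIProviderSupported(p string) bool {
	switch NormalizeGeneralAIProvider(p) {
	case GeneralAIProviderOpenAI, GeneralAIProviderAnthropic, GeneralAIProviderMiniMax:
		return true
	}
	return false
}

// DefaultGeneralAIModelByProvider returns each provider's default model.
// MiniMax-M2.7 is the default named in this PR; the other two defaults
// here are placeholders.
func DefaultGeneralAIModelByProvider(p string) string {
	switch NormalizeGeneralAIProvider(p) {
	case GeneralAIProviderMiniMax:
		return "MiniMax-M2.7"
	case GeneralAIProviderAnthropic:
		return "anthropic-default-model" // placeholder
	default:
		return "openai-default-model" // placeholder
	}
}

func main() {
	fmt.Println(IsGeneralAIProviderSupported(" MiniMax "))
	fmt.Println(DefaultGeneralAIModelByProvider("minimax"))
}
```

Because MiniMax reuses the OpenAI-compatible client, only this model-layer plumbing plus a base-URL override is needed; no new streaming or tool-calling code paths are introduced.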

Frontend (React/TypeScript)

  • Add MiniMax option to provider dropdown in general settings UI
  • Update TypeScript types (GeneralSetting, GeneralSettingUpdateRequest) to include minimax
  • Add MiniMax-specific model placeholder (MiniMax-M2.7) and base URL hint (https://api.minimax.io/v1)
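The frontend change amounts to widening a string union and wiring per-provider hints into the settings form. A sketch of the shape (field and object names here are illustrative, not the exact interfaces in ui/src/lib/api.ts; the MiniMax hint values are from this PR, the other providers' hints are placeholders):

```typescript
// Provider union widened to include "minimax".
type GeneralAIProvider = "openai" | "anthropic" | "minimax";

// Illustrative stand-in for the GeneralSetting type.
interface GeneralSetting {
  aiProvider: GeneralAIProvider;
  aiModel: string;
  aiBaseURL: string;
}

// Per-provider placeholders shown in the settings UI.
const providerHints: Record<GeneralAIProvider, { model: string; baseURL: string }> = {
  openai: { model: "openai-default-model", baseURL: "https://api.openai.com/v1" },   // placeholder
  anthropic: { model: "anthropic-default-model", baseURL: "https://api.anthropic.com" }, // placeholder
  minimax: { model: "MiniMax-M2.7", baseURL: "https://api.minimax.io/v1" },
};

console.log(providerHints.minimax.baseURL);
```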

Tests

  • 15 new unit tests in pkg/ai/config_test.go covering provider normalization, labels, and client factory creation and rejection
  • 12 new unit tests in pkg/model/general_setting_test.go covering provider support validation, default models, and constants

Documentation

  • Updated both English and Chinese READMEs to mention MiniMax alongside OpenAI and Anthropic

How to Use

  1. Go to Settings > General > AI Agent
  2. Select MiniMax from the Provider dropdown
  3. Enter your MiniMax API key
  4. The model defaults to MiniMax-M2.7 and the base URL defaults to https://api.minimax.io/v1
  5. Click Save

Test Plan

  • All 27 new unit tests pass (go test ./pkg/ai/... ./pkg/model/...)
  • Existing tests unaffected (language detection, resource matching)
  • Manual verification: configure MiniMax provider in settings UI, verify chat works with MiniMax API key
  • Verify OpenAI and Anthropic providers still work after changes
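The new unit tests are table-driven. A standalone sketch of their shape, using a stand-in for the real provider-support check and plain checks instead of the testing package so it runs on its own:

```go
package main

import (
	"fmt"
	"strings"
)

// isSupported is a minimal stand-in for the provider-support check
// exercised by the tests in pkg/model/general_setting_test.go.
func isSupported(p string) bool {
	switch strings.ToLower(strings.TrimSpace(p)) {
	case "openai", "anthropic", "minimax":
		return true
	}
	return false
}

func main() {
	// Table-driven cases in the style of the new unit tests.
	cases := []struct {
		in   string
		want bool
	}{
		{"minimax", true},
		{"MiniMax", true}, // normalization makes the check case-insensitive
		{"openai", true},
		{"gemini", false}, // unsupported providers are rejected
	}
	for _, c := range cases {
		if got := isSupported(c.in); got != c.want {
			fmt.Printf("isSupported(%q) = %v, want %v\n", c.in, got, c.want)
			return
		}
	}
	fmt.Println("all cases pass")
}
```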

Files Changed (9 files, 345 additions)

| File | Description |
| --- | --- |
| pkg/model/general_setting.go | MiniMax provider constant, default model, normalization |
| pkg/model/general_setting_test.go | 12 unit tests for the model layer |
| pkg/ai/config.go | NewMiniMaxClient() factory, provider label |
| pkg/ai/config_test.go | 15 unit tests for the config layer |
| pkg/ai/agent.go | MiniMax routing in NewAgent() |
| ui/src/lib/api.ts | TypeScript type union update |
| ui/src/components/settings/general-management.tsx | Provider dropdown, model/URL placeholders |
| README.md | MiniMax mention in features |
| README_zh.md | MiniMax mention in features (Chinese) |

@zxh326 closed this on Mar 24, 2026