feat: add MiniMax as LLM provider for Python and TypeScript #1416

octo-patch wants to merge 3 commits into i-am-bee:main
Conversation
Add MiniMax as a first-class LLM provider in both Python and TypeScript implementations. MiniMax provides an OpenAI-compatible API, making integration straightforward through existing LiteLLM (Python) and Vercel AI SDK (TypeScript) infrastructure.

Python:
- Add `MiniMaxChatModel` extending `LiteLLMChatModel` with OpenAI-compat routing
- Register the `"minimax"` provider in `BackendProviders` with proper `ProviderName`/`ProviderHumanName`
- Support the `MINIMAX_API_KEY`, `MINIMAX_CHAT_MODEL`, and `MINIMAX_API_BASE` env vars
- Default model: `MiniMax-M2.7`; default base URL: `https://api.minimax.io/v1`
- 15 unit tests (all passing) + 3 integration tests (all passing)
- Provider example following the existing deepseek/qwen pattern

TypeScript:
- Add `MiniMaxChatModel` extending `VercelChatModel` via `@ai-sdk/openai`
- Add `MiniMaxClient` extending `BackendClient` with the OpenAI provider
- Register `"MiniMax"` in `BackendProviders` with a `"minimax"` alias
- Support the `MINIMAX_API_KEY`, `MINIMAX_CHAT_MODEL`, and `MINIMAX_API_BASE` env vars
- 9 unit tests + provider example following the existing xai pattern

Documentation:
- Add MiniMax to the supported providers table in `docs/modules/backend.mdx`

Available models: `MiniMax-M2.7`, `MiniMax-M2.7-highspeed`, `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`
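Because MiniMax is OpenAI-compatible, the TypeScript side can ride directly on `@ai-sdk/openai`. Here is a rough sketch of the wiring described above; it is illustrative only, not the framework's actual `MiniMaxClient`/`MiniMaxChatModel` implementation, with the defaults taken from the env vars and base URL listed in this PR:

```ts
// Sketch of the OpenAI-compatible wiring; illustrative only, not the
// framework's actual MiniMaxClient/MiniMaxChatModel implementation.
import { createOpenAI } from "@ai-sdk/openai";

const minimax = createOpenAI({
  // Defaults taken from this PR's description.
  baseURL: process.env.MINIMAX_API_BASE ?? "https://api.minimax.io/v1",
  apiKey: process.env.MINIMAX_API_KEY,
});

// An OpenAI-compatible chat model handle for the default MiniMax model.
const chatModel = minimax.chat(process.env.MINIMAX_CHAT_MODEL ?? "MiniMax-M2.7");
```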
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request expands the framework's capabilities by integrating MiniMax as a new Large Language Model provider. The changes enable developers to use MiniMax models within both Python and TypeScript applications, leveraging its OpenAI-compatible API for a consistent experience. This addition broadens the choice of LLM backends available to users.
Code Review
This pull request introduces support for the MiniMax AI provider, adding new Python and TypeScript adapters, updating provider registration, and including example usage and tests. The review feedback highlights a missing test case in the TypeScript suites for both `MiniMaxClient` and `MiniMaxChatModel`: verifying error handling when `MINIMAX_API_KEY` is not provided.
| describe("MiniMaxClient", () => { | ||
| const originalEnv = process.env; | ||
|
|
||
| beforeEach(() => { | ||
| process.env = { ...originalEnv }; | ||
| }); | ||
|
|
||
| afterEach(() => { | ||
| process.env = originalEnv; | ||
| }); | ||
|
|
||
| it("should create client with explicit settings", () => { | ||
| const client = new MiniMaxClient({ | ||
| apiKey: "test-key", | ||
| baseURL: "https://api.minimax.io/v1", | ||
| }); | ||
| expect(client).toBeDefined(); | ||
| expect(client.instance).toBeDefined(); | ||
| }); | ||
|
|
||
| it("should create client from env vars", () => { | ||
| process.env.MINIMAX_API_KEY = "test-key-from-env"; | ||
| const client = new MiniMaxClient({}); | ||
| expect(client).toBeDefined(); | ||
| expect(client.instance).toBeDefined(); | ||
| }); | ||
| }); |
The test suite for `MiniMaxClient` is missing a case to verify behavior when the API key is not provided. The Python implementation includes a test for this scenario, and it's good practice to ensure the TypeScript implementation also fails gracefully with a clear error.
Please add a test case to ensure that an error is thrown when `MINIMAX_API_KEY` is missing.
Example:
it("should throw an error if API key is missing", () => {
delete process.env.MINIMAX_API_KEY;
expect(() => new MiniMaxClient()).toThrow();
});| describe("MiniMaxChatModel", () => { | ||
| const originalEnv = process.env; | ||
|
|
||
| beforeEach(() => { | ||
| process.env = { ...originalEnv }; | ||
| process.env.MINIMAX_API_KEY = "test-api-key"; | ||
| }); | ||
|
|
||
| afterEach(() => { | ||
| process.env = originalEnv; | ||
| }); | ||
|
|
||
| it("should instantiate with default model", () => { | ||
| const model = new MiniMaxChatModel(); | ||
| expect(model).toBeInstanceOf(MiniMaxChatModel); | ||
| expect(model.modelId).toBe("MiniMax-M2.7"); | ||
| }); | ||
|
|
||
| it("should instantiate with custom model id", () => { | ||
| const model = new MiniMaxChatModel("MiniMax-M2.5"); | ||
| expect(model).toBeInstanceOf(MiniMaxChatModel); | ||
| expect(model.modelId).toBe("MiniMax-M2.5"); | ||
| }); | ||
|
|
||
| it("should accept highspeed model", () => { | ||
| const model = new MiniMaxChatModel("MiniMax-M2.7-highspeed"); | ||
| expect(model).toBeInstanceOf(MiniMaxChatModel); | ||
| expect(model.modelId).toBe("MiniMax-M2.7-highspeed"); | ||
| }); | ||
|
|
||
| it("should use env var for model id", () => { | ||
| process.env.MINIMAX_CHAT_MODEL = "MiniMax-M2.5-highspeed"; | ||
| const model = new MiniMaxChatModel(); | ||
| expect(model.modelId).toBe("MiniMax-M2.5-highspeed"); | ||
| }); | ||
|
|
||
| it("should accept custom parameters", () => { | ||
| const model = new MiniMaxChatModel("MiniMax-M2.7", { temperature: 0.5 }); | ||
| expect(model).toBeInstanceOf(MiniMaxChatModel); | ||
| }); | ||
|
|
||
| it("should accept custom client settings", () => { | ||
| const model = new MiniMaxChatModel("MiniMax-M2.7", {}, { | ||
| apiKey: "custom-key", | ||
| baseURL: "https://proxy.example.com/v1", | ||
| }); | ||
| expect(model).toBeInstanceOf(MiniMaxChatModel); | ||
| }); | ||
| }); |
Similar to `MiniMaxClient`, the tests for `MiniMaxChatModel` should include a case for a missing API key to ensure it fails as expected. This ensures consistent and robust error handling across the provider implementation.
Please add a test case to verify that instantiating `MiniMaxChatModel` without an API key throws an error.
Example:
it("should throw an error if API key is missing", () => {
delete process.env.MINIMAX_API_KEY;
expect(() => new MiniMaxChatModel()).toThrow();
});
Tomas2D left a comment
Hello @octo-patch, thank you for the PR.
Before I merge it, please make the following changes:
- Modify `tests/examples/test_examples.py` to run the minimax test only if credentials are set
- Modify `tests/examples/examples.test.ts` to run the minimax test only if credentials are set

Add conditional exclusion for the minimax provider examples in both the Python and TypeScript test suites, consistent with how other providers handle missing credentials.
Thanks @Tomas2D for the review! I've made both changes: `tests/examples/test_examples.py` and `tests/examples/examples.test.ts` now run the minimax example only when credentials are set.
This ensures the MiniMax examples only run in CI environments where the API key is configured.
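For reference, one way the conditional exclusion can be expressed on the TypeScript side, assuming the examples suite runs under Vitest (the repository's actual helper in `examples.test.ts` may differ):

```ts
// Sketch only: skip the MiniMax example unless credentials are configured.
// Assumes Vitest; the actual examples.test.ts may use a different mechanism.
import { describe, it } from "vitest";

const hasMiniMaxCredentials = Boolean(process.env.MINIMAX_API_KEY);

describe.skipIf(!hasMiniMaxCredentials)("minimax provider example", () => {
  it("runs the chat example", async () => {
    // ...invoke the provider example here...
  });
});
```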
All good. Can you please address the failing pipeline?
Summary
Add MiniMax as a first-class LLM provider in both Python and TypeScript implementations. MiniMax provides an OpenAI-compatible API, making integration straightforward through existing LiteLLM (Python) and Vercel AI SDK (TypeScript) infrastructure.
Python
- `MiniMaxChatModel` extending `LiteLLMChatModel` with OpenAI-compat routing
- `"minimax"` provider in `BackendProviders` with proper `ProviderName`/`ProviderHumanName`
- Env vars: `MINIMAX_API_KEY`, `MINIMAX_CHAT_MODEL`, `MINIMAX_API_BASE`, `MINIMAX_API_HEADERS`
- Default model: `MiniMax-M2.7`; default base URL: `https://api.minimax.io/v1`

TypeScript
- `MiniMaxChatModel` extending `VercelChatModel` via `@ai-sdk/openai`
- `MiniMaxClient` extending `BackendClient` with the OpenAI provider
- `"MiniMax"` registered in `BackendProviders` with a `"minimax"` alias

Documentation
- `docs/modules/backend.mdx`

Available Models
- `MiniMax-M2.7` (latest, 1M context)
- `MiniMax-M2.7-highspeed` (faster variant)
- `MiniMax-M2.5` (204K context)
- `MiniMax-M2.5-highspeed` (faster variant)

Changes
Test plan
Usage
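A minimal usage sketch on the TypeScript side; the adapter import path is an assumption based on how other providers in the framework are exposed, not a verified path:

```ts
// Minimal usage sketch; import paths are assumptions based on how other
// provider adapters in the framework are laid out.
import { MiniMaxChatModel } from "beeai-framework/adapters/minimax/backend/chat";
import { UserMessage } from "beeai-framework/backend/message";

// Defaults to MiniMax-M2.7 and reads MINIMAX_API_KEY from the environment.
const model = new MiniMaxChatModel();

const response = await model.create({
  messages: [new UserMessage("Say hello from MiniMax!")],
});
console.log(response.getTextContent());
```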