Description
MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It pairs a hybrid Mixture-of-Experts (MoE) architecture with a custom "lightning attention" mechanism, allowing it to process sequences of up to 1 million tokens while maintaining competitive FLOP efficiency. With 456 billion total parameters and 45.9 billion active per token, this variant is optimized for complex, multi-step reasoning tasks. Trained via a custom reinforcement learning algorithm (CISPO), M1 excels at long-context understanding, software engineering, agentic tool use, and mathematical reasoning. Benchmarks show strong performance across FullStackBench, SWE-bench, MATH, GPQA, and TAU-Bench, often outperforming other open-weight models such as DeepSeek R1 and Qwen3-235B.
API Usage Examples
OpenAI-Compatible Endpoint
Use this endpoint with any OpenAI-compatible library. Model: MiniMax: MiniMax M1 (minimax/minimax-m1)
curl https://api.ridvay.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "minimax/minimax-m1",
    "messages": [
      {
        "role": "user",
        "content": "Explain the capabilities of the MiniMax: MiniMax M1 model"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
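The same endpoint can also be called from the official OpenAI Python SDK by pointing the client at the Ridvay base URL. The snippet below is a minimal sketch that assumes the endpoint follows the standard Chat Completions spec; the RIDVAY_API_KEY environment variable name is illustrative, not prescribed by the service.

import os
from openai import OpenAI

# Point the OpenAI client at the Ridvay OpenAI-compatible endpoint.
# RIDVAY_API_KEY is an assumed environment variable name; use your own key handling.
client = OpenAI(
    base_url="https://api.ridvay.com/v1",
    api_key=os.environ["RIDVAY_API_KEY"],
)

response = client.chat.completions.create(
    model="minimax/minimax-m1",
    messages=[
        {"role": "user", "content": "Explain the capabilities of the MiniMax: MiniMax M1 model"}
    ],
    temperature=0.7,
    max_tokens=1024,
)

print(response.choices[0].message.content)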
Supported Modalities
- Text
API Pricing
- Input: $0.30 / 1M tokens
- Output: $1.65 / 1M tokens
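At these rates, per-request cost is easy to estimate from token counts. The helper below is a rough sketch; in practice, take the token counts from the usage field returned by the API.

# Rough cost estimate at the listed rates ($0.30 / 1M input tokens, $1.65 / 1M output tokens).
def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * 0.30 / 1_000_000 + output_tokens * 1.65 / 1_000_000

# Example: a 200,000-token prompt with a 5,000-token completion costs
# about 0.06 + 0.00825 = $0.068.
print(estimate_cost_usd(200_000, 5_000))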
Token Limits
- Max Output: 40,000 tokens
- Max Context: 1,000,000 tokens
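For very long inputs, it helps to check that the prompt plus the requested completion fits within these limits before sending a request. The sketch below uses a rough characters-per-token heuristic, which is an assumption (the exact MiniMax tokenizer is not exposed here); for precise counts, rely on the usage statistics returned by the API.

MAX_CONTEXT_TOKENS = 1_000_000
MAX_OUTPUT_TOKENS = 40_000

def fits_in_context(prompt: str, max_tokens: int = MAX_OUTPUT_TOKENS) -> bool:
    # Rough heuristic: assume ~4 characters per token (approximation, not the real tokenizer).
    estimated_prompt_tokens = len(prompt) // 4
    return estimated_prompt_tokens + min(max_tokens, MAX_OUTPUT_TOKENS) <= MAX_CONTEXT_TOKENS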
Subscription Tiers
- Free
- Pro
- Ultimate