Xiaomi: MiMo-V2-Flash

xiaomi/mimo-v2-flash

Description

MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, built on a hybrid attention architecture. MiMo-V2-Flash supports a hybrid-thinking toggle and a 256K context window, and excels at reasoning, coding, and agent scenarios. On SWE-bench Verified and SWE-bench Multilingual, it ranks #1 among open-source models globally, delivering performance comparable to Claude Sonnet 4.5 at only about 3.5% of the cost. Users can control the reasoning behaviour with the `enabled` boolean of the `reasoning` parameter. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config).
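
Below is a minimal sketch of that toggle in Python. It assumes the OpenAI-compatible endpoint shown under API Usage Examples forwards a `reasoning` object of the shape described in the linked docs; the API key and prompt are placeholders.

from openai import OpenAI

# Hedged sketch: enabling the hybrid-thinking mode via the `reasoning.enabled`
# boolean described above. Assumes the endpoint passes this non-standard
# field through unchanged.
client = OpenAI(
    base_url="https://api.ridvay.com/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="xiaomi/mimo-v2-flash",
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    # Not part of the standard OpenAI schema; forwarded via extra_body.
    extra_body={"reasoning": {"enabled": True}},
)

print(response.choices[0].message.content)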

API Usage Examples

OpenAI-Compatible Endpoint

Use this endpoint with any OpenAI-compatible library. Model: Xiaomi: MiMo-V2-Flash (`xiaomi/mimo-v2-flash`)

curl https://api.ridvay.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "xiaomi/mimo-v2-flash",
    "messages": [
      {
        "role": "user",
        "content": "Explain the capabilities of the Xiaomi: MiMo-V2-Flash model"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
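
The same request can also be sent from any OpenAI-compatible SDK by pointing it at the endpoint above. The sketch below uses the official `openai` Python package and assumes no headers are needed beyond the bearer key; YOUR_API_KEY is a placeholder.

from openai import OpenAI

# Equivalent of the curl request above, via the OpenAI Python SDK.
client = OpenAI(
    base_url="https://api.ridvay.com/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="xiaomi/mimo-v2-flash",
    messages=[
        {"role": "user",
         "content": "Explain the capabilities of the Xiaomi: MiMo-V2-Flash model"},
    ],
    temperature=0.7,
    max_tokens=1024,
)

print(response.choices[0].message.content)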

Supported Modalities

  • Text

API Pricing

  • Input: $0.09 / 1M tokens
  • Output: $0.29 / 1M tokens
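
As a rough worked example (assuming billing is strictly per-token at the listed rates, with no other fees), a workload of 200,000 input tokens and 20,000 output tokens would cost about $0.024:

# Back-of-the-envelope cost estimate at the listed rates.
input_tokens = 200_000
output_tokens = 20_000

cost = (input_tokens / 1_000_000) * 0.09 + (output_tokens / 1_000_000) * 0.29
print(f"Estimated cost: ${cost:.4f}")  # $0.0238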

Token Limits

  • Max Output: 65,536 tokens
  • Max Context: 262,144 tokens

Subscription Tiers

  • free
  • pro
  • ultimate