Inception: Mercury Coder

inception/mercury-coder

Description

Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like Claude 3.5 Haiku and GPT-4o Mini while matching their performance. Mercury Coder's speed lets developers stay in the flow while coding, with rapid chat-based iteration and responsive code completion suggestions. On Copilot Arena, Mercury Coder ranks 1st in speed and ties for 2nd in quality. Read more in the [blog post here](https://www.inceptionlabs.ai/introducing-mercury).

API Usage Examples

OpenAI Compatible Endpoint

Use this endpoint with any OpenAI-compatible library, passing the model identifier inception/mercury-coder (Inception: Mercury Coder).

curl https://api.ridvay.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "inception/mercury-coder",
    "messages": [
      {
        "role": "user",
        "content": "Explain the capabilities of the Inception: Mercury Coder model"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
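
Because the endpoint is OpenAI-compatible, the official openai Python SDK can target it by overriding the base URL. The sketch below mirrors the curl request above; the base URL and key placeholder are taken from that example, and the rest is standard SDK usage.

from openai import OpenAI

# Point the OpenAI Python SDK at the Ridvay endpoint shown above.
client = OpenAI(
    base_url="https://api.ridvay.com/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="inception/mercury-coder",
    messages=[
        {
            "role": "user",
            "content": "Explain the capabilities of the Inception: Mercury Coder model",
        }
    ],
    temperature=0.7,
    max_tokens=1024,
)

print(response.choices[0].message.content)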

Supported Modalities

  • Text

API Pricing

  • Input: $0.25 / 1M tokens
  • Output: $1.00 / 1M tokens
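
As a worked example of how these rates combine, the snippet below estimates the cost of a single request. The token counts are illustrative, not measured.

# Illustrative cost estimate at the rates above ($0.25/M input, $1.00/M output).
input_tokens = 8_000    # hypothetical prompt size
output_tokens = 1_024   # hypothetical completion size

cost = input_tokens / 1_000_000 * 0.25 + output_tokens / 1_000_000 * 1.00
print(f"${cost:.6f}")   # prints $0.003024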

Token Limits

  • Max Output: 16,384 tokens
  • Max Context: 128,000 tokens
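
A request must leave room for the completion inside the 128,000-token context window, i.e. prompt tokens plus max_tokens should not exceed 128,000. The sketch below is a rough client-side check; the 4-characters-per-token heuristic is an assumption, and an exact count would need the model's tokenizer.

def fits_context(prompt: str, max_tokens: int = 16_384,
                 context_window: int = 128_000) -> bool:
    # Rough heuristic: ~4 characters per token (assumption; use the
    # model's tokenizer for an exact count).
    estimated_prompt_tokens = len(prompt) // 4
    return estimated_prompt_tokens + max_tokens <= context_window

print(fits_context("Explain diffusion language models."))  # True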

Subscription Tiers

  • free
  • pro
  • ultimate