
Meituan: LongCat Flash Chat

meituan/longcat-flash-chat

Description

LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per token. It introduces a shortcut-connected MoE design that reduces communication overhead for high throughput, and it maintains training stability through scaling strategies such as hyperparameter transfer, deterministic computation, and multi-stage optimization. This release, LongCat-Flash-Chat, is a non-thinking foundation model optimized for conversational and agentic tasks. It supports context windows up to 128K tokens and shows competitive performance across reasoning, coding, instruction-following, and domain benchmarks, with particular strengths in tool use and complex multi-step interactions.
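
Because the card highlights tool use, a minimal sketch of exercising it through the OpenAI-compatible endpoint documented below might look like the following. The get_weather tool is purely illustrative, and standard support for the OpenAI tools field is assumed here rather than confirmed.

# Hypothetical tool-calling sketch; assumes the endpoint accepts the standard
# OpenAI "tools" field. get_weather is an illustrative, caller-defined tool.
from openai import OpenAI

client = OpenAI(base_url="https://api.ridvay.com/v1", api_key="YOUR_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool, not a real API
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="meituan/longcat-flash-chat",
    messages=[{"role": "user", "content": "What's the weather in Beijing right now?"}],
    tools=tools,
)

# If the model decides to call the tool, the call appears on the first choice.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)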

API Usage Examples

OpenAI Compatible Endpoint

Use this endpoint with any OpenAI-compatible library. Model: Meituan: LongCat Flash Chat (meituan/longcat-flash-chat)

curl https://api.ridvay.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "meituan/longcat-flash-chat",
    "messages": [
      {
        "role": "user",
        "content": "Explain the capabilities of the Meituan: LongCat Flash Chat model"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
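
The same request can be issued through the official openai Python SDK by pointing it at the base URL above; this is a sketch that assumes standard OpenAI-compatible behavior rather than a provider-specific client.

# Equivalent request via the openai Python SDK (>= 1.0), assuming the endpoint
# above behaves like a standard OpenAI-compatible chat completions API.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ridvay.com/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="meituan/longcat-flash-chat",
    messages=[
        {
            "role": "user",
            "content": "Explain the capabilities of the Meituan: LongCat Flash Chat model",
        }
    ],
    temperature=0.7,
    max_tokens=1024,
)

print(response.choices[0].message.content)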

Supported Modalities

  • Text

API Pricing

  • Input: $0.15 / 1M tokens
  • Output: $0.75 / 1M tokens
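
As a rough worked example at these rates (illustrative arithmetic only), a call that consumes 10,000 input tokens and generates 2,000 output tokens costs about $0.003:

# Illustrative cost estimate at the listed rates: $0.15 / 1M input, $0.75 / 1M output.
input_tokens, output_tokens = 10_000, 2_000
cost = input_tokens / 1_000_000 * 0.15 + output_tokens / 1_000_000 * 0.75
print(f"${cost:.4f}")  # $0.0030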

Token Limits

  • Max Output: 131,072 tokens
  • Max Context: 131,072 tokens

Subscription Tiers

  • free
  • pro
  • ultimate