
Google: Gemini 2.5 Flash Lite Preview 06-17

google/gemini-2.5-flash-lite-preview-06-17

Description

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade off cost for intelligence.
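As a rough sketch of what that opt-in looks like, the snippet below uses the OpenAI Python SDK against the OpenAI-compatible endpoint shown further down. The nested "reasoning" object mirrors the OpenRouter-style parameter from the linked docs and is an assumption here; check the reasoning documentation for the exact field names this endpoint accepts.

# Hedged sketch: enabling "thinking" on a per-request basis.
# Assumes the endpoint accepts an OpenRouter-style "reasoning" object;
# the exact field names ("enabled", "effort") are not confirmed by this page.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ridvay.com/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="google/gemini-2.5-flash-lite-preview-06-17",
    messages=[{"role": "user", "content": "Summarize the trade-offs of enabling thinking."}],
    # Passed through in the request body; thinking is off by default on this model.
    extra_body={"reasoning": {"enabled": True, "effort": "low"}},
)
print(response.choices[0].message.content)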

API Usage Examples

OpenAI Compatible Endpoint

Use this endpoint with any OpenAI-compatible library or client. Pass the model ID google/gemini-2.5-flash-lite-preview-06-17 to request Google: Gemini 2.5 Flash Lite Preview 06-17.

curl https://api.ridvay.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "google/gemini-2.5-flash-lite-preview-06-17",
    "messages": [
      {
        "role": "user",
        "content": "Explain the capabilities of the Google: Gemini 2.5 Flash Lite Preview 06-17 model"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
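
The same request through the official OpenAI Python SDK, as a minimal sketch; any OpenAI-compatible client should work the same way, with only the base URL and API key specific to this provider.

# Equivalent of the curl request above, using the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ridvay.com/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="google/gemini-2.5-flash-lite-preview-06-17",
    messages=[
        {
            "role": "user",
            "content": "Explain the capabilities of the Google: Gemini 2.5 Flash Lite Preview 06-17 model",
        }
    ],
    temperature=0.7,
    max_tokens=1024,
)
print(response.choices[0].message.content)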

Supported Modalities

  • Text
  • Images (see the request sketch below)
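
Since image inputs are supported alongside text, a single user message can mix both. The sketch below uses the standard OpenAI content-parts format for image inputs; it assumes this endpoint accepts the same "image_url" part type, and the image URL is a placeholder.

# Hedged sketch: sending text plus an image in one user message.
# Uses the standard OpenAI "content parts" format; the image URL is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="https://api.ridvay.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="google/gemini-2.5-flash-lite-preview-06-17",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)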

API Pricing

  • Input: $0.10 / 1M tokens
  • Output: $0.40 / 1M tokens (see the worked cost example below)
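
At these rates, per-request cost is a proportional sum over input and output tokens. A small sketch, assuming billing is strictly per token at the listed prices (ignoring any caching discounts or reasoning-token surcharges):

# Worked cost example at the listed rates ($0.10 / 1M input, $0.40 / 1M output).
INPUT_PRICE_PER_M = 0.10
OUTPUT_PRICE_PER_M = 0.40

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request, assuming purely proportional billing."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 10,000 input tokens and 2,000 output tokens
# -> $0.0010 + $0.0008 = $0.0018
print(f"${request_cost(10_000, 2_000):.4f}")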

Token Limits

  • Max Output: 65,535 tokens
  • Max Context: 1,048,576 tokens

Subscription Tiers

  • free
  • pro
  • ultimate