Description
Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices such as phones, laptops, and tablets. It accepts multimodal inputs (text, images, and audio), enabling tasks such as text generation, speech recognition, translation, and image analysis. Innovations like Per-Layer Embedding (PLE) caching and the MatFormer architecture let the model selectively activate and load parameters, reducing memory usage and computational load at runtime and adapting to the task and the device's capabilities. Trained in over 140 languages and offering a 32K-token context window, Gemma 3n is well suited for privacy-focused, offline-capable applications and on-device AI solutions. [Read more in the blog post](https://developers.googleblog.com/en/introducing-gemma-3n/)
API Usage Examples
OpenAI Compatible Endpoint
Use this endpoint with any OpenAI-compatible library. Model: Google: Gemma 3n 4B (google/gemma-3n-e4b-it)
```bash
curl https://api.ridvay.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "google/gemma-3n-e4b-it",
    "messages": [
      {
        "role": "user",
        "content": "Explain the capabilities of the Google: Gemma 3n 4B model"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
```
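Because the endpoint follows the standard OpenAI chat-completions schema, the official `openai` Python client can be pointed at it by overriding `base_url`. The snippet below is a minimal sketch under that assumption; the `RIDVAY_API_KEY` environment variable is a placeholder for however you store your key.

```python
# Minimal sketch: calling the OpenAI-compatible endpoint with the openai Python client.
# Assumes `pip install openai` and an API key in the RIDVAY_API_KEY environment variable (placeholder name).
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.ridvay.com/v1",   # same endpoint as the curl example above
    api_key=os.environ["RIDVAY_API_KEY"],
)

response = client.chat.completions.create(
    model="google/gemma-3n-e4b-it",
    messages=[
        {"role": "user", "content": "Explain the capabilities of the Google: Gemma 3n 4B model"},
    ],
    temperature=0.7,
    max_tokens=1024,
)

print(response.choices[0].message.content)
```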
Supported Modalities
- Text
API Pricing
- Input: $0.02 per 1M tokens
- Output: $0.04 per 1M tokens (worked example below)
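As a rough illustration of these rates, the sketch below estimates the cost of a single request from its token counts; the counts are made-up example figures, not measurements.

```python
# Rough cost estimate for one request at the listed rates.
INPUT_RATE = 0.02 / 1_000_000    # dollars per input token
OUTPUT_RATE = 0.04 / 1_000_000   # dollars per output token

input_tokens = 10_000            # example figure (assumed)
output_tokens = 2_000            # example figure (assumed)

cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"${cost:.5f}")            # -> $0.00028
```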
Token Limits
- Max Output: 32,768 tokens
- Max Context: 32,768 tokens (a simple budget check is sketched below)
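To keep a request inside the 32,768-token context, a crude guard like the one below can clamp `max_tokens` before sending. It assumes the context window is shared between prompt and completion, and it uses a rough four-characters-per-token heuristic rather than the model's real tokenizer.

```python
# Crude guard: clamp max_tokens so (estimated prompt tokens + completion) stays within the context window.
# The 4-chars-per-token ratio is a rough heuristic, not the Gemma tokenizer.
MAX_CONTEXT = 32_768

def clamp_max_tokens(prompt: str, requested_max_tokens: int) -> int:
    estimated_prompt_tokens = len(prompt) // 4 + 1
    available = max(MAX_CONTEXT - estimated_prompt_tokens, 0)
    return min(requested_max_tokens, available)

print(clamp_max_tokens("Explain the capabilities of the Gemma 3n 4B model", 1024))  # -> 1024
```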
Subscription Tiers
- free
- pro
- ultimate