Description
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts model from Mistral AI for chat and instruction-following use. It incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters, and the Instruct variant has been fine-tuned by Mistral. #moe
API Usage Examples
OpenAI Compatible Endpoint
Use this endpoint with any OpenAI-compatible library.
Model: Mistral: Mixtral 8x7B Instruct (mistralai/mixtral-8x7b-instruct)
curl https://api.ridvay.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "mistralai/mixtral-8x7b-instruct",
    "messages": [
      {
        "role": "user",
        "content": "Explain the capabilities of the Mistral: Mixtral 8x7B Instruct model"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
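The same request can be made from an OpenAI-compatible SDK by pointing its base URL at the endpoint above. The snippet below is a minimal sketch assuming the openai Python package (v1 or later) is installed and YOUR_API_KEY is replaced with a real key.

# Minimal sketch: calling the OpenAI-compatible endpoint with the openai Python SDK (v1+).
# Assumes the `openai` package is installed; replace YOUR_API_KEY with your actual key.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ridvay.com/v1",  # OpenAI-compatible endpoint from this page
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="mistralai/mixtral-8x7b-instruct",
    messages=[
        {
            "role": "user",
            "content": "Explain the capabilities of the Mistral: Mixtral 8x7B Instruct model",
        }
    ],
    temperature=0.7,
    max_tokens=1024,
)

print(response.choices[0].message.content)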
Supported Modalities
- Text
API Pricing
- Input: $0.54 / 1M tokens
- Output: $0.54 / 1M tokens
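To translate these rates into a per-request cost, multiply the token counts by the listed prices. The helper and the token counts below are hypothetical, shown only to illustrate the arithmetic.

# Hypothetical helper illustrating the listed rates: $0.54 per 1M input tokens
# and $0.54 per 1M output tokens. The token counts in the example are made up.
INPUT_PRICE_PER_M = 0.54
OUTPUT_PRICE_PER_M = 0.54

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return (
        (input_tokens / 1_000_000) * INPUT_PRICE_PER_M
        + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
    )

# Example: 1,500 prompt tokens and 800 completion tokens
# -> (1500/1e6)*0.54 + (800/1e6)*0.54 ≈ $0.00124
print(f"${estimate_cost(1500, 800):.5f}")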
Token Limits
- Max Output: 16,384 tokens
- Max Context: 32,768 tokens
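To stay within these limits, it can help to estimate prompt length before sending a request. The sketch below assumes the Hugging Face transformers library and uses the public mistralai/Mixtral-8x7B-Instruct-v0.1 tokenizer as a stand-in; the hosted service may tokenize slightly differently.

# Minimal sketch: checking a prompt against the 32,768-token context window before
# sending it. Assumes the `transformers` package and the public
# mistralai/Mixtral-8x7B-Instruct-v0.1 tokenizer (a stand-in for the served model).
from transformers import AutoTokenizer

MAX_CONTEXT = 32_768  # Max Context from this page
MAX_OUTPUT = 16_384   # Max Output from this page

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

def fits_in_context(prompt: str, max_tokens: int = 1024) -> bool:
    """Return True if the prompt plus the requested completion fits in the window."""
    prompt_tokens = len(tokenizer.encode(prompt))
    return prompt_tokens + min(max_tokens, MAX_OUTPUT) <= MAX_CONTEXT

print(fits_in_context("Explain the capabilities of Mixtral 8x7B Instruct."))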
Subscription Tiers
- free
- pro
- ultimate