Mixtral 8x22B
text-to-text
An efficient sparse Mixture-of-Experts (MoE) model with 39B active parameters; it excels at multilingual tasks, math, and coding, and handles 64K-token contexts.
Model Parameters
Advanced settings to control the behavior of the model.
System prompt: prepended to all chat messages to guide the model's behavior; it is saved automatically to your playground repository.
Temperature: controls randomness (0 = deterministic, 2 = very creative).
Max tokens: maximum length of the response (1-128000 tokens).
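These settings map directly onto the fields of a standard chat-completions request. The sketch below is a minimal illustration only; the endpoint URL, API key variable, and model identifier are assumptions for demonstration, not values taken from this page.

```python
# Minimal sketch: mapping the playground parameters onto a chat-completions
# request. The endpoint URL, API key variable, and model id are hypothetical.
import os
import requests

API_URL = "https://api.example.com/v1/chat/completions"  # hypothetical endpoint
API_KEY = os.environ["API_KEY"]                           # hypothetical key variable

payload = {
    "model": "Mixtral-8x22B-Instruct",  # assumed model identifier
    "messages": [
        # The system prompt is prepended to guide the model's behavior.
        {"role": "system", "content": "You are a concise multilingual assistant."},
        {"role": "user", "content": "Summarize the benefits of sparse MoE models."},
    ],
    "temperature": 0.7,   # 0 = deterministic, 2 = very creative
    "max_tokens": 1024,   # maximum response length (1-128000 per the field above)
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

A lower temperature (near 0) is the usual choice for math and coding tasks, while higher values are better suited to open-ended generation.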