Mistral Small 3.1
Released: 3/17/2025
Input: $0.10 / Output: $0.30
Mistral Small 3.1 is a 24-billion-parameter multimodal LLM designed for a wide range of generative AI tasks.
It excels at instruction following, conversational assistance, image understanding, and function calling, while being lightweight enough to run on a single RTX 4090 or a Mac with 32 GB of RAM when quantized.
Other noteworthy features include fast-response conversational assistance, low-latency function calling, and suitability for fine-tuning on specialized domains such as legal advice, medical diagnostics, and technical support.
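Function calling is typically exercised through an OpenAI-compatible chat completions endpoint. The sketch below assumes such an endpoint; the base URL, model ID, and the `get_weather` tool are placeholders for illustration, not values confirmed by this page.

```python
# Minimal function-calling sketch against an assumed OpenAI-compatible endpoint.
# Base URL, model ID, and the tool definition are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="mistral-small-3.1",  # placeholder model ID; check your provider's listing
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)

# If the model decides to call the tool, the function name and JSON-encoded
# arguments arrive here for the client to execute.
print(response.choices[0].message.tool_calls)
```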
| Metric | Value |
| --- | --- |
| Parameter Count | 24 billion |
| Mixture of Experts | No |
| Context Length | 128,000 tokens |
| Multilingual | Yes |
| Quantized* | Yes |
| Precision* | Unknown |
*Quantization is specific to the inference provider; the model may be offered at different quantization levels by other providers.
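The image-understanding capability mentioned above can be exercised the same way, by passing an image alongside the text prompt. As in the previous sketch, the base URL, model ID, and image URL are assumptions for illustration only.

```python
# Minimal image-understanding sketch against an assumed OpenAI-compatible endpoint.
# Base URL, model ID, and the image URL are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="mistral-small-3.1",  # placeholder model ID; check your provider's listing
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the chart in this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)

# The model's textual description of the image.
print(response.choices[0].message.content)
```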