meta-llama/Llama-3-2-11B-Vision-Instruct
public
21.3 GB
1 branch
About

The Llama 3.2-Vision collection of multimodal large language models (LLMs) comprises pretrained and instruction-tuned image-reasoning generative models in 11B and 90B sizes (text + images in / text out).
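
As a minimal usage sketch: assuming the files in this repository follow the standard Hugging Face layout for Llama 3.2 Vision (config, tokenizer files, safetensors shards), the weights could be loaded with the transformers library's MllamaForConditionalGeneration class and its paired AutoProcessor. The local path, example image, and prompt below are illustrative assumptions, not contents of this repository.

```python
# Minimal inference sketch. Assumes a local clone of this repository laid out
# in the standard Hugging Face format for Llama 3.2 Vision; the path, image
# file, and prompt are placeholders, not part of the repo itself.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_path = "./Llama-3-2-11B-Vision-Instruct"  # assumed local clone location

model = MllamaForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,   # ~21 GB of weights; bf16 needs a large GPU
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_path)

image = Image.open("example.jpg")  # any local image (assumed to exist)

# One user turn containing an image placeholder followed by a text question.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```

The 90B variant mentioned in the description would be used the same way, pointed at its own checkpoint directory.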

1 commit
1 contributor
0 downloads
21.3 GB
0 stars
Repository contents
14 text files (87.5%)
2 binary files (12.5%)
Contributors
Ox Data Bot 🤖 (@oxbot)