
Oxen.ai Blog

Welcome to the Oxen.ai blog 🐂

The team at Oxen.ai is dedicated to helping AI practitioners go from research to production. To that end, we host a research paper club on Fridays called ArXiv Dives, where we go over state-of-the-art research and how you can apply it to your own work.

Take a look at our ArXiv Dives and Practical ML Dives, as well as a treasure trove of content on how to go from raw datasets to production-ready AI/ML systems. We cover everything from prompt engineering, fine-tuning, computer vision, natural language understanding, and generative AI to data engineering and best practices for versioning your data. So dive in and explore – we're excited to share our journey and learnings with you 🚀

Recent
ArXiv Dives - Lumiere
Feb 27, 2024

This paper introduces Lumiere – a text-to-video diffusion model designed for synthesizing videos that portray realistic, diverse and coherent motion – a pivotal challenge in video synthesis. To this end, we introduce a Space-Time U-Net architecture t...

Arxiv Dives
ArXiv Dives - Depth Anything
Feb 19, 2024

This paper presents Depth Anything, a highly practical solution for robust monocular depth estimation. Depth estimation traditionally requires extra hardware and algorithms such as stereo cameras, lidar, or structure from motion. In this paper they c...

Arxiv Dives
Arxiv Dives - Toolformer: Language Models Can Teach Themselves to Use Tools
Feb 12, 2024

Large Language Models (LLMs) show remarkable capabilities to solve new tasks from a few textual instructions, but they also paradoxically struggle with basic functionality such as math, dates on a calendar, or replying with up-to-date information abo...

Arxiv Dives
Arxiv Dives - Self-Rewarding Language Models
Feb 6, 2024

The goal of this paper is to see if we can create a self-improving feedback loop to achieve “superhuman agents”. Current language models are bottlenecked by labeled data from humans. Not only is the quantity of labels a bottleneck, but also the quali...

Arxiv Dives
Arxiv Dives - Direct Preference Optimization (DPO)
Jan 30, 2024

This paper provides a simple and stable alternative to RLHF for aligning Large Language Models with human preferences called "Direct Preference Optimization" (DPO). They reformulate the loss function as a classification task between prompt completion...

Arxiv Dives
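
To make the reformulation concrete: DPO's objective is just a logistic loss on the difference of policy-vs-reference log-ratios for the preferred and rejected completions, so there is no reward model or RL loop to tune. A minimal PyTorch sketch, assuming summed per-completion log-probabilities have already been computed (the tensor names and the β=0.1 default are illustrative, not taken from the post):

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss.

    Each argument is a 1-D tensor of summed log-probabilities of a
    completion under the trainable policy or the frozen reference model.
    """
    # Log-ratios of policy vs. reference for the preferred and rejected completions
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Binary classification: push the preferred completion's ratio above the rejected one's
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```
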
Arxiv Dives - Efficient Streaming Language Models with Attention Sinks
Jan 20, 2024

This paper introduces the concept of an Attention Sink, which helps Large Language Models (LLMs) generate coherent text out to millions of tokens while maintaining a finite memory footprint and latency. Transformer-based language model...

Arxiv Dives
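
The core trick is a KV-cache eviction policy: always keep the first few tokens (the attention sinks the paper identifies) plus a sliding window of the most recent tokens, so the cache never grows without bound. A toy sketch of just the policy, assuming the cache is a plain Python list of per-token entries (the real implementation operates on per-layer key/value tensors, and the sink/window sizes here are illustrative):

```python
def evict_kv_cache(cache, n_sinks=4, window=1024):
    """Keep the first n_sinks entries (attention sinks) plus a sliding
    window of the most recent entries; everything in between is evicted."""
    if len(cache) <= n_sinks + window:
        return cache  # still under budget, nothing to evict
    return cache[:n_sinks] + cache[-window:]
```
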
Arxiv Dives - How Mixture of Experts works with Mixtral 8x7B
Jan 13, 2024

Mixtral 8x7B is an open source mixture of experts large language model released by the team at Mistral.ai that outperforms Llama-2 70B and GPT-3.5 on a variety of natural language understanding tasks. The magic of the model is that it only uses 13B par...

Arxiv Dives
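
The sparsity comes from the routing: for every token, a small gating network picks 2 of the 8 expert feed-forward networks, so only those two experts' weights participate in that token's forward pass. A rough PyTorch sketch of top-2 routing with simplified MLP experts (the dimensions and expert architecture are illustrative stand-ins, not Mixtral's actual SwiGLU blocks):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    """Sparse mixture-of-experts layer with a top-2 router."""

    def __init__(self, dim=512, hidden=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: (n_tokens, dim)
        gate_logits = self.router(x)           # (n_tokens, n_experts)
        weights, idx = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the 2 chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e          # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out
```
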
Arxiv Dives - LLaVA 🌋 an open source Large Multimodal Model (LMM)
Jan 7, 2024

What is LLaVA? LLaVA is a Multi-Modal model that connects a Vision Encoder and an LLM for general purpose visual and language understanding. Paper: https://arxiv.org/abs/2304.08485 Team: Wisconsin-Madison, Microsoft Research, Columbia University ...

Arxiv Dives
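
The connector between the two models is deliberately lightweight: patch features from a frozen vision encoder are projected into the LLM's token-embedding space and prepended to the prompt embeddings. A rough PyTorch sketch with illustrative dimensions (LLaVA v1 used a single linear projection; later versions swap in an MLP):

```python
import torch
import torch.nn as nn

class VisionLanguageConnector(nn.Module):
    """Projects vision-encoder patch features into the LLM's embedding
    space so image 'tokens' can sit alongside text tokens."""

    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, patch_features, text_embeddings):
        # patch_features: (n_patches, vision_dim) from the frozen vision encoder
        # text_embeddings: (n_text_tokens, llm_dim) from the LLM's embedding table
        image_tokens = self.proj(patch_features)
        return torch.cat([image_tokens, text_embeddings], dim=0)
```
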
Practical ML Dive - Building RAG from Open Source Pt 1
Jan 6, 2024

RAG was introduced by the Facebook AI Research (FAIR) team in May of 2020 as an end-to-end way to include document search into a sequence-to-sequence neural network architecture. What is RAG? For those of you who missed it, we covered the RAG pape...

Practical ML
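
As a taste of the pattern the post builds toward: retrieve the documents most relevant to a question, stuff them into the prompt, and let the model answer from them. A self-contained toy sketch of that retrieve-then-prompt flow (the hashed bag-of-words embed() is a stand-in for a real embedding model, and the original FAIR formulation trains retriever and generator end-to-end rather than bolting retrieval onto the prompt):

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

def retrieve(question: str, documents: list[str], k: int = 3) -> list[str]:
    """Rank documents by cosine similarity to the question, keep the top k."""
    q = embed(question)
    scores = [float(q @ embed(d)) for d in documents]
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question: str, documents: list[str]) -> str:
    """Stuff the retrieved documents into a prompt for a downstream LLM."""
    context = "\n\n".join(retrieve(question, documents))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
```
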
Arxiv Dives - How Mistral 7B works
Dec 23, 2023

What is Mistral 7B? Mistral 7B is an open weights large language model by Mistral.ai that was built for performance and efficiency. It outshines models that are twice its size, including Llama-2 13B and Llama-1 34B, on both automated benchmarks and ...

Arxiv Dives