Oxen.ai Blog

Welcome to the Oxen.ai blog 🐂

The team at Oxen.ai is dedicated to helping AI practitioners go from research to production. To help with this, we host a research paper club on Fridays called ArXiv Dives, where we go over state-of-the-art research and how you can apply it to your own work.

Take a look at our ArXiv Dives and Practical ML Dives, as well as a treasure trove of content on how to go from raw datasets to production-ready AI/ML systems. We cover everything from prompt engineering, fine-tuning, computer vision, natural language understanding, generative AI, and data engineering to best practices for versioning your data. So dive in and explore – we're excited to share our journey and learnings with you 🚀

ArXiv Dives: I-JEPA

Today, we’re diving into the I-JEPA paper. JEPA stands for Joint-Embedding Predictive Architecture, and if you have been following Yann LeCun, it is a technique he has been hyping up ...

Greg Schoeninger
Mar 26, 2024
- Arxiv Dives
13 min read
How to train Mistral 7B as a "Self-Rewarding Language Model"

About a month ago we went over the "Self-Rewarding Language Models" paper by the team at Meta AI with the Oxen.ai Community. The paper felt very approachable and reproducible, so w...

Greg Schoeninger
Mar 20, 2024
- Practical ML
17 min read
Downloading Datasets with Oxen.ai

Oxen.ai makes it quick and easy to download any version of your data wherever and whenever you need it. When we say quick, we mean raw speed. Oxen chunks and transfers data faster... (a quick sketch of the download flow follows below)

Greg Schoeninger
Mar 18, 2024
- Getting Started
4 min read
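The post above walks through pulling data down with the oxen CLI. As a rough, hedged sketch of what that flow can look like, the snippet below shells out to the CLI from Python, assuming a git-style `oxen clone` command; the repository URL and directory name are placeholders, not taken from the post.

```python
import subprocess

# Minimal sketch: clone a repository with the oxen CLI, then list what came down.
# The repo URL below is a placeholder; substitute any repository you have access to.
repo_url = "https://hub.oxen.ai/my-namespace/my-dataset"
subprocess.run(["oxen", "clone", repo_url], check=True)

# After the clone, the dataset files live in a local directory named after the repo.
subprocess.run(["ls", "my-dataset"], check=True)
```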
Uploading Datasets to Oxen.ai

Oxen.ai makes it quick and easy to upload your datasets, keep track of every version, and share them with your team or the world. Oxen datasets can be as small as a single CSV or as... (a sketch of the upload flow follows below)

Greg Schoeninger
Mar 18, 2024
- Getting Started
4 min read
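Continuing the sketch above, uploading follows a familiar git-style flow of init, add, commit, and push. This is a hedged illustration of the workflow the post describes, not a copy of it; the file name and remote URL are placeholders, and it assumes the CLI mirrors git's remote setup.

```python
import subprocess

def run(*args: str) -> None:
    """Run an oxen CLI command and fail loudly if it errors."""
    subprocess.run(list(args), check=True)

# Sketch of the upload flow: version a local CSV and push it to a remote repository.
run("oxen", "init")                                  # start tracking the current directory
run("oxen", "add", "train.csv")                      # stage a dataset file (placeholder name)
run("oxen", "commit", "-m", "Add training data")     # snapshot this version
# Placeholder remote; create the repository on Oxen.ai first.
run("oxen", "config", "--set-remote", "origin", "https://hub.oxen.ai/my-namespace/my-dataset")
run("oxen", "push", "origin", "main")                # upload the committed version
```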
ArXiv Dives - Diffusion Transformers

Diffusion transformers achieve state-of-the-art image generation quality by replacing the commonly used U-Net backbone with a transformer that operates on latent patches. They rec... (a toy sketch of the patch-based backbone follows below)

Greg Schoeninger
Mar 12, 2024
- Arxiv Dives
14 min read
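To make the "transformer that operates on latent patches" idea concrete, here is a minimal, hedged PyTorch sketch, not the paper's implementation: a latent image is cut into patch tokens, run through a standard transformer encoder, and reshaped back. Sizes and layer counts are arbitrary placeholders, and the real architecture also adds positional embeddings and diffusion timestep/class conditioning.

```python
import torch
import torch.nn as nn

class TinyLatentPatchTransformer(nn.Module):
    """Toy DiT-style backbone: patchify a latent, apply a transformer, unpatchify."""
    def __init__(self, channels=4, patch=2, dim=256, depth=4, heads=4):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(channels * patch * patch, dim)       # patch -> token
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.unembed = nn.Linear(dim, channels * patch * patch)     # token -> patch

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        b, c, h, w = latent.shape
        p = self.patch
        # (b, c, h, w) -> (b, num_patches, c*p*p)
        patches = latent.unfold(2, p, p).unfold(3, p, p)             # (b, c, h/p, w/p, p, p)
        tokens = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)
        tokens = self.unembed(self.blocks(self.embed(tokens)))
        # tokens back to (b, c, h, w)
        out = tokens.reshape(b, h // p, w // p, c, p, p).permute(0, 3, 1, 4, 2, 5)
        return out.reshape(b, c, h, w)

x = torch.randn(1, 4, 32, 32)                       # e.g. a VAE latent of a 256x256 image
print(TinyLatentPatchTransformer()(x).shape)         # torch.Size([1, 4, 32, 32])
```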
"Road to Sora" Paper Reading List
"Road to Sora" Paper Reading List

This post is an effort to put together a reading list for our Friday paper club called ArXiv Dives. Since there has not been an official paper released yet for Sora, the goal is fo...

Greg Schoeninger
Mar 5, 2024
- Arxiv Dives
21 min read
ArXiv Dives - Medusa

From the abstract: In this paper, they present MEDUSA, an efficient method that augments LLM inference by adding extra decoding heads to predict multiple subsequent tokens in parallel. The ... (a minimal sketch of the idea follows below)

Greg Schoeninger
Mar 4, 2024
- Arxiv Dives
5 min read
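The core idea in the abstract above is small enough to sketch: a few extra heads sit on top of the base model's last hidden state, and each one proposes a token further into the future from a single forward pass. The snippet below is a hedged illustration in PyTorch with placeholder sizes, not the authors' code.

```python
import torch
import torch.nn as nn

class MedusaHead(nn.Module):
    """One extra decoding head: a residual block on the last hidden state,
    followed by its own vocabulary projection."""
    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)
        self.act = nn.SiLU()
        self.lm_head = nn.Linear(hidden_size, vocab_size, bias=False)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return self.lm_head(hidden + self.act(self.proj(hidden)))

# Placeholder sizes; head k proposes the token k+1 steps ahead of the base model's
# own next-token prediction, so all proposals come from one forward pass.
hidden_size, vocab_size, num_heads = 4096, 32000, 4
heads = nn.ModuleList(MedusaHead(hidden_size, vocab_size) for _ in range(num_heads))
last_hidden = torch.randn(1, hidden_size)                     # from the frozen base model
proposals = [head(last_hidden).argmax(-1) for head in heads]  # candidate future tokens
```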
ArXiv Dives - Lumiere

This paper introduces Lumiere – a text-to-video diffusion model designed for synthesizing videos that portray realistic, diverse and coherent motion – a pivotal challenge in video ...

Greg Schoeninger
Feb 27, 2024
- Arxiv Dives
11 min read
ArXiv Dives - Depth Anything

This paper presents Depth Anything, a highly practical solution for robust monocular depth estimation. Depth estimation traditionally requires extra hardware and algorithms such as...

Greg Schoeninger
Feb 19, 2024
- Arxiv Dives
16 min read
Arxiv Dives - Toolformer: Language models can teach themselves to use tools

Large Language Models (LLMs) show remarkable capabilities to solve new tasks from a few textual instructions, but they also paradoxically struggle with basic functionality such as ...

Greg Schoeninger
Feb 12, 2024
- Arxiv Dives
10 min read