Oxen.ai Blog

Welcome to the Oxen.ai blog 🐂

The team at Oxen.ai is dedicated to helping AI practitioners go from research to production. To help with this, we host a research paper club on Fridays called ArXiv Dives, where we go over state-of-the-art research and how you can apply it to your own work.

Take a look at our ArXiv Dives and Practical ML Dives, as well as a treasure trove of content on how to go from raw datasets to production-ready AI/ML systems. We cover everything from prompt engineering, fine-tuning, computer vision, natural language understanding, generative AI, and data engineering to best practices for versioning your data. So dive in and explore – we're excited to share our journey and learnings with you 🚀

The Prompt Report Part 1: A Systematic Survey of Prompting Techniques

For this blog, we are switching it up a bit. In past ArXiv Dives, we have gone deep into the underlying model architectures and techniques that make large language models and other ...

Mathias
Oct 9, 2024
ArXiv Dives
12 min read
arXiv Dive: How Flux and Rectified Flow Transformers Work

Flux made quite a splash with its release on August 1st, 2024 as the new state-of-the-art generative image model, outperforming SDXL, SDXL-Turbo, Pixart, and DALL-E. While the model...

Mathias
Sep 18, 2024
ArXiv Dives
9 min read
How Well Can Llama 3.1 8B Detect Political Spam? [4/4]

It only took about 11 minutes to fine-tune Llama 3.1 8B on our political spam synthetic dataset using ReFT. While this is extremely fast, beating out our previous record of 14 min...

Eric Laurence
Sep 14, 2024
3 min read
Fine-Tuning Llama 3.1 8B in Under 12 Minutes [3/4]

Meta has recently released Llama 3.1, including their 405-billion-parameter model, which is the most capable open model to date and the first open model on the same level as GPT-4. ...

Eric Laurence
Sep 5, 2024
3 min read
arXiv Dive: How Meta Trained Llama 3.1

Llama 3.1 is a set of open-weights foundation models released by Meta, which marks the first time an open model has caught up to GPT-4, Anthropic, or other closed models in the eco...

Mathias
Aug 27, 2024
12 min read
How to De-duplicate and Clean Synthetic Data [2/4]

Synthetic data has shown promising results for training and fine-tuning large models, such as Llama 3.1 and the models behind Apple Intelligence, and for producing datasets from minim...

Eric Laurence
Aug 23, 2024
6 min read
Create Your Own Synthetic Data With Only 5 Political Spam Texts [1/4]

With the 2024 elections coming up, spam and political texts are more prevalent than ever as political campaigns increasingly turn towards texting potential voters. Over 15 billion ...

Eric Laurence
Aug 1, 2024
5 min read
Fine-tuning Llama 3 in 14 minutes using ReFT

If you have been fine-tuning models recently, you have most likely used LoRA. While LoRA has been the dominant PEFT technique for a long time thanks to its efficiency and effective...

Eric Laurence
Jul 25, 2024
8 min read
ArXiv Dives: How ReFT works

ArXiv Dives is a series of live meetups that take place on Fridays with the Oxen.ai community. We believe that it is important not only to read the papers, but also to dive into the code t...

Greg Schoeninger
Jul 21, 2024
ArXiv Dives
10 min read
ArXiv Dives: 💃 Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling

Modeling sequences with infinite context length is one of the dreams of large language models. Some LLMs, such as Transformer-based models, suffer from quadratic computational complexity, making...

Mathias
Jun 26, 2024
ArXiv Dives
4 min read