Oxen.ai Blog

Welcome to the Oxen.ai blog 🐂

The team at Oxen.ai is dedicated to helping AI practitioners go from research to production. To that end, we host a research paper club on Fridays called ArXiv Dives, where we go over state-of-the-art research and how you can apply it to your own work.

Take a look at our ArXiv Dives and Practical ML Dives, as well as a treasure trove of content on how to go from raw datasets to production-ready AI/ML systems. We cover everything from prompt engineering, fine-tuning, computer vision, natural language understanding, generative AI, and data engineering to best practices for versioning your data. So dive in and explore – we're excited to share our journey and learnings with you 🚀

🧠 GRPO VRAM Requirements For the GPU Poor

Since the release of DeepSeek-R1, Group Relative Policy Optimization (GRPO) has become the talk of the town for Reinforcement Learning in Large Language Models due to its effective...

Greg Schoeninger
2/6/2025
- Practical ML
9 min read
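The post above digs into the actual VRAM numbers; the core reason GRPO is lighter than PPO is that it drops the learned value (critic) model and instead baselines each reward against the other completions sampled for the same prompt. Here is a minimal sketch of that group-relative advantage, using the standard GRPO formula from the DeepSeekMath paper (illustrative names, not code from the post):

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # rewards: (num_prompts, group_size) -- one row of sampled
    # completions per prompt. Each completion's advantage is its
    # reward normalized by its own group's mean and std.
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled completions each
rewards = torch.tensor([[1.0, 0.0, 0.5, 1.0],
                        [0.2, 0.9, 0.4, 0.1]])
print(grpo_advantages(rewards))
```

Because the baseline comes from group statistics rather than a critic network, there is one fewer model to keep in GPU memory, which is where much of the VRAM savings comes from.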
ArXiv Dives: How ReFT works

ArXiv Dives is a series of live meetups that take place on Fridays with the Oxen.ai community. We believe that it is not only important to read the papers, but dive into the code t...

Greg Schoeninger
7/21/2024
- Arxiv Dives
10 min read
How to Train Diffusion for Text from Scratch

This is part two of a series on Diffusion for Text with Score Entropy Discrete Diffusion (SEDD) models. Today we will be diving into the code for diffusion models for text, and see...

Greg Schoeninger
4/30/2024
- Arxiv Dives
16 min read
ArXiv Dives: Text Diffusion with SEDD

Diffusion models have been popular for computer vision tasks. Recently, models such as Sora have shown how you can apply Diffusion + Transformers to generate state-of-the-art videos with ...

Greg Schoeninger
4/16/2024
- Arxiv Dives
11 min read
ArXiv Dives: The Era of 1-bit LLMs, All Large Language Models are in 1.58 Bits

This paper presents BitNet b1.58, where every weight in a Transformer can be represented as one of {-1, 0, 1} instead of a floating-point number. The model matches full precision transfo...

Greg Schoeninger
4/8/2024
- Arxiv Dives
9 min read
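For a concrete sense of what "every weight as one of {-1, 0, 1}" means, here is a minimal sketch of the absmean quantization the BitNet b1.58 paper describes: scale the weight matrix by its mean absolute value, then round and clip each entry to the ternary set (function name and example values are illustrative, not from the post):

```python
import torch

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Scale by the mean absolute value of the weight matrix, then
    # round to the nearest integer and clip to {-1, 0, 1}.
    gamma = w.abs().mean() + eps
    return (w / gamma).round().clamp(-1, 1)

w = torch.randn(4, 4)
print(absmean_ternary(w))  # every entry is -1.0, 0.0, or 1.0
```

In the paper this weight rounding is paired with quantized activations and a straight-through estimator during training; the sketch above only shows the ternary quantization step itself.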
How to train Mistral 7B as a "Self-Rewarding Language Model"

About a month ago we went over the "Self-Rewarding Language Models" paper by the team at Meta AI with the Oxen.ai Community. The paper felt very approachable and reproducible, so w...

Greg Schoeninger
3/20/2024
- Practical ML
17 min read
Practical ML Dive - Building RAG from Open Source Pt 1

RAG was introduced by the Facebook AI Research (FAIR) team in May of 2020 as an end-to-end way to include document search into a sequence-to-sequence neural network architecture. ...

Greg Schoeninger
1/6/2024
- Practical ML
14 min read
Practical ML Dive - How to train Mamba for Question Answering

What is Mamba 🐍? There is a lot of hype about Mamba being a fast alternative to the Transformer architecture. The paper, released in December of 2023, claims 5x faster throughput w...

Greg Schoeninger
12/21/2023
- Practical ML
22 min read