Oxen.ai Blog

Welcome to the Oxen.ai blog 🐂

The team at Oxen.ai is dedicated to helping AI practitioners go from research to production. To help enable this, we host a research paper club on Fridays called ArXiv Dives, where we go over state-of-the-art research and how you can apply it to your own work.

Take a look at our ArXiv Dives and Practical ML Dives, as well as a treasure trove of content on how to go from raw datasets to production-ready AI/ML systems. We cover everything from prompt engineering, fine-tuning, computer vision, natural language understanding, generative AI, and data engineering to best practices for versioning your data. So, dive in and explore – we're excited to share our journey and learnings with you 🚀

ArXiv Dives: 💃 Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling

Modeling sequences with infinite context length is one of the dreams of Large Language Models. Transformer-based LLMs suffer from quadratic computational complexity, making...

Mathias
Jun 26, 2024
- ArXiv Dives
4 min read
ArXiv Dives: Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet

The ability to interpret and steer large language models is an important topic as they become a larger and larger part of our daily lives. As the leader in AI safety, Anthropic takes o...

Mathias
Jun 4, 2024
- ArXiv Dives
9 min read
ArXiv Dives: Efficient DiT Fine-Tuning with PixART for Text to Image Generation

Diffusion Transformers have been gaining a lot of steam since OpenAI's demo of Sora back in March. The problem is that when we think of training text-to-image models, we usually think mil...

Mathias
May 29, 2024
- ArXiv Dives
8 min read
ArXiv Dives: Evaluating LLMs for Code Completion with HumanEval

Large Language Models have shown a strong ability to generalize within a distribution, and frontier models have shown incredible flexibility under prompting. Now that there is so...

Alex Owen
May 17, 2024
- ArXiv Dives
15 min read
How to Train Diffusion for Text from Scratch

This is part two of a series on Diffusion for Text with Score Entropy Discrete Diffusion (SEDD) models. Today we will be diving into the code for diffusion models for text, and see...

Greg Schoeninger
Apr 30, 2024
- ArXiv Dives
16 min read
ArXiv Dives: Text Diffusion with SEDD

Diffusion models have been popular for computer vision tasks. Recently, models such as Sora have shown how you can apply Diffusion + Transformers to generate state-of-the-art videos with ...

Greg Schoeninger
Apr 16, 2024
- ArXiv Dives
11 min read
ArXiv Dives: The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits

This paper presents BitNet b1.58, where every weight in a Transformer can be represented as one of {-1, 0, 1} instead of a floating point number. The model matches full precision transfo...

Greg Schoeninger
Apr 8, 2024
- ArXiv Dives
9 min read
ArXiv Dives: Evolutionary Optimization of Model Merging Recipes

Today, we’re diving into a fun paper by the team at Sakana.ai called “Evolutionary Optimization of Model Merging Recipes”. The high-level idea is that we have so many open weights ...

Greg Schoeninger
Apr 1, 2024
- ArXiv Dives
10 min read
ArXiv Dives: I-JEPA

Today, we’re diving into the I-JEPA paper. JEPA stands for Joint-Embedding Predictive Architecture and, if you have been following Yann LeCun, is a technique he has been hyping up ...

Greg Schoeninger
Mar 26, 2024
- ArXiv Dives
13 min read
How to Train Mistral 7B as a "Self-Rewarding Language Model"

About a month ago we went over the "Self-Rewarding Language Models" paper by the team at Meta AI with the Oxen.ai Community. The paper felt very approachable and reproducible, so w...

Greg Schoeninger
Mar 20, 2024
- Practical ML
17 min read