Oxen.ai Blog

Welcome to the Oxen.ai blog 🐂

The team at Oxen.ai is dedicated to helping AI practitioners go from research to production. To help enable this, we host a research paper club on Fridays called Arxiv Dives, where we go over state-of-the-art research and how you can apply it to your own work.

Take a look at our Arxiv Dives and Practical ML Dives, as well as a treasure trove of content on how to go from raw datasets to production-ready AI/ML systems. We cover everything from prompt engineering, fine-tuning, computer vision, natural language understanding, generative AI, and data engineering to best practices for versioning your data. So dive in and explore – we're excited to share our journey and learnings with you 🚀

Arxiv Dives - Self-Rewarding Language Models

The goal of this paper is to see if we can create a self-improving feedback loop to achieve “superhuman agents”. Current language models are bottlenecked by labeled data from human...

Greg Schoeninger
Feb 6, 2024
- Arxiv Dives
13 min read
Arxiv Dives - Direct Preference Optimization (DPO)

This paper provides a simple and stable alternative to RLHF for aligning Large Language Models with human preferences called "Direct Preference Optimization" (DPO). They reformulat...

Greg Schoeninger
Jan 30, 2024
- Arxiv Dives
12 min read
Arxiv Dives - Efficient Streaming Language Models with Attention Sinks

This paper introduces the concept of an Attention Sink which helps Large Language Models (LLMs) maintain the coherence of text into the millions of tokens while also maintaining a ...

Greg Schoeninger
Jan 20, 2024
- Arxiv Dives
12 min read
Arxiv Dives - How Mixture of Experts works with Mixtral 8x7B

Mixtral 8x7B is an open source mixture-of-experts large language model released by the team at Mistral.ai that outperforms Llama-2 70B and GPT-3.5 on a variety of natural language und...

Greg Schoeninger
Jan 13, 2024
- Arxiv Dives
12 min read
Arxiv Dives - LLaVA 🌋 an open source Large Multimodal Model (LMM)

What is LLaVA? LLaVA is a Multi-Modal model that connects a Vision Encoder and an LLM for general purpose visual and language understanding. Paper: https://arxiv.org/abs/2304.084...

Greg Schoeninger
Jan 7, 2024
- Arxiv Dives
12 min read
Practical ML Dive - Building RAG from Open Source Pt 1

RAG was introduced by the Facebook AI Research (FAIR) team in May of 2020 as an end-to-end way to include document search into a sequence-to-sequence neural network architecture. ...

Greg Schoeninger
Jan 6, 2024
- Practical ML
14 min read
Arxiv Dives - How Mistral 7B works

What is Mistral 7B? Mistral 7B is an open weights large language model by Mistral.ai that was built for performance and efficiency. It outshines models that are twice its size, i...

Greg Schoeninger
Dec 23, 2023
- Arxiv Dives
10 min read
Practical ML Dive - How to train Mamba for Question Answering

What is Mamba 🐍? There is a lot of hype about Mamba being a fast alternative to the Transformer architecture. The paper released in December of 2023 claims 5x faster throughput w...

Greg Schoeninger
Dec 21, 2023
- Practical ML
22 min read
Mamba: Linear-Time Sequence Modeling with Selective State Spaces - Arxiv Dives

What is Mamba 🐍? Mamba, at its core, is a recurrent neural network architecture that outperforms Transformers with faster inference and improved handling of long sequences of len...

Greg Schoeninger
Dec 15, 2023
- Arxiv Dives
15 min read
Practical ML Dive - How to customize a Vision Transformer on your own data

Welcome to Practical ML Dives, a spin-off series of Arxiv Dives. In Arxiv Dives, we cover state-of-the-art research papers and dive into the nitty-gritty details of how AI model...

Greg Schoeninger
Dec 14, 2023
- Practical ML
20 min read