Evaluations
Label datasets and evaluate model performance
Fork this repository to run your own evaluations

Oxen.ai lets you run a model row by row over your dataset, which is useful for labeling data or evaluating how well a model performs. Once the model has run over the dataset, you can save its output to a new file or branch and compare it against the original.
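
To illustrate the idea, here is a minimal sketch (not the Oxen.ai Evaluations feature itself) of running a model row by row over a local dataset and writing the predictions to a new file for comparison. The file name `reviews.csv`, the column `review_text`, and the `label_row` function are all hypothetical stand-ins for your own data and model call.

```python
import pandas as pd

def label_row(text: str) -> str:
    # Placeholder for the model call (for example, an LLM prompt)
    # that labels or scores a single row of the dataset.
    return "positive" if "great" in text.lower() else "negative"

# Load the original dataset.
df = pd.read_csv("reviews.csv")

# Run the model row by row and store its output in a new column.
df["prediction"] = df["review_text"].apply(label_row)

# Save to a new file so the original dataset stays untouched
# and the two versions can be compared side by side.
df.to_csv("reviews_labeled.csv", index=False)
```

In Oxen.ai the same pattern applies, except the model runs over the hosted dataset and the results can be committed to a new file or branch in the repository for review.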