Introducing Evaluations, a powerful feature that lets you test and compare AI models against your own datasets. Whether you're fine-tuning models or measuring performance, Oxen Evaluations simplify the process of running a prompt through an entire dataset. Once you're happy with the results, write the output dataset to a new file, to another branch, or directly as a new commit.
Example prompt templates:

- Extract the answer from the question and the context. Only respond with answer strings that are contained in the context. Question: {prompt} Context: {context}
- What is the answer, given the question and context? Question: {prompt} Context: {context} Answer:
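To make the template mechanics concrete, here is a minimal sketch of how a prompt template with {prompt} and {context} placeholders can be rendered against each row of a dataset. The template text comes from the examples above; the dataset rows and the `render` helper are illustrative stand-ins, not Oxen's actual API.

```python
# Hypothetical sketch: fill a prompt template from dataset rows.
# The template is taken from the examples above; rows and the
# render() helper are made up for illustration, not Oxen's API.

template = (
    "Extract the answer from the question and the context. "
    "Only respond with answer strings that are contained in the context. "
    "Question: {prompt} Context: {context}"
)

# Each row supplies one value per placeholder column.
rows = [
    {"prompt": "What color is the sky?",
     "context": "On a clear day the sky is blue."},
    {"prompt": "Who wrote Hamlet?",
     "context": "Hamlet was written by William Shakespeare."},
]

def render(template: str, row: dict) -> str:
    # Substitute each {placeholder} with the matching column value.
    return template.format(**row)

rendered = [render(template, row) for row in rows]
for r in rendered:
    print(r)
```

Each rendered string would then be sent to the selected model, one call per row, and the responses collected into the output dataset.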