Super excited to launch Hugging Face Sheets: Spreadsheets meet AI and unstructured data.
A few months ago, we started imagining new ways to build and transform datasets with the latest open-source models.
Today, I'm thrilled to introduce our first step in this direction.
In a nutshell:
📁 Effortlessly run prompts and models over your data.
🌐 Agentic search for accuracy and real-time information.
🖼️ Familiar, minimalistic interface for interacting with data.
🎯 Human feedback 2.0: your input directly improves generated data.
💯 Access hundreds of open models and leading inference providers.
📜 Accepted at ACL 2025! Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs
We propose fine-tuning LLMs to generate diverse chains of thought (DCoT) in a single inference step. This enables within-inference refinement of the CoTs, with no external feedback needed!
🔗 https://arxiv.org/abs/2407.03181
We've added a new chapter about the very basics of Argilla to the Hugging Face NLP course. Learn how to set up an Argilla instance, load & annotate datasets, and export them to the Hub.
How do your annotations for FineWeb2 compare to your teammates'?
I started contributing some annotations to the FineWeb2 collaborative annotation sprint and I wanted to know if my labelling trends were similar to those of my teammates.
I did some analysis and I wasn't surprised to see that I'm a bit harsher in my evaluations than my teammates 😂
Do you want to see how your annotations compare to others?
👉 Go to this Gradio space: nataliaElv/fineweb2_compare_my_annotations
✍️ Enter the dataset that you've contributed to and your Hugging Face username.
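If you'd like to run this kind of comparison yourself, here is a minimal sketch of the idea: compute each annotator's label distribution and a simple agreement rate. The annotator names and labels below are hypothetical placeholders, not the sprint's actual schema.

```python
from collections import Counter


def label_distribution(labels):
    """Fraction of each label assigned by one annotator."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}


# Hypothetical annotations from two annotators on the same documents
mine = ["bad", "bad", "good", "bad", "good"]
teammate = ["good", "bad", "good", "good", "good"]

print(label_distribution(mine))      # → {'bad': 0.6, 'good': 0.4}
print(label_distribution(teammate))  # → {'good': 0.8, 'bad': 0.2}

# Simple agreement rate over documents you both annotated
agreement = sum(a == b for a, b in zip(mine, teammate)) / len(mine)
print(f"Agreement: {agreement:.0%}")  # → Agreement: 60%
```

Comparing the two distributions makes a "harsher annotator" trend visible at a glance; for a more rigorous comparison you'd want a chance-corrected metric such as Cohen's kappa.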