
Ettin Decay Phase Data
Phase 3 of 3: Premium data sources for the final training phase (100B tokens), following the ProLong recipe.
This dataset contains the decay-phase data used to train all Ettin encoder and decoder models. This final phase emphasizes long-form content and educational materials drawn from premium data sources. The data is provided in MDS format, ready for use with Composer and the ModernBERT training repository.
Abstract
The large language model (LLM) community focuses almost exclusively on decoder-only language models, since they are easier to use for text generation. However, a large subset of the community still uses encoder-only models for tasks such as classification or retrieval. Previous work has attempted to compare these architectures, but is forced to make comparisons with models that have different numbers of parameters, training techniques, and datasets. We introduce the SOTA open-data Ettin suite of models: paired encoder-only and decoder-only models ranging from 17 million parameters to 1 billion, trained on up to 2 trillion tokens. Using the same recipe for both encoder-only and decoder-only models produces SOTA recipes in both categories for their respective sizes, beating ModernBERT as an encoder and Llama 3.2 and SmolLM2 as decoders. Like previous work, we find that encoder-only models excel at classification and retrieval tasks while decoders excel at generative tasks. However, we show that adapting a decoder model to encoder tasks (and vice versa) through continued training is subpar compared to using only the reverse objective (i.e. a 400M encoder outperforms a 1B decoder on MNLI, and vice versa for generative tasks). We open-source all artifacts of this study including training data, training order segmented by checkpoint, and 200+ checkpoints to allow future work to analyze or extend all aspects of training.
Data Composition
| Data Source | Tokens (B) | Percentage | Description |
|---|---|---|---|
| DCLM (Dolmino) | 26.0 | 31.9% | Highest-quality web crawl data |
| Code Repos | 20.2 | 24.7% | Premium code repositories |
| Books | 10.5 | 12.9% | Literature and reference books |
| Math (Dolmino) | 5.0 | 6.1% | Mathematical content (premium) |
| StackExchange (Dolmino) | 4.0 | 4.9% | High-quality Q&A content |
| Tulu Flan | 4.1 | 5.0% | Instruction-following data |
| Arxiv | 3.0 | 3.7% | Academic preprints |
| Wikipedia | 3.0 | 3.7% | Encyclopedia articles |
| Textbooks | 0.5 | 0.6% | Educational textbooks |
| Total | 81.6 | 100.0% | Premium quality mixture |
Key Features of the Decay Phase
Training Characteristics
- Aggressive LR Decay: The learning rate decays to 2% of its peak value (see the sketch after this list)
- Long Context: Maintains the 8K-token sequences from mid-training
- Lower Masking: 5% masking ratio for encoders (down from 30% in earlier phases)
- Quality Over Quantity: Emphasis on premium sources rather than raw scale
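To make the schedule concrete, here is a minimal sketch of a decay-to-2%-of-peak learning-rate schedule expressed with Composer's LinearWithWarmupScheduler. The warmup value is an illustrative assumption; this is not the verbatim Ettin training configuration.

```python
# Illustrative sketch only -- not the verbatim Ettin/ModernBERT config.
# alpha_i / alpha_f are multipliers on the optimizer's peak learning rate.
from composer.optim import LinearWithWarmupScheduler

decay_schedule = LinearWithWarmupScheduler(
    t_warmup='0ba',  # assumption: no fresh warmup, the decay phase continues from mid-training
    alpha_i=1.0,     # start at the peak learning rate
    alpha_f=0.02,    # finish at 2% of peak, per the "aggressive LR decay" above
)
```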
Usage
Please see the ModernBERT repo: https://github.com/AnswerDotAI/ModernBERT
Direct Access
```python
from streaming import StreamingDataset

# Load the streaming dataset
dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/ettin-decay-data',
    local='/tmp/ettin-decay-data',
    shuffle=True,
)

# Access premium quality samples
for sample in dataset:
    text = sample['text']  # High-quality, long-form content
    # Process your data...
```
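For batched training, the streaming dataset can be wrapped in a standard PyTorch DataLoader. This is a minimal sketch; the batch size and worker count are arbitrary example values.

```python
from torch.utils.data import DataLoader

# StreamingDataset is compatible with the standard PyTorch DataLoader.
loader = DataLoader(dataset, batch_size=8, num_workers=4)

for batch in loader:
    texts = batch['text']  # a list of raw text samples per batch
    # Tokenize / process the batch here...
    break
```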
Structure
Each folder contains premium quality data sources in MDS format:
- arxiv/ - Academic papers from ArXiv
- books/ - Literature and reference books (expanded)
- books_2/ - Additional book collections
- code_repos/ - Premium code repositories
- dclm_dolmino/ - Highest-quality filtered web data
- math_dolmino/ - Premium mathematical content
- stackexchange_dolmino/ - Top-quality Q&A content
- stackexchange_dolmino_dup/ - Additional curated Q&A
- stackexchange_dolmino_dup_2/ - Extra Q&A collections
- textbooks/ - Educational textbook content
- textbooks_2/ - Additional textbook collections
- tulu_flan/ - Instruction-following examples
- wikipedia/ - Wikipedia articles
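To train on only a subset of these folders, the mosaicml-streaming Stream API can combine individual sources with custom mixing weights. This is a sketch under the assumption that each folder above is an independently readable MDS stream; the paths and proportions are illustrative, not a documented access pattern for this dataset.

```python
from streaming import Stream, StreamingDataset

# Assumption: each source folder is an independent MDS stream under the dataset root.
root = 'https://huggingface.co/datasets/jhu-clsp/ettin-decay-data'
streams = [
    Stream(remote=f'{root}/wikipedia', local='/tmp/ettin-decay/wikipedia', proportion=0.5),
    Stream(remote=f'{root}/arxiv', local='/tmp/ettin-decay/arxiv', proportion=0.5),
]

dataset = StreamingDataset(streams=streams, shuffle=True)
```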
Usage in Cross-Objective Training
This decay phase data is also used for cross-objective training experiments:
- Decoder → Encoder: Training decoders with MLM on this premium data
- Encoder → Decoder: Training encoders with CLM on this premium data (see the collator sketch after this list)
- Extended Training: 50B additional tokens for cross-objective experiments
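As a rough illustration of the two objectives, the sketch below builds an MLM and a CLM collator with Hugging Face transformers over the same tokenized text. The tokenizer checkpoint is a stand-in and the masking ratio simply mirrors the decay-phase setting above; this is not the exact Ettin cross-objective setup.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Stand-in tokenizer; the Ettin models ship their own tokenizers and configs.
tokenizer = AutoTokenizer.from_pretrained('answerdotai/ModernBERT-base')

# Decoder -> Encoder adaptation: masked language modeling at the 5% decay-phase ratio.
mlm_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.05)

# Encoder -> Decoder adaptation: causal language modeling
# (labels mirror the inputs; the model applies the causal shift internally).
clm_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```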
Related Resources
- Models: Ettin Model Suite (17M-1B parameters)
- Phase 1: Pre-training Data (1.7T tokens)
- Phase 2: Mid-training Data (250B tokens)
- Training Order: Batch-level Data Order
- Paper: https://arxiv.org/abs/2507.11412
- Code: GitHub Repository
Citation
```bibtex
@misc{weller2025seqvsseqopen,
  title={Seq vs Seq: An Open Suite of Paired Encoders and Decoders},
  author={Orion Weller and Kathryn Ricci and Marc Marone and Antoine Chaffin and Dawn Lawrie and Benjamin Van Durme},
  year={2025},
  eprint={2507.11412},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.11412},
}
```
