Ettin Decay Phase Data

License: MIT

Phase 3 of 3: Premium data sources for the final training phase (100B tokens), following the ProLong recipe.

This dataset contains the decay phase data used to train all Ettin encoder and decoder models. This final phase uses premium data sources with an emphasis on long-form content and educational materials. The data is provided in MDS (Mosaic Data Shard) format, ready for use with Composer and the ModernBERT training repository.

Abstract

The large language model (LLM) community focuses almost exclusively on decoder-only language models, since they are easier to use for text generation. However, a large subset of the community still uses encoder-only models for tasks such as classification or retrieval. Previous work has attempted to compare these architectures, but is forced to make comparisons with models that have different numbers of parameters, training techniques, and datasets. We introduce the SOTA open-data Ettin suite of models: paired encoder-only and decoder-only models ranging from 17 million parameters to 1 billion, trained on up to 2 trillion tokens. Using the same recipe for both encoder-only and decoder-only models produces SOTA recipes in both categories for their respective sizes, beating ModernBERT as an encoder and Llama 3.2 and SmolLM2 as decoders. Like previous work, we find that encoder-only models excel at classification and retrieval tasks while decoders excel at generative tasks. However, we show that adapting a decoder model to encoder tasks (and vice versa) through continued training is subpar compared to using only the reverse objective (i.e. a 400M encoder outperforms a 1B decoder on MNLI, and vice versa for generative tasks). We open-source all artifacts of this study including training data, training order segmented by checkpoint, and 200+ checkpoints to allow future work to analyze or extend all aspects of training.

πŸ“Š Data Composition

Data Source               Tokens (B)   Percentage   Description
DCLM (Dolmino)            26.0         31.9%        Highest-quality web crawl data
Code Repos                20.2         24.7%        Premium code repositories
Books                     10.5         12.9%        Literature and reference books
Math (Dolmino)             5.0          6.1%        Mathematical content (premium)
StackExchange (Dolmino)    4.0          4.9%        High-quality Q&A content
Tulu Flan                  4.1          5.0%        Instruction-following data
Arxiv                      3.0          3.7%        Academic preprints
Wikipedia                  3.0          3.7%        Encyclopedia articles
Textbooks                  0.5          0.6%        Educational textbooks
Total                     81.6        100.0%        Premium quality mixture

🎯 Key Features of Decay Phase

Training Characteristics

  • Aggressive LR Decay: Learning rate is annealed to 2% of its peak value (see the sketch after this list)
  • Long Context: Maintains the 8K-token sequence length established in mid-training
  • Lower Masking: 5% masking ratio for encoders (down from 30% in earlier phases)
  • Quality Over Quantity: Focus on premium sources rather than raw scale
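
As a concrete reading of "annealed to 2% of peak", here is a minimal sketch of the endpoint arithmetic. The linear shape and the peak value are illustrative assumptions; the card specifies only the 0.02x endpoint:

def decay_lr(step: int, total_steps: int, peak_lr: float) -> float:
    # Anneal from peak_lr down to 0.02 * peak_lr over the decay phase.
    # The linear shape is illustrative; only the endpoint is specified.
    frac = min(step / total_steps, 1.0)
    return peak_lr * (1.0 - 0.98 * frac)

# At the last step the learning rate reaches 2% of its peak
# (peak value 6e-4 is illustrative).
assert abs(decay_lr(1000, 1000, 6e-4) - 0.02 * 6e-4) < 1e-12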

πŸš€ Usage

For the full training pipeline, please see the ModernBERT training repository: https://github.com/AnswerDotAI/ModernBERT

Direct Access

from streaming import StreamingDataset

# Load the decay-phase shards with MosaicML Streaming.
# Note: each data source lives in its own MDS subfolder (see Structure below),
# so you may need to point `remote` at a specific folder to load one source.
dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/ettin-decay-data',
    local='/tmp/ettin-decay-data',
    shuffle=True
)

# Access premium quality samples
for sample in dataset:
    text = sample['text']  # high-quality, long-form content
    # Process your data...
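
Since StreamingDataset is a PyTorch IterableDataset, it can also be fed to a standard DataLoader for batching; a minimal sketch (the batch size is illustrative):

from torch.utils.data import DataLoader

# The default collate function groups the 'text' field of each
# sample in a batch into a list of strings.
loader = DataLoader(dataset, batch_size=8)

for batch in loader:
    texts = batch['text']  # list of up to 8 documents
    break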

πŸ“ Structure

Each folder contains premium quality data sources in MDS format:

  • arxiv/ - Academic papers from ArXiv
  • books/ - Literature and reference books (expanded)
  • books_2/ - Additional book collections
  • code_repos/ - Premium code repositories
  • dclm_dolmino/ - Highest-quality filtered web data
  • math_dolmino/ - Premium mathematical content
  • stackexchange_dolmino/ - Top-quality Q&A content
  • stackexchange_dolmino_dup/ - Additional curated Q&A
  • stackexchange_dolmino_dup_2/ - Extra Q&A collections
  • textbooks/ - Educational textbook content
  • textbooks_2/ - Additional textbook collections
  • tulu_flan/ - Instruction-following examples
  • wikipedia/ - Wikipedia articles
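
Because each folder is a self-contained MDS directory, you can also fetch just one source with huggingface_hub and stream it locally. A sketch, assuming the arxiv/ folder and a 'text' field:

import os
from huggingface_hub import snapshot_download
from streaming import StreamingDataset

# Download only the ArXiv shards from the dataset repository.
root = snapshot_download(
    repo_id="jhu-clsp/ettin-decay-data",
    repo_type="dataset",
    allow_patterns="arxiv/*",
)

# Stream from the local MDS directory (no remote needed).
arxiv = StreamingDataset(local=os.path.join(root, "arxiv"), shuffle=False)
print(next(iter(arxiv))["text"][:200])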

πŸ’‘ Usage in Cross-Objective Training

This decay phase data is also used for cross-objective training experiments (a sketch of the two objectives follows the list):

  • Decoder β†’ Encoder: Training decoders with MLM on this premium data
  • Encoder β†’ Decoder: Training encoders with CLM on this premium data
  • Extended Training: 50B additional tokens for cross-objective experiments
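
To illustrate the two objectives on identical data, Hugging Face transformers' DataCollatorForLanguageModeling can produce both kinds of labels. This is not the actual Composer training code, and the tokenizer choice is an assumption:

from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Illustrative tokenizer; Ettin training uses the ModernBERT pipeline.
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
examples = [tokenizer("The same decay-phase text serves either objective.")]

# Decoder -> Encoder: MLM labels at the decay-phase 5% masking ratio.
mlm_batch = DataCollatorForLanguageModeling(
    tokenizer, mlm=True, mlm_probability=0.05
)(examples)

# Encoder -> Decoder: CLM labels (the model shifts them internally).
clm_batch = DataCollatorForLanguageModeling(tokenizer, mlm=False)(examples)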

πŸ”— Related Resources

Citation

@misc{weller2025seqvsseqopen,
      title={Seq vs Seq: An Open Suite of Paired Encoders and Decoders}, 
      author={Orion Weller and Kathryn Ricci and Marc Marone and Antoine Chaffin and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2507.11412},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.11412}, 
}