---
base_model:
- google/t5-v1_1-xxl
datasets:
- allenai/tulu-v2-sft-mixture
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- language-modeling
- causal-lm
- bias-analysis
- cognitive-bias
---
Model Card for T5-Tulu
Model Details
Model Description
This 🤗 Transformers model was finetuned using LoRA adapters for the paper:
"Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs" (Hugging Face Paper, arXiv)
We study whether cognitive biases in LLMs emerge from pretraining, instruction tuning, or training randomness.
This is one of three versions trained identically except for the random seed.
- Model type: encoder-decoder based transformer
- Language(s): English
- License: Apache 2.0
- Finetuned from: google/t5-v1_1-xxl
- Paper: https://arxiv.org/abs/2507.07186
- Project Page: https://itay1itzhak.github.io/planted-in-pretraining
- Repository: https://github.com/itay1itzhak/planted-in-pretraining
Uses
Direct Use
Intended for research on cognitive biases in LLMs; used to test the causal impact of pretraining versus instruction tuning on bias emergence.
Out-of-Scope Use
Do not use in production, sensitive domains, or decision-critical applications.
How to Get Started with the Model
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# T5 is an encoder-decoder model, so use the seq2seq class rather than a causal-LM class.
model = AutoModelForSeq2SeqLM.from_pretrained("itay1itzhak/T5-Tulu-Seed-2")
tokenizer = AutoTokenizer.from_pretrained("itay1itzhak/T5-Tulu-Seed-2")

inputs = tokenizer("Example input?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
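Because the model was finetuned with LoRA, the repository may ship adapter weights rather than a fully merged checkpoint. If that is the case, the adapter can be attached to the base model via PEFT; the following is a minimal sketch assuming the adapter files live under the same repo id:

```python
# Sketch only: load the base T5 checkpoint and attach the LoRA adapter with PEFT.
# Assumes itay1itzhak/T5-Tulu-Seed-2 contains PEFT adapter files; if the repo holds
# merged weights, the loading code above is sufficient.
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("google/t5-v1_1-xxl")
model = PeftModel.from_pretrained(base, "itay1itzhak/T5-Tulu-Seed-2")
tokenizer = AutoTokenizer.from_pretrained("itay1itzhak/T5-Tulu-Seed-2")
```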
Training Details
- Finetuning method: LoRA (high-rank, rank ∈ [64, 512]); an illustrative configuration sketch follows this list
- Instruction data: Tulu-2
- Seeds: 3 per setting to evaluate randomness effects
- Batch size: 128 (OLMo) / 64 (T5)
- Learning rate: 1e-6 to 1e-3
- Steps: ~5.5k (OLMo) / ~16k (T5)
- Mixed precision: fp16 (OLMo) / bf16 (T5)
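The hyperparameters above translate into a standard PEFT setup. The sketch below is illustrative only: the rank (256), lora_alpha, and target modules are assumptions consistent with the ranges reported above, not the exact values used in the paper.

```python
# Illustrative LoRA configuration for T5-v1_1-xxl; rank, alpha, and target
# modules are assumptions within the ranges reported in this card.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSeq2SeqLM

base = AutoModelForSeq2SeqLM.from_pretrained(
    "google/t5-v1_1-xxl", torch_dtype=torch.bfloat16  # bf16, as reported for T5
)
lora_config = LoraConfig(
    r=256,                      # paper reports rank in [64, 512]
    lora_alpha=256,             # assumed; not specified in this card
    target_modules=["q", "v"],  # assumed T5 attention projections
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only adapter parameters should be trainable
```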
Evaluation
- Evaluated on 32 cognitive biases from Itzhak et al. (2024) and Malberg et al. (2024)
- Metrics: mean bias score, PCA clustering, MMLU accuracy (an illustrative aggregation sketch follows this list)
- Findings: Biases primarily originate in pretraining; randomness introduces moderate variation
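As a rough illustration of how such metrics can be aggregated (this is not the paper's evaluation code), the sketch below averages hypothetical per-bias scores for three seeds and projects the 32-dimensional score vectors with PCA to inspect how the models cluster:

```python
# Illustration only: `bias_scores` holds hypothetical data, one score per evaluated bias.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
bias_scores = {
    "T5-Tulu-Seed-0": rng.random(32),
    "T5-Tulu-Seed-1": rng.random(32),
    "T5-Tulu-Seed-2": rng.random(32),
}

# Mean bias score per model.
print({name: float(scores.mean()) for name, scores in bias_scores.items()})

# 2D PCA projection of the per-bias score vectors for clustering inspection.
matrix = np.stack(list(bias_scores.values()))
coords = PCA(n_components=2).fit_transform(matrix)
print(dict(zip(bias_scores, coords.round(3).tolist())))
```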
Environmental Impact
- Hardware: 4× NVIDIA A40
- Estimated time: ~120 GPU hours/model
Technical Specifications
- Architecture: T5-11B
- Instruction dataset: Tulu-2
Citation
@misc{itzhak2025plantedpretrainingswayedfinetuning,
title={Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs},
author={Itay Itzhak and Yonatan Belinkov and Gabriel Stanovsky},
year={2025},
eprint={2507.07186},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2507.07186},
}