Model Card for T5-Tulu

Model Details

Model Description
This 🤗 Transformers model was finetuned using LoRA adapters for the paper:
"Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs" (Hugging Face Paper, arXiv). We study whether cognitive biases in LLMs emerge from pretraining, instruction tuning, or training randomness. This checkpoint is one of three versions trained with identical settings but different random seeds.

Uses

Direct Use

For research on cognitive biases in LLMs. Used to test causal impact of pretraining vs instruction tuning.

Out-of-Scope Use

Do not use in production, sensitive domains, or decision-critical applications.

How to Get Started with the Model

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# T5 is an encoder-decoder model, so it is loaded with the seq2seq class
model = AutoModelForSeq2SeqLM.from_pretrained("itay1itzhak/T5-Tulu-Seed-2")
tokenizer = AutoTokenizer.from_pretrained("itay1itzhak/T5-Tulu-Seed-2")

inputs = tokenizer("Example input?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
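
At roughly 11B parameters, the full-precision checkpoint may not fit on a single GPU. A minimal loading sketch in bfloat16 with automatic device placement follows; the dtype, device settings, and generation length are suggestions (and assume torch and accelerate are installed), not part of the released configuration:

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained(
    "itay1itzhak/T5-Tulu-Seed-2",
    torch_dtype=torch.bfloat16,  # roughly halves memory vs. fp32; assumes bf16-capable hardware
    device_map="auto",           # requires accelerate; places layers across available devices
)
tokenizer = AutoTokenizer.from_pretrained("itay1itzhak/T5-Tulu-Seed-2")

inputs = tokenizer("Example input?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))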

Training Details

  • Finetuning method: LoRA (high-rank, rank ∈ [64, 512]); an illustrative configuration sketch follows this list
  • Instruction data: Tulu-2
  • Seeds: 3 per setting to evaluate randomness effects
  • Batch size: 128 (OLMo) / 64 (T5)
  • Learning rate: 1e-6 to 1e-3
  • Steps: ~5.5k (OLMo) / ~16k (T5)
  • Mixed precision: fp16 (OLMo) / bf16 (T5)
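
A minimal sketch of what a comparable T5 LoRA setup could look like with the peft and transformers libraries. The rank, alpha, target modules, scheduler, and per-device batch size below are illustrative placeholders chosen within the ranges reported above, not the exact configuration used for this checkpoint:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments

base = AutoModelForSeq2SeqLM.from_pretrained("google/t5-v1_1-xxl")

# High-rank LoRA on the attention projections; rank/alpha are example values
lora_config = LoraConfig(
    r=256,                      # paper reports ranks in [64, 512]
    lora_alpha=512,             # assumption: alpha = 2 * r
    target_modules=["q", "v"],  # assumption: T5 attention query/value projections
    lora_dropout=0.05,
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(base, lora_config)

# Training hyperparameters mirroring the T5 settings listed above
args = Seq2SeqTrainingArguments(
    output_dir="t5-tulu-lora",
    per_device_train_batch_size=8,   # with accumulation to reach an effective batch of 64
    gradient_accumulation_steps=8,
    learning_rate=1e-4,              # paper sweeps 1e-6 to 1e-3
    max_steps=16_000,                # ~16k steps for T5
    bf16=True,                       # mixed precision for T5
)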

Evaluation

  • Evaluated on 32 cognitive biases from Itzhak et al. (2024) and Malberg et al. (2024)
  • Metrics: mean bias score, PCA clustering, MMLU accuracy (a toy aggregation sketch follows this list)
  • Findings: Biases primarily originate in pretraining; randomness introduces moderate variation
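
For intuition only, a toy sketch of the aggregation step: each model is represented by a vector of per-bias scores, the scores are averaged into a mean bias score, and PCA projects the vectors into 2D so models can be compared by where they cluster. The shapes and random scores below are hypothetical; see the paper for the actual evaluation protocol:

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical matrix: one row per model, one column per bias (32 biases)
rng = np.random.default_rng(0)
bias_scores = rng.normal(size=(6, 32))  # e.g. 6 models x 32 bias scores

mean_bias = bias_scores.mean(axis=1)                         # mean bias score per model
projected = PCA(n_components=2).fit_transform(bias_scores)   # 2D coordinates for clustering plots

print("mean bias per model:", mean_bias.round(3))
print("PCA coordinates:\n", projected.round(3))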

Environmental Impact

  • Hardware: 4× NVIDIA A40
  • Estimated time: ~120 GPU hours/model

Technical Specifications

  • Architecture: T5-11B (encoder-decoder)
  • Base model: google/t5-v1_1-xxl
  • Instruction dataset: Tulu-2

Citation

@misc{itzhak2025plantedpretrainingswayedfinetuning,
      title={Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs}, 
      author={Itay Itzhak and Yonatan Belinkov and Gabriel Stanovsky},
      year={2025},
      eprint={2507.07186},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.07186}, 
}