Model Card for schaeff/gpt2-small_LNFree300

Associated publication: Transformers Don't Need LayerNorm at Inference Time: Scaling LayerNorm Removal to GPT-2 XL and the Implications for Mechanistic Interpretability (arXiv:2507.02559)

Associated GitHub: removing-layer-norm

This model is based on openai-community/gpt2 and was finetuned on OpenWebText for 300 iterations with 0.5M tokens per iteration. During finetuning, the LayerNorm modules were sequentially disabled. More details on the disabling procedure can be found in the associated publication.

Usage

This model uses the standard GPT2LMHeadModel architecture to avoid requiring trust_remote_code=True. While LayerNorm blocks are technically present, they have been effectively disabled through parameter manipulation.

How LayerNorm is disabled:

  • Epsilon values: Set to 1e12 (extremely large), so the variance term inside LayerNorm becomes negligible and the normalization reduces to a division by sqrt(1e12) = 1e6
  • Scale parameters: Set to 1e6 to cancel that division, leaving the input essentially unchanged (up to mean subtraction)

This approach maintains compatibility with the standard GPT-2 architecture while effectively creating a LayerNorm-free model.
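
To see why this works, here is a minimal standalone sketch (not part of the original workflow) checking that a LayerNorm configured this way acts as an identity up to the mean subtraction:

import torch

# With eps = 1e12, the denominator sqrt(var + eps) is ~sqrt(1e12) = 1e6,
# and a weight of 1e6 cancels it, so only the mean subtraction remains.
x = torch.randn(4, 768)
ln = torch.nn.LayerNorm(768, eps=1e12)
ln.weight.data.fill_(1e6)
ln.bias.data.zero_()  # the real model keeps its trained biases; zeroed here for a clean check
print(torch.allclose(ln(x), x - x.mean(dim=-1, keepdim=True), atol=1e-4))  # True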

Complete LayerNorm removal: If you want to fully remove the LayerNorm operations, you can replace the ln_1, ln_2, and ln_f modules with identity functions, as sketched below.
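
For the plain transformers model, a minimal sketch of that replacement (assuming the loading shown in the next section) looks like this:

import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("schaeff/gpt2-small_LNFree300")

# Swap every (parameter-disabled) LayerNorm for an identity so the forward
# pass skips normalization entirely instead of relying on the parameter hack.
for block in model.transformer.h:
    block.ln_1 = torch.nn.Identity()
    block.ln_2 = torch.nn.Identity()
model.transformer.ln_f = torch.nn.Identity()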

Loading instructions:

You can load the model with transformers:

from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("schaeff/gpt2-small_LNFree300")
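
As a quick sanity check (a sketch assuming the standard GPT-2 tokenizer), you can generate a short continuation:

from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
inputs = tokenizer("The quick brown fox", return_tensors="pt")
# Greedy decoding; the LN-free model should still produce coherent text.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))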

The LayerNorm modules inside transformers will not affect the model due to the parameter manipulation. However, this is a bit hacky, and we recommend properly replacing the LayerNorm modules with identities in either TransformerLens or NNSight.

TransformerLens and NNSight loading code

import torch
from transformers import GPT2LMHeadModel
from transformer_lens import HookedTransformer

model = GPT2LMHeadModel.from_pretrained("schaeff/gpt2-small_LNFree300").to("cpu")

# Undo the hacky LayerNorm disabling: rescale the weights and restore eps
# so the modules act as standard LayerNorms again before fold_ln is applied
for block in model.transformer.h:
    block.ln_1.weight.data = block.ln_1.weight.data / 1e6
    block.ln_1.eps = 1e-5
    block.ln_2.weight.data = block.ln_2.weight.data / 1e6
    block.ln_2.eps = 1e-5
model.transformer.ln_f.weight.data = model.transformer.ln_f.weight.data / 1e6
model.transformer.ln_f.eps = 1e-5

# Properly replace the LayerNorms with identities
# (TransformerLens names them ln1/ln2/ln_final, without underscores)
def removeLN(transformer_lens_model):
    for i in range(len(transformer_lens_model.blocks)):
        transformer_lens_model.blocks[i].ln1 = torch.nn.Identity()
        transformer_lens_model.blocks[i].ln2 = torch.nn.Identity()
    transformer_lens_model.ln_final = torch.nn.Identity()

# TransformerLens:
hooked_model = HookedTransformer.from_pretrained("gpt2", hf_model=model, fold_ln=True, center_unembed=False).to("cpu")
removeLN(hooked_model)

# NNSight:
from nnsight.models.UnifiedTransformer import UnifiedTransformer

model_nnsight = UnifiedTransformer(model="gpt2", hf_model=model, fold_ln=True, center_unembed=False).to("cpu")
removeLN(model_nnsight)

This example code is based on Logan Riggs' comment.
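
After this setup, the LN-free model behaves like any other HookedTransformer; for example (a small usage sketch):

# Loss on a sample prompt with the LN-free hooked model
loss = hooked_model("The quick brown fox jumps over the lazy dog", return_type="loss")
print(loss.item())

# Activations can be cached as usual for interpretability work
logits, cache = hooked_model.run_with_cache("The quick brown fox")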

We recommend looking at the removing-layer-norm repository to see the entire workflow of removing, uploading, and loading LN-free models. In particular, see the function remove_layernorm in utils.py for details on the parameter hack, and eval.py for loading.

Model Collection

This model is part of a collection of LayerNorm-free models. The table below provides links and details.

Evaluation results of LN-free, vanilla fine-tuned, and original GPT-2 models

Reported values are mean cross-entropy losses over 10.2M tokens for The Pile and The Pile filtered, and over 4.5M tokens for the OpenWebText (OWT) validation set. For each model size and dataset, the lowest loss is highlighted in bold, and the loss difference between the LN-free model and the best-performing model is shown in brackets.

| Model | FT steps | OWT (val) | The Pile | The Pile filtered |
|---|---|---|---|---|
| OpenAI GPT-2 Small original | 0 | 3.1006 | **2.8450** | **2.7899** |
| schaeff GPT-2 Small vanilla | 300 | **3.0126** | 2.8511 | 2.8112 |
| schaeff GPT-2 Small LN-free | 300 | 3.0797 [+0.0671] | 2.8852 [+0.0402] | 2.8757 [+0.0858] |
| OpenAI GPT-2 Medium original | 0 | 2.8145 | **2.5163** | **2.5390** |
| schaeff GPT-2 Medium vanilla | 500 | **2.7390** | 2.5752 | 2.5724 |
| schaeff GPT-2 Medium LN-free | 500 | 2.7642 [+0.0252] | 2.6579 [+0.1416] | 2.6352 [+0.0962] |
| OpenAI GPT-2 Large original | 0 | 2.6623 | **2.5320** | **2.4347** |
| schaeff GPT-2 Large vanilla | 600 | **2.6240** | 2.6233 | 2.5074 |
| schaeff GPT-2 Large LN-free | 600 | 2.6384 [+0.0144] | 2.7504 [+0.2184] | 2.5159 [+0.0812] |
| OpenAI GPT-2 XL original | 0 | 2.5567 | **2.4436**¹ | **2.3739** |
| schaeff GPT-2 XL vanilla | 800 | **2.4799** | 2.4673 | 2.3821 |
| schaeff GPT-2 XL LN-free | 800 | 2.5052 [+0.0253] | 130.2197² | 2.3992 [+0.0253] |

Footnotes:

  1. GPT-2 XL original: median 1.0103, 95% percentile range [0.0005, 10.6193], 99.9% percentile range [≈0.0000, 43.0064]
  2. GPT-2 XL LN-free: median 1.0937, 95% percentile range [0.0004, 10.7548], 99.9% percentile range [≈0.0000, 48.6459]
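
For reference, computing such a mean cross-entropy with the Hugging Face model could be sketched as follows (token_batches is a hypothetical iterator; the actual evaluation code lives in eval.py):

import torch

def mean_cross_entropy(model, token_batches):
    # token_batches: iterable of LongTensors of shape (batch, seq_len)
    total_loss, total_positions = 0.0, 0
    with torch.no_grad():
        for batch in token_batches:
            out = model(batch, labels=batch)  # HF shifts the labels internally
            n = batch.numel() - batch.shape[0]  # predicted positions in this batch
            total_loss += out.loss.item() * n
            total_positions += n
    return total_loss / total_positions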

Citation

If you have found our work useful, please cite it as:

@misc{gpt2layernorm2025,
  author = {Baroni, Luca and Khara, Galvin and Schaeffer, Joachim and Subkhankulov, Marat and Heimersheim, Stefan},
  title = {Transformers Don't Need LayerNorm at Inference Time: Scaling LayerNorm Removal to GPT-2 XL and the Implications for Mechanistic Interpretability},
  year = {2025},
  eprint = {2507.02559},
  archivePrefix = {arXiv},
  primaryClass = {cs.LG},
  url = {https://arxiv.org/abs/2507.02559v1}
}