Tags: Text Generation · Transformers · PyTorch · Safetensors · German · llama · text-generation-inference

LLäMmlein 1B

LLäMmlein 1B is a German LLaMA model trained from scratch with our adapted TinyLlama codebase on the German portion of RedPajama V2. To improve data quality, we additionally deduplicated the dataset at the paragraph level and filtered it with a token-to-word ratio filter. The resulting dataset can be found here.

We provide three model sizes: 120M, 1B, and 7B.

Find more details on our project page and in our preprint!

Usage

You can use LLäMmlein with the transformers library. (Optional: install flash-attn for best efficiency.)

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "LSX-UniWue/LLaMmlein_1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
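
Once loaded, the model behaves like any causal language model in transformers. A minimal generation sketch (the prompt and generation parameters below are illustrative, not from the model card):

prompt = "Die Hauptstadt Deutschlands ist"
inputs = tokenizer(prompt, return_tensors="pt")
# sample up to 50 new tokens and decode the completion
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))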

Intermediate Checkpoints

In addition to the final model checkpoint, we publish intermediate checkpoints throughout the full training process as unique branches in this repository. A specific checkpoint can be loaded like this:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "LSX-UniWue/LLaMmlein_1B"
revision = "iter-00420000-ckpt"
tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
model = AutoModelForCausalLM.from_pretrained(model_id, revision=revision)
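
Since each checkpoint is published as its own branch, the available revisions can be listed with the huggingface_hub library (a sketch; assumes huggingface_hub is installed):

from huggingface_hub import list_repo_refs

refs = list_repo_refs("LSX-UniWue/LLaMmlein_1B")
# each branch corresponds to one checkpoint, e.g. "iter-00420000-ckpt"
for branch in refs.branches:
    print(branch.name)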

In addition to the model itself, each branch contains all data points that were used to train the model up to that checkpoint. In the corresponding folder, named after the checkpoint, you can find several .log files (one per GPU, depending on the number of GPUs used) in the following format:

{"time": 1739809392.679516, 
  "iter_num": 0, 
  "data_id": ["sha1:EDQMBYDCYBLDAZH3MGYM276BM2DEHPPJ", "sha1:SAJCI75DRHZZFGQORV66NB5FVWUAVLFH", "sha1:7RBZV2MCEM4TUGBBWGTFQAKTWUOGETZU", "sha1:234M32IMLZF7455AKOFWDP6HT6YXAYB4", "sha1:2BIZ7LLSHRK5GUGPZM2GM55APTDKBUG2", "sha1:OF7OI77ZT7ROXGMB6LL4RSRANX7REAYK", "sha1:LGPUOCOV3MKETI5F3IHVGZPD4M26NNJL", "sha1:SHIHUW7FJTP5YHFFV2JZ2CAHUVMKK7XG"], 
  "file_id": [0, 0, 0, 0, 0, 0, 0, 0], 
  "process_rank": 0}

Note: Our earlier models from the paper, which do not include data logging, are available at:

License

We release the LLäMmlein models under a research-only RAIL-M license. See license.md for details.
