Llama-3-8b-tagalog-v1:

USAGE

This is intended primarily as a chat model.

Use the "Human"/"Assistant" turn format and prompt in Tagalog:

"\nHuman: INPUT\nAssistant:"
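The template above can be sketched as a small helper (the function name and sample input are illustrative, not part of the model card):

```python
# Minimal sketch of assembling the expected chat prompt.
def build_prompt(user_input: str) -> str:
    # Wrap the user's Tagalog input in the Human/Assistant template;
    # the model generates its reply after the trailing "Assistant:" tag.
    return f"\nHuman: {user_input}\nAssistant:"

prompt = build_prompt("Kumusta ka?")
print(prompt)
```

The completion returned by the model should then be read from the text generated after "Assistant:".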

HYPERPARAMS

  • Trained for 1 epoch
  • rank: 32
  • lora alpha: 32
  • lr: 2e-4
  • batch size: 2
  • grad steps: 4
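For reference, the hyperparameters above could be expressed as a peft/transformers configuration roughly like the following; this is a hedged sketch, not the exact training script (target modules and output directory are assumptions):

```python
# Hypothetical config fragment mirroring the listed hyperparameters.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=32,              # rank
    lora_alpha=32,     # lora alpha
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    num_train_epochs=1,
    learning_rate=2e-4,
    per_device_train_batch_size=2,    # batch size
    gradient_accumulation_steps=4,    # grad steps
    output_dir="outputs",             # assumed path
)
```

In practice the model card says training was done with Unsloth and TRL, which accept equivalent settings.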

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

WARNINGS AND DISCLAIMERS

Note that the model may switch back to English (while still understanding Tagalog inputs) or produce clunky results.

Finally, this model is not guaranteed to produce aligned or safe outputs, nor is it meant for production use. Use at your own risk!
