# Random LoRA Adapter for tiny-random-Llama-3
This is a randomly initialized LoRA adapter for the `AlignmentResearch/Llama-3.3-Tiny-Instruct` model.
## Details

- Base model: `AlignmentResearch/Llama-3.3-Tiny-Instruct`
- Seed: 403480
- LoRA rank: 16
- LoRA alpha: 32
- Target modules: `q_proj`, `v_proj`, `k_proj`, `o_proj`
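For reference, here is a minimal sketch of how an adapter with this configuration could be produced with `peft`. The creation script is not included in this repo, so the details (in particular the use of `init_lora_weights=False` to get fully random weights) are assumptions:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

torch.manual_seed(403480)  # seed listed in the details above

base_model = AutoModelForCausalLM.from_pretrained(
    "AlignmentResearch/Llama-3.3-Tiny-Instruct"
)

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
    task_type="CAUSAL_LM",
    # Assumption: random init for both LoRA matrices; by default
    # peft zero-initializes lora_B, making the adapter a no-op.
    init_lora_weights=False,
)
model = get_peft_model(base_model, config)
model.save_pretrained("Llama-3.3-Tiny-Instruct-lora-403480")
```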
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("AlignmentResearch/Llama-3.3-Tiny-Instruct")
tokenizer = AutoTokenizer.from_pretrained("AlignmentResearch/Llama-3.3-Tiny-Instruct")

# Apply the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "AlignmentResearch/Llama-3.3-Tiny-Instruct-lora-403480")
```
This adapter was created for testing purposes and contains random weights.
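Because the weights are random, generations will not be meaningful, but a quick call confirms the adapter loads and runs end to end (continuing from the snippet above; the prompt is just an example):

```python
# Generate a few tokens to verify the adapter-wrapped model works.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```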