Llama-3.1-8B-LinkedIn-Finetune
This model is a QLoRA fine-tuned version of Meta-Llama-3.1-8B-Instruct, trained on a curated dataset of 6,200 viral-style LinkedIn posts inspired by top creators such as Lara Acosta, Matt Gray, and Mischa G.
It is optimized for generating high-conversion, authentic, and emotionally resonant content, particularly for founders, creators, and professionals looking to grow their influence and inbound leads on LinkedIn.
Model Details
Model Description
- Base model: meta-llama/Meta-Llama-3.1-8B-Instruct
- Fine-tuning method: QLoRA (8-bit quantization, rank = 64)
- Context length: 4096 tokens
- Training samples: 6,200 (95% train / 5% eval)
- Hardware: NVIDIA H100 80 GB GPU
- Precision: bfloat16 with 8-bit loading
- Epochs: 5
- Optimizer: AdamW (PyTorch implementation)
- Learning rate: 3e-4 with cosine schedule
- LoRA target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj, lm_head
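For readers unfamiliar with the adapter math, LoRA replaces the full weight update of each targeted module with a trainable rank-r product added to the frozen base weight. The NumPy sketch below illustrates the rank-64 update used here; the dimensions are illustrative, not the actual Llama projection sizes:

```python
import numpy as np

# Illustrative dimensions; e.g., the real q_proj in Llama-3.1-8B is 4096x4096.
d_out, d_in, r, alpha = 512, 512, 64, 128

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (zero-init)

x = rng.normal(size=(d_in,))
# LoRA forward pass: base output plus scaled low-rank update.
y = W @ x + (alpha / r) * (B @ (A @ x))
```

Because B is initialized to zero, the adapter is an exact no-op at the start of training, and only the small A/B matrices (rather than the full weight) receive gradients.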
Training Configuration
| Category | Value |
|---|---|
| Batch size (train/eval) | 8 / 8 |
| Gradient accumulation | 2 |
| Warmup ratio | 0.05 |
| Weight decay | 0.01 |
| Max grad norm | 1.0 |
| Dropout (LoRA) | 0.05 |
| Learning rate scheduler | Cosine |
| Save/eval frequency | Every 200 steps |
| Total checkpoints kept | 5 |
| Logging | W&B (llama-3.1-8b-finetune) |
| Optimizer | AdamW (β₁ = 0.9, β₂ = 0.999, ε = 1e-8) |
| Quantization | 8-bit with llm_int8_threshold = 6.0 |
| Mixed precision | bf16 (True) |
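The numbers in the table combine in a predictable way; a quick sanity check of the effective batch size and approximate step counts, assuming the stated 95% train split:

```python
import math

per_device_batch = 8
grad_accum = 2
effective_batch = per_device_batch * grad_accum  # 16 samples per optimizer step

train_samples = int(6200 * 0.95)                 # 5890 training posts
steps_per_epoch = math.ceil(train_samples / effective_batch)
total_steps = steps_per_epoch * 5                # 5 epochs

print(effective_batch, steps_per_epoch, total_steps)
```

At a save/eval interval of 200 steps this works out to roughly nine saves over the run, of which only the five most recent checkpoints are retained.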
Dataset
The dataset consists of over 6,000 real viral posts from top creators, expanded synthetically to ~6,200 samples using generative augmentation for tone, structure, and narrative diversity.
Each post follows the LinkedIn-native storytelling format (hook → story → lesson → CTA) with labeled stylistic attributes such as authenticity, pacing, and emotional tone.
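A record in this format might look like the following. The field names here are hypothetical, chosen only to illustrate the hook → story → lesson → CTA structure; the actual dataset schema is not published with this card:

```python
# Hypothetical example record; all field names and values are illustrative.
sample = {
    "hook": "I got fired from my own startup. Best thing that ever happened to me.",
    "story": "Three years in, the board voted me out. I spent a month angry...",
    "lesson": "Your identity is not your job title.",
    "cta": "What's a setback that turned into a setup? Comment below.",
    "attributes": {
        "authenticity": "high",
        "pacing": "short-lines",
        "emotional_tone": "vulnerable",
    },
}

# The four narrative sections concatenate into one training post.
post = "\n\n".join(sample[k] for k in ("hook", "story", "lesson", "cta"))
print(post)
```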
Use Cases
Direct Use
- Generating high-quality LinkedIn posts or storytelling templates.
- Writing thought leadership content for founders or coaches.
- Producing outbound copy for cold outreach that feels "human."
Downstream Use
- Integrate into marketing automation tools or content CRMs.
- Use as a base for persona-tuned agents (e.g., a "Matt-style writer").
- Fine-tune further for niche B2B verticals (e.g., SaaS, AI, VC).
Out-of-Scope
- Political or sensitive opinion generation.
- Automated spam or fake persona creation.
How to Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "somieee20/llama-3.1-8b-linkedin"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load in bfloat16 and place the model on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Write a viral LinkedIn post about learning from startup failures."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300, temperature=0.8, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
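Since the base model is the Instruct variant, wrapping the prompt in the Llama 3.1 chat template usually produces better results than raw text. In practice, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` does this for you; the sketch below builds the same format by hand purely to show what the model actually sees:

```python
# Llama 3.1 chat format, constructed manually for illustration only.
system = "You write viral, authentic LinkedIn posts."
user = "Write a viral LinkedIn post about learning from startup failures."

prompt = (
    "<|begin_of_text|>"
    "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
print(prompt)
```

The trailing assistant header leaves the model positioned to generate its reply, which is exactly what `add_generation_prompt=True` produces.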