# KillChain-8B
This model is a fully fine-tuned version of Qwen/Qwen3-8B on the WNT3D/Ultimate-Offensive-Red-Team dataset.
vLLM deployment is supported, and a custom web GUI is coming soon.
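For vLLM, a minimal sketch of an OpenAI-compatible server launch is shown below. Only the model id comes from this card; the `--dtype` and `--max-model-len` flags are illustrative choices (bfloat16 matches the training dtype, 4096 matches `sequence_len`) and should be adjusted for your hardware:

```shell
# Sketch of a vLLM deployment (requires vLLM installed and a capable GPU);
# flags beyond the model id are assumptions, not taken from this card.
vllm serve MrPibb/KillChain-8B \
  --dtype bfloat16 \
  --max-model-len 4096
```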
KillChain-8B is intended for offensive security and red-team research and testing in authorized environments.
## Training configuration

Trained with axolotl version `0.13.0.dev0`:

```yaml
base_model: Qwen/Qwen3-8B
model_type: Qwen3ForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true

datasets:
  - path: WNT3D/Ultimate-Offensive-Red-Team
    type: alpaca

output_dir: /workspace/output/killchain-8b
val_set_size: 0.02
sequence_len: 4096

special_tokens:
  pad_token: "<|pad|>"
pad_to_max_length: true

bf16: true
fp16: false
dtype: bfloat16
torch_dtype: bfloat16
use_cache: false
attn_implementation: flash_attention_2

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false

micro_batch_size: 4
gradient_accumulation_steps: 2
num_epochs: 3
learning_rate: 1.5e-5
optimizer: adamw_torch
lr_scheduler: cosine
warmup_steps: 200
weight_decay: 0.1

logging_steps: 10
save_steps: 0
save_total_limit: 1
save_only_model: true

dataloader_num_workers: 4
dataloader_pin_memory: true
dataset_processes: 4
use_vllm: false

deepspeed: |
  {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 2,
    "zero_optimization": {
      "stage": 2,
      "overlap_comm": true,
      "contiguous_gradients": true
    },
    "bf16": {
      "enabled": true
    }
  }

wandb_mode: disabled
```
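For reference, the schedule implied by this config (linear warmup over `warmup_steps: 200` to a `learning_rate: 1.5e-5` peak, then cosine decay) can be sketched in plain Python. The `lr_at` helper and the 10,000-step horizon are illustrative assumptions, not part of the training code; likewise, the effective per-GPU batch size is just `micro_batch_size` × `gradient_accumulation_steps` = 8 (times GPU count for the global batch):

```python
import math

# Hypothetical helper mirroring the config above: linear warmup for
# `warmup_steps`, then cosine decay toward zero over the remaining steps.
def lr_at(step, total_steps, peak_lr=1.5e-5, warmup_steps=200):
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

peak = lr_at(200, 10_000)  # equals the configured learning rate at warmup end
```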
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "MrPibb/KillChain-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
# Load in bfloat16 and shard across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Provide a list of twenty XSS payloads."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
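The generation call above samples with `temperature=0.7` and `top_p=0.9`. As a minimal pure-Python sketch (an illustration of the idea, not the transformers implementation), temperature rescales the logits before softmax, and nucleus (top-p) sampling keeps only the smallest set of tokens whose cumulative probability reaches `top_p`, renormalizing over that set:

```python
import math

# Illustrative sketch of temperature + top-p filtering over raw logits.
def sample_filter(logits, temperature=0.7, top_p=0.9):
    # Temperature: divide logits, then take a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus (top-p): keep the highest-probability tokens until their
    # cumulative mass reaches top_p, then renormalize over the kept set.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}
```

Lower temperature sharpens the distribution (fewer tokens survive the nucleus cut); higher top-p admits more of the tail.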