# qwen3-30b-a3b-abliterated-lora

This is a LoRA adapter extracted from a language model using [mergekit](https://github.com/arcee-ai/mergekit).

## LoRA Details

This LoRA adapter was extracted from mlabonne/Qwen3-30B-A3B-abliterated and uses Qwen/Qwen3-30B-A3B as its base model. The extracted adapter holds roughly 212M parameters and follows the `qwen3moe` architecture.
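
Conceptually, `mergekit-extract-lora` approximates the difference between the fine-tuned weights and the base weights with a low-rank factorization, capped here at rank 4 via `--max-rank`. The snippet below is a conceptual sketch of that idea for a single weight matrix, using a plain truncated SVD; it is not mergekit's actual implementation.

```python
# Conceptual sketch only (not mergekit's code): derive rank-r LoRA factors
# from the difference between a fine-tuned weight matrix and its base weight.
import torch

def extract_lora(w_finetuned: torch.Tensor, w_base: torch.Tensor, max_rank: int = 4):
    delta = w_finetuned - w_base                      # weight delta to approximate
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    r = min(max_rank, s.numel())
    lora_b = u[:, :r] * s[:r]                         # shape (out_features, r)
    lora_a = vh[:r, :]                                # shape (r, in_features)
    return lora_a, lora_b                             # lora_b @ lora_a ≈ delta
```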

### Parameters

The following command was used to extract this LoRA adapter:

```sh
/venv/main/bin/mergekit-extract-lora \
  --model mlabonne/Qwen3-30B-A3B-abliterated \
  --base-model Qwen/Qwen3-30B-A3B \
  --out-path qwen3-30b-a3b-abliterated-lora \
  --cuda \
  --max-rank 4
```
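
To apply the adapter, load it on top of the base model with PEFT. A minimal usage sketch, assuming the adapter is published as `chenrm/qwen3-30b-a3b-abliterated-lora` and that your installed `transformers`/`peft` versions support the Qwen3 MoE architecture:

```python
# Minimal usage sketch: attach the extracted LoRA to the base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-30B-A3B"
adapter_id = "chenrm/qwen3-30b-a3b-abliterated-lora"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto"
)

model = PeftModel.from_pretrained(base_model, adapter_id)

# Optionally fold the adapter back into the base weights, which approximately
# reproduces the abliterated model (up to the rank-4 truncation).
model = model.merge_and_unload()
```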