# Q3.5-27B-Opus-DA
Q3.5-27B-Opus-DA (Qwen3.5 Distilled-Abliterated) is a reasoning-focused model built on Qwen/Qwen3.5-27B. It is optimized for rich, detailed, and context-aware reasoning: multi-stage training on Opus reasoning traces is combined with refusal-direction analysis and ablation-based training to reduce internal refusal behaviors while preserving strong reasoning and instruction-following performance.
This model is intended strictly for research and learning purposes. Due to reduced internal refusal mechanisms, it may generate sensitive or unrestricted content. Users assume full responsibility for how the model is used. The authors and hosting platform disclaim any liability for generated outputs.
Note: This model is experimental and may generate artifacts.
## Key Highlights
- Multi-Stage Training on Opus Reasoning Traces: Fine-tuned across multiple stages using high-quality reasoning traces from Claude Opus, enabling deep reasoning alignment.
- Distilled Abliterated (DA): Applies refusal-direction analysis and ablation-based strategies to reduce internal refusal behaviors while preserving reasoning quality (see the sketch after this list).
- Qwen3.5 Base: Built on the powerful Qwen/Qwen3.5-27B backbone for strong reasoning and generation performance.
- Instruction + Reasoning Fusion: Handles both instruction-following and complex multi-step reasoning tasks seamlessly.
- High-Coherence Outputs: Maintains consistency across long generations with improved contextual grounding.
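
To make the DA step concrete, here is a minimal sketch of refusal-direction analysis and ablation. Everything in it (the layer choice, the prompt sets, and the `refusal_direction`/`ablate_direction` helpers) is a hypothetical illustration of the general technique, not the actual code used to train this model:

```python
import torch

@torch.no_grad()
def refusal_direction(model, tokenizer, harmful_prompts, harmless_prompts, layer=-1):
    """Estimate a 'refusal direction' as the normalized mean difference of
    last-token residual activations between refused and benign prompts."""
    def mean_hidden(prompts):
        acts = []
        for prompt in prompts:
            ids = tokenizer(prompt, return_tensors="pt").to(model.device)
            out = model(**ids, output_hidden_states=True)
            acts.append(out.hidden_states[layer][0, -1])  # last-token state
        return torch.stack(acts).mean(dim=0)

    direction = mean_hidden(harmful_prompts) - mean_hidden(harmless_prompts)
    return direction / direction.norm()

def ablate_direction(hidden, direction):
    """Project the refusal direction out of a hidden state: h - (h . d) d."""
    return hidden - (hidden @ direction)[..., None] * direction
```

In abliteration generally, this projection is applied to the residual stream via forward hooks or folded directly into the weight matrices; in this model it is combined with the multi-stage distillation training described below.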
## Datasets Used and Training Details
| Category | Details |
|---|---|
| Base Model | Qwen/Qwen3.5-27B |
| Final Model Size | 27B Parameters |
| Training Type | Multi-stage distillation + abliteration (see the training sketch below the table) |
| Objective | Preserve reasoning quality from larger models; reduce refusal behaviors via ablation strategies; improve instruction-following reliability |
| Reasoning Dataset 1 | nohurry/Opus-4.6-Reasoning-3000x-filtered |
| Reasoning Dataset 2 | Jackrong/Qwen3.5-reasoning-700x |
| Reasoning Dataset 3 | Roman1111111/claude-opus-4.6-10000x |
| Alignment / Evaluation Dataset | prithivMLmods/harm_bench |
| Training Focus | Structured reasoning, long-chain thinking, robustness across diverse prompts |
| Training Inspiration 1 | Jackrong-llm-finetuning-guide |
| Training Inspiration 2 | Unsloth Documentation |
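
Each distillation stage amounts to supervised fine-tuning on one of the reasoning-trace datasets above. The sketch below shows what such a stage could look like in plain `transformers`; the dataset field names (`prompt`, `response`) and all hyperparameters are assumptions for illustration, not the actual recipe:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM, AutoTokenizer,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-27B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3.5-27B", torch_dtype="auto")

# One of the reasoning-trace datasets listed above
dataset = load_dataset("nohurry/Opus-4.6-Reasoning-3000x-filtered", split="train")

def to_features(example):
    # Render a user/assistant pair through the chat template, then tokenize
    text = tokenizer.apply_chat_template(
        [{"role": "user", "content": example["prompt"]},
         {"role": "assistant", "content": example["response"]}],
        tokenize=False,
    )
    return tokenizer(text, truncation=True, max_length=4096)

train_set = dataset.map(to_features, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="q35-opus-da-stage1",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=1e-5,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=train_set,
    # Causal-LM collator: pads batches and derives labels from input_ids
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Later stages would repeat the same loop over the other trace datasets, with prithivMLmods/harm_bench on the alignment/evaluation side per the table above.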
## Quick Start with Transformers
```bash
pip install transformers==5.7.0
# or install the latest development build
pip install git+https://github.com/huggingface/transformers.git
```
```python
from transformers import Qwen3_5ForConditionalGeneration, AutoProcessor

# Load the model and processor
model = Qwen3_5ForConditionalGeneration.from_pretrained(
    "prithivMLmods/Q3.5-27B-Opus-DA",
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("prithivMLmods/Q3.5-27B-Opus-DA")

# Build a chat-style prompt
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Generate a highly detailed caption of a futuristic city skyline at sunset."}
        ],
    }
]
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = processor(
    text=[text],
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens before decoding
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False,
)
print(output_text)
```
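
For longer reasoning outputs, the same `generate` call accepts a larger token budget and explicit sampling settings. The values below are illustrative defaults, not tuned recommendations for this model:

```python
generated_ids = model.generate(
    **inputs,
    max_new_tokens=2048,  # leave headroom for long reasoning traces
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
```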
## Intended Use
- Reasoning & Chain-of-Thought Tasks: Deep multi-step reasoning powered by Opus-distilled traces
- Instruction Following: Hybrid prompts requiring both instruction adherence and reasoning
- Red-Teaming & Alignment Research: Evaluating reduced-refusal systems and refusal direction analysis
- Local High-Performance Deployment: Multi-GPU or quantized inference setups (see the 4-bit loading sketch after this list)
- Research on Abliteration: Studying the effects of ablation-based training on reasoning preservation
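
As a sketch of the quantized deployment path mentioned above, the model can be loaded in 4-bit with bitsandbytes (assuming `pip install bitsandbytes`; the quantization settings are illustrative, not tuned for this model):

```python
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, Qwen3_5ForConditionalGeneration

# Illustrative NF4 config; adjust for your hardware and quality needs
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = Qwen3_5ForConditionalGeneration.from_pretrained(
    "prithivMLmods/Q3.5-27B-Opus-DA",
    quantization_config=bnb_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("prithivMLmods/Q3.5-27B-Opus-DA")
```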
## Limitations & Risks
Important Note: This model intentionally minimizes built-in safety refusals.
- Sensitive Content Risk: May produce unrestricted or controversial outputs
- User Responsibility: Requires careful and ethical usage
- High Compute Demand: Large models need significant VRAM or optimized inference
- Abliteration Trade-offs: Reduced refusal may impact safety alignment and output filtering
