# R3-Qwen3-8B_merged_linear_6model
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the Linear merge method.
### Models Merged
The following models were included in the merge:
- rubricreward/R3-Qwen3-8B-14k
- rubricreward/R3-Qwen3-8B-LoRA-5K-v1.1
- HLeiTR/qwen3_8b_full_15k_ckpt_125
- rubricreward/R3-Qwen3-8B-LoRA-4k
- rubricreward/R3-Qwen3-8B-4k
- HLeiTR/qwen3_8b_full_5k_ckpt_45
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: rubricreward/R3-Qwen3-8B-LoRA-4k
    parameters:
      weight: 0.2
  - model: rubricreward/R3-Qwen3-8B-4k
    parameters:
      weight: 0.15
  - model: rubricreward/R3-Qwen3-8B-14k
    parameters:
      weight: 0.15
  - model: rubricreward/R3-Qwen3-8B-LoRA-5K-v1.1
    parameters:
      weight: 0.2
  - model: HLeiTR/qwen3_8b_full_15k_ckpt_125
    parameters:
      weight: 0.15
  - model: HLeiTR/qwen3_8b_full_5k_ckpt_45
    parameters:
      weight: 0.15
merge_method: linear
dtype: bfloat16
```
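Conceptually, the linear merge method computes each parameter of the merged model as a weighted sum of the corresponding parameters across the source models. The sketch below is a hypothetical illustration of that idea using plain Python scalars in place of tensors; it is not mergekit's actual implementation.

```python
# Hypothetical sketch of a linear model merge: each merged parameter is
# sum_i(weight_i * param_i) over the source models. Scalars stand in
# for real weight tensors here.

def linear_merge(state_dicts, weights):
    """Merge parameter dicts as a weighted sum, one weight per model."""
    if len(state_dicts) != len(weights):
        raise ValueError("need exactly one weight per model")
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for sd, w in zip(state_dicts, weights))
    return merged

# Toy example with two "models", each holding a single scalar parameter:
models = [{"layer.weight": 1.0}, {"layer.weight": 3.0}]
print(linear_merge(models, [0.5, 0.5]))  # {'layer.weight': 2.0}
```

In the configuration above, the six weights sum to 1.0 (0.2 + 0.15 + 0.15 + 0.2 + 0.15 + 0.15), so the merge is a convex combination of the six checkpoints.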