Andromeda-70B

Andromeda-70B is an experimental SLERP merge of Cassiopeia-70B and Sao10K/Llama-3.3-70B-Vulpecula-r1. It is a coherent, unaligned model intended for creative tasks such as storywriting, brainstorming, and interactive roleplay.

Merge composition

# mergekit SLERP configuration
models:
  - model: /opt/workspace/hf/Cassiopeia-70B
  - model: /opt/workspace/hf/Llama-3.3-70B-Vulpecula-r1
merge_method: slerp
base_model: /opt/workspace/hf/Cassiopeia-70B  # the t=0 endpoint of the interpolation
parameters:
  t: 0.7  # interpolation factor: 0.7 of the way from the base toward Vulpecula
dtype: bfloat16
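
For readers unfamiliar with the method, the sketch below shows what SLERP does to a single pair of weight tensors: rather than a straight-line (linear) blend, it interpolates along the arc between the two tensors, which better preserves their scale. This is a minimal illustration only, not mergekit's actual implementation; the function name, the flatten-and-reshape approach, and the near-parallel fallback are assumptions for the sake of the example.

import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Spherical linear interpolation between two weight tensors.
    # t=0 returns `a` (the base model), t=1 returns `b`; t=0.7, as in the
    # config above, lands 70% of the way toward the second model along the arc.
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two tensors, clamped for numerical safety.
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# Example: blend two tensors 70% of the way from `a` toward `b`.
a = torch.randn(4, 4)
b = torch.randn(4, 4)
merged = slerp(0.7, a, b)

With mergekit installed, a config like the one above is typically run via the mergekit-yaml command, e.g. mergekit-yaml config.yaml ./Andromeda-70B (the output path here is illustrative).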

Feedback

If you like this model, please support Sao10K.

Feedback on this merge, good or bad, is very welcome! Please share your thoughts in this discussion: Andromeda-70B/discussions/1
