Merge overview

This is a merge of pre-trained language models created with mergekit. Only one quant (a 4-bit GGUF) is available for this model; higher quants are not provided due to hardware limitations.

Merge Details

Merge Method

This model was merged using the TIES merge method with PocketDoc/Dans-PersonalityEngine-V1.3.0-24b as the base. The intention of this merge is to make a single model capable of roleplaying in both NSFW and SFW settings while staying smart, and to blend multiple distinct writing styles.
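
Since the card names the merge method, here is a toy single-tensor sketch of the TIES procedure (trim each model's delta to the top density fraction by magnitude, elect a per-parameter sign, then average the agreeing deltas). This is not mergekit's actual code; it only illustrates the roles that the weight and density parameters in the configuration below play.

# Toy single-tensor sketch of TIES merging (trim / elect sign / merge).
# Illustrative only; mergekit's real implementation differs in detail.
import numpy as np

def ties_merge(base, tuned, weights, density):
    """base: 1-D base tensor; tuned: list of fine-tuned tensors."""
    deltas = []
    for m, w in zip(tuned, weights):
        tau = (m - base) * w
        # Trim: keep only the top `density` fraction of entries by magnitude.
        k = max(1, int(density * tau.size))
        keep = np.argsort(np.abs(tau))[-k:]
        sparse = np.zeros_like(tau)
        sparse[keep] = tau[keep]
        deltas.append(sparse)
    deltas = np.stack(deltas)
    # Elect sign: per-parameter majority sign across the trimmed deltas.
    sign = np.sign(deltas.sum(axis=0))
    # Merge: average only the deltas that agree with the elected sign.
    agree = (np.sign(deltas) == sign) & (deltas != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged = (deltas * agree).sum(axis=0) / counts
    return base + merged

rng = np.random.default_rng(0)
base = rng.normal(size=8)
tuned = [base + rng.normal(scale=0.1, size=8) for _ in range(3)]
print(ties_merge(base, tuned, weights=[0.3, 0.3, 0.2], density=0.3))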

Models Merged

The following models were included in the merge:

- Delta-Vector/MS3.2-Austral-24B-KTO
- Delta-Vector/Austral-24B-Winton
- ReadyArt/The-Omega-Directive-M-24B-v1.1

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: Delta-Vector/MS3.2-Austral-24B-KTO
    parameters:
      weight: 0.3
      density: 0.3
  - model: Delta-Vector/Austral-24B-Winton
    parameters:
      weight: 0.3
      density: 0.5
  - model: ReadyArt/The-Omega-Directive-M-24B-v1.1
    parameters:
      weight: 0.2
      density: 0.3
merge_method: ties
base_model: PocketDoc/Dans-PersonalityEngine-V1.3.0-24b
dtype: bfloat16
out_dtype: bfloat16
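
For reproducibility, a minimal sketch of running this configuration through mergekit's command-line entry point, assuming the YAML above is saved as merge-config.yml and mergekit is installed (pip install mergekit); the output directory name is illustrative.

# Minimal sketch: invoke mergekit on the configuration shown above.
# Assumes mergekit is installed and merge-config.yml holds the YAML above.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",
        "merge-config.yml",   # the configuration shown above
        "./Nesys_engine",     # output directory (illustrative name)
        "--cuda",             # optional: use the GPU for the merge
    ],
    check=True,
)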

Usage guide

The prompting format for this model is referred to as Dan's chat.

<|system|>system prompt<|endoftext|><|user|>Hi there!<|endoftext|><|assistant|>Hey, how can I help?<|endoftext|>
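
For illustration, a minimal sketch of assembling this format programmatically; build_prompt is a hypothetical helper written for this card, not part of any library.

# Minimal sketch: assemble a prompt in the Dan's chat format shown above.
EOT = "<|endoftext|>"

def build_prompt(system: str, history: list[tuple[str, str]]) -> str:
    """history holds (role, message) pairs, role being 'user' or 'assistant'."""
    parts = [f"<|system|>{system}{EOT}"]
    for role, message in history:
        parts.append(f"<|{role}|>{message}{EOT}")
    parts.append("<|assistant|>")  # open tag so the model writes the next reply
    return "".join(parts)

print(build_prompt("system prompt", [("user", "Hi there!")]))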

I would advise starting with adaptive.p at the following settings: target: 0.55, decay: 0.9, min_p: 0.04.

Feel free to set the samplers to your liking; this model is still in a testing phase, so experimentation with samplers is encouraged. The only firm requirement is following the prompting format above.
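
As a concrete starting point, a rough sketch of running the 4-bit GGUF with llama-cpp-python, assuming a recent version that exposes the min_p sampler; the file name, context size, and GPU offload are assumptions, and only min_p is passed here since adaptive.p's target/decay settings are configured in the frontend rather than in llama-cpp-python itself.

# Rough sketch, assuming llama-cpp-python is installed and the 4-bit GGUF
# has been downloaded locally; the file name below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./Nesys_engine-Q4_K_M.gguf",  # hypothetical file name
    n_ctx=8192,        # assumed context size
    n_gpu_layers=-1,   # offload all layers if VRAM allows
)

prompt = (
    "<|system|>You are a roleplay assistant.<|endoftext|>"
    "<|user|>Hi there!<|endoftext|>"
    "<|assistant|>"
)

out = llm(
    prompt,
    max_tokens=256,
    min_p=0.04,              # from the recommended settings above
    stop=["<|endoftext|>"],
)
print(out["choices"][0]["text"])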
