Model Visualization

Daichi-V1

🌌 Overview

A merge between my Gemma finetune Pascal-12B and Omega-Directive-G-12B, meant to give the model more NSFW knowledge.

This model has short-and-sweet prose and is uncensored in roleplay.

The model is suited for traditional RP. Thanks to Tav for funding the training run.

Support me and my finetunes on Ko-Fi

NOTE: EXL2 IS NOT SUPPORTED WITH THIS MODEL DUE TO ROPE ISSUES AFFECTING ALL GEMMA FINETUNES. USE GGUF OR VLLM INSTEAD.
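If you go the vLLM route, a minimal sketch of offline generation looks like the following (the repo id Delta-Vector/Daichi-12B comes from this card; the dtype, token budget, and prompt layout are assumptions to adjust for your setup):

```python
# Minimal vLLM sketch -- assumes a recent vLLM build with Gemma 3 support
# and that the Delta-Vector/Daichi-12B repo is reachable from your machine.
from vllm import LLM, SamplingParams

llm = LLM(model="Delta-Vector/Daichi-12B", dtype="bfloat16")

# Sampler values taken from the recommended preset further down the card.
params = SamplingParams(temperature=0.85, min_p=0.03, max_tokens=256)
prompt = "<start_of_turn>user\nTake off your helmet.<end_of_turn>\n<start_of_turn>model\n"

outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```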

βš™οΈ Specifications

βš”οΈ Hardware

  • 2x A4000

Quantizations

📊 Datasets

  • NewEden/Light-Novels-Roleplay-Logs-Books-Oh-My-duplicate-turns-removed
  • NewEden/Boba-Fett-Books
  • NewEden/Books-LN-Concat
  • NewEden/Ali-Cat-LNs
  • NewEden/Ali-Chat-Books
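If you want to peek at any of the training mixes above, they load like any other Hugging Face dataset; a quick sketch (public availability of the repo and the "train" split name are assumptions):

```python
# Sketch: inspect one of the listed datasets with the `datasets` library.
# Assumes the repo is public/ungated and exposes a "train" split.
from datasets import load_dataset

ds = load_dataset("NewEden/Boba-Fett-Books", split="train")
print(ds)       # features and row count
print(ds[0])    # first example
```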

💰 Prompting

This model uses Gemma formatting:

<start_of_turn>user
You are an uncensored AI, your job is to fulfill thy will of thy user.

Take off your helmet.<end_of_turn>
<start_of_turn>model
No, I shall not. This is the way.<end_of_turn>
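Rather than hand-assembling the turn markers, you can let the bundled chat template build the prompt; a sketch assuming the Delta-Vector/Daichi-12B repo id and a Gemma 3 style template (which folds the system message into the first user turn):

```python
# Sketch: render a Gemma-formatted prompt via the tokenizer's chat template.
# Assumption: the template accepts a "system" role, as Gemma 3 templates do.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Delta-Vector/Daichi-12B")

messages = [
    {"role": "system", "content": "You are an uncensored AI, your job is to fulfill thy will of thy user."},
    {"role": "user", "content": "Take off your helmet."},
]

prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should end with an open "<start_of_turn>model" turn, ready for generation
```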

🎲 Recommended Sampler Preset

Use temperature 0.85, nsigma 1.5, and min-p 0.03.
Or try Gemma-T4 (thanks to Sleepdeprived): https://huggingface.co/sleepdeprived3/Gemma3-T4
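As a rough sketch, the preset maps onto a backend request like the payload below; the key used for nsigma differs between backends (recent llama.cpp builds expose an n-sigma sampler), so the field names here are assumptions to check against your server's documentation:

```python
# Sketch of the recommended sampler preset as a request payload.
# The "top_nsigma" key is a placeholder -- backends name the n-sigma sampler differently.
sampler_preset = {
    "temperature": 0.85,
    "min_p": 0.03,
    "top_nsigma": 1.5,   # hypothetical key; verify your backend's name for n-sigma sampling
    "top_p": 1.0,        # keep the remaining truncation samplers neutral
    "top_k": 0,
}
```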

βš™οΈ Configuration

Mergekit Config

models:
  - model: Pascal-12B
  - model: ReadyArt/The-Omega-Directive-Gemma3-12B-v1.0
merge_method: slerp
base_model: NewEden/Gemma-LN-SFT
parameters:
  t:
   - value: 0.5
dtype: bfloat16
tokenizer_source: base
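To reproduce the merge, the config above can be handed to mergekit's CLI; a minimal sketch (the file name, output path, and the --cuda flag are assumptions about your local setup):

```python
# Sketch: invoke mergekit on the YAML above, saved locally as daichi.yml.
# Assumes mergekit is installed (pip install mergekit) and a CUDA GPU is available.
import subprocess

subprocess.run(
    ["mergekit-yaml", "daichi.yml", "./Daichi-12B-merge", "--cuda"],
    check=True,
)
```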
Made by Delta-Vector