β–„β–„β–„β–„β–„   β–„β–ˆβ–ˆβ–ˆβ–„   β–„β–ˆβ–„      β–„      β–„   β–ˆβ–ˆβ–„   β–ˆβ–ˆ       β–ˆβ–ˆβ–ˆβ–ˆβ–„     β–„β–ˆ 
       β–ˆ     β–€β–„ β–ˆβ–€   β–€  β–ˆβ–€ β–€β–„     β–ˆ      β–ˆ  β–ˆ  β–ˆ  β–ˆ β–ˆ      β–ˆ   β–ˆ     β–ˆβ–ˆ 
    β–„   β–€β–€β–€β–€β–„   β–ˆβ–ˆβ–„β–„    β–ˆ   β–€  β–ˆ   β–ˆ β–ˆβ–ˆ   β–ˆ β–ˆ   β–ˆ β–ˆβ–„β–„β–ˆ     β–ˆ   β–ˆ     β–ˆβ–ˆ 
     β–€β–€β–„β–„β–„β–„β–€    β–ˆβ–„   β–„β–€ β–ˆβ–„  β–„β–€ β–ˆ   β–ˆ β–ˆ β–ˆ  β–ˆ β–ˆ  β–ˆ  β–ˆ  β–ˆ     β–ˆ   β–ˆ     β–β–ˆ 
               β–€β–ˆβ–ˆβ–ˆβ–€   β–€β–ˆβ–ˆβ–ˆβ–€  β–ˆβ–„ β–„β–ˆ β–ˆ  β–ˆ β–ˆ β–ˆβ–ˆβ–ˆβ–€     β–ˆ     β–€β–ˆβ–ˆβ–ˆβ–ˆ  β–β–ˆ  ▐   
                               β–€β–€β–€  β–ˆ   β–ˆβ–ˆ         β–ˆ                          
                                                 β–€                           
   
                      β‹†β‹†ΰ­¨ΰ­§Λš THE PRIMΓ‰TOILE ENGINE Λšΰ­¨ΰ­§β‹†ο½‘Λšβ‹†
                  β€” Visual Novel generation under starlight β€”
| Version | Type | Strengths | Weaknesses | Recommended Use |
|---|---|---|---|---|
| Secunda-0.1-GGUF / RAW | Instruction | Most precise; coherent code; perfected Modelfile | Smaller context / limited flexibility | Production / baseline |
| Secunda-0.3-F16-QA | QA-based input | Acceptable for question-based generation | Less accurate than 0.1; not as coherent | Prototyping (QA mode) |
| Secunda-0.3-F16-TEXT | Text-to-text | Flexible for freeform tasks | Slightly off; Modelfile-dependent | Experimental / text rewrite |
| Secunda-0.3-GGUF | GGUF build | Portable GGUF of 0.3 | Inherits 0.3 weaknesses | Lightweight local testing |
| Secunda-0.5-RAW | QA natural | Best QA understanding; long-form generation potential | Inconsistent output length; some instability | Research / testing LoRA |
| Secunda-0.5-GGUF | GGUF build | Portable, inference-ready version of 0.5 | Shares issues of 0.5 | Offline experimentation |
| Secunda-0.1-RAW | Instruction | Same base as 0.1-GGUF | Same as 0.1-GGUF | Production backup |

πŸŒ™ Overview

Secunda-0.1-RAW is the original release of the Secunda fine-tuned model family, trained to produce polished Ren'Py .rpy scripts from structured instructions!

The model outputs:

  • define blocks for named characters (with colors!)
  • image declarations for scenes & sprites
  • A clear label start: structure
  • Emotional dialogue, branching menus, jumps, and proper return
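A minimal sketch of the kind of .rpy skeleton this describes β€” the character name, asset filenames, and dialogue below are invented for illustration, not actual model output:

define m = Character("Mira", color="#a8c7fa")

image bg attic = "bg_attic.png"
image mira smile = "mira_smile.png"

label start:
    scene bg attic
    show mira smile
    m "Is this photo... from tomorrow?"
    menu:
        "Turn the page":
            jump next_page
        "Close the album":
            return

label next_page:
    m "The pictures keep changing..."
    return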

This version is the most stable so far β€” often more reliable than 0.3!


/!\ NO HUMAN-MADE DATA WAS USED TO TRAIN THIS AI! Secunda takes pride in ensuring that all training data is scripted! /!\

If you like visual novels, please visit itch.io and support independent creators!

✨ Moonlight Specs

  • Base model: meta-llama/Meta-Llama-3.1-8B
  • Fine-tuning: QLoRA (r=64, alpha=16, dropout=0.1)
  • Precision: Float16 (FP16)
  • Max tokens: 4096
  • Hardware used: RTX 4070, 64GB RAM
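For reference, these hyperparameters correspond to a peft LoraConfig along these lines. Note that target_modules is an assumption (a common choice for Llama-style models); the card does not list which projections were adapted:

from peft import LoraConfig

# Reconstruction of the hyperparameters listed above. target_modules is a
# typical pick for Llama-style models and is NOT confirmed by this card.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)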

πŸͺ„ Inference in the Starlight

πŸš€ Quick Start

Installation

pip install transformers accelerate peft bitsandbytes

Inference Script Example

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

BASE_MODEL = "meta-llama/Meta-Llama-3.1-8B"
LORA_PATH = "path/to/Secunda-0.1-RAW"

# Load the FP16 base model, then attach the Secunda LoRA adapter on top.
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, LORA_PATH)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

def build_prompt(idea):
    return f"""You are an expert writer of visual novels in Ren'Py. 
Generate a complete and polished Ren'Py script based on the following concept:

\"\"\"{idea}\"\"\"

Your output should include:
- `define` blocks for all characters (with names and color codes)
- `image` blocks for key backgrounds and character sprites
- `label start:` with a clear beginning
- Proper `scene`, `show`, `menu`, `play music/sound`, and `jump` statements
- Emotional dialogue and natural pacing
- A proper ending (`return`) or narrative closure

Structure the script as a `.rpy` file β€” do not include explanations, comments, or placeholder text."""

prompt = build_prompt("A young girl finds a photo album that shows moments that haven't happened yet.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# do_sample=True is required for temperature/top_p to take effect;
# without it, generate() decodes greedily and ignores both settings.
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=2048, do_sample=True, temperature=0.85, top_p=0.95)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
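Since bitsandbytes is in the install list, the base model can likely also be loaded in 4-bit to reduce VRAM use. A hedged variant of the loading step (the quantization settings here are assumptions, not from this card):

from transformers import BitsAndBytesConfig

# Optional (assumption, not the card's stated setup): quantize the base
# model to 4-bit to cut VRAM use; the LoRA adapter attaches as before.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(model, LORA_PATH)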

🌌 Evaluation

This model has:

  • Generated 1,000+ .rpy files
  • Passed human review for structure, creativity & syntax
  • Produced roughly 90% valid output requiring only minimal manual tweaks
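One plausible way to check the validity of a generated script is Ren'Py's built-in lint, run against a project containing the output β€” this is a hypothetical check, not the authors' stated method, and the SDK and project paths are placeholders:

import subprocess

# Hypothetical validation step: Ren'Py's lint command checks a project's
# scripts for syntax and structural errors. Replace both paths with your own.
subprocess.run(["/path/to/renpy-sdk/renpy.sh", "/path/to/my_vn_project", "lint"], check=True)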


☁️ Talking to the Moon

If you use Secunda-0.1-RAW, please star and cite:

@misc{secunda2025,
  title={Secunda-0.1-RAW},
  author={Yaroster},
  year={2025},
  note={https://huggingface.co/Yaroster/Secunda-0.1-RAW}
}

πŸͺ From the Cosmos


⋆°.☾ Secunda-0.1-RAW ☽.°⋆

✧ Because every visual novel deserves to begin with a spark of magic ✧

⚠️ This repo contains only the LoRA adapter weights. To use the model, download the base LLaMA 3.1 from Meta (terms apply): https://ai.meta.com/resources/models-and-libraries/llama-downloads/
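If you prefer a single standalone checkpoint, peft's standard merge step folds the adapter into the base weights after loading; the output directory name here is illustrative:

# Optional: merge the LoRA into the base model for standalone use.
merged = model.merge_and_unload()
merged.save_pretrained("Secunda-0.1-RAW-merged")
tokenizer.save_pretrained("Secunda-0.1-RAW-merged")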
