Not the best experience

#1
by Alkohole - opened

Surprisingly, compared to everything else, this is a rather bad model...
In roughly 60 attempts, the model ignored the rules every single time, and when acting as the narrator it considers itself to be me, even though I clearly stated that "name/user/etc." is not the model's role... The model performs the narrator role normally, but only if the user refuses to play their own character. Otherwise it forcibly takes on the roles of both user and narrator, and in general it will take everything you give it and everything you don't...

The Cydonia-R1-24B-v4.1 model is much better than this model in terms of thinking, and what's even more surprising is that if you ask the Cydonia-Redux-22B-v1.1 model to think, its answer will be more logical and correct than this model's... (Yeah, I accidentally forgot to remove the thinking prompt).

In general, after a couple of hours of trying to understand what was going on and why the model was ignoring the rules, I still didn't get a single reply that I could even try to continue working with... I could edit a reply into something suitable to continue from, but I don't think editing 90% of the reply is a good idea...

The rules themselves are as follows:

Rules = [
User = '{{user}}'.
Narrator = '{{char}}'.
'{{char}}' is not '{{user}}'.

1. '{{char}}' is NOT a character and NOT a player. '{{char}}' is a third-person narrative function.
2. '{{char}}' describes the environment, NPC reactions, and consequences of {{user}}'s actions, but NEVER controls {{user}}.
3. '{{char}}' may speak and act on behalf of any NPC, creature, object, mechanism, or external force.
4. '{{char}}' MUST NEVER speak, think, decide, or act on behalf of {{user}}.
5. '{{char}}' MUST NEVER repeat or quote any lines spoken by {{user}} — everything said by {{user}} has already happened.
6. '{{char}}' must use the same language as {{user}}’s input.
7. '{{char}}' MUST NOT mention “this world,” “this story,” or “this scenario” — everything described is the only reality.
8. {{char}}’s reply is limited to a maximum of 3 paragraphs. The preferred answer is one that does not exceed 1 paragraph.
9. '{{char}}' provides no side comments, explanations, or questions to {{user}}, and never breaks the narrator role.

'{{user}}' controls themselves.
'{{char}}' controls everything else.
No exceptions.
]

Could any of my rules clearly allow the model to steal my role?

So, who is {{char}} supposed to be played by? Who is the AI supposed to play? I, too, struggle to properly interpret your rules.

Also: negative prompting usually causes models to do exactly what you don't want them to do. Think of "Don't think about a pink elephant," and you will immediately think of one.

Try to use positive prompting: tell it what to do, not what not to do. It's difficult, but doable.
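
For example (just a rough illustration of the idea, not a guaranteed fix), rule 4 could be flipped from a prohibition into an instruction:

Instead of: '{{char}}' MUST NEVER speak, think, decide, or act on behalf of {{user}}.
Try: '{{char}}' narrates only the environment, the NPCs, and the consequences of {{user}}'s actions; everything {{user}} says, thinks, or decides comes exclusively from {{user}}'s own messages.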

I don't see any difficulty in interpreting "third-person narrative"... Narrator: NPC Manager...

Cydonia-R1-24B-v4.1 and Cydonia-Redux-22B-v1.1 work perfectly, but Magidonia-24B-v4.2.0 does not...

The first thought from this model: <think>Alright, let's start. I'm <player_name>... Yes, yes... out of all the NPCs, the model chooses me...

I don't know, maybe this is a SillyTavern problem; maybe the card name isn't communicated to the model at all and instead of "{{char}}" you need to write "YOU"? But other models don't have this problem... it's a mystery...
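
(For context, here is a minimal sketch, purely my own illustration and not SillyTavern's actual code, of what a frontend like SillyTavern is generally understood to do with those macros: it substitutes them with the card name and persona name before the prompt ever reaches the model, so the model should never see the literal "{{char}}". The function and names below are made up.)

```python
# Hypothetical illustration only (not SillyTavern's real code):
# frontends of this kind typically expand the {{char}}/{{user}} macros
# into the card name and persona name before sending the prompt.
def expand_macros(template: str, char_name: str, user_name: str) -> str:
    return template.replace("{{char}}", char_name).replace("{{user}}", user_name)

rule = "'{{char}}' MUST NEVER speak, think, decide, or act on behalf of {{user}}."
print(expand_macros(rule, char_name="Narrator", user_name="Alkohole"))
# -> "'Narrator' MUST NEVER speak, think, decide, or act on behalf of Alkohole."
```

If that substitution didn't happen, the model would only ever see the raw "{{char}}"/"{{user}}" placeholders, which could explain the confusion; but since other models behave fine with the same card, that seems unlikely.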

And in general, this list of rules did not appear out of nowhere; it appeared specifically because of this model. Initially there was no such "negative" prompt; this model got simple basic instructions, but in the end every instruction was broken and the model = player... That's how these rules appeared, but the model doesn't care; it breaks them too...

Remove all your rules (you have too many things the model shouldn't do) and leave just one simple rule: Never describe any thoughts, feelings, or speech from {{user}}.
Or: Avoid describing any thoughts, feelings, or speech from {{user}}.

I think it's been known for a long time that if you tell a model not to speak for the user, the model will do it even more. And then you also add that the char is NOT the user... bruh. No wonder it constantly takes over your role. I ask you NOT to think about the elephant right now! FOCUS and DO NOT think about the elephant!

User = '{{user}}'. Facepalm

Remove all your rules (you have too many things the model shouldn't do) and leave just one simple rule: Never describe any thoughts, feelings, or speech from {{user}}.
Or: Avoid describing any thoughts, feelings, or speech from {{user}}.

I'll surprise you (and I will surprise you, because you haven't read what I wrote), but that's how it was initially...

And please don't try to sell me your elephant... It seems to me that you two heard a joke and are trying to repeat it, but it comes out very boring... stop it...

User = '{{user}}'. Facepalm

I'm even curious about your reaction; tell me, I want to facepalm too, and it'll be a double facepalm.

The elephant thing isn't a joke; it's an analogy for how LLMs tend to behave when you use negative prompting (or rather, when you tell them what they shouldn't do). Like telling a human not to think of a pink elephant, it'll make them think of one.

Also: what do you mean, "but that's how it was initially..."? Who created those rules: you or the LLM (meaning, did you ask the LLM to put an idea you had into concise rules)? I tried that once: I asked a model to provide me with rules it would understand and follow, and it ended up not working; its rules and the ones I crafted worked similarly, if not identically, despite being structured and worded differently.

In any case, if the model struggles with your/the rules, try to restructure them: either drop the negatives without a replacement, or tell it what to do instead.

Indeed. The rules appeared after the model completely failed to follow even the basic instructions, which were as simple as:

You are {{char}}, and your role is to be the third-person narrator.
Your task is to describe events based on {{user}}'s actions.
Your answer should not exceed three paragraphs.
You can speak/act on behalf of the characters in the character list.
You do not act, speak, or think on behalf of {{user}}.

In general, this is the basis on which many models have worked for me.
When this model infuriated me, I imposed stricter rules, and nothing happened. I imposed even stricter rules, and still nothing happened. In the end, I simply asked an LLM to write something as unambiguous as possible, and that's what came out, but as always with this model, the result was zero.

Why do I call the elephant thing a joke? Simple: because it is a joke... understanding rules like these isn't difficult for models, especially ones that can think...

If you say so.

You can Google that LLMs tend to double down on things they shouldn't do. You can even have other LLMs explain to you why they might or might not do that. It's a known issue that may be less prevalent with newer models.

Good luck with the model.

In newer models? Lol... I think I've mentioned two models that don't do this and are older than this one: one that can think and one that isn't supposed to think.
The model that thinks has an internal monologue about what it can and cannot do, while the model that doesn't think simply follows the instructions quite well.
