Discrepancy with the tokenizer causes the model to never terminate by itself
It looks like the tokenizer packaged with the model comes from the deepseek-distilled-qwen model, but the training data and chat template follow Qwen2's tokenizer.
Case in point: the Qwen tokenizer has an '<|im_end|>' token, but the deepseek-distilled-qwen tokenizer does not. The model tends to predict '<|im_end|>' piece by piece, e.g. '<', '|', 'im', etc. (likely because of supervised fine-tuning on a dataset containing that token). Because this token does not exist in the model's tokenizer, and the model's eos_token_id is never generated due to the fine-tuning, the published inference script uses other stop tokens to work around this. That workaround makes it hard to deploy the model with predictable results on other inference providers.
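For anyone who wants to reproduce the mismatch, here is a minimal diagnostic sketch using the Hugging Face tokenizer API. `MODEL_ID` is a placeholder for the actual checkpoint, and the expected outputs in the comments are assumptions about what the packaged tokenizer will show:

```python
from transformers import AutoTokenizer

MODEL_ID = "path/to/this-model"  # placeholder; substitute the actual checkpoint id

tok = AutoTokenizer.from_pretrained(MODEL_ID)

# 1) Is '<|im_end|>' registered as a single token in the packaged tokenizer?
token_id = tok.convert_tokens_to_ids("<|im_end|>")
print("'<|im_end|>' id:", token_id)  # an unk/None-like id suggests the token is missing

# 2) How does the tokenizer actually encode the literal string?
pieces = tok.tokenize("<|im_end|>")
print("pieces:", pieces)  # multiple ordinary pieces (e.g. '<', '|', 'im', ...) if the special token is absent

# 3) What EOS does the tokenizer declare?
print("eos_token:", tok.eos_token, "eos_token_id:", tok.eos_token_id)
```

If (2) returns several ordinary pieces while the chat template ends turns with '<|im_end|>', the model is being trained to emit a multi-token string that no EOS check will ever match.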
It would be much appreciated if the next version of this model did not have this discrepancy, where the model tries to predict some string sequence to terminate rather than its own eos_token_id.
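For context, this is roughly what the workaround looks like on a plain transformers stack: generation has to be stopped on the literal string instead of an EOS id. A minimal sketch, assuming a recent transformers release that supports the `stop_strings` argument of `generate()`; `MODEL_ID` is again a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/to/this-model"  # placeholder checkpoint id

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

inputs = tok("Explain what a tokenizer does.", return_tensors="pt")

# Because '<|im_end|>' is emitted as several ordinary tokens instead of the
# model's eos_token_id, generation must stop on the literal string.
out = model.generate(
    **inputs,
    max_new_tokens=256,
    stop_strings=["<|im_end|>"],  # string-level stop, not a token-level EOS
    tokenizer=tok,                # generate() needs the tokenizer to match stop strings
)
print(tok.decode(out[0], skip_special_tokens=True))
```

Every inference provider has to replicate this string-matching behaviour by hand, which is exactly why a proper eos_token_id would be preferable.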
We also noticed this issue, but by then the model had already gone through a lot of training, so we decided to continue with the same tokenizer for this release. For the next versions we have fixed the tokenizer!
thanks