Adding `safetensors` variant of this model

#3
Language Technology Group (University of Oslo) org

This is an automated PR created with https://huggingface.co/spaces/safetensors/convert

This new file is equivalent to `pytorch_model.bin`, but safe in the sense that
no arbitrary code can be embedded in it.

These files also load much faster than their PyTorch counterparts:
https://colab.research.google.com/github/huggingface/notebooks/blob/main/safetensors_doc/en/speed.ipynb
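
For a quick local sanity check of the load-time difference, here is a tiny sketch (the file paths are placeholders for locally downloaded copies of the two checkpoints):

```python
# Tiny load-time comparison; the local file paths below are placeholders.
import time

import torch
from safetensors.torch import load_file

t0 = time.perf_counter()
state_bin = torch.load("pytorch_model.bin", map_location="cpu", weights_only=True)
t1 = time.perf_counter()
state_sf = load_file("model.safetensors", device="cpu")
t2 = time.perf_counter()

print(f"torch.load:  {t1 - t0:.2f}s")
print(f"safetensors: {t2 - t1:.2f}s")
```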

The widgets on your model page will run using this file even if this PR is not merged, which verifies that the file actually works.
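
You can also check the converted weights yourself before merging; below is a minimal sketch, assuming the converted file sits on this PR's branch (`refs/pr/3`) and that the custom `GptBert` code is loadable through `AutoModelForCausalLM` with `trust_remote_code=True`:

```python
# Minimal sketch: compare the original and converted checkpoints numerically.
# Assumptions: the converted weights are on this PR's branch (refs/pr/3) and
# the custom GptBert code loads via AutoModelForCausalLM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ltg/norbert4-large"

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
inputs = tokenizer("Hello world", return_tensors="pt")

# Original pytorch_model.bin checkpoint from the main branch.
model_bin = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, use_safetensors=False
)
# Converted safetensors checkpoint from the (unmerged) PR branch.
model_sf = AutoModelForCausalLM.from_pretrained(
    repo, revision="refs/pr/3", trust_remote_code=True, use_safetensors=True
)

with torch.no_grad():
    logits_bin = model_bin(**inputs).logits
    logits_sf = model_sf(**inputs).logits

# A faithful conversion should give (near-)identical outputs.
print("max abs diff:", (logits_bin - logits_sf).abs().max().item())
```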

If you find any issues, please report them here: https://huggingface.co/spaces/safetensors/convert/discussions

Feel free to ignore this PR.

Language Technology Group (University of Oslo) org

Merging these PRs leads to a bug during loading; the model fails to load the embedding weights:

Some weights of GptBertForCausalLM were not initialized from the model checkpoint at ltg/norbert4-large and are newly initialized: ['embedding.word_embedding.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
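
Here is a rough sketch for listing which tensors are missing from the converted file (the `model.safetensors` file name and the `refs/pr/3` revision are assumptions); one possible culprit is a tied/shared tensor being deduplicated by the conversion, since the safetensors format does not store aliased tensors:

```python
# Rough sketch: list tensor names present in pytorch_model.bin but absent from
# the converted model.safetensors on the PR branch (file name and refs/pr/3
# revision are assumptions).
import torch
from huggingface_hub import hf_hub_download
from safetensors import safe_open

repo = "ltg/norbert4-large"

bin_path = hf_hub_download(repo, "pytorch_model.bin")
sf_path = hf_hub_download(repo, "model.safetensors", revision="refs/pr/3")

bin_keys = set(torch.load(bin_path, map_location="cpu", weights_only=True).keys())
with safe_open(sf_path, framework="pt", device="cpu") as f:
    sf_keys = set(f.keys())

# Any key listed here (e.g. embedding.word_embedding.weight) was dropped or
# renamed by the conversion and gets randomly re-initialized on load.
print("missing from safetensors:", sorted(bin_keys - sf_keys))
```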

Please stop bombarding me with these useless PRs while also pretending that it's me who made them. At the very least, fix the conversion; it's very annoying!

davda54 changed pull request status to closed
