Add library name and clarify license

#1 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +5 -2
README.md CHANGED
````diff
@@ -1,6 +1,9 @@
 ---
 license: llama3
+library_name: transformers
+pipeline_tag: text-generation
 ---
+
 This model is a fine-tuned Llama3 model, trained on the training set of PromptEvals (https://huggingface.co/datasets/reyavir/PromptEvals). It is fine-tuned to generate high quality assertion criteria for prompt templates.
 
 Model Card:
@@ -10,7 +13,7 @@ Model Details
 – Model version: 3.1
 – Model type: decoder-only Transformer
 – Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: 8 billion parameters, fine-tuned by us using Axolotl (https://github.com/axolotl-ai-cloud/axolotl)
-– Paper or other resource for more information: [Llama 3](https://arxiv.org/abs/2407.21783), [PromptEvals](https://arxiv.org/abs/2504.14738)
+– Paper or other resource for more information: [Llama 3](https://arxiv.org/abs/2407.21783), [PromptEvals](https://huggingface.co/papers/2504.14738)
 – Citation details:
 ```bibtex
 @inproceedings{
@@ -83,4 +86,4 @@ Ethical Considerations:
 PromptEvals is open-source and is intended to be used as a benchmark to evaluate models' ability to identify and generate assertion criteria for prompts. However, because it is open-source, it may be used in pre-training models, which can impact the effectiveness of the benchmark.
 Additionally, PromptEvals uses prompts contributed by a variety of users, and the prompts may not represent all domains equally.
 However, we believe that despite this, our benchmark still provides value and can be useful in evaluating models on generating assertion criteria.
-Caveats and Recommendations: None
+Caveats and Recommendations: None
````
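The added `library_name: transformers` and `pipeline_tag: text-generation` metadata imply the model is loadable through the `transformers` text-generation pipeline. A minimal sketch of that usage follows; note that `MODEL_ID` is a placeholder (the Hub repo id is not stated in this diff) and the prompt wording is an assumed example, not the model's documented prompt format.

```python
# Sketch of using the fine-tuned model via the transformers text-generation
# pipeline, as implied by the added metadata. MODEL_ID and the prompt wording
# below are placeholders/assumptions, not taken from the model card.
MODEL_ID = "<hub-repo-id>"  # placeholder: substitute the actual Hub repo id

# Assumed instruction wrapper for asking the model for assertion criteria.
PROMPT_TEMPLATE = (
    "Generate assertion criteria for the following prompt template:\n{prompt}"
)


def generate_criteria(prompt_template: str, max_new_tokens: int = 256) -> str:
    """Generate assertion criteria for a prompt template (sketch only)."""
    # Deferred import: transformers is a heavy optional dependency here.
    from transformers import pipeline

    generator = pipeline("text-generation", model=MODEL_ID)
    outputs = generator(
        PROMPT_TEMPLATE.format(prompt=prompt_template),
        max_new_tokens=max_new_tokens,
    )
    # The pipeline returns a list of dicts with a "generated_text" key.
    return outputs[0]["generated_text"]
```

Since the model is an 8B-parameter Llama 3.1 fine-tune, loading it this way requires a GPU (or substantial RAM) and will download the weights from the Hub on first use.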