nielsr (HF Staff) committed · verified
Commit 6d29251 · 1 Parent(s): f8679a0

Improve model card: Add `transformers` library, update `pipeline_tags`, and add model tags


This PR enhances the model card by:
- Adding `library_name: transformers` to ensure the model's compatibility with the Hugging Face Transformers library is correctly recognized, enabling the "Use in Transformers" widget.
- Updating `pipeline_tags` to include both `fill-mask` (matching the underlying `ModernBertForMaskedLM` architecture) and `feature-extraction`, to accurately reflect the model's primary use cases of classification, retrieval, and embedding generation (a usage sketch follows this list).
- Adding descriptive `tags` such as `modernbert`, `ettin` (for the model suite), `encoder`, `text-embeddings`, `retrieval`, and `classification` to improve discoverability and provide more detailed context about the model's capabilities.
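
For reference, a minimal sketch of what this metadata enables: with `library_name: transformers` and the two pipeline tags set, the checkpoint can be exercised through the standard `transformers` pipelines. The repo id below is only a placeholder taken from the Ettin suite (not necessarily the checkpoint this card describes); substitute the actual model id.

```python
from transformers import pipeline

# Placeholder repo id from the Ettin suite -- replace with the checkpoint this card belongs to.
model_id = "jhu-clsp/ettin-encoder-150m"

# `fill-mask` matches the ModernBertForMaskedLM head referenced in the description.
fill_mask = pipeline("fill-mask", model=model_id)
print(fill_mask("The capital of France is [MASK]."))

# `feature-extraction` covers the embedding / retrieval use case.
embedder = pipeline("feature-extraction", model=model_id)
token_vectors = embedder("Ettin encoders can be used for retrieval.")
print(len(token_vectors[0]))  # number of token embeddings returned for the sentence
```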

Files changed (1)
  1. README.md +18 -6
README.md CHANGED
@@ -1,9 +1,20 @@
  ---
- license: mit
  language:
  - en
- pipeline_tag: fill-mask
+ license: mit
+ pipeline_tags:
+ - fill-mask
+ - feature-extraction
+ library_name: transformers
+ tags:
+ - modernbert
+ - ettin
+ - encoder
+ - text-embeddings
+ - retrieval
+ - classification
  ---
+
  # Ettin: an Open Suite of Paired Encoders and Decoders

  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
@@ -118,7 +129,7 @@ The training data is publicly available and split across different phases:
  |:-----|:------|:-----------|:---------|:---------|
  | XXS | [ettin-decoder-17m](https://huggingface.co/jhu-clsp/ettin-decoder-17m) | 17M | Lightweight generation | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-17m) |
  | XS | [ettin-decoder-32m](https://huggingface.co/jhu-clsp/ettin-decoder-32m) | 32M | Quick prototyping | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-32m) |
- | Small | [ettin-decoder-68m](https://huggingface.co/jhu-clsp/ettin-decoder-68m) | 68M | Efficient generation | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-68m) |
+ | Small | [ettin-decoder-68m](https://huggingface.co/jhu-clsp/ettin-decoder-68m) | 68M | Efficient generation | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-68m) |
  | Base | [ettin-decoder-150m](https://huggingface.co/jhu-clsp/ettin-decoder-150m) | 150M | Standard generation | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-150m) |
  | Large | [ettin-decoder-400m](https://huggingface.co/jhu-clsp/ettin-decoder-400m) | 400M | Quality generation | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-400m) |
  | XL | [ettin-decoder-1b](https://huggingface.co/jhu-clsp/ettin-decoder-1b) | 1B | Best generation | [![Download](https://img.shields.io/badge/🤗-Download-blue)](https://huggingface.co/jhu-clsp/ettin-decoder-1b) |
@@ -262,7 +273,7 @@ All training artifacts are publicly available:

  ### Encoder: Masked Language Modeling
  <details>
- <summary>Click to expand <strong>encoder</strong> usage examples</summary>
+ <summary>Click to expand **encoder** usage examples</summary>

  ```python
  from transformers import AutoTokenizer, AutoModelForMaskedLM
@@ -296,7 +307,7 @@ print(f"Predictions: {predictions}")
  ### Decoder: Text Generation

  <details>
- <summary>Click to expand <strong>decoder text generation</strong></summary>
+ <summary>Click to expand **decoder text generation**</summary>

  ```python
  from transformers import AutoTokenizer, AutoModelForCausalLM
@@ -783,7 +794,8 @@ def main():
  model.push_to_hub(run_name)
  except Exception:
  logging.error(
- f"Error uploading model to the Hugging Face Hub:\n{traceback.format_exc()}To upload it manually, you can run "
+ f"Error uploading model to the Hugging Face Hub:
+ {traceback.format_exc()}To upload it manually, you can run "
  f"`huggingface-cli login`, followed by loading the model using `model = CrossEncoder({final_output_dir!r})` "
  f"and saving it using `model.push_to_hub('{run_name}')`."
  )
 