Improve dataset card: update task categories, add relevant tags, language, and sample usage

#1
by nielsr (HF Staff) - opened
Files changed (1): README.md (+128 -0)

README.md CHANGED
 
---
license: mit
task_categories:
- feature-extraction
- fill-mask
language:
- mul
tags:
- pretraining
- encoder
- multilingual
- text-classification
- text-retrieval
---

# mmBERT Training Data (Ready-to-Use)
 
This dataset is part of the complete, pre-shuffled training data used to train the [mmBERT encoder models](https://huggingface.co/collections/jhu-clsp/mmbert-a-modern-multilingual-encoder-68b725831d7c6e3acc435ed4). Unlike the individual phase datasets, this version is ready for immediate use, but **the mixture cannot be easily modified**. The data is provided in **decompressed MDS format**, ready for use with [MosaicML's Composer](https://github.com/mosaicml/composer) and the [ModernBERT training repository](https://github.com/answerdotai/ModernBERT).
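
Since the shards are already decompressed MDS, they can also be inspected or streamed directly with the [mosaicml-streaming](https://github.com/mosaicml/streaming) library. The sketch below is illustrative rather than part of the official training setup: the `./mmbert-data` path is a placeholder for wherever the shards were downloaded, and the available field names depend on the shard schema.

```python
from streaming import StreamingDataset  # pip install mosaicml-streaming

# Placeholder path: point this at a local copy of the MDS shards
dataset = StreamingDataset(local="./mmbert-data", shuffle=False)

sample = dataset[0]
print(sample.keys())  # field names depend on the shard schema
```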

## Sample Usage

Models trained on this dataset can be used for various tasks, including generating multilingual embeddings, masked language modeling, classification, and multilingual retrieval.

### Installation
```bash
# Quote the version specifiers so the shell does not interpret ">" as a redirect
pip install "torch>=1.9.0"
pip install "transformers>=4.48.0"
```

### Small Model for Fast Inference (Multilingual Embeddings)
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-small")
model = AutoModel.from_pretrained("jhu-clsp/mmbert-small")

# Example: get multilingual sentence embeddings via mean pooling
inputs = tokenizer("Hello world! 你好世界! Bonjour le monde!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
embeddings = outputs.last_hidden_state.mean(dim=1)
```

### Base Model for Masked Language Modeling
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-base")
model = AutoModelForMaskedLM.from_pretrained("jhu-clsp/mmbert-base")

# Example: multilingual masked language modeling
text = "The capital of [MASK] is Paris."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Get the top-5 predictions for the [MASK] token
mask_indices = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)
predictions = outputs.logits[mask_indices]
top_tokens = torch.topk(predictions, 5, dim=-1)
predicted_words = [tokenizer.decode(token) for token in top_tokens.indices[0]]
print(f"Predictions: {predicted_words}")
```
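
As a lighter-weight alternative, the same check can be run through the `fill-mask` pipeline. A minimal sketch, assuming the checkpoint is supported by the pipeline API in `transformers>=4.48.0`:

```python
from transformers import pipeline

# Assumes jhu-clsp/mmbert-base works with the standard fill-mask pipeline
unmasker = pipeline("fill-mask", model="jhu-clsp/mmbert-base")
print(unmasker("The capital of [MASK] is Paris.")[:3])  # top-3 candidates
```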

### Classification Task
```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Load the encoder to build a classifier on top of
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-base")
encoder = AutoModel.from_pretrained("jhu-clsp/mmbert-base")

# Add a classification head (randomly initialized; fine-tune before use)
class MultilingualClassifier(nn.Module):
    def __init__(self, encoder, num_classes):
        super().__init__()
        self.encoder = encoder
        self.classifier = nn.Linear(encoder.config.hidden_size, num_classes)
        self.dropout = nn.Dropout(0.1)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.encoder(input_ids, attention_mask=attention_mask)
        pooled_output = outputs.last_hidden_state[:, 0]  # use the [CLS] token
        pooled_output = self.dropout(pooled_output)
        return self.classifier(pooled_output)

# Initialize the classifier
model = MultilingualClassifier(encoder, num_classes=3)

# Example multilingual inputs
texts = [
    "This is a positive review.",
    "Ceci est un avis négatif.",
    "这是一个中性评价。",
]
inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
predictions = model(**inputs)
```
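
The custom head above is useful when you want full control over pooling and dropout. If the checkpoint's architecture is supported by `AutoModelForSequenceClassification` (as ModernBERT-style models are in recent `transformers` releases), an equivalent head can also be attached automatically; a minimal sketch:

```python
from transformers import AutoModelForSequenceClassification

# Attaches a randomly initialized classification head with 3 labels
model = AutoModelForSequenceClassification.from_pretrained(
    "jhu-clsp/mmbert-base", num_labels=3
)
```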

### Multilingual Retrieval
```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmbert-base")
model = AutoModel.from_pretrained("jhu-clsp/mmbert-base")

def get_embeddings(texts):
    inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean pooling, then L2 normalization so dot products are cosine similarities
    embeddings = outputs.last_hidden_state.mean(dim=1)
    embeddings = torch.nn.functional.normalize(embeddings, dim=1)
    return embeddings.numpy()

# Multilingual document retrieval
documents = [
    "Artificial intelligence is transforming healthcare.",
    "L'intelligence artificielle transforme les soins de santé.",
    "人工智能正在改变医疗保健。",
    "Climate change requires immediate action.",
    "El cambio climático requiere acción inmediata.",
]

query = "AI in medicine"

# Embed the documents and the query
doc_embeddings = get_embeddings(documents)
query_embedding = get_embeddings([query])

# Compute cosine similarities and rank the documents
similarities = np.dot(doc_embeddings, query_embedding.T).flatten()
ranked_docs = np.argsort(similarities)[::-1]

print("Most similar documents:")
for i, doc_idx in enumerate(ranked_docs[:3]):
    print(f"{i+1}. {documents[doc_idx]} (score: {similarities[doc_idx]:.3f})")
```
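
Because `get_embeddings` L2-normalizes its output, the dot products above are cosine similarities. For corpora beyond a few thousand documents, an approximate-nearest-neighbor index (e.g., FAISS) would replace the brute-force `np.dot` ranking.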

## Licensing & Attribution

This dataset aggregates multiple open-source datasets under permissive licenses. See individual source datasets for specific attribution requirements.