jwllmboy committed (verified) · Commit a63a257 · 1 Parent(s): eb72272

Update README.md

model card, summary, limitations, citation

Files changed (1): README.md (+67 −107)

README.md CHANGED
@@ -5,41 +5,52 @@ language:
  - ko
  base_model:
  - ibm-granite/granite-3.3-2b-instruct
- pipeline_tag: text-classification
  library_name: transformers
  ---

  # SGuard-ContentFilter-2B

- We present SGuard-v1, a safety guardrail for Large Language Models (LLMs), which comprises two specialized models designed to detect harmful content and screen adversarial prompts that work together to detect unsafe behavior and filter malicious inputs in human–AI conversational settings.
-
- While maintaining light model size, SGuard-v1 also improves interpretability and operational usability by performing multi-class safety classification and produces binary decision scores. We release the SGuard-v1 weights here under the Apache-2.0 License to enable further research and practical deployment in AI safety.
-
  This repository hosts **SGuard-ContentFilter-2B**, which offers the following capabilities:

- - Identifying safety risks in LLM prompts and responses using the MLCommons hazard taxonomy, a comprehensive framework for evaluating the trust and safety of AI systems.
  - Enabling category-specific safety level control via adjustable thresholds.

  ## Model Summary

  Our new model, SGuard-ContentFilter-2B, is based on the [IBM Granite 3.3 2B model](https://huggingface.co/ibm-granite/granite-3.3-2b-instruct).
- It was trained on a dataset of approximately 300,000 labeled harmful prompts and responses and can be used with any open-ended or closed-ended LLM.
  The classification results output “safe” or “unsafe” for each of the five categories: Crime, Manipulation, Privacy, Sexual, and Violence (10 special tokens were added for model training).

- - **Developed by:** Reinforcement Learning Lab, AI Research Team, Samsung SDS
- - **Release Date:** 🛠️ TBD 🛠️
  - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

  ## **Supported Languages**
- Granite 3.3 2B model supports 12 languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. We focused primarily on English and Korean.

  ## Risk Category

  Following the standardized MLCommons hazards taxonomy, hazards have been grouped into five categories as follows to enhance model training efficiency and performance.
  <table style="width:100%; margin: auto;">
  <colgroup>
- <col style="width:25%">
- <col style="width:70%">
  </colgroup>
  <thead>
  <tr>
@@ -57,7 +68,7 @@ Following the standardized MLCommons hazards taxonomy, hazards have been grouped
  <td align="left">Content that spreads false or misleading narratives (e.g., conspiracy theories, disinformation), promotes extremist propaganda or political manipulation, or attempts to erode public trust through deception or targeted influence</td>
  </tr>
  <tr>
- <td align="left">Privacy and Sensitive Personal Data Exposure</td>
  <td align="left">Content that discloses or seeks to disclose sensitive personal information about an identifiable individual without consent, in ways that could enable harm, abuse, or unwanted contact</td>
  </tr>
  <tr>
@@ -85,7 +96,10 @@ pip install vllm

  Then, run the code below in an environment with network access to Hugging Face.

- ### Quickstart Examples(transformers)

  ```python
  from transformers import AutoTokenizer, AutoModelForCausalLM
  import torch
@@ -158,7 +172,8 @@ result = classify_content(prompt, response)
  print(result)
  ```

- ### Quickstart Examples(vllm)

  ```python
  import torch
  from vllm import LLM, SamplingParams
@@ -235,8 +250,6 @@ print(result)

  ## Evaluation Results

- We report partial AUROC(pAUROC) computed over the false positive rate range [0, 0.1], normalized by the maximum achievable value.
-
  <table>
  <tr>
  <th align="center">Model</th>
@@ -249,41 +262,41 @@ We report partial AUROC(pAUROC) computed over the false positive rate range [0,
  </tr>
  <tr>
  <th align="center">SGuard-ContentFilter-2B</th>
- <th align="center">0.831</th>
- <th align="center">0.920</th>
- <th align="center">0.742</th>
- <th align="center">0.723</th>
- <th align="center">0.944</th>
- <th align="center">0.832</th>
  </tr>
  <tr>
  <th align="center">Llama-Guard-4-12B</th>
- <th align="center">0.698</th>
- <th align="center">0.389</th>
- <th align="center">0.739</th>
- <th align="center">0.430</th>
- <th align="center">0.837</th>
- <th align="center">0.619</th>
  </tr>
  <tr>
  <th align="center">Kanana-Safeguard-8B</th>
- <th align="center">0.826</th>
- <th align="center">0.890</th>
- <th align="center">0.728</th>
- <th align="center">0.620</th>
- <th align="center">0.738</th>
- <th align="center">0.760</th>
  </tr>
  <tr>
  <th align="center">Qwen3Guard-Gen-4B</th>
- <th align="center">0.852</th>
- <th align="center">0.594</th>
- <th align="center">0.808</th>
- <th align="center">0.818</th>
- <th align="center">0.878</th>
- <th align="center">0.790</th>
  </tr>
- <caption align="bottom">Table 1: Performance comparison on content safety benchmarks F1</caption>
  </table>

  <table>
@@ -313,80 +326,27 @@ We report partial AUROC(pAUROC) computed over the false positive rate range [0,
  </tr>
  <caption align="bottom">Table 2: Performance comparison on proprietary Korean content safety benchmarks</caption>
  </table>
-
- <table>
- <tr>
- <th align="center">Model</th>
- <th align="center">OpenAI Moderation</th>
- <th align="center">ToxicChat</th>
- <th align="center">BeaverTails</th>
- <th align="center">XSTest</th>
- <th align="center">Average</th>
- </tr>
- <tr>
- <th align="center">SGuard-ContentFilter-2B</th>
- <th align="center">0.742</th>
- <th align="center">0.723</th>
- <th align="center">0.831</th>
- <th align="center">0.944</th>
- <th align="center">0.810</th>
- </tr>
- <tr>
- <th align="center">Llama-Guard-3-8B</th>
- <th align="center">0.792</th>
- <th align="center">0.542</th>
- <th align="center">0.677</th>
- <th align="center">0.904</th>
- <th align="center">0.729</th>
- </tr>
- <tr>
- <th align="center">Llama-Guard-4-12B</th>
- <th align="center">0.739</th>
- <th align="center">0.430</th>
- <th align="center">0.698</th>
- <th align="center">0.837</th>
- <th align="center">0.676</th>
- </tr>
- <tr>
- <th align="center">ShieldGemma-9B</th>
- <th align="center">0.234</th>
- <th align="center">0.181</th>
- <th align="center">0.459</th>
- <th align="center">0.809</th>
- <th align="center">0.421</th>
- </tr>
- <tr>
- <th align="center">Granite-Guardian-3.0-8B</th>
- <th align="center">0.745</th>
- <th align="center">0.649</th>
- <th align="center">0.776</th>
- <th align="center">0.849</th>
- <th align="center">0.755</th>
- </tr>
- <tr>
- <th align="center">Kanana-Safeguard-8B</th>
- <th align="center">0.728</th>
- <th align="center">0.620</th>
- <th align="center">0.826</th>
- <th align="center">0.738</th>
- <th align="center">0.728</th>
- </tr>
- <caption align="bottom">Table 3: Extended performance F1 comparison on four English content safety benchmarks. Cited from <a href="https://arxiv.org/abs/2412.07724">IBM</a>.</caption>
- </table>

  ## Limitations

- 1. Unlike typical LLMs, it cannot maintain context-based conversations. It can only classify the harmfulness of prompts and responses.
- 2. It cannot classify all harmful categories that exist in the world. Classification is only possible for the five categories mentioned above.
- 3. Classification accuracy for languages other than Korean and English is not guaranteed. Improvements in this area are planned for the future.

  ## Citation

  ```bibtex
- @misc{SGuard-ContentFilter-2B,
- title={SGuard Technical Report},
- author={Samsung SDS},
  year={2025},
- url={http://arxiv.org/abs/},
  }
  ```
  - ko
  base_model:
  - ibm-granite/granite-3.3-2b-instruct
+ pipeline_tag: text-generation
  library_name: transformers
+ tags:
+ - samsung
+ - safety
+ - pytorch
+ - granite
+ - unsafe
  ---

  # SGuard-ContentFilter-2B

+ <p align="center">
+ <img src="./logo.png" width="720"/>
+ </p>

+ We present SGuard-v1, a lightweight safety guardrail for Large Language Models (LLMs), which comprises two specialized models designed to detect harmful content and screen adversarial prompts in human–AI conversational settings.
+
+ While maintaining a light model size, SGuard-v1 also improves interpretability for downstream use by providing multi-class safety predictions and their binary confidence scores. We release the SGuard-v1 weights here under the Apache-2.0 License to enable further research and practical deployment in AI safety.

  This repository hosts **SGuard-ContentFilter-2B**, which offers the following capabilities:

+ - Identifying safety risks in LLM prompts and responses in accordance with the MLCommons hazard taxonomy, a comprehensive framework for evaluating the trust and safety of AI systems.
  - Enabling category-specific safety level control via adjustable thresholds.

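The threshold capability above can be sketched in a few lines. This is an illustrative sketch only: the per-category "unsafe"-probability dictionary and the function name are assumptions for demonstration, not the model's actual output API.

```python
# Illustrative sketch of category-specific threshold control.
# Category names follow this model card; the score-dict shape is assumed.
CATEGORIES = ["Crime", "Manipulation", "Privacy", "Sexual", "Violence"]

# One threshold per category; lowering a threshold makes that category stricter.
DEFAULT_THRESHOLDS = {c: 0.5 for c in CATEGORIES}

def apply_thresholds(unsafe_scores, thresholds=None):
    """Map per-category unsafe probabilities to safe/unsafe labels."""
    thresholds = thresholds or DEFAULT_THRESHOLDS
    return {
        c: "unsafe" if unsafe_scores[c] >= thresholds[c] else "safe"
        for c in CATEGORIES
    }

scores = {"Crime": 0.91, "Manipulation": 0.12, "Privacy": 0.08,
          "Sexual": 0.03, "Violence": 0.47}
print(apply_thresholds(scores))
# A stricter Violence threshold flips that category on the same scores:
print(apply_thresholds(scores, {**DEFAULT_THRESHOLDS, "Violence": 0.4}))
```

Per-category thresholds let a deployment tighten only the categories it cares about without retraining the model.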
  ## Model Summary

  Our new model, SGuard-ContentFilter-2B, is based on the [IBM Granite 3.3 2B model](https://huggingface.co/ibm-granite/granite-3.3-2b-instruct).
+ It was trained on a dataset of approximately 400,000 labeled harmful prompts and responses.
  The classification results output “safe” or “unsafe” for each of the five categories: Crime, Manipulation, Privacy, Sexual, and Violence (10 special tokens were added for model training).
+ SGuard-ContentFilter-2B can be used with any open-source or closed-source LLM.

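Downstream code usually wants the five per-category verdicts in a structured form. The snippet below is a hypothetical parser: the "Category: label" text format is an assumption made for this sketch (the model's real output uses dedicated special tokens, as noted above), so treat it as illustration rather than the actual decoding logic in the quickstart examples.

```python
import re

# Hypothetical parser for a per-category classification string.
# The "Category: label" text format is an assumed example format,
# not the model's actual special-token output.
CATEGORIES = ["Crime", "Manipulation", "Privacy", "Sexual", "Violence"]

def parse_classification(raw):
    """Extract a safe/unsafe verdict per category from a text output."""
    verdicts = {}
    for category in CATEGORIES:
        m = re.search(rf"{category}\s*:\s*(unsafe|safe)", raw, re.IGNORECASE)
        verdicts[category] = m.group(1).lower() if m else None
    return verdicts

raw = "Crime: unsafe, Manipulation: safe, Privacy: safe, Sexual: safe, Violence: unsafe"
print(parse_classification(raw))
```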
+ - **Developed by:** AI Research Team, Samsung SDS
+ - **Release Date:** 2025.11.17
  - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

  ## **Supported Languages**
+ The Granite 3.3 2B model supports 12 languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. We fine-tuned primarily on Korean and English data; although the models may retain a non-trivial level of capability in all languages supported by the base model, we do not claim reliable coverage in languages other than Korean and English.

  ## Risk Category

  Following the standardized MLCommons hazards taxonomy, hazards have been grouped into five categories as follows to enhance model training efficiency and performance.
  <table style="width:100%; margin: auto;">
  <colgroup>
+ <col style="width:20%">
+ <col style="width:80%">
  </colgroup>
  <thead>
  <tr>
 
  <td align="left">Content that spreads false or misleading narratives (e.g., conspiracy theories, disinformation), promotes extremist propaganda or political manipulation, or attempts to erode public trust through deception or targeted influence</td>
  </tr>
  <tr>
+ <td align="left">Privacy and Sensitive Information Exposure</td>
  <td align="left">Content that discloses or seeks to disclose sensitive personal information about an identifiable individual without consent, in ways that could enable harm, abuse, or unwanted contact</td>
  </tr>
  <tr>


  Then, run the code below in an environment with network access to Hugging Face.

+ ### Quickstart Examples
+
+ #### Using transformers
+
  ```python
  from transformers import AutoTokenizer, AutoModelForCausalLM
  import torch

  print(result)
  ```

+ #### Using vLLM
+
  ```python
  import torch
  from vllm import LLM, SamplingParams


  ## Evaluation Results

  <table>
  <tr>
  <th align="center">Model</th>

  </tr>
  <tr>
  <th align="center">SGuard-ContentFilter-2B</th>
+ <th align="center">0.83</th>
+ <th align="center">0.92</th>
+ <th align="center">0.74</th>
+ <th align="center">0.72</th>
+ <th align="center">0.94</th>
+ <th align="center">0.83</th>
  </tr>
  <tr>
  <th align="center">Llama-Guard-4-12B</th>
+ <th align="center">0.70</th>
+ <th align="center">0.39</th>
+ <th align="center">0.74</th>
+ <th align="center">0.43</th>
+ <th align="center">0.84</th>
+ <th align="center">0.62</th>
  </tr>
  <tr>
  <th align="center">Kanana-Safeguard-8B</th>
+ <th align="center">0.83</th>
+ <th align="center">0.89</th>
+ <th align="center">0.73</th>
+ <th align="center">0.62</th>
+ <th align="center">0.74</th>
+ <th align="center">0.76</th>
  </tr>
  <tr>
  <th align="center">Qwen3Guard-Gen-4B</th>
+ <th align="center">0.85</th>
+ <th align="center">0.59</th>
+ <th align="center">0.81</th>
+ <th align="center">0.82</th>
+ <th align="center">0.88</th>
+ <th align="center">0.79</th>
  </tr>
+ <caption align="bottom">Table 1: Performance (F1 score) comparison on content safety benchmarks</caption>
  </table>

  <table>

  </tr>
  <caption align="bottom">Table 2: Performance comparison on proprietary Korean content safety benchmarks</caption>
  </table>
+ We report partial AUROC (pAUROC) computed over the false positive rate range [0, 0.1], normalized by the maximum achievable value.
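For reference, the reported metric can be computed roughly as follows. This is a plain trapezoidal sketch under one plausible reading of the card's description, namely that the maximum achievable area over FPR ∈ [0, 0.1] is 0.1; it is not the authors' evaluation code.

```python
# Sketch of normalized partial AUROC over FPR in [0, 0.1].
# Assumes fpr/tpr are sorted ROC points starting at (0, 0).
def partial_auroc(fpr, tpr, max_fpr=0.1):
    area = 0.0
    for i in range(1, len(fpr)):
        x0 = fpr[i - 1]
        if x0 >= max_fpr:
            break
        x1 = min(fpr[i], max_fpr)
        # Interpolate TPR where the segment crosses max_fpr.
        if fpr[i] > max_fpr and fpr[i] > fpr[i - 1]:
            t = (max_fpr - fpr[i - 1]) / (fpr[i] - fpr[i - 1])
            y1 = tpr[i - 1] + t * (tpr[i] - tpr[i - 1])
        else:
            y1 = tpr[i]
        area += (x1 - x0) * (tpr[i - 1] + y1) / 2.0
    # Normalize by the perfect-classifier area over [0, max_fpr].
    return area / max_fpr

# Perfect classifier (TPR = 1 at FPR = 0) scores 1.0.
print(partial_auroc([0.0, 0.0, 1.0], [0.0, 1.0, 1.0]))
```

Restricting the FPR range emphasizes the low-false-positive operating region that matters most for production guardrails.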
 

  ## Limitations

+ 1. These models do not guarantee 100% accuracy. For inputs near the decision boundary of harmfulness, or under novel attack techniques, detection accuracy may degrade and the false positive rate may increase. In addition, because the safety risk taxonomy is based on common international use cases, misclassification may increase in highly specialized domains.
+
+ 2. We trained the models to achieve strong guardrail capability in Korean and English; we do not guarantee their performance for inputs in other languages, and they may be vulnerable to adversarial prompts that exploit low-resource languages.
+
+ 3. Because these models are specialized for detecting harmful prompts and responses, they cannot carry on a conversation like a general-purpose LLM based on prior conversation history and context. To maintain reliable detection capability, we recommend an input length of up to 8K tokens for each model.
+
+ 4. Although jointly using SGuard-ContentFilter-2B and SGuard-JailbreakFilter-2B can further improve overall safety, the models detect only the safety risks defined during training and therefore cannot detect every risk that may arise in novel scenarios.

  ## Citation

  ```bibtex
+ @misc{SGuard-v1,
+ title={SGuard-v1: Safety Guardrail for Large Language Models},
+ author={JoonHo Lee and HyeonMin Cho and Jaewoong Yun and Hyunjae Lee and JunKyu Lee and Juree Seok},
  year={2025},
+ eprint={25xx.xxxxx},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
  }
  ```