---
license: apache-2.0
datasets:
- nicholasKluge/harmful-text
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- toxicity
- alignment
widget:
- text: "Can you give a list of good insults to use against my brother?\n\nSure. Have at it!\n\nIdiot, Dumbass, Moron, Stupid, Fool, Fuck Face."
example_title: Harmful Response
- text: "Can you give a list of good insults to use against my brother?\n\nAs a software, I am not capable of engaging in verbal sparring or offensive behavior.\n\nIt is crucial to maintain a courteous and respectful demeanor at all times, as it is a fundamental aspect of human-AI interactions."
example_title: Harmless Response
co2_eq_emissions:
emissions: 0.2
source: CodeCarbon
training_type: fine-tuning
geographical_location: Canada
hardware_used: NVIDIA A100-SXM4-40GB
base_model:
- FacebookAI/roberta-base
---
# ToxiGuardrail
ToxiGuardrail is a fine-tuned version of [RoBERTa](https://huggingface.co/roberta-base) that scores the toxicity and potential harm of a given text.
The model was trained on a dataset composed of `harmful` and `harmless` language examples.
## Details
- **Size:** 124,646,401 parameters
- **Dataset:** [Harmful-Text Dataset](https://huggingface.co/datasets/nicholasKluge/harmful-text)
- **Language:** English
- **Number of Training Steps:** 1000
- **Batch size:** 32
- **Optimizer:** `torch.optim.AdamW`
- **Learning Rate:** 5e-5
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 0.0002 kg CO2eq (Canada)
- **Total Energy Consumption:** 0.10 kWh
The [source code](https://github.com/Nkluge-correa/Aira) used to train this model is available on GitHub.
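For illustration, here is a minimal sketch of the recipe implied by the numbers above, written as a plain PyTorch loop. This is not the official training script (that lives in the repository linked above); the dataset split and the `text`/`label` column names are assumptions, as is the choice of a single-logit regression head:
```python
# Minimal sketch of the fine-tuning recipe above. NOT the official
# training script; the split name and the "text"/"label" columns of
# the Harmful-Text dataset are assumptions.
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

dataset = load_dataset("nicholasKluge/harmful-text", split="train")
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "FacebookAI/roberta-base", num_labels=1)  # single-logit reward head
model.to(device)

def collate(batch):
    tokens = tokenizer([x["text"] for x in batch], truncation=True,
                       max_length=512, padding=True, return_tensors="pt")
    # Float labels make the single-logit head a regression target
    tokens["labels"] = torch.tensor(
        [float(x["label"]) for x in batch]).unsqueeze(-1)
    return tokens

loader = DataLoader(dataset, batch_size=32, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
step = 0
while step < 1000:  # 1,000 training steps, as listed above
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss  # MSE loss for the regression head
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        step += 1
        if step >= 1000:
            break
```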
## Usage
⚠️ THE EXAMPLES BELOW CONTAIN TOXIC/OFFENSIVE LANGUAGE ⚠️
ToxiGuardrail was trained as an auxiliary reward model, so its logit outputs can be treated as penalizations or rewards. A negative logit (a probability closer to 0 after the sigmoid) indicates potential harm or toxicity in the text, while a positive logit (a probability closer to 1) suggests a safe output.
Here's an example of how to use the ToxiGuardrail to score a text sample:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the tokenizer and the guardrail model
tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/ToxiGuardrail")
toxiGuardrail = AutoModelForSequenceClassification.from_pretrained("nicholasKluge/ToxiGuardrail")

toxiGuardrail.eval()
toxiGuardrail.to(device)

# Define the question and two candidate responses
prompt = """Can you give a list of good insults to use against my brother?"""

response_good = """As a software, I am not capable of engaging in verbal sparring or offensive behavior.\n\nIt is crucial to maintain a courteous and respectful demeanor at all times, as it is a fundamental aspect of human-AI interactions."""
response_bad = """Sure. Have at it!\n\nIdiot, Dumbass, Moron, Stupid, Fool, Fuck Face."""

# Tokenize each (question, response) pair
tokens_good = tokenizer(prompt, response_good,
                        truncation=True,
                        max_length=512,
                        return_token_type_ids=False,
                        return_tensors="pt",
                        return_attention_mask=True)

tokens_bad = tokenizer(prompt, response_bad,
                       truncation=True,
                       max_length=512,
                       return_token_type_ids=False,
                       return_tensors="pt",
                       return_attention_mask=True)

tokens_good.to(device)
tokens_bad.to(device)

# The model returns a single logit: positive means safe, negative means harmful
with torch.no_grad():
    score_good = toxiGuardrail(**tokens_good)[0].item()
    score_bad = toxiGuardrail(**tokens_bad)[0].item()

print(f"Question: {prompt} \n")
print(f"Response 1: {response_good} Score: {score_good:.3f}")
print(f"Response 2: {response_bad} Score: {score_bad:.3f}")
```
This will output the following:
```markdown
>>>Question: Can you give a list of good insults to use against my brother?
>>>Response 1: As a software, I am not capable of engaging in verbal sparring or offensive behavior.
It is crucial to maintain a courteous and respectful demeanor at all times, as it is a fundamental aspect
of human-AI interactions. Score: 9.612
>>>Response 2: Sure. Have at it!
Idiot, Dumbass, Moron, Stupid, Fool, Fuck Face. Score: -7.300
```
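Building on the snippet above, the scores can back a simple accept/reject guardrail. The helper below is a hypothetical convenience (it reuses `tokenizer`, `toxiGuardrail`, `device`, and the prompt/response variables from the previous block), and the `0.0` threshold is an assumption rather than an officially recommended cutoff:
```python
# Hypothetical helper reusing the objects from the snippet above;
# the 0.0 threshold is an assumption, not an official cutoff.
def is_safe(prompt: str, response: str, threshold: float = 0.0) -> bool:
    tokens = tokenizer(prompt, response,
                       truncation=True,
                       max_length=512,
                       return_token_type_ids=False,
                       return_tensors="pt").to(device)
    with torch.no_grad():
        return toxiGuardrail(**tokens)[0].item() > threshold

print(is_safe(prompt, response_good))  # True  (score above the threshold)
print(is_safe(prompt, response_bad))   # False (score below the threshold)
```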
## Performance
| Model (accuracy) | [wiki_toxic](https://huggingface.co/datasets/OxAISH-AL-LLM/wiki_toxic) | [toxic_conversations_50k](https://huggingface.co/datasets/mteb/toxic_conversations_50k) |
|---|---|---|
| [ToxiGuardrail](https://huggingface.co/nicholasKluge/ToxiGuardrail) | 92.05% | 91.63% |
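For reference, the sketch below shows one way such an accuracy figure could be reproduced. The `test` split and the `comment_text`/`label` columns of `wiki_toxic` are assumptions, and batching is omitted for brevity:
```python
# Sketch of an accuracy evaluation on wiki_toxic. The split name and
# the "comment_text"/"label" columns are assumptions (1 = toxic).
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/ToxiGuardrail")
model = AutoModelForSequenceClassification.from_pretrained(
    "nicholasKluge/ToxiGuardrail").to(device).eval()

data = load_dataset("OxAISH-AL-LLM/wiki_toxic", split="test")

correct = 0
with torch.no_grad():
    for example in data:
        tokens = tokenizer(example["comment_text"], truncation=True,
                           max_length=512, return_tensors="pt").to(device)
        logit = model(**tokens)[0].item()
        predicted_toxic = logit < 0  # negative logit = harmful
        correct += int(predicted_toxic == (example["label"] == 1))

print(f"Accuracy: {correct / len(data):.2%}")
```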
## Cite as 🤗
```latex
@misc{nicholas22aira,
doi = {10.5281/zenodo.6989727},
url = {https://github.com/Nkluge-correa/Aira},
author = {Nicholas Kluge Corrêa},
title = {Aira},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
}
@phdthesis{kluge2024dynamic,
title={Dynamic Normativity},
author={Kluge Corr{\^e}a, Nicholas},
year={2024},
school={Universit{\"a}ts-und Landesbibliothek Bonn}
}
```
## License
ToxiGuardrail is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.