jwllmboy committed on
Commit 870ae18 · verified · 1 Parent(s): 1bfa7b4

Update README.md

Added an explanation of the category order.

Files changed (1): README.md (+2 -0)
README.md CHANGED
@@ -121,6 +121,7 @@ category_names = ["Crime: ", "Manipulation: ", "Privacy: ", "Sexual: ", "Violenc
  # - Higher values reduce false positives (over-detection) but increase false negatives (missed detections).
  # - Lower values increase sensitivity but may lead to more false positives.
  # Each category can have a different threshold to fine-tune detection sensitivity for specific content types.
+ # The category order is as follows: Crime, Manipulation, Privacy, Sexual, Violence.
  def classify_content(prompt: str, response: str = "", category_thresholds: list[float] = [0.5, 0.5, 0.5, 0.5, 0.5]):
      """
      Classify the content based on the given prompt and response.
@@ -197,6 +198,7 @@ category_names = ["Crime: ", "Manipulation: ", "Privacy: ", "Sexual: ", "Violenc
  # - Higher values reduce false positives (over-detection) but increase false negatives (missed detections).
  # - Lower values increase sensitivity but may lead to more false positives.
  # Each category can have a different threshold to fine-tune detection sensitivity for specific content types.
+ # The category order is as follows: Crime, Manipulation, Privacy, Sexual, Violence.
  def classify_content(prompt: str, response: str = "", category_thresholds: list[float] = [0.5, 0.5, 0.5, 0.5, 0.5]):
      """
      Classify the content based on the given prompt and response.
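As a sketch of how the ordered thresholds might be applied (the body of `classify_content` is not shown in this hunk, so `apply_thresholds` and `category_scores` below are hypothetical stand-ins for the classifier's per-category probabilities, listed in the documented order):

```python
# Sketch only: assumes a classifier that returns one score per category,
# in the documented order: Crime, Manipulation, Privacy, Sexual, Violence.
category_names = ["Crime: ", "Manipulation: ", "Privacy: ", "Sexual: ", "Violence: "]


def apply_thresholds(category_scores: list[float],
                     category_thresholds: list[float]) -> dict[str, bool]:
    """Flag each category whose score meets or exceeds its threshold."""
    return {
        name.rstrip(": "): score >= threshold
        for name, score, threshold
        in zip(category_names, category_scores, category_thresholds)
    }


# Example: raise only the Sexual threshold to 0.7 to reduce false
# positives in that category; a score of 0.65 is then not flagged.
flags = apply_thresholds([0.2, 0.6, 0.1, 0.65, 0.9],
                         [0.5, 0.5, 0.5, 0.7, 0.5])
```

Because the thresholds are positional, the added comment about category order matters: the fourth value in `category_thresholds` always governs Sexual content, regardless of what the scores are.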