rimahazra's Collections

AI and Safety

Updated Feb 27

We have published in several top NLP/AI conferences, such as ACL, EMNLP, AAAI, and ICWSM.

  • SafeInfer: Context Adaptive Decoding Time Safety Alignment for Large Language Models

    Paper • 2406.12274 • Published Jun 18, 2024 • 16

  • Safety Arithmetic: A Framework for Test-time Safety Alignment of Language Models by Steering Parameters and Activations

    Paper • 2406.11801 • Published Jun 17, 2024 • 16

  • How (un)ethical are instruction-centric responses of LLMs? Unveiling the vulnerabilities of safety guardrails to harmful queries

    Paper • 2402.15302 • Published Feb 23, 2024 • 4

  • Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models

    Paper • 2401.10647 • Published Jan 19, 2024 • 4

  • SoftMINER-Group/NicheHazardQA (see the loading sketch after this list)

    Viewer • Updated Jan 26 • 388 • 40 • 6

  • SoftMINER-Group/TechHazardQA

    Viewer • Updated Jan 26 • 7.75k • 35 • 5

  • Navigating the Cultural Kaleidoscope: A Hitchhiker's Guide to Sensitivity in Large Language Models

    Paper • 2410.12880 • Published Oct 15, 2024 • 3

  • SoftMINER-Group/CulturalKaleidoscope

    Preview • Updated Jan 26 • 9 • 7

  • Soteria: Language-Specific Functional Parameter Steering for Multilingual Safety Alignment

    Paper • 2502.11244 • Published Feb 16
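
The three datasets listed above (NicheHazardQA, TechHazardQA, CulturalKaleidoscope) are hosted on the Hugging Face Hub, so they can be pulled with the datasets library. The sketch below is only an assumed usage pattern, not something stated in this collection: it requests each repository's default configuration, and split names, column layouts, and any gated-access requirements may differ per dataset.

from datasets import load_dataset

# Hypothetical loading loop over the dataset repositories in this collection.
# load_dataset() fetches the default configuration of each repository;
# logging in via huggingface-cli may be needed if a dataset is gated.
for repo_id in [
    "SoftMINER-Group/NicheHazardQA",
    "SoftMINER-Group/TechHazardQA",
    "SoftMINER-Group/CulturalKaleidoscope",
]:
    ds = load_dataset(repo_id)
    # Report the available splits and their row counts.
    print(repo_id, {split: ds[split].num_rows for split in ds})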