
陈博远

xToM2
https://github.com/cby-pku
  • cby-pku

AI & ML interests

RLHF, strategic reasoning, game theory, MAS

Organizations

None yet

Collections 1

alignment
  • Safe RLHF: Safe Reinforcement Learning from Human Feedback

    Paper • 2310.12773 • Published Oct 19, 2023 • 28

models 0

None public yet

datasets 0

None public yet