Secrets of RLHF in Large Language Models Part II: Reward Modeling • Paper 2401.06080 • Published Jan 11, 2024
RMB: Comprehensively Benchmarking Reward Models in LLM Alignment • Paper 2410.09893 • Published Oct 13, 2024
Secrets of RLHF in Large Language Models Part I: PPO • Paper 2307.04964 • Published Jul 11, 2023