anthropics/hh-rlhf
Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
Stars: 1,818