Preference Model is building the next generation of training data to power the future of AI. Today's models are powerful but fail to reach their potential across diverse use cases because so many of the tasks we want to use these models for are out of distribution. Preference Model creates RL environments where models encounter research and engineering problems, iterate, and learn from realistic feedback loops.
Our founding team previously worked on Anthropic's data team, building the data infrastructure, tokenizers, and datasets behind Claude. We are partnering with leading AI labs to push AI closer to achieving its transformative potential. We are backed by a16z.