Putting our models out in the real world – if done carefully – could help advance AI safety. We could collect feedback on human preferences, test how aligned our models are in practice, and be a shining example of responsible deployment of AI to the rest of the AI ecosystem.
You will lead product management for deployment, from working with engineers on rapid prototyping to setting product strategy and facilitating conversations among senior stakeholders around deployment.
Our deployment efforts must actively contribute to AI safety; we’re looking for someone who would be happy to put their tools down if we find that they don’t.
This position is up to 50% remote.
– Along with the engineering team, rapidly prototype different products and services to learn how generative models can help solve real problems for users.
– Work closely with our alignment team to define how to make our models more aligned, and with ML researchers to guide what capabilities are needed to serve users.
– Help build the team: hiring, setting strategy and goals, defining culture and supporting folks to do their best work.
– Guide product strategy, from target scale and target audience to differentiation pillars, approach to monetization, and roadmap.
– Everything else it takes to get a new product off the ground.
You might be a good fit if you
– Enjoy tackling unusual and engaging product challenges, including exploring a potentially broad application space with safety as a primary goal.
– Love to think creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems.
– Enjoy empirical, data-driven approaches to deployment.
– Have experience gathering feedback, input and data from users and running beta tests.
– Are willing and excited to play many deployment-related roles, including ones we haven’t scoped yet.
– Have successfully launched products before, finding ‘product-market fit’ or executing an ‘effective intervention’.
– Have built effective product, engineering, design, and/or data teams.
– Enjoy engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities.
How we’re different
– We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. We value impact, advancing our long-term goals of steerable, trustworthy AI, over work on smaller and more specific puzzles.
– We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science.
– We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.