Putting our models out in the real world, if done carefully, could help advance AI safety and create significant social value. We could collect feedback on human preferences, test how aligned our models are in practice, and set a shining example of responsible AI deployment for the rest of the AI ecosystem. You will own product management for our socially beneficial deployments: rapidly prototyping with engineers, building partner relationships from first contact through launch and delivered value, and helping set our product strategy for socially beneficial deployments. This position is up to 50% remote.
In this role you will:
– Along with the engineering team, rapidly prototype different products and services to learn how generative models can help solve real problems for users.
– Work closely with non-profit partners to explore how we can make them more effective, either by improving their services or reducing their costs.
– Improve model performance based on user feedback.
– Influence strategy around product value, target audiences, and new partner relationships.
– Everything else it takes to get new partnerships, projects and products off the ground.
You might be a good fit if you:
– Enjoy tackling unusual and engaging product challenges, including exploring a potentially broad application space with safety as a primary goal.
– Love to think creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems.
– Enjoy empirical, data-driven approaches to deployment.
– Have experience gathering feedback, input and data from users and running beta tests.
– Are willing and excited to fill many deployment-related roles, including those we haven’t scoped yet.
– Have successfully launched products before, whether by finding ‘product-market fit’ or executing an ‘effective intervention’.
– Have worked in effective product, engineering, design and/or data teams.
– Enjoy engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities.