Researcher, Recursive Self-Improvement Preparedness
OpenAI
Hiring Activity Score: 81/100 (Very Active: 80-100)
- Base score
- Just posted (1 day ago; flat bonus up to 14 days)
- Has a location and a quality description (6,002 characters)
- 417 new listings in the last 30 days (×0.98 age decay at 1 day)
- Tier 1 reputable company (OpenAI), ×0.98 age decay at 1 day
- Low confidence (30%)
- Direct ATS source (Ashby)
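Read together, the line items above suggest the 0-100 score is a sum of weighted signals, with some bonuses decayed by ×0.98 per day of listing age. Below is a minimal sketch of that composition; the field names, base score, weights, and thresholds are illustrative assumptions, not the site's actual formula.

```typescript
// Hypothetical score composition; all field names, weights, and thresholds
// below are assumptions for illustration, not the site's real formula.
interface ListingSignals {
  ageDays: number;          // listing age in days (1 here)
  hasLocation: boolean;     // a location is present
  descriptionChars: number; // description length (6,002 here)
  newListings30d: number;   // new listings seen in the last 30 days (417 here)
  tier1Company: boolean;    // reputable "Tier 1" company (OpenAI)
  confidence: number;       // match confidence, 0..1 (0.30 here)
  directAts: boolean;       // sourced directly from an ATS (Ashby)
}

function activityScore(s: ListingSignals): number {
  const ageDecay = Math.pow(0.98, s.ageDays); // the "×0.98 age 1d" factor

  let score = 42;                                              // base score (assumed)
  if (s.ageDays <= 14) score += 15;                            // "just posted": flat bonus up to 14 days
  if (s.hasLocation && s.descriptionChars >= 1000) score += 5; // location + quality description
  score += Math.min(10, s.newListings30d / 50) * ageDecay;     // recent listing volume, age-decayed
  if (s.tier1Company) score += 10 * ageDecay;                  // reputable-company bonus, age-decayed
  score -= (1 - s.confidence) * 5;                             // penalty for low confidence
  if (s.directAts) score += 5;                                 // direct ATS source bonus

  return Math.max(0, Math.min(100, Math.round(score)));        // clamp to 0-100
}
```

Under these assumed weights, plugging in this posting's values (1 day old, 6,002-character description, 417 recent listings, Tier 1, 30% confidence, direct ATS) lands in the same Very Active band (80-100); the real scorer's weights are unknown.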
San Francisco
First seen 1 day, 17 hours ago
Last seen 5 hours, 2 minutes ago
Job Description
AI Summary
• Develop technical solutions to prepare for future AI risks, including data-poisoning defenses, model transparency tools, safety measurement frameworks, and verification mechanisms for AI safety agreements.
• Convert abstract long-term safety challenges into concrete near-term projects by strategically prioritizing research initiatives that have limited feedback loops.
• Build rapid prototypes and iteratively improve them while securing stakeholder buy-in across OpenAI teams and managing scaling as needed.
• Requires exceptional technical execution skills, strong strategic judgment in ambiguous domains, and demonstrated passion for AI safety and recursive self-improvement risks.
• Prior experience in ML research, AI alignment, AI verification, or related safety domains is a plus.
Skills
Rust
AWS
Job Information
- Company: OpenAI
- Location: San Francisco
- Job Type: Full-Time
- Experience Level: Mid
- Source: Ashby
- Status: Active
Activity Score
81/100 (Very Active)
Higher scores indicate more likely active hiring, based on listing freshness, company activity, and other signals.
More from OpenAI
- Senior Staff Backend Software Engineer, Codex for Finance (San Francisco)
- Staff / Senior Staff Product Engineer, Full Stack - Codex for Finance (San Francisco)
- Manager, AI Deployment Engineering (Korea) (Seoul, South Korea)
- Solutions Architect, Digital Natives (San Francisco)
- AI Systems Engineer, Codex Agents (San Francisco)