Chain-of-thought Interpretability Team – Research Science and Engineering
Monitoring models for misaligned or dangerous behavior is a crucial mitigation in service of our mission of bringing safe artificial general intelligence to the world. We have long been excited by the prospect of monitoring our models’ latent thinking in addition to the outputs that users see; however, until recently that latent thinking has only been available in the form of activations – large blocks of illegible numbers from which we have only been able to extract simple concepts. With the advent of models that rely heavily on chain-of-thought (CoT) reasoning to solve complex tasks, we now have access to some of the models’ internal thinking in a far more legible form, which could allow us to monitor their latent thinking for more complex behavior.
The Chain-of-thought Interpretability Team, part of OpenAI’s Science of Deep Learning team, is working on technical approaches to determine whether model CoTs are monitorable – i.e., whether they are faithful and legible – and which interventions may improve or degrade monitorability.
About the Role
In this role, you will develop innovative machine learning techniques and collaborate with peers across the organization to advance this critical pillar of OpenAI’s mission. We are looking for people who want to discover simple, generalizable ideas that work well even at large scale and form part of a broader research vision that unifies the entire company. We are also looking for individuals with solid engineering skills who can write bug-free ML code and work in the complex code bases behind our state-of-the-art AI systems.
You will thrive in this role if you
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.
OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement
For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.