Research Scientist, Superalignment - San Francisco, CA, United States - OpenAI

    Description

    OpenAI's Superalignment Team is working on technical approaches to ensure that superintelligence (an AI system vastly smarter than humans) follows human intent.

    Through scientific experimentation, we explore the scalability of alignment techniques and identify potential breaking points. Our approach to alignment research includes a range of different projects; some of these will help us improve the alignment of our models and others will allow us to validate how aligned our models actually are:

    Generalization: Can we understand and control how our models generalize from easy tasks that humans can supervise to hard tasks that humans cannot?

    Robustness: How can we train our models to be aligned in worst-case situations?

    Adversarial testing: If we deliberately train deceptively aligned models as testbeds, can our oversight techniques, interpretability tools, and evaluations detect this misalignment?

    We want to figure out how to spend vast amounts of compute to solve this problem, in particular by automating alignment research itself.

    About the Role

    As a Research Scientist here, you will develop innovative machine learning techniques and advance the research agenda of the Superalignment team, while also collaborating with peers across the organization. We are looking for people who want to discover simple, generalizable ideas that work well even at large scale, and form part of a broader research vision that unifies the entire company.

    We are seeking Research Scientists to help design and implement experiments for alignment research. Responsibilities may include:

    Designing experiments to measure the effectiveness of scalable oversight techniques such as AI-assisted feedback and Debate

    Studying generalization to see when AI systems trained on easy problems can solve hard problems

    Managing large datasets from interpretability experiments and creating visualizations to explore the resulting data

    Developing experiments to test how well chain of thought reasoning reflects model cognition

    Investigating situations in which training against a reward signal causes model outputs to deteriorate

    Exploring methods to understand and predict model behavior, such as finding inputs that trigger anomalous circuits or catastrophic outputs

    Designing novel approaches for using LLMs in alignment research

    We are looking for candidates who:

    Have a track record of coming up with new ideas or improving upon existing ideas in machine learning, demonstrated by accomplishments such as first-author publications or projects

    Possess the ability to own and pursue a research agenda, including choosing impactful research problems and autonomously carrying out long-running projects

    Possess a strong curiosity about aligning and understanding ML models, and are motivated to use your career to address this challenge

    Enjoy fast-paced, collaborative, and cutting-edge research environments

    Have experience implementing ML algorithms (e.g., in PyTorch)

    Want to ensure that powerful AI systems stay under human control

    About OpenAI

    OpenAI is an AI research and deployment company dedicated to ensuring that artificial general intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

    We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.

    For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.

    We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

    At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
