Software Engineer, Inference - San Francisco - algojobs

    algojobs San Francisco

    5 days ago

    $80,000 - $180,000 (USD) per year *
    Description

    About Anthropic

    Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

    About the role

    Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry's largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators. The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.

    You may be a good fit if you:

    • Have significant software engineering experience, particularly with distributed systems
    • Are results-oriented, with a bias towards flexibility and impact
    • Pick up slack, even if it goes outside your job description
    • Enjoy pair programming (we love to pair)
    • Want to learn more about machine learning systems and infrastructure
    • Thrive in environments where technical excellence directly drives both business results and research breakthroughs
    • Care about the societal impacts of your work

    Strong candidates may also have experience with:

    • Implementing and deploying machine learning systems at scale
    • Load balancing, request routing, or traffic management systems
    • LLM inference optimization, batching, and caching strategies
    • Kubernetes and cloud infrastructure (AWS, GCP)
    • Python or Rust

    Representative projects:

    • Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators
    • Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads
    • Building production-grade deployment pipelines for releasing new models to millions of users
    • Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage
    • Contributing to new inference features (e.g., structured sampling, prompt caching)
    • Analyzing observability data to tune performance based on real-world production workloads
    • Managing multi-region deployments and geographic routing for global customers

    Deadline to apply

    None. Applications will be reviewed on a rolling basis.

    Compensation

    The expected base compensation for this position is below. Our total compensation package for full-time employees includes equity, benefits, and may include incentive compensation.

    $300,000 - $485,000 USD

    Logistics

    • Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
    • Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. Some roles may require more time in our offices.
    • Visa sponsorship: We do sponsor visas. If we make you an offer, we will make reasonable efforts to obtain a visa, with support from an immigration lawyer.

    How we're different

    We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on a few large-scale research efforts with a focus on impact and steerable, trustworthy AI. We value collaboration, communication, and empirical progress in AI research.
    * This salary range is an estimation made by beBee