Senior Backend Engineer, Inference Platform - San Francisco - Together


    1 week ago

    $160,000 - $250,000 (USD) per year
    Description

    Senior Backend Engineer, Inference Platform


    About the Team


    Together AI is building the Inference Platform that brings the most advanced generative AI models to the world. Our platform powers multi‑tenant serverless workloads and dedicated endpoints, enabling developers, enterprises, and researchers to harness the latest LLMs, multimodal models, image, audio, video, and speech models at scale.

    If you get a thrill from optimizing latency down to the last millisecond, this is your playground. You'll work hands‑on with tens of thousands of GPUs (H100s, H200s, GB200s, and beyond), figuring out how to fully utilize every FLOP and every gigabyte of memory.

    You'll collaborate directly with research teams to bring frontier models into production, making breakthroughs usable in the real world. Our team also works closely with the open‑source community, contributing to and leveraging projects like SGLang, vLLM, and NVIDIA Dynamo to push the boundaries of inference performance and efficiency.

    Some of What You'll Work On

    • Build and optimize global and local request routing, ensuring low‑latency load balancing across data centers and model engine pods.
    • Develop auto‑scaling systems to dynamically allocate resources and meet strict SLOs across dozens of data centers.
    • Design systems for multi‑tenant traffic shaping, tuning both resource allocation and request handling — including smart rate limiting and regulation — to ensure fairness and consistent experience across all users.
    • Engineer trade‑offs between latency and throughput to serve diverse workloads efficiently.
    • Optimize prefix caching to reduce model compute and speed up responses.
    • Collaborate with ML researchers to bring new model architectures into production at scale.
    • Continuously profile and analyze system‑level performance to identify bottlenecks and implement optimizations.
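For a flavor of the multi-tenant traffic-shaping work described above, here is a minimal per-tenant token-bucket rate limiter sketched in Python (one of the languages this role uses). It is an illustrative sketch only: the names `TokenBucket` and `admit` are hypothetical and do not reflect Together's actual implementation.

```python
import time

class TokenBucket:
    """Per-tenant limiter: each tenant earns `rate` tokens/sec, capped at `burst`."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate            # refill rate, tokens per second
        self.burst = burst          # maximum bucket capacity
        self.tokens = burst         # start full
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at burst capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# One bucket per tenant keeps a noisy neighbor from starving everyone else.
buckets = {"tenant-a": TokenBucket(rate=100.0, burst=20.0)}

def admit(tenant: str) -> bool:
    """Admit a request only if the tenant exists and has budget left."""
    bucket = buckets.get(tenant)
    return bucket is not None and bucket.allow()
```

In a production router this per-tenant accounting would be distributed and tied into load balancing and SLO tracking, but the core fairness mechanism is the same: bound each tenant's short-term burst while guaranteeing a steady long-term rate.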

    What We're Looking For

    • 5+ years of demonstrated experience building large‑scale, fault‑tolerant, distributed systems and API microservices.
    • Strong background in designing, analyzing, and improving the efficiency, scalability, and stability of complex systems.
    • Excellent understanding of low‑level OS concepts: multi‑threading, memory management, networking, and storage performance.
    • Expert‑level programming in one or more of: Rust, Go, Python, or TypeScript.
    • Knowledge of modern LLMs and generative models and how they are served in production is a plus.
    • Experience with the open‑source inference ecosystem is highly valuable; familiarity with SGLang, vLLM, or NVIDIA Dynamo is especially useful.
    • Experience with Kubernetes or container orchestration is a strong plus.
    • Familiarity with GPU software stacks (CUDA, Triton, NCCL) and HPC technologies (InfiniBand, NVLink, MPI) is a plus.
    • Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field, or equivalent practical experience.

    Why Join Us?

    • Shape the core inference backbone that powers Together AI's frontier models.
    • Solve performance‑critical challenges in global request routing, load balancing, and large‑scale resource allocation.
    • Work with state‑of‑the‑art accelerators (H100s, H200s, GB200s) at global scale.
    • Partner with world‑class researchers to bring new model architectures into production.
    • Collaborate with and contribute to the open‑source community, shaping the tools that advance the industry.
    • Enjoy a culture of deep technical ownership and high impact — where your work makes models faster, cheaper, and more accessible.
    • Competitive compensation, equity, and benefits.

    About Together AI


    Together AI is a research‑driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co‑designing software, hardware, algorithms, and models. We have contributed leading open‑source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama.

    Compensation


    We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full‑time position is: $160,000 - $250,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job‑related knowledge.

    Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.


