Software Engineering – Inference Engineer – San Francisco

San Francisco, United States

1 month ago


Job summary

We are a well-funded, early-stage startup founded by industry veterans, and we're looking for passionate builders to join our core team.





Similar jobs

  • San Francisco

    Mirai builds an on-device inference layer for AI. · We enable model makers to run AI models directly on edge devices. Our stack spans low-level GPU kernels to high-level model conversion tools. · We're looking for engineers who bridge ML research and high-performance inference. You'll ...

  • San Francisco, CA

    About Us Most AI is frozen in place - it doesn't adapt to the world. We think that's backwards. · ...

  • San Francisco

    We are looking for an Inference Engineering Manager to lead our AI Inference team. This is a unique opportunity to build and scale the infrastructure that powers Perplexity's products and APIs, serving millions of users with state-of-the-art AI capabilities. · ...

  • San Francisco, CA

    We are looking for an AI Inference engineer to join our growing team. · Our current stack is Python, Rust, C++, PyTorch, Triton, CUDA, and Kubernetes. Responsibilities · Develop APIs for AI inference that will be used by both internal and external customers · Benchmark and address bot ...

  • San Francisco, CA

    We are looking for an Inference Engineering Manager to lead our AI Inference team. This is a unique opportunity to build and scale the infrastructure that powers Perplexity's products and APIs, · serving millions of users with state-of-the-art AI capabilities. Lead and grow a high ...

  • San Francisco · $200,000 - $350,000 (USD)

    We are looking for an AI Inference engineer to join our growing team. · ...

  • San Francisco, CA

    We're looking for a software engineer to help us serve OpenAI's multimodal models at scale. · This work is inherently cross-functional: you'll collaborate directly with researchers training these models and with product teams defining new modalities of interaction. · ...

  • San Francisco · $300,000 - $385,000 (USD)

    We are looking for an Inference Engineering Manager to lead our AI Inference team. · This is a unique opportunity to build and scale the infrastructure that powers Perplexity's products and APIs, · serving millions of users with state-of-the-art AI capabilities. · ...

  • San Francisco · Full time

    We are looking for an Inference Engineering Manager to lead our AI Inference team. This is a unique opportunity to build and scale the infrastructure that powers Perplexity's products and APIs, serving millions of users with state-of-the-art AI capabilities. · ...

  • San Francisco

    We're looking for a software engineer to help us serve OpenAI's multimodal models at scale. · You'll be part of a small team responsible for building reliable, · high-performance infrastructure for serving real-time audio, · image and other MM workloads in production. ...

  • San Francisco

    Pulse is tackling one of the most persistent challenges in data infrastructure: extracting accurate, structured information from complex documents at scale. · ...

  • San Francisco

    GPU inference optimization: vLLM, SGLang, or TensorRT-LLM experience · Distributed compute with GPUs is a strong plus · Deploy and tune models with optimizations like KV caching, paged attention, sequence packing, etc. · Conduct model performance reviews · Improve scheduler, batcher, autoscal ...

  • San Francisco, CA

    We are looking for an Inference Engineer to join our core team. · We understand that inference is a systems problem, not just a model problem. · You will own how models are served in production, making inference fast, stable, observable, and cost-efficient under unpredictable workloa ...

  • San Francisco · Full time

    We are looking for a Software Engineering – Inference Engineer to join our core team. Virtue AI sets the standard for advanced AI security platforms. · Serve and optimize LLM, embedding, and other ML models' inference across multiple model families · Design and operate inference APIs ...

  • San Francisco · Full time

    We're looking for a Founding Engineer with deep expertise in high-performance ML engineering. · Drive our frontier position on real-time model performance for diffusion models · ...

  • San Francisco · $200,000 - $500,000 (USD)

    We build and operate large-scale LLM inference and training infrastructure serving millions of users. · ...

  • San Francisco · $320,000 - $485,000 (USD)

    We're building the systems that make inference deployment continuous and unattended. · About the Role: Our mandate is to make inference deployment boring and unattended. · ...

  • San Francisco

    Want to push GPU performance to its limits — not in theory, but in production systems handling real-time speech and multimodal workloads? · This team is building low-latency AI systems where milliseconds actually matter. The target isn't "faster than baseline." It's ...

  • San Francisco

    We are looking for an engineer who wants to take the world's largest and most capable AI models and optimize them for use in a high-volume, low-latency, and high-availability production and research environment. · ...

  • San Francisco, CA

    We are building low-latency AI systems where milliseconds actually matter. · ...