Inference Engineer - San Francisco
2 weeks ago

About Us
Mirai builds an on-device inference layer for AI. We enable model makers to run AI models directly on edge devices.
Our stack spans from low-level GPU kernels to high-level model conversion tools.
The Role
We're looking for engineers who bridge ML research and high-performance inference. You'll work on implementing new model architectures.
Nobody knows everything. We'd rather you know one area deeply than everything superficially. If you're strong in at least a couple of these areas, you're a great fit:
- JAX / Equinox / Pallas stack
- Rust systems programming with a focus on developer experience
- Writing Metal / Vulkan kernels
Similar jobs
Inference Engineer
1 month ago
About Us: Most AI is frozen in place - it doesn't adapt to the world. We think that's backwards. ...
Inference Engineering Manager
3 weeks ago
We are looking for an Inference Engineering Manager to lead our AI Inference team. This is a unique opportunity to build and scale the infrastructure that powers Perplexity's products and APIs serving millions of users with state-of-the-art AI capabilities. · ...
AI Inference Engineer
1 month ago
We are looking for an AI Inference engineer to join our growing team. Our current stack is Python, Rust, C++, PyTorch, Triton, CUDA, Kubernetes. Responsibilities: develop APIs for AI inference that will be used by both internal and external customers; benchmark and address bot ...
Software Engineer, Inference
1 month ago
We're looking for a software engineer to help us serve OpenAI's multimodal models at scale. This work is inherently cross-functional: you'll collaborate directly with researchers training these models and with product teams defining new modalities of interaction. ...
Software Engineer, Inference
2 days ago
We're looking for a software engineer to help us serve OpenAI's multimodal models at scale. You'll be part of a small team responsible for building reliable, high-performance infrastructure for serving real-time audio, image, and other multimodal workloads in production. ...
Software Engineer, Inference
2 days ago
Pulse is tackling one of the most persistent challenges in data infrastructure: extracting accurate, structured information from complex documents at scale. · ...
Cloud Inference Engineer
1 week ago
GPU inference optimization; vLLM, SGLang, or TensorRT-LLM experience; distributed compute with GPUs is a super plus; deploy and tune models with optimizations like KV caching, paged attention, sequence packing, etc.; conduct model performance reviews; improve scheduler, batcher, autoscal ...
Software Engineering – Inference Engineer
1 month ago
We are a well-funded, early-stage startup founded by industry veterans, and we're looking for passionate builders to join our core team. · ...
Software Engineering – Inference Engineer
1 month ago
We are looking for an Inference Engineer to join our core team. We understand that inference is a systems problem, not just a model problem. You will own how models are served in production, making inference fast, stable, observable, and cost-efficient under unpredictable workloa ...
Software Engineering – Inference Engineer
1 month ago
We are looking for a Software Engineering – Inference Engineer to join our core team. Virtue AI sets the standard for advanced AI security platforms. Serve and optimize LLM, embedding, and other ML models' inference across multiple model families. Design and operate inference APIs ...
Engineer - GPU Optimisation & Inference
2 days ago
Job summary: Want to push GPU performance to its limits - not in theory, but in production systems handling real-time speech and multimodal workloads? This team is building low-latency AI systems where milliseconds actually matter. The target isn't "faster than baseline." It's ...
Founding Engineer, ML Inference
2 weeks ago
We're looking for a Founding Engineer with deep expertise in high-performance ML engineering. · Drive our frontier position on real-time model performance for diffusion models · ...
We build and operate large-scale LLM inference and training infrastructure serving millions of users. · ...
Software Engineer, Inference Deployment
1 week ago
We're building the systems that make inference deployment continuous and unattended. About the Role: Our mandate is to make inference deployment boring and unattended. ...
Software Engineer, Model Inference
2 days ago
We are looking for an engineer who wants to take the world's largest and most capable AI models and optimize them for use in a high-volume, low-latency, and high-availability production and research environment. · ...
Applied AI Inference Engineer
6 days ago
We enable companies operating at the frontier of AI to bring cutting-edge models into production. Develop and maintain software systems and product features using one or more general-purpose programming languages in a production-level environment. Drive customer impact by des ...