Senior GenAI Algorithms Engineer — Model Optimizations for Inference - US, CA, Santa Clara
5 days ago

Job summary
NVIDIA is at the forefront of the generative AI revolution. The Algorithmic Model Optimization Team focuses on optimizing generative AI models, such as large language models (LLMs) and diffusion models, for maximal inference efficiency, using techniques ranging from quantization, speculative decoding, sparsity, distillation, and pruning to neural architecture search, along with streamlined deployment strategies built on open-source inference frameworks. We are seeking a Senior Deep Learning Algorithms Engineer to work on innovative generative AI models such as LLMs, VLMs, multimodal models, and diffusion models. In this role you will design, implement, and productionize model optimization algorithms for inference and deployment on NVIDIA's latest hardware platforms. The focus is on ease of use and compute/memory efficiency, achieving the best accuracy–performance trade-offs through software–hardware co-design.
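Quantization is the first of the techniques listed above. As a rough, self-contained illustration of the idea only (not NVIDIA's tooling and not the TensorRT Model Optimizer API; every name below is hypothetical), the sketch applies per-channel symmetric INT8 quantization to a weight matrix in plain PyTorch.

```python
# Conceptual sketch of per-channel symmetric INT8 weight quantization.
# Illustrates the technique named in the job summary; it is NOT the
# TensorRT Model Optimizer API, and all names here are hypothetical.
import torch


def quantize_weight_int8(weight: torch.Tensor):
    """Quantize a 2-D weight [out_features, in_features] per output channel."""
    # One scale per output channel, chosen so the largest |w| maps to 127.
    max_abs = weight.abs().amax(dim=1, keepdim=True).clamp(min=1e-8)
    scale = max_abs / 127.0
    q = torch.clamp(torch.round(weight / scale), -128, 127).to(torch.int8)
    return q, scale


def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Recover an approximate float weight, e.g. for accuracy evaluation.
    return q.to(torch.float32) * scale


if __name__ == "__main__":
    w = torch.randn(4096, 4096)
    q, s = quantize_weight_int8(w)
    err = (dequantize(q, s) - w).abs().mean().item()
    print(f"mean abs quantization error: {err:.6f}")
```

In production stacks the interesting work is everything around this core: calibration data, outlier handling, lower-precision formats such as FP8 and INT4, and kernels that actually exploit the low-precision weights.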
Responsibilities
- Design and build modular, scalable model optimization software platforms that deliver exceptional user experiences while supporting diverse AI models and optimization techniques, and drive widespread adoption.
- Explore, develop, and integrate innovative deep learning optimization algorithms (e.g., quantization, speculative decoding, sparsity) into NVIDIA's AI software stack (e.g., TensorRT Model Optimizer, NeMo/Megatron, TensorRT-LLM); see the speculative decoding sketch after this list.
- Deploy optimized models to leading OSS inference frameworks; contribute specialized APIs, model-level optimizations, and new features tailored to the latest NVIDIA hardware capabilities.
- Partner with NVIDIA teams to deliver model optimization solutions for customer use cases and ensure optimal end-to-end workflows with balanced accuracy–performance trade-offs.
- Conduct deep GPU kernel-level profiling to identify and capitalize on hardware and software opportunities, such as efficient attention kernels, KV caching, and parallelism strategies.
- Drive continuous innovation in deep learning performance, strengthen platform integration, and expand market adoption across the AI inference ecosystem.
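Speculative decoding, mentioned in the second bullet, is easiest to see as a control-flow pattern: a cheap draft model proposes k tokens, the target model verifies them, and the longest agreeing prefix is accepted. The toy sketch below shows only that accept/reject loop in its greedy form, using deterministic stand-in "models" so it runs with no framework at all; it is an assumption-laden illustration, not code from NVIDIA's stack, and every name in it is made up.

```python
# Toy sketch of greedy speculative decoding: a cheap draft model proposes k
# tokens, the target model checks them, and the longest matching prefix is
# kept. The stand-in "models" below are deterministic hashes, purely
# illustrative.
import hashlib

VOCAB = 1000


def next_token(prefix: list, model_id: int) -> int:
    """Deterministic stand-in for a greedy language model: maps a prefix to
    the next token. model_id distinguishes the draft (0) from the target (1)."""
    h = hashlib.sha256(bytes([model_id]) + str(prefix).encode()).digest()
    return int.from_bytes(h[:4], "little") % VOCAB


def speculative_decode(prompt: list, steps: int = 20, k: int = 4) -> list:
    seq = list(prompt)
    for _ in range(steps):
        # 1) Draft model proposes k tokens autoregressively (cheap).
        draft = []
        for _ in range(k):
            draft.append(next_token(seq + draft, model_id=0))
        # 2) Target model scores every drafted position (one batched forward
        #    pass in a real system; a plain loop here) and we keep the longest
        #    prefix where the draft agrees with the target's greedy choice.
        accepted = []
        for i in range(k):
            target_tok = next_token(seq + draft[:i], model_id=1)
            if draft[i] == target_tok:
                accepted.append(draft[i])
            else:
                accepted.append(target_tok)  # target's correction, then stop
                break
        else:
            # All k draft tokens accepted: the target emits one bonus token.
            accepted.append(next_token(seq + draft, model_id=1))
        seq.extend(accepted)
    return seq


if __name__ == "__main__":
    out = speculative_decode(prompt=[1, 2, 3])
    print(len(out), out[:12])
```

The output is exactly what greedy decoding with the target alone would produce; the speedup in real systems comes from the draft agreeing with the target often enough that several tokens are committed per target pass (the toy hash models here agree only by chance).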
Similar jobs
Inference Frontend
1 week ago
Cerebras Systems builds the world's largest AI chip. Our novel wafer-scale architecture provides AI compute power of dozens of GPUs on a single chip. · ...
Inference Engineer
2 weeks ago
We're looking for engineers who can bridge the gap between ML research and high-performance inference. · JAX / Equinox / Pallas stack · Rust systems programming with a focus on developer experience · ...
Principal Engineer Inference Stack
1 week ago
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. · Develop techniques for optimizing scale-up and scale-out inference. · Develop methods and tooling to utilize dynam ...
AI Inference Engineer
4 weeks ago
We are looking for an AI Inference Engineer with a solid background in speech recognition and model inference. · In this role, you will develop a state-of-the-art automatic speech recognition system and ship it to various Zoom products. You will work on the most cutting-edge speech ...
Head of Inference Kernels
2 weeks ago
As a core member of the team, you will play a pivotal role in leading a high-performing team to build optimized kernels and implement highly optimized inference stacks for state-of-the-art transformer models. · Architect Best-in-Class Inference Performance on Sohu: Deliver contin ...
AI Inference Engineer
4 weeks ago
Job summary · We are looking for an AI Inference Engineer with a solid background in speech recognition and model inference. · Developing state-of-the-art speech services for Zoom products. · Optimizing ASR inference systems for production deployment. · We are developing speech re ...
Solutions Architect, Inference Deployments
1 week ago
We're forming a team of innovators to roll out and enhance AI inference solutions at scale, demonstrating NVIDIA's GPU technology and Kubernetes. · Help customers craft, deploy, and maintain scalable GPU-accelerated inference pipelines on Kubernetes for large language models (LLMs) an ...
Inference Software Engineer
2 weeks ago
We are building the world's first AI inference system purpose-built for transformers - delivering over 10x higher performance and dramatically lower cost and latency than a B200. · Support porting state-of-the-art models to our architecture. · ...
Principal Engineer Inference Stack
1 week ago
We are looking for a strategic software engineering lead who is passionate about improving the performance of key applications and benchmarks. Join us as we shape the future of AI and beyond. · ...
AI Inference Engineer
4 weeks ago
We are looking for an AI Inference Engineer with a solid background in speech recognition and model inference. · Developing state-of-the-art speech services for Zoom products. · Optimizing ASR inference systems for production deployment. · ...
Head of Inference Kernels
2 weeks ago
Etched is building the world's first AI inference system purpose-built for transformers. As a core member of the team, you will play a pivotal role in leading a high-performing team to build optimized kernels and implement highly optimized inference stacks for state-of-the- ...
Inference Software Engineer
2 days ago
Etched is building AI chips that are hard-coded for individual model architectures. · Contribute to the architecture and design of the Sohu host software stack · ...
AI Inference Engineer
1 month ago
The AI Inference Engineer at Quadric will port AI models to the Quadric platform; optimize model deployment for efficient inference; and profile and benchmark model performance. · Bachelor's or Master's in Computer Science and/or Electrical Engineering. · 5+ years of experience in AI/LLM mo ...
Engineering Manager, Deep Learning Inference
1 month ago
NVIDIA is seeking an exceptional Manager, Deep Learning Inference Software, to lead a world-class engineering team advancing the state of AI model deployment. · ...
Manager, Large Language Model Inference
2 weeks ago
We're seeking a highly skilled and driven Engineering Manager to take the lead in developing the next generation of LLM/VLM/VLA inference software technologies that will define the future of AI. At NVIDIA, we aren't just powering the AI revolution—we're accelerating it. · The Ten ...
Engineering Manager, Deep Learning Inference
4 weeks ago
NVIDIA is seeking an exceptional Manager, Deep Learning Inference Software, to lead a world-class engineering team advancing the state of AI model deployment. · ...
Manager, Large Language Model Inference
1 month ago
NVIDIA is accelerating the AI revolution by developing cutting-edge deep learning models on every GPU. · We're seeking a highly skilled Engineering Manager to lead in developing the next generation of LLM inference software technologies. · Your work will be collaborative, interfa ...
Engineering Manager, Deep Learning Inference
1 month ago
NVIDIA seeks an exceptional Engineering Manager to lead its Deep Learning Inference Software team. You will shape software powering AI systems on NVIDIA GPUs. · ...
Boson AI is an early-stage startup building large audio models for everyone to enjoy and use. · ...
Manager, Large Language Model Inference
1 month ago
We are seeking a highly skilled and driven Engineering Manager to take the lead in developing the next generation of LLM/VLM/VLA inference software technologies that will define the future of AI. · MS, PhD or equivalent experience in Computer Science/Computer Engineering/AI or re ...
Manager, Large Language Model Inference
1 week ago
We're seeking a highly skilled and driven Engineering Manager to take the lead in developing the next generation of LLM/VLM/VLA inference software technologies that will define the future of AI. · Lead and grow a team responsible for specialized kernel development, runtime opti ...