Senior Software Development Engineer, AI/ML, AWS Neuron, Model Inference - Cupertino, CA

Cupertino, CA, United States

1 month ago

DESCRIPTION · The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. · The AWS Neuron SDK, developed by ...



Similar jobs

  • At NVIDIA, we aren't just powering the AI revolution—we're accelerating it. The TensorRT inference platform is the backbone of modern AI, delivering the industry's fastest and most efficient deployment of cutting-edge deep learning models on every NVIDIA GPU. With demand for AI e ...

    Santa Clara $184,000 - $356,500 (USD)

    1 week ago

  • Manager, Large Language Model Inference

NVIDIA is accelerating the AI revolution by developing cutting-edge deep learning models on every GPU. · We're seeking a highly skilled Engineering Manager to lead the development of the next generation of LLM inference software technologies. · Your work will be collaborative, interfa ...

    Santa Clara $184,000 - $356,500 (USD)

    1 month ago

  • Manager, Large Language Model Inference

    We're seeking a highly skilled and driven Engineering Manager to take the lead in developing the next generation of LLM/VLM/VLA inference software technologies that will define the future of AI. At NVIDIA, we aren't just powering the AI revolution—we're accelerating it. · The Ten ...

    Santa Clara, CA

    4 weeks ago

  • Manager, Large Language Model Inference

    At NVIDIA, we aren't just powering the AI revolution—we're accelerating it. The TensorRT inference platform is the backbone of modern AI, delivering the industry's fastest and most efficient deployment of cutting-edge deep learning models on every NVIDIA GPU. With demand for AI e ...

    US, CA, Santa Clara $184,000 - $287,500 (USD) per year

    1 week ago

  • Bilingual Large Model Inference Acceleration Engineer

    We are seeking an experienced AI model optimization engineer specializing in large model inference acceleration. · Design and optimize large model inference pipelines for low-latency and high-throughput production deployments · Benchmark and profile deep learning models to identi ...

    San Francisco Bay Area

    3 weeks ago

  • We are seeking a Senior Deep Learning Algorithms Engineer to improve innovative generative AI models like LLMs, VLMs, multimodal and diffusion models. · ...

    Santa Clara, CA

    4 weeks ago

NVIDIA is at the forefront of the generative AI revolution. The Algorithmic Model Optimization Team specifically focuses on optimizing generative AI models such as large language models (LLMs) and diffusion models for maximal inference efficiency using techniques ranging from quan ...

    Santa Clara

    1 month ago

  • Multimodal Model Training and Inference Optimization Engineer

The Vision-Applied Research team focuses on applied research in Generative AI and CV/Multimodal Understanding, · and delivering intelligent solutions to ByteDance products. · Optimize large model training pipelines to improve efficiency, speed, and scalability. · Benchmark and prof ...

    San Jose $136,800 - $359,720 (USD)

    2 weeks ago

  • Multimodal Model Training and Inference Optimization Engineer

We are seeking an experienced Multimodal Model Training and Inference Optimization Engineer with expertise in optimizing AI model training and inference, · including distributed training/inference and acceleration. The ideal candidate will work at the cutting edge of AI efficiency ...

    San Jose, CA

    2 weeks ago

+ Develop AWS Neuron · + Lead a team of expert AI/ML engineers to onboard state-of-the-art open-source LLMs · + Drive improvements in model enablement speed and experience ...

    Cupertino, California, USA

    1 week ago

  • Multimodal Model Training and Inference Optimization Engineer

We are seeking an experienced Multimodal Model Training and Inference Optimization Engineer with expertise in optimizing AI model training and inference, · Optimize large model training pipelines to improve efficiency, speed, and scalability. · Develop distributed training strategies s ...

    San Jose $136,800 - $359,720 (USD)

    2 weeks ago

+ This role will help lead the efforts to build distributed inference support for PyTorch in the Neuron SDK. This role will tune these models to ensure the highest performance and maximize their efficiency when running on customer AWS Trainium and Inferentia silicon servers. · Design deve ...

    Cupertino

    4 weeks ago

  • The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron. · ...

    Cupertino, CA

    4 weeks ago

The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, · the software development kit used to accelerate deep learning · and GenAI workloads on Amazon's custom machine learning accelerators, · Inferentia and Trainium. We combine · deep hardware knowledge with ML e ...

    Cupertino $129,300 - $223,600 (USD)

    4 weeks ago

+ Develop AWS Neuron, the complete software stack for Trainium. · + Optimize LLMs such as Llama and GPT-OSS to run at high performance on Trainium. · + Lead a team of expert AI/ML engineers to onboard and optimize state-of-the-art open-source and customer LLMs. · + Drive improvements in ...

    Cupertino, CA

    1 month ago

NVIDIA is at the forefront of the generative AI revolution. The Algorithmic Model Optimization Team specifically focuses on optimizing generative AI models such as large language models (LLMs) and diffusion models for maximal inference efficiency using techniques ranging from quant ...

    US, CA, Santa Clara $152,000 - $287,500 (USD) per year

    3 days ago

  • Multimodal Model Training and Inference Optimization Engineer

    We are seeking an experienced Multimodal Model Training and Inference Optimization Engineer with expertise in optimizing AI model training and inference, including distributed training/inference and acceleration. The ideal candidate will work at the cutting edge of AI efficiency, ...

    San Jose, CA

    2 weeks ago

This role will lead efforts in building distributed inference support for PyTorch in the Neuron SDK. · Design, develop, and optimize machine learning models and frameworks for deployment on custom ML hardware accelerators. · Participate in all stages of the ML system development lifecy ...

    Cupertino, CA

    3 weeks ago

This role will help lead the efforts in building distributed inference support for PyTorch in the Neuron SDK. This role will tune these models to ensure the highest performance and maximize their efficiency when running on customer AWS Trainium and Inferentia silicon and servers ...

    Cupertino, CA

    4 weeks ago

  • Research Scientist, Latent State Inference for World Models

We are seeking a forward-thinking Research Scientist to focus on inferring latent state representations from sensor data, powering world models, and supporting rigorous policy evaluation for autonomous vehicles. This role spans raw perception and structured representations, enabling ...

    Los Altos $176,000 - $264,000 (USD)

    1 month ago

  • Research Scientist, Latent State Inference for World Models

    We are seeking a forward-thinking Research Scientist to focus on inferring latent state representations from sensor data, powering world models, and supporting rigorous policy evaluation for autonomous vehicles. · ...

    Los Altos, CA

    1 month ago