
    Lead Software Engineer, TensorRT Inference Workflows


    Location: US, CA, Santa Clara
    Time type: Full time
    Posted: 7 Days Ago
    Job requisition ID: JR
    We are now looking for a Lead Software Engineer for TensorRT Inference Workflows. Would you like to make a big impact in Deep Learning by helping build a state-of-the-art inference framework for NVIDIA GPUs? We are seeking a technical lead for the TensorRT Inference Workflows software team.

    What you'll be doing:
    Develop components of TensorRT, NVIDIA's SDK for high-performance deep learning inference.


    Use C++, Python and CUDA to build graph parsers, optimizers, compute kernels and tools for effective deployment of trained deep learning models.

    Collaborate with deep learning experts, GPU architects, and DevOps engineers across diverse teams.


    What we need to see:
    BS, MS, PhD, or equivalent experience in Computer Science or Computer Engineering.

    10+ years of software development experience.

    Proficiency in C++.

    Strong grasp of Machine Learning concepts.

    Excellent communication skills, and an aptitude for collaboration and teamwork.

    Ways to stand out from the crowd:
    Familiarity with advanced C++11/C++14 language features.

    Experience developing System Software.

    Experience in shipping complex software packages.

    Proficiency in Python.

    Experience in GPU kernel programming using CUDA or OpenCL.

    Background in software performance benchmarking, profiling, and optimizations.

    Experience in compiler development.

    Background in working with TensorRT, PyTorch, TensorFlow, ONNX Runtime or other ML frameworks.


    Intelligent machines powered by Artificial Intelligence, computers that can learn, reason, and interact with people, are no longer science fiction.

    GPU Deep Learning has provided the foundation for machines to learn, perceive, reason and solve problems.

    NVIDIA's GPUs run AI algorithms, simulating human intelligence, and act as the brains of computers, robots and self-driving cars that can perceive and understand the world.

    Increasingly known as "the AI computing company", NVIDIA wants you. Come join our TensorRT Inference Architecture team, where you can help build real-time, cost-effective computing platforms driving our success in this exciting and rapidly growing field.

    The base salary range is 220,000 USD - 419,750 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

    You will also be eligible for equity and benefits.

    NVIDIA accepts applications on an ongoing basis.
    NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer.

    As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

